id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,752,062,976 | langchain | LangChain HuggingFace Integration tool_choice parameter is now out of date with TGI v3.0 | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code reproduces the issue:
```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
from langchain_huggingface import ChatHuggingFace
from pydantic import BaseModel, Field


class add(BaseModel):
    """Add two integers."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


class multiply(BaseModel):
    """Multiply two integers."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


tools = [add, multiply]
callbacks = None
model_endpoint = 'private_url'
model_name = 'private_name'
huggingfacehub_api_token = 'private'
hardcoded_args_dict = {
    "model_endpoint": model_endpoint,
    "max_new_tokens": "20000",
    "top_k": 10,
    "top_p": 0.95,
    "typical_p": 0.95,
    "temperature": 0.1,
    "repetition_penalty": 1.03,
    "timeout": 240,
    "callbacks": None,
    "huggingfacehub_api_token": huggingfacehub_api_token,
    "model_kwargs": {"model_id": model_name},
}
# `llm` is the HuggingFaceEndpoint built from `hardcoded_args_dict` above
# (its construction was omitted in the original report).
chat_model = ChatHuggingFace(llm=llm, max_tokens=500)
chat_with_tools = chat_model.bind_tools(tools, tool_choice="required")  # You can use this if you want actual functions instead of Pydantic Tools
chat_with_tools_parsed = chat_with_tools | PydanticToolsParser(
    tools=tools, first_tool_only=True
)  # shouldn't need this if it is correct
query = "Add 2 and 4"
try:
    response = chat_with_tools_parsed.invoke(query)
    print(response)
    assert response == add(a=2, b=4)
except Exception:
    response = chat_with_tools.invoke(query)
    print(response)
    assert response.tool_calls is not None and response.tool_calls[0]["name"] == "add"
```
### Error Message and Stack Trace (if applicable)
```bash
huggingface_hub.errors.HfHubHTTPError: 422 Client Error: Unprocessable Entity for url: private_url
Tool error: Tool with name required not found
```
### Description
ChatHuggingFace should have support for the new `tool_choice` = "required" parameter from HuggingFace TGI v3.0.
Docs:
https://huggingface.co/docs/text-generation-inference/basic_tutorials/using_guidance#tool-choice-configuration
This is especially important because HuggingFace TGI changed their API to match OpenAI's schema, with "required" as the default and "auto" as a secondary option.
This discrepancy between HuggingFace's API and OpenAI's has caused confusion and issues for a number of people, and they've finally fixed it. It would be great to have this updated properly in LangChain as well.
I think the code could look something like this:
```python
def bind_tools(
    self,
    tools: Sequence[Union[Dict[str, Any], Type, Callable, BaseTool]],
    *,
    tool_choice: Optional[
        Union[dict, str, Literal["auto", "required", "none"]]
    ] = None,
    **kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
    """
    ...
    tool_choice: Which tool to require the model to call.
        Must be the name of the single provided function,
        "auto" (default if tools are provided) to automatically determine
        which function to call (if any), "required" to force the LLM to
        determine which function to call (similar to strict), or a dict
        of the form:
        {"type": "function", "function": {"name": <<tool_name>>}}.
    ...
    """
    formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
    if tool_choice:
        if isinstance(tool_choice, str):
            if tool_choice not in ("auto", "required", "none"):
                if len(formatted_tools) != 1:
                    raise ValueError(
                        "When specifying `tool_choice`, you must provide exactly one "
                        f"tool. Received {len(formatted_tools)} tools."
                    )
                else:
                    tool_choice = {
                        "type": "function",
                        "function": {"name": tool_choice},
                    }
        elif isinstance(tool_choice, dict) and tool_choice != {}:
            tool_names = [ft["function"]["name"] for ft in formatted_tools]
            if tool_choice["function"]["name"] not in tool_names:
                raise ValueError(
                    f"Tool choice {tool_choice} was specified, but the only "
                    f"provided tools were {tool_names}."
                )
        else:
            raise ValueError(
                f"Unrecognized tool_choice type. Expected str or dict. "
                f"Received: {tool_choice}"
            )
        kwargs["tool_choice"] = tool_choice
    return super().bind(tools=formatted_tools, **kwargs)
```
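For completeness, a rough sketch of how the updated signature could be exercised with the `add`/`multiply` tools from the example above. The `HuggingFaceEndpoint` construction here is an assumption (the report above omits it), and this is untested against the current release:
```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

llm = HuggingFaceEndpoint(            # assumed construction from the args above
    endpoint_url=model_endpoint,
    huggingfacehub_api_token=huggingfacehub_api_token,
)
chat_model = ChatHuggingFace(llm=llm)

# "required" would be forwarded to TGI v3 as-is instead of being treated as a
# tool name (which is what currently produces "Tool with name required not found").
chat_with_tools = chat_model.bind_tools([add, multiply], tool_choice="required")
print(chat_with_tools.invoke("Add 2 and 4").tool_calls)
```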
In addition, there are a few other tweaks in the class that would be helpful:
- Adding tool call id and type ("tool_call_type") to tool_calls in _convert_message_to_chat_message
- Alternative roles for ToolMessages. Many model tokenizer configs can only support user/assistant/user/assistant workflows even if the models can call tools
Also, for HuggingFaceEndpoints, please allow us to match HuggingFace's API spec for API keys ('-'):
- huggingfacehub_api_token default shouldn't throw an error
### System Info
System Information
------------------
> OS: Linux
> OS Version: #129-Ubuntu SMP Fri Aug 2 19:25:20 UTC 2024
> Python Version: 3.11.4 (main, Jun 7 2023, 18:32:58) [GCC 10.2.1 20210110]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.8
> langchain_community: 0.3.8
> langsmith: 0.1.143
> langchain_cli: 0.0.31
> langchain_cohere: 0.3.2
> langchain_experimental: 0.3.3
> langchain_huggingface: 0.1.2
> langchain_openai: 0.2.9
> langchain_text_splitters: 0.3.2
> langchainhub: 0.1.21
> langgraph_sdk: 0.1.36
> langserve: 0.3.0 | 🤖:bug,Ɑ: models | low | Critical |
2,752,074,445 | vscode | Formatting changes ending on \r result in an extra new line | Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.96
- OS Version: Windows 10
Steps to Reproduce:
1. Install the clangd extension
2. Follow the steps from https://github.com/clangd/vscode-clangd/issues/759 :
Config:
````
IncludeBlocks: Regroup
SortIncludes: 'true'
````
The code:
````
#include "a.h"
#include "c.h"
#include "b.h"
using namespace NS;
````
Each time you format using clangd, a new line is added before the `using namespace` line. I checked whether I could remove the \r from the replacement in the formatting code, but doing so breaks other things (clang-format.exe then reformats as it tries to remove the \r).
According to the maintainers of the clangd extension it is VS Code that applies these changes to the source. | info-needed | low | Critical |
2,752,078,136 | tensorflow | tensorflow-opt-cuda: error running on Linux via GPU | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
tf 2.18.0
### Custom code
No
### OS platform and distribution
manjaro
### Mobile device
_No response_
### Python version
3.12.7
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
12.4
### GPU model and memory
RTX 4090 - 24GB
### Current behavior?
Hello all,
thank you for tensorflow!
I have installed:
`sudo pacman -S python-tensorflow-opt-cuda`
I cannot run a TensorFlow program in my IDE that runs well on Windows (CPU only, without GPU usage).
On Manjaro, with the GPU, I get this error:
`Process finished with exit code 134 (interrupted by signal 6:SIGABRT)`
Python PyTorch on GPU just runs fine, though!
### Standalone code to reproduce the issue
```python
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn import datasets
digits = datasets.load_digits()
#
#
# # 2. Scale the features to the range [0, 1]
scaler = MinMaxScaler()
digits_scaled = scaler.fit_transform(digits.data)
#
# # Creating the encoder
enc = OneHotEncoder(handle_unknown='ignore', sparse_output=False)
#
result = enc.fit_transform(digits.target.reshape(-1, 1))
ratio = 0.2
X_train, X_test, y_train, y_test = train_test_split(digits_scaled, result, test_size=ratio, random_state=42)
model1 = tf.keras.models.Sequential()
model1.add(tf.keras.layers.Input(X_train.shape[1:])) #Process finished with exit code 134 (interrupted by signal 6:SIGABRT)
# model1.add(tf.keras.layers.Dense(128, input_dim=64, activation="relu")) # hidden1
# model1.add(tf.keras.layers.Dense(64, activation="relu")) # hidden2
# model1.add(tf.keras.layers.Dense(10, activation='softmax')) # outputlayer
# model1.summary()
# model1.compile(loss="categorical_crossentropy", optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), # Adam()
# metrics=(['accuracy']))
```
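A stripped-down check (assuming the failure reproduces without the scikit-learn preprocessing, since the abort happens at the `Input` call) that isolates the failing call:
```python
import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))

# The reported SIGABRT occurs when the first layer is created;
# if this minimal call also aborts, scikit-learn is not involved.
inp = tf.keras.layers.Input(shape=(64,))
print(inp.shape)
```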
### Relevant log output
```shell
~/PycharmProjects/alfatraining_projekt_4/week1 main python digits_uebung.py ✔
2024-12-20 07:53:19.219496: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1734677599.229682 9678 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1734677599.232640 9678 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
onehot labels:
[[1. 0. 0. ... 0. 0. 0.]
[0. 1. 0. ... 0. 0. 0.]
[0. 0. 1. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 1. 0.]
[0. 0. 0. ... 0. 0. 1.]
[0. 0. 0. ... 0. 1. 0.]]
/usr/include/c++/14.1.1/bits/stl_vector.h:1130: constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](size_type) [with _Tp = pybind11::object; _Alloc = std::allocator<pybind11::object>; reference = pybind11::object&; size_type = long unsigned int]: Assertion '__n < this->size()' failed.
zsh: IOT instruction (core dumped) python digits_uebung.py
```
| stat:awaiting tensorflower,type:build/install,subtype: ubuntu/linux,TF 2.18 | low | Critical |
2,752,104,351 | flutter | [CI] LUCI `butlr` timeout on successful build | ### Type of Request
bug
### Infrastructure Environment
LUCI, engine builds.
### What is happening?
1. https://ci.chromium.org/ui/p/flutter/builders/try/Mac%20mac_ios_engine/38769/overview hangs for 4 hours.
2. https://ci.chromium.org/ui/p/flutter/builders/try/Linux%20linux_host_engine/39846/overview `collect builds|collect|wait` takes 20 minutes
3. https://ci.chromium.org/ui/p/flutter/builders/try/Linux%20linux_host_engine/39844/overview `collect builds|collect|wait` takes 20 minutes.
4. https://ci.chromium.org/ui/p/flutter/builders/try/Mac%20mac_ios_engine/38775/overview `collect builds|collect|wait` takes 12 minutes.
5. https://ci.chromium.org/ui/p/flutter/builders/try/Mac%20mac_ios_engine/38772/overview `collect builds|collect|wait` takes 10 minutes.
6. https://ci.chromium.org/ui/p/flutter/builders/try/Windows%20windows_host_engine/37454/overview `collect builds|collect|wait` takes 32 minutes.
7. https://ci.chromium.org/ui/p/flutter/builders/try/Mac%20mac_host_engine/39375/overview `collect builds|collect|wait` takes 44 minutes and failed.

### Expected results
Maybe not waiting for so long? | team-infra,P2,triaged-infra | low | Critical |
2,752,187,086 | vscode | Snippets Snippets | Currently, snippets files don't provide snippets for the snippet syntax itself.
Actual:

Expected:

https://code.visualstudio.com/docs/editor/userdefinedsnippets#_snippet-syntax
PR https://github.com/microsoft/vscode/pull/236675 | feature-request,snippets | low | Minor |
2,752,252,951 | next.js | Next.js+ React 19 incompatible with the code that was precompiled using react compiler | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/frosty-bird-xx8pz9
### To Reproduce
Import components from packages precompiled using the babel-plugin-react-compiler targeting React 19.
Build and run a Next.js project using the app directory and the precompiled dependencies.
<img width="784" alt="Screenshot 2024-12-20 at 16 54 52" src="https://github.com/user-attachments/assets/64809865-ec7a-4715-b72d-793df39c52f0" />
### Current vs. Expected behavior
#### Expected
* Next.js works well with libraries precompiled with the React Compiler
* The Next.js project builds and runs without errors during prerendering or runtime.
#### Current behavior
* Build errors during prerendering in the Next.js project.
* Runtime errors in react-compiler-runtime.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:26 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8122
Available memory (MB): 24576
Available CPU cores: 8
Binaries:
Node: 23.3.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.15.0
Relevant Packages:
next: 15.1.2
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed)
### Additional context
I believe it's because Next.js isn't really using a stable React build. | Runtime,linear: next | low | Critical |
2,752,292,605 | flutter | [camera][web] Torch not working on mobile browsers | ### Steps to reproduce
1. Implement a simple camera app with a preview (camera: ^0.11.0+2)
2. enable flash using CameraController.setFlashMode(FlashMode.always)
3. Build the app for WEB
4. deploy your app to a server (or use self-signed certs to use camera locally)
5. Notice how flash does not turn on
### Expected results
Torch turns on when setFlashMode is set to always
### Actual results
Torch does not turn on
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

class CameraScreen extends StatefulWidget {
  const CameraScreen({super.key});

  @override
  State<CameraScreen> createState() => _CameraScreenState();
}

class _CameraScreenState extends State<CameraScreen> {
  late CameraController controller;
  bool isInitialised = false;

  Future<void> initialiseCamera() async {
    final cameras = await availableCameras();
    controller = CameraController(cameras.last, ResolutionPreset.max, enableAudio: false);
    if (!mounted) return;
    try {
      await controller.initialize();
      setState(() {
        isInitialised = true;
      });
    } catch (e) {
      if (e is CameraException) {
        switch (e.code) {
          case 'CameraAccessDenied':
            break;
          default:
            break;
        }
      }
    }
  }

  void enableFlash() {
    if (isInitialised) {
      controller.setFlashMode(FlashMode.always);
    }
  }

  @override
  void initState() {
    initialiseCamera();
    super.initState();
  }

  @override
  void dispose() {
    controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Stack(
        children: [
          Align(
            alignment: Alignment.center,
            child: Builder(builder: (context) {
              if (!isInitialised) {
                return const CircularProgressIndicator();
              }
              return AspectRatio(
                  aspectRatio: 9 / 16, child: CameraPreview(controller));
            }),
          ),
          Align(
            alignment: Alignment.bottomCenter,
            child: ElevatedButton(
              onPressed: enableFlash,
              child: const Text("Flash"),
            ),
          ),
        ],
      ),
    );
  }
}
```
</details>
### Website example
<details open>
<summary>WEBSITE EXAMPLE</summary>
I posted the above code to a Firebase app:
[https://camerafinder-3bbd8.web.app/](https://camerafinder-3bbd8.web.app/)
Try opening it on your android phone
</details>
### Logs
<details open><summary>Logs</summary>
Localhost
```console
TypeError: true: type 'bool' is not a subtype of type 'JSArray<Object?>?'
dart-sdk/lib/_internal/js_dev_runtime/private/ddc_runtime/errors.dart 288:3 throw_
dart-sdk/lib/_internal/js_dev_runtime/private/profile.dart 110:39 _failedAsCheck
dart-sdk/lib/_internal/js_shared/lib/rti.dart 1395:3 _generalNullableAsCheckImplementation
dart-sdk/lib/_internal/js_shared/lib/js_util_patch.dart 81:5 getProperty
packages/camera_web/src/camera_service.dart.js 854:64 [_setTorchMode]
packages/camera_web/src/camera_service.dart.js 846:26 setFlashMode
packages/camera_web/src/camera_web.dart.js 695:42 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 610:19 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 634:23 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 532:3 _asyncStartSync
packages/camera_web/src/camera_web.dart.js 712:20 setFlashMode
packages/camera/src/camera_preview.dart.js 1329:80 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 610:19 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 634:23 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 532:3 _asyncStartSync
packages/camera/src/camera_preview.dart.js 1363:20 setFlashMode
packages/vision_check_flutter/camera_screen.dart.js 456:56 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 610:19 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 634:23 <fn>
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 532:3 _asyncStartSync
packages/vision_check_flutter/camera_screen.dart.js 463:20 enableTorch
packages/vision_check_flutter/camera_screen.dart.js 524:28 <fn>
packages/flutter/src/material/time.dart.js 82994:35 handleTap
packages/flutter/src/gestures/recognizer.dart.js 271:18 invokeCallback
packages/flutter/src/gestures/tap.dart.js 474:20 handleTapUp
packages/flutter/src/gestures/tap.dart.js 247:12 [_checkUp]
packages/flutter/src/gestures/tap.dart.js 194:23 handlePrimaryPointer
packages/flutter/src/gestures/recognizer.dart.js 568:16 handleEvent
packages/flutter/src/gestures/pointer_router.dart.js 76:9 [_dispatch]
packages/flutter/src/gestures/pointer_router.dart.js 102:26 <fn>
dart-sdk/lib/_internal/js_dev_runtime/private/linked_hash_map.dart 21:7 forEach
packages/flutter/src/gestures/pointer_router.dart.js 100:29 [_dispatchEventToRoutes]
packages/flutter/src/gestures/pointer_router.dart.js 95:37 route
packages/flutter/src/gestures/binding.dart.js 428:26 handleEvent
packages/flutter/src/gestures/binding.dart.js 416:24 dispatchEvent
packages/flutter/src/widgets/unique_widget.dart.js 119315:13 dispatchEvent
packages/flutter/src/gestures/binding.dart.js 389:14 [_handlePointerEventImmediately]
packages/flutter/src/gestures/binding.dart.js 360:43 handlePointerEvent
packages/flutter/src/gestures/binding.dart.js 349:14 [_flushPointerEventQueue]
packages/flutter/src/gestures/binding.dart.js 324:40 [_handlePointerDataPacket]
lib/_engine/engine/platform_dispatcher.dart 1423:5 invoke1
lib/_engine/engine/platform_dispatcher.dart 336:5 invokeOnPointerDataPacket
lib/_engine/engine/pointer_binding.dart 405:30 [_sendToFramework]
lib/_engine/engine/pointer_binding.dart 225:7 onPointerData
lib/_engine/engine/pointer_binding.dart 1047:20 <fn>
lib/_engine/engine/pointer_binding.dart 948:7 <fn>
lib/_engine/engine/pointer_binding.dart 541:9 loggedHandler
dart-sdk/lib/_internal/js_dev_runtime/patch/js_allow_interop_patch.dart 212:27 _callDartFunctionFast1
```
Web server ([https://camerafinder-3bbd8.web.app/](https://camerafinder-3bbd8.web.app/))
```console
NoSuchMethodError: method not found: 'gt' (J.aZ(...).gt is not a function)
TypeError: J.aZ(...).gt is not a function
at Object.c8 (main.dart.js:190:22)
at b2.gt (main.dart.js:24920:16)
at b2.gJ (main.dart.js:27173:15)
at Object.fS (main.dart.js:184:22)
at main.dart.js:29899:10
at a01.a (main.dart.js:4328:63)
at a01.$2 (main.dart.js:25985:14)
at Object.J (main.dart.js:4314:10)
at IC.G7 (main.dart.js:29906:10)
at IC.nc (main.dart.js:29884:21)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor summary (to see all details, run flutter doctor -v):</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 14.6.1 23G93 darwin-arm64, locale en-DK)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] IntelliJ IDEA Community Edition (version 2024.1.3)
[✓] VS Code (version 1.95.0)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| platform-web,p: camera,package,has reproducible steps,P2,browser: safari-ios,browser: chrome-android,fyi-ecosystem,team-web,triaged-web,found in release: 3.27,found in release: 3.28 | low | Critical |
2,752,302,603 | transformers | Default value for mean_resizing in resize_token_embeddings should be False | ### System Info
transformers>=4.46
### Who can help?
@Arthur
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When running resize_token_embeddings with additional tokens
### Expected behavior
It is much slower to run resize_token_embeddings with mean_resizing=True.
Maybe it's because token embedding sizes have become much larger than before (e.g. Gemma 2, Qwen 2, ...).
So I think the default value for mean_resizing in resize_token_embeddings should now be False,
or the implementation has to be fixed to keep the resizing speed as it was before.
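For context, a minimal sketch of the call in question (the model name and added tokens are placeholders); passing `mean_resizing=False` opts back into the previous, faster behaviour:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("some-model")   # placeholder name
tokenizer = AutoTokenizer.from_pretrained("some-model")      # placeholder name
tokenizer.add_tokens(["<extra_token_0>", "<extra_token_1>"])

# mean_resizing=True (the default since v4.46) initializes the new rows from the
# mean/covariance of the existing embeddings, which is the slow part reported here;
# mean_resizing=False keeps the old, faster initialization.
model.resize_token_embeddings(len(tokenizer), mean_resizing=False)
```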
Note: Maybe it happens if I'm using deepspeed stage3, but I didn't thoroughly investigate that. | bug | low | Major |
2,752,370,138 | godot | UTF-8 Strings are incorrectly parsed as latin1. | ### Tested versions
All versions are affected. This is _old_ code.
### Issue description
The current implementation of Godot has a bug where UTF-8 encoded strings are parsed as `latin1`.
I'd like to hear what you all think is the best solution to this problem.
The `String(const char *)` constructor currently parses `char` strings using latin1 encoding:
https://github.com/godotengine/godot/blob/89001f91d21ebd08b59756841426f540b154b97d/core/string/ustring.h#L615-L617
(`parse_latin1` was [recently renamed](https://github.com/godotengine/godot/pull/100434) from `copy_from` because that's what the function does).
This constructor is used for many strings, including static C strings. The encoding of static C strings is controlled by `-fexec-charset` on GCC / Clang[^1] and `/execution-charset` on MSVC[^2]. In the Godot codebase, it defaults to UTF-8 on GCC / Clang, and is [explicitly set](https://github.com/godotengine/godot/blob/89001f91d21ebd08b59756841426f540b154b97d/platform/windows/detect.py#L448) to UTF-8 on Windows. `latin1` and `utf-8` are compatible for values below 128, and incompatible otherwise. Therefore, there is a mismatch between encodings that can be encountered for non-ascii strings. It is likely that there are mismatches in other, non-static string use-cases, because `latin1` encoded strings are a somewhat rare encounter (though ascii-only strings are pretty likely).
The mismatch has apparently [led to problems](https://github.com/godotengine/godot/pull/100434#discussion_r1885808663) a few times in the past (though I don't know how often). Most times, strings use ascii-only characters, in which range `latin1` and `utf-8` overlap.
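As a quick illustration of that incompatibility (in Python for brevity, since the behaviour is encoding-level rather than Godot-specific):
```python
data = "é".encode("utf-8")       # b'\xc3\xa9' — what a UTF-8 exec-charset embeds
print(data.decode("latin-1"))    # 'Ã©' — what latin1-style parsing produces
print(data.decode("utf-8"))      # 'é' — the intended string
```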
### Possible Solutions
Here are some ideas that I came up with:
- Enforce ASCII-only for static C strings (and force the use of `u8"String"` for utf-8 strings).
- Using `-fexec-charset` (and force the use of `u8"String"` for utf-8 strings). This would be the best solution in my opinion, but it doesn't appear to be possible to pass anything else except `UTF-8` to `-fexec-charset` right now, at least in Clang.
- Using external linter tools or options. I don't know if such a tool exists.
- Use UTF-8 parsing for strings for the default constructors. This would be somewhat slower than `latin1` parsing (though accelerated by https://github.com/godotengine/godot/pull/99826). The exact difference would have to be measured and tested for significance.
- Use ASCII-only parsing for strings for the default constructors. Log errors if values > 127 are encountered. This should be negligibly slower than `parse_latin1`, and somewhat faster than `parse_utf8`.
Ideally, the default, implicit `String(char*)` constructor is removed, to avoid this problem in the future. Instead, every use of it should be replaced with an explicit call to `String::utf8` or similar (maybe with the exception of construction from string literals), so we can be sure intent was put behind choosing the encoding. This is a bigger task though and not realistic for the near future.
Solution 1 does not address possible encoding mismatches of non-static string constructor calls.
In either the second or third solution, users that actually _want to_ parse `latin1` have to be switched to use that function explicitly.
[^1]: https://gcc.gnu.org/onlinedocs/gcc/Preprocessor-Options.html#index-fexec-charset
[^2]: https://learn.microsoft.com/en-us/cpp/build/reference/execution-charset-set-execution-character-set?view=msvc-170
| enhancement,discussion,topic:core | low | Critical |
2,752,394,938 | godot | [TextServer] Some texts are shaped accidently when use system font | ### Tested versions
- Reproducible in: v4.3.stable, v4.4.dev.mono.custom_build [89001f91d]
### System information
Godot v4.4.dev.mono (89001f91d) - Windows 11 (build 22631) - System Locale: `zh_CN`
### Issue description
Input Chinese texts `工具系统,` and `经济系统,` and see shaped text in editor and runtime:
| Expected, Editor | Unexpected, Runtime |
|--------|--------|
|  |  |
1. When the characters `济` and `系` are combined, `系` is accidentally distorted.
2. It also affects the comma at the end (which should be at the bottom left).
3. It only happens at runtime and with fallback system fonts. Behaves normally in the editor or when specifying custom fonts.
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
[TextShapeIssue.zip](https://github.com/user-attachments/files/18209174/TextShapeIssue.zip)
Note: This may not be consistent due to differences in system region or language. | bug,topic:gui | low | Major |
2,752,415,008 | godot | Buttons in tree jiggles when scrolling horizontally | ### Tested versions
Godot v4.4.dev7
### System information
Fedora Linux 40 (KDE Plasma) on Wayland - X11 display driver, Multi-window
### Issue description
Buttons in the tree jiggle when scrolling horizontally.
https://github.com/user-attachments/assets/7f4627bc-dc0d-4d1b-8a48-023a32606e7e
### Steps to reproduce
Scroll horizontally in scene tree
### Minimal reproduction project (MRP)
[jiggle.zip](https://github.com/user-attachments/files/18209281/jiggle.zip)
| bug,topic:gui | low | Minor |
2,752,426,849 | vscode | VSCode is getting irresponsive |
Type: <b>Performance Issue</b>
I was working on Go and VS Code keeps hanging; I need to restart it every 5 minutes.
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (16 x 2793)|
|GPU Status|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: enabled_on<br>opengl: disabled_off<br>rasterization: disabled_software<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: disabled_software<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: unavailable_software<br>webgl2: unavailable_software<br>webgpu: unavailable_software<br>webnn: unavailable_software|
|Load (avg)|undefined|
|Memory (System)|63.95GB (30.57GB free)|
|Process Argv|Q:\\repositories\\Azure-Workloads-TestAutomation\\ --crash-reporter-id 6be2fcd1-5efd-447e-b24a-50e39063cbbf|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
0 147 36564 code main
0 101 24316 fileWatcher [1]
0 101 27716 gpu-process
0 49 37968 utility-network-service
0 150 43312 shared-process
0 639 43436 extensionHost [1]
0 329 50636 window [1] (AcssMetadataTests.go - Azure-Workloads-TestAutomation - Visual Studio Code)
0 31 52936 crashpad-handler
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (AcssMetadataTests.go - Azure-Workloads-TestAutomation - Visual Studio Code)
| Folder (Azure-Workloads-TestAutomation): 427 files
| File types: json(128) go(86) out(45) md(16) txt(8) java(6) xlsx(5)
| ts(5) mod(4) sum(4)
| Conf files: package.json(1) tsconfig.json(1) launch.json(1)
| settings.json(1);
```
</details>
<details><summary>Extensions (25)</summary>
Extension|Author (truncated)|Version
---|---|---
midl3-language-server|Ale|0.0.31
azapi|aza|2.1.0
vscode-research|dev|1.2024.626001
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
copilot|Git|1.253.0
copilot-chat|Git|0.23.2
go|gol|0.44.0
terraform|has|2.34.2
vscode-guid|hea|1.9.0
rainbow-csv|mec|3.13.0
git-graph|mhu|1.30.0
tdpcode|Mic|10.2304.707
vscode-azdo-codereview|Mic|1.2023.331002
wavework|Mic|1.2024.115001
debugpy|ms-|2024.14.0
python|ms-|2024.22.0
vscode-pylance|ms-|2024.12.1
sarif-viewer|MS-|3.4.4
remote-wsl|ms-|0.88.5
powershell|ms-|2024.4.0
ansible|red|24.12.1
vscode-xml|red|0.27.2
vscode-yaml|red|1.15.0
typespec-vscode|typ|0.63.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
| info-needed | low | Critical |
2,752,445,277 | godot | Shader editing window issues, vanishing boxes and blinking windows | ### Tested versions
was having issues with it in 4.3 and thought maybe updating to 4.4 would fix, it did not
### System information
Godot v4.4.dev7 - Windows 10 (build 19044) - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Intel(R) HD Graphics 620 (Intel Corporation; 24.20.100.6025) - Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz (4 threads)
### Issue description
Godot v4.4.dev7 - Windows 10 (build 19044) - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Intel(R) HD Graphics 620 (Intel Corporation; 24.20.100.6025) - Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz (4 threads)
### Steps to reproduce
I am new to this so please be patient with my lack of CS lingo
-gpu particles 3d node
-process material -new particle process material
-drawpass prism mesh
-accelerations gravity -0.1
-set to single particle
-geometry new shader material
-shader: new shader
-visual shader
-flags enable unshaded
-new node on shader edit, particle color
-connect particle color to albedo
result:
### Minimal reproduction project (MRP)
I tried to reproduce this in a fresh project following identical steps, and it didn't reproduce. The included project doesn't have many files in it, but I don't know which ones are causing the issue. I am sorry if I did something wrong in the bug reporting process.
[bug.zip](https://github.com/user-attachments/files/18209496/bug.zip)
| bug,topic:editor,topic:shaders | low | Critical |
2,752,454,116 | godot | Transparent + always on top + maximised = freeze browsers like chrome, firefox, opera | ### Tested versions
- Reproducible with 4.3.stable and 4.4.dev6
Tested with:
- Firefox 133.0
- Chrome 131.0.6778.109
- Brave v1.73.101
- Opera
Not tested on:
Linux / macOS / safari
### System information
Windows 10.0.22631 - Godot 4.3.stable - Forward+/mobile/compatibility - Intel Iris Xe Graphics
### Issue description
I'm working on a widget-like game, meaning transparent + always on top + maximised. The goal is to allow using other software while the game runs on top.
For most software, everything works perfectly. Godot works like a charm, other windows are no problem: I can drag windows, minimize, maximise, type, etc...
But when opening a browser, the content of the browser window freezes, and does not get updated, except by system signals, like moving the window around. The "OS" part of the window is fine (minimize, maximise, close, drag window), but what is displayed in the browser does not get updated

Notes:
- there is no issue with microsoft edge
- there is no issue when changing to "windowed" mode
### Steps to reproduce
- create a project, with transparent + always on top + maximised settings
- in the main scene script, use DisplayServer.window_set_mouse_passthrough to allow clicking on programs on the background
- open a browser
- try and fail clicking on buttons, scrolling
- move the window around, see the image being updated
### Minimal reproduction project (MRP)
[issue_with_transparent_app.zip](https://github.com/user-attachments/files/18209493/issue_with_transparent_app.zip)
| bug,platform:web | low | Minor |
2,752,515,602 | three.js | 3MF Production Extension support | ### Description
The 3MF loader does not currently load models that use the [3MF production extension](https://github.com/3MFConsortium/spec_production/blob/master/3MF%20Production%20Extension.md).
### Solution
When a 3MF file using the extension is loaded, it would be great to be able to successfully load and display it, even if all the production features were ignored. Currently, nothing is loaded.
### Alternatives
No alternatives that I can think of, other than asking users to save files without the extension, but that's not ideal in my application, which is based around 3d print models.
### Additional context
I need to look into it a bit to see exactly *how* it fails, and I might be able to spend some time on this soon, but I'm opening this in case anyone has already done it, is planning to, or already knows how to. | Loaders | low | Minor |
2,752,516,335 | PowerToys | Shortcut of PT Run stops working | ### Microsoft PowerToys version
0.87.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
After the upgrade, when I try to launch any application as Admin (for example Visual Studio), it doesn't launch it, and subsequent calls to PowerToys Run don't work, either via the shortcut or via the PowerToys button for that purpose.

### ✔️ Expected Behavior
PT Run to work as expected
### ❌ Actual Behavior
PowerToys Run fails to launch apps in Admin mode, and after the first attempt, PowerToys Run itself also fails to run and the search does not open.
### Other Software
_No response_ | Issue-Bug,Product-PowerToys Run,Needs-Triage,Needs-Team-Response | low | Minor |
2,752,573,700 | rust | Polonius fails to infer lifetimes of borrows |
I tried this code:
```rust
struct Foo([u8; 10]);

impl Foo {
    fn read_non_empty<'c, P>(&'c mut self) -> P
    where
        P: serde::Deserialize<'c> + std::string::ToString,
    {
        loop {
            let b: P = self.read();
            if !b.to_string().is_empty() {
                return b;
            }
        }
    }

    fn read<'c, P>(&'c mut self) -> P
    where
        P: serde::Deserialize<'c>,
    {
        unimplemented!();
    }
}
```
I expected to see this happen: Compilation to succeed.
Instead, this happened: Got an error:
```console
❯ RUSTFLAGS='-Zpolonius' cargo +nightly c --tests
Checking tmp-scdnga v0.1.0 (/home/zeenix/.cache/cargo-temp/tmp-sCDnga)
error[E0499]: cannot borrow `*self` as mutable more than once at a time
--> src/main.rs:9:24
|
4 | fn read_non_empty<'c, P>(&'c mut self) -> P
| -- lifetime `'c` defined here
...
9 | let b: P = self.read();
| ^^^^-------
| |
| `*self` was mutably borrowed here in the previous iteration of the loop
| argument requires that `*self` is borrowed for `'c`
For more information about this error, try `rustc --explain E0499`.
error: could not compile `tmp-scdnga` (bin "tmp-scdnga" test) due to 1 previous error
```
If I modify the code to not use a loop, I still get the same error:
```rust
fn read_non_empty<'c, P>(&'c mut self) -> P
where
    P: serde::Deserialize<'c> + std::string::ToString,
{
    {
        let b: P = self.read();
        if !b.to_string().is_empty() {
            return b;
        }
    }
    {
        let b: P = self.read();
        if !b.to_string().is_empty() {
            return b;
        }
    }
    unimplemented!();
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (9e136a30a 2024-12-19)
binary: rustc
commit-hash: 9e136a30a965bf4e63f03095c57df7257bf96fd6
commit-date: 2024-12-19
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.6
``` | C-bug,NLL-polonius,T-types | low | Critical |
2,752,589,968 | next.js | Turbopack "unreachable code" string causing warnings in Firefox | ### Link to the code that reproduces this issue
https://github.com/OleSkaar/with-turbopack-app
I've used the [with-turbopack](https://github.com/vercel/next.js/tree/canary/examples/with-turbopack) example here, with two dependencies that were giving the same error in my project:
`"@tanstack/react-query": "5.62.2",
"@tanstack/react-query-devtools": "5.62.2",`
### To Reproduce
1. Start the application with `pnpm run dev`
2. Open the app at `localhost:3000`
3. Check the browser console in Firefox. You should see several warnings about unreachable code:

This is happening because of the `"TURBOPACK unreachable";` statement that's inserted after the return statement in the code from react query dev tools:

### Current vs. Expected behavior
I expect being able to load up this project without warnings in the console.
I believe Firefox is detecting the `"TURBOPACK unreachable";` string as unreachable code and therefore reporting a warning (see [this page](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Errors/Stmt_after_return)).
The string appears before a function declaration, and this seemed to be the case for the other warnings reported here as well. This function is declared below the return statement, but because of hoisting it is reachable (and used) higher up in the code. Could the `"TURBOPACK unreachable";` string be inserted after the function declaration instead?
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6030
Available memory (MB): 36864
Available CPU cores: 12
Binaries:
Node: 20.13.1
npm: 10.5.2
Yarn: 1.22.22
pnpm: 9.14.4
Relevant Packages:
next: 15.1.2 // Latest available version is detected (15.1.2).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Module Resolution, Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | Turbopack,Module Resolution | low | Critical |
2,752,625,294 | react-native | onViewableItemsChanged isn't correct without specifying the header height in getItemLayout | ### Description
I am unsure if this is a bug or expected behaviour. If it's the latter, it could be worth adding to the docs.
Also, I can unfortunately only test on my version (0.73.8), but I can't find any issues suggesting it's been fixed recently, so hopefully this is still a useful issue for people. At the very least, I couldn't find anything about this online, so it may be useful for others.
If I have a simple FlatList with a header like this:
```
<FlatList
  data={Array.from({ length: 20 })}
  renderItem={({ index }) => (
    <V height={index === 1 ? 250 : 150} p={20} dbg>
      <T>{index}</T>
    </V>
  )}
  getItemLayout={(data, index) => {
    if (index === 0) {
      return { length: 150, offset: 200, index }
    }
    if (index === 1) {
      return { length: 250, offset: 150 + 200, index }
    }
    return { length: 150, offset: 150 * (index - 2) + 200 + 150 + 200, index }
  }}
  ListHeaderComponent={<V height={200} width="100%" />}
  viewabilityConfig={viewabilityConfig}
  onViewableItemsChanged={({ viewableItems }) => {
    console.log('>>>', viewableItems, viewableItems.length)
  }}
/>
```

If I remove the header height (200) from the getItemLayout offset calculation, onViewableItemsChanged fires incorrectly.
If I remove getItemLayout entirely, the logic works fine regardless.
I would have thought that by the time the layout has rendered, the onViewableItemsChanged logic would be able to use the actual layout values rather than relying on getItemLayout? It would certainly make things more robust for us, since the header height depends on various things.
### Steps to reproduce
Run the code snippet above. Scroll down and observe that onViewableItemsChanged fires correctly. Remove 200 from getItemLayout calculation and observe it is now incorrect.
### React Native Version
0.73.8
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.4
CPU: (8) x64 Apple M1 Pro
Memory: 29.45 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.19.1
path: ~/.nvm/versions/node/v18.19.1/bin/node
Yarn:
version: 1.17.3
path: ~/.yarn/bin/yarn
npm:
version: 10.2.4
path: ~/.nvm/versions/node/v18.19.1/bin/npm
Watchman:
version: 2024.12.02.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/mattdalton/.rvm/gems/ruby-2.7.7/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2021.2 AI-212.5712.43.2112.8609683
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 2.7.7
path: /Users/mattdalton/.rvm/rubies/ruby-2.7.7/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.73.8
wanted: ^0.73.8
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://github.com/matt-dalton/flatlistviewable
### Screenshots and Videos
_No response_ | Issue: Author Provided Repro,Needs: Attention,Type: Unsupported Version | low | Critical |
2,752,631,036 | pytorch | Getting "Could not initialize NNPACK! Reason: Unsupported hardware." warning even though NNPACK is enabled | ### 🐛 Describe the bug
Hi everyone,
I am trying to deploy EasyOCR (an OCR library built with PyTorch) locally on a VM. When executing the following lines:
```
import easyocr
reader = easyocr.Reader(['en'], gpu=False)
result = reader.readtext('test.png')
```
I get the following warning: "Could not initialize NNPACK! Reason: Unsupported hardware.". I am deploying in a CPU only environment, on CPUs with the AVX512 instructions enabled. When the warning is displayed the model takes a lot more time to process and triggers a Timeout. I executed the following command `print(torch.__config__.show())` to see if NNPACK is available at runtime and indeed it is. This is the output right before the inference is processed:
```
PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v3.4.2 (Git Hash 1137e04ec0b5251ca2b4400a4fd3c667ce843d67)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX512
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.4.0, USE_CUDA=0, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
```
I am aware that this is not a pure pytorch related issue but from what I've seen this warning comes from the PyTorch side. I don't understand why the warning is triggered, when PyTorch is built with this capability. Any help would be greatly appreciated.
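As a rough isolation step (an assumption on my side: that the slowdown is in the CPU convolution path, which is where this NNPACK warning is typically emitted), timing a bare Conv2d forward pass without EasyOCR can tell whether PyTorch itself is slow:
```python
import time
import torch

x = torch.randn(1, 3, 224, 224)                  # batch size 1, as in typical OCR inference
conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)

with torch.no_grad():
    start = time.time()
    for _ in range(20):
        conv(x)
    print(f"20 Conv2d forward passes: {time.time() - start:.3f}s")
```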
My environment is:
```
easyocr==1.7.2
torch==2.5.1
torchvision==0.20.1
```
### Versions
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.20 (main, Dec 20 2024, 10:20:30) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 40 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: QEMU Virtual CPU version 2.5+
CPU family: 15
Model: 107
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
BogoMIPS: 8983.12
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm rep_good nopl cpuid extd_apicid tsc_known_freq pni ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c hypervisor lahf_lm cmp_legacy abm 3dnowprefetch vmmcall bmi1 avx2 bmi2 avx512f avx512dq avx512cd avx512bw avx512vl
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1 MiB (16 instances)
L1i cache: 1 MiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] Could not collect
| needs reproduction,triaged,module: nnpack | low | Critical |
2,752,656,910 | vscode | Does not remember minimized windows nor Z order |
As the title says. For example, if I have 5 windows in total with 3 of them minimized at the time of exit, then upon starting VS Code those 3 windows are not minimized as they were previously. Moreover, windows seem to be spawned in random order, so their previous Z order isn't respected either (I find this a bit annoying). This is on Windows 10 x64 (in case that matters).
Another issue where this was mentioned in is (about virtual desktops): #146915
**But there's more bad news:** I found that VS Code does not work correctly for the kind of window you create by drag-and-dropping an editor (from the tab bar) into a single/minimal code editor window. If you resize/move and then minimize such a minimal window, then upon starting VS Code it will bring that window to the middle of the primary screen/monitor with the wrong size too, so it doesn't remember the minimal window's location either. 🐛
Care to fix this finally? | feature-request,workbench-window | low | Minor |
2,752,719,521 | pytorch | A very weird bug involving ddp | ### 🐛 Describe the bug
A very weird bug involving DDP: after exchanging the order of the train code and the eval code, the program hangs forever.
```python
# eval
@torch.no_grad()
def estimate_loss():
    out = {}
    model.eval()
    for split in ['val']:
        losses = torch.zeros(eval_iters)
        for k in range(eval_iters):
            X, Y = get_batch(split)
            with ctx:
                # logits, loss = model(X, Y)
                # loss = model(X, labels = Y).loss
                logits = model(X).logits.view(-1, 50304)
                loss = F.cross_entropy(logits, Y.view(-1), ignore_index=-1)
            losses[k] = loss.item()
        out[split] = losses.mean()
    out['train'] = 0.
    model.train()
    return out

# train code
for micro_step in range(gradient_accumulation_steps):
    print(f"{ddp_local_rank}: {micro_step}")
    if ddp:
        model.require_backward_grad_sync = (micro_step == gradient_accumulation_steps - 1)
    with ctx:
        logits = model(X).logits.view(-1, 50304)
        loss = F.cross_entropy(logits, Y.view(-1), ignore_index=-1)
        loss = loss / gradient_accumulation_steps  # scale the loss to account for gradient accumulation
    X, Y = get_batch('train')
    scaler.scale(loss).backward()
if grad_clip != 0.0:
    scaler.unscale_(optimizer)
    torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad(set_to_none=True)

# eval code
if iter_num % eval_interval == 0 and master_process:
    losses = estimate_loss()
```
The whole code can be obtained at [DDP_bug](https://github.com/Wongboo/DDP_bug); run it with:
```
TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --standalone --nproc_per_node=2 train_eval_llama.py
TORCH_DISTRIBUTED_DEBUG=DETAIL torchrun --standalone --nproc_per_node=2 eval_train_llama.py
```
The problem is so weird that I spent a whole day on it.
When running with `TORCH_DISTRIBUTED_DEBUG=DETAIL`, the program throws this error:
```
[rank1]: RuntimeError: Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(SequenceNumber=8, OpType=BROADCAST, TensorShape=[76], TensorDtypes=Int, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))), but Rank 0 is running collective: CollectiveFingerPrint(SequenceNumber=8, OpType=BROADCAST, TensorShape=[288], TensorDtypes=Float, TensorDeviceTypes=TensorOptions(dtype=float (default), device=cuda, layout=Strided (default), requires_grad=false (default), pinned_memory=false (default), memory_format=(nullopt))).Collectives differ in the following aspects: Tensor Tensor shapes: 76vs 288 Tensor Tensor dtypes: Intvs Float
```
It looks like rank 1 is running the second training step, while rank 0 is running the first eval. However, neither `no_sync()` nor `barrier()` prevents this. It is very weird.
And it seems to have a connection with the model type.
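One variation that may be worth noting (an assumption, not a verified fix): running `estimate_loss()` on every rank instead of only on `master_process`, since DDP's forward pass can itself issue collectives (e.g. buffer broadcasts), so a forward pass executed only on rank 0 can leave the ranks running different collectives:
```python
# Assumption: letting every rank run the eval keeps the collectives aligned,
# because DDP's forward pass may broadcast buffers on all ranks.
if iter_num % eval_interval == 0:
    losses = estimate_loss()                 # run on every rank...
    if master_process:
        print(f"step {iter_num}: val loss {losses['val']:.4f}")  # ...only rank 0 reports
```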
### Versions
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 6.0.0 (tags/RELEASE_600/final)
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:23:07) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-192-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2101.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] blas 1.0 mkl conda-forge
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.0 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.5.39 0 nvidia
[conda] cuda-runtime 12.4.0 0 nvidia
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcublas 12.4.2.65 0 nvidia
[conda] libcufft 11.2.0.44 0 nvidia
[conda] libcurand 10.3.6.82 0 nvidia
[conda] libcusolver 11.6.0.99 0 nvidia
[conda] libcusparse 12.3.0.142 0 nvidia
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] libnvjitlink 12.4.99 0 nvidia
[conda] mkl 2022.2.1 h84fe81f_16997 conda-forge
[conda] numpy 1.26.4 py312heda63a1_0 conda-forge
[conda] pytorch 2.4.0 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_6 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchtriton 3.0.0 py312 pytorch
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,module: ddp | low | Critical |
2,752,733,889 | storybook | [Bug]: Consecutive channel events on storybook@8 with "@storybook/react-webpack5" cause page to reload | ### Describe the bug
When using storybook@8 with @storybook/react-webpack5, emitting certain consecutive channel events causes the page to reload.
The problem does not reproduce when using react-vite.
"page reload" is the problem here
### Reproduction link
https://stackblitz.com/edit/github-axcvs6ro
### Reproduction steps
1. Go to above link
2. Open "example-button--primary" story
3. Execute the following script in devtools inside "storybook-preview-iframe" frame:
```js
window.__STORYBOOK_ADDONS_CHANNEL__.emit("updateGlobals", {globals: window.__STORYBOOK_ADDONS_CHANNEL__.data.setGlobals[0].globals})
window.__STORYBOOK_ADDONS_CHANNEL__.emit("setCurrentStory", { storyId: "" })
window.__STORYBOOK_ADDONS_CHANNEL__.emit("setCurrentStory", { storyId: "example-button--primary" })
```
This leads page to reload.
After removing the "updateGlobals" call or one of the "setCurrentStory" calls, the page does not reload.
### System
```bash
Storybook Environment Info:
System:
OS: Linux 5.0 undefined
CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Shell: 1.0 - /bin/jsh
Binaries:
Node: 18.20.3 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn <----- active
npm: 10.2.3 - /usr/local/bin/npm
pnpm: 8.15.6 - /usr/local/bin/pnpm
npmPackages:
@storybook/addon-essentials: ^8.5.0-beta.3 => 8.5.0-beta.3
@storybook/addon-interactions: ^8.5.0-beta.3 => 8.5.0-beta.3
@storybook/addon-onboarding: ^8.5.0-beta.3 => 8.5.0-beta.3
@storybook/addon-webpack5-compiler-swc: ^1.0.5 => 1.0.5
@storybook/blocks: ^8.5.0-beta.3 => 8.5.0-beta.3
@storybook/react: ^8.5.0-beta.3 => 8.5.0-beta.3
@storybook/react-webpack5: ^8.5.0-beta.3 => 8.5.0-beta.3
@storybook/test: ^8.5.0-beta.3 => 8.5.0-beta.3
storybook: ^8.5.0-beta.3 => 8.5.0-beta.3
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,752,739,384 | neovim | 'shelltemp' is broken in MSYS2/MINGW64 | ### Problem
Also reported here https://github.com/msys2/MINGW-packages/issues/16301
Summary:
1. Paste the following into an empty buffer:
```
first name,last name,email
john,smith,[email protected]
drew,neil,[email protected]
jane,doe,[email protected]
```
2. Sort rows 2 to $ using external [gnu sort](https://www.gnu.org/software/coreutils/sort)
3. `:2,$!sort -k2 -t,`
```
shell returned 127
E485: Can't read file C:\msys64\tmp\nvim.0\...
```
### Steps to reproduce
Run from MINGW64 shell.
```help
nvim --clean config.lua
i
```
```lua
vim.o.shelltemp = true
vim.o.shellcmdflag = '-c'
vim.o.shellxquote = ''
vim.o.shellquote = ''
```
```help
<ESC>:x<CR>
TMPDIR=any/temp/folder nvim --clean -u config.lua
i
```
```txt
first name,last name,email
john,smith,[email protected]
drew,neil,[email protected]
jane,doe,[email protected]
```
```help
<ESC>:2,$!sort -k2 -t,<CR>
```
### Expected behavior
The buffer is sorted by column 2 starting from row 2:
```
first name,last name,email
jane,doe,[email protected]
drew,neil,[email protected]
john,smith,[email protected]
```
The `'shell'` is set to `$SHELL` = `/usr/bin/bash` which MSYS2 converts to Windows-style `C:\msys64\usr\bin\bash.exe`.
I'm not sure why, but the following workaround works with the same `config.lua` from above.
```lua
vim.o.shell = 'cmd.exe /s /c C:\\msys64\\usr\\bin\\bash.exe'
```
### Nvim version (nvim -v)
v0.11.0-dev-1388+g39781be14b
### Vim (not Nvim) behaves the same?
no, vim 9.1.0866-1 compiled by MSYS2
### Operating system/version
Windows 11; MINGW64_NT-10.0-26100
### Terminal name/version
Windows Terminal 1.21.3231.0
### $TERM environment variable
xterm-256color
### Installation
Nightly build for Windows | bug,platform:windows | low | Critical |
2,752,754,740 | TypeScript | Add JSDoc `@export` tag | ### 🔍 Search Terms
`jsdoc @export `
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
TypeScript 5.5 added the [`@import`](https://devblogs.microsoft.com/typescript/announcing-typescript-5-5/#the-jsdoc-import-tag) JSDoc tag which can be used for type-only imports. It would be useful to add an analogous `@export` tag for type-only exports.
### 📃 Motivating Example
Let’s say you maintain a library that contains the following file:
```js
// types.js
/**
* @typedef {string} Username
*/
/**
* @typedef {number} UserId
*/
```
Maybe you want to expose these types from the main entry point. With the proposed JSDoc `@export` tag, you could do this; it would work the same as type-only exports in TypeScript files.
```js
// index.js
/**
* @import { Username } from './types.js'
*/
/**
* @export { Username }
* @export { UserId } from './types.js'
*/
```
### 💻 Use Cases
This is useful for having full control over exports in a library. It also brings parity between JSDoc-based types and TypeScript files.
I can think of 3 workarounds, all of which are not great.
1. Define a new type in the index file using `@typedef`
```js
// index.js
/**
* @typedef {import('./types.js').Username} Username
* @typedef {import('./types.js').UserId} UserId
*/
```
Pros:
- It works
Cons:
- It defines a new type
- Hover/autocompletion information is lost (#55572)
- Type parameters must be redefined
2. In addition to `index.js`, create a TypeScript file, e.g. `exports.ts`. Then in `exports.ts` write something along the lines of:
```ts
export { someValue, anotherValue } from './index.js'
export { Username, UserId } from './types.js'
```
and in `package.json`, write something like:
```json
{
"types": "./types/exports.d.ts",
"exports": {
"types": "./types/exports.d.ts",
"default": "./lib/index.js",
}
}
```
Pros:
- It works
- No new types are defined, but actual exports
- Hover information is preserved
- Type parameters are preserved
Cons:
- Exports from `index.js` need to be synced to `exports.ts` manually.
- You’re essentially tricking TypeScript into thinking `exports.ts` is the main entry, whereas in reality it’s `index.js`
3. Manually write the declaration file.
Pros:
- It works
- No new types are defined
Cons:
- You need to author the declaration files manually. | Suggestion,Awaiting More Feedback,Domain: JSDoc | low | Minor |
2,752,773,264 | flutter | [In-App-Purchase][iOS] Purchase Stream Not Triggering for Purchased Status in TestFlight on iOS on Subscription | ### Steps to reproduce
1. Set up a listener for the purchase stream:
```dart
InAppPurchase.instance.purchaseStream.listen(_listenToPurchaseUpdated);
```
2. Deploy the app to TestFlight on iOS.
3. Purchase a subscription in the app.
4. Observe the following:
- The purchase stream is triggered correctly for the **Pending** state.
5. Complete the subscription purchase.
6. Check if the purchase stream is triggered for the **Purchased** status.
### Notes:
- The issue occurs only for **subscriptions**, not regular in-app purchases.
- This behavior is observed **only in TestFlight**; it works correctly during development.
### Expected results
The purchase stream (`InAppPurchase.instance.purchaseStream`) should be triggered with the **Purchased** status after completing a subscription purchase.
### Actual results
The purchase stream (`InAppPurchase.instance.purchaseStream`) is **not triggered** with the **Purchased** status after completing a subscription purchase.
However, it works correctly for consumable purchases.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:async';
import 'dart:developer';
import 'package:in_app_purchase/in_app_purchase.dart';
class InAppPurchaseService {
late StreamSubscription<List<PurchaseDetails>> _subscription;
void init() {
log("init is called on InAppPurchaseService");
final instance = InAppPurchase.instance;
final Stream<List<PurchaseDetails>> purchaseUpdated =
instance.purchaseStream;
_subscription = purchaseUpdated.listen(
_listenToPurchaseUpdated,
);
}
void _listenToPurchaseUpdated(List<PurchaseDetails> purchaseDetailsList) {
purchaseDetailsList.forEach((PurchaseDetails purchaseDetails) async {
if (purchaseDetails.status == PurchaseStatus.error) {
} else if (purchaseDetails.status == PurchaseStatus.pending) {
/// This get called
} else if (purchaseDetails.status == PurchaseStatus.purchased) {
/// This is not called for subscription
///
/// Handle verification and consume the purchase
} else if (purchaseDetails.status == PurchaseStatus.canceled) {
} else if (purchaseDetails.status == PurchaseStatus.restored) {}
});
}
}
/// From the UI we will make purchase using
await InAppPurchase.instance.buyNonConsumable(
purchaseParam: PurchaseParam(
productDetails: _selectedProduct!,
),
);
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.22.0, on macOS 15.2 24C101 darwin-arm64, locale en-NP)
• Flutter version 3.22.0 on channel stable at /Users/pawanacharya/fvm/versions/3.22.0
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5dcb86f68f (8 months ago), 2024-05-09 07:39:20 -0500
• Engine revision f6344b75dc
• Dart version 3.4.0
• DevTools version 2.34.3
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/pawanacharya/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_SDK_ROOT = /Users/pawanacharya/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] IntelliJ IDEA Ultimate Edition (version 2022.1.4)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 74.0.1
• Dart plugin version 221.6103.1
[✓] VS Code (version 1.76.2)
• VS Code at /Users/pawanacharya/Desktop/Visual Studio Code.app/Contents
• Flutter extension version 3.92.0
[✓] Connected device (4 available)
• Pawan’s iPhone (mobile) • 00008120-001E49E4210BC01E • ios • iOS 18.2 22C152
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| platform-ios,p: in_app_purchase,package,a: production,P2,team-ios,triaged-ios | low | Critical |
2,752,774,247 | go | proposal: crypto: ignore rand io.Reader where behavior is not specified | There are a few crypto APIs that take an `io.Reader` as a source of random bytes, but that don't commit to how those bytes are used. This caused issues over and over, for example any time we wanted to change the algorithm. These days we both document that they are not deterministic, and use `randutil.MaybeReadByte` to somewhat enforce it. See #58637.
Now that we have GODEBUGs, it might be time to rip the band-aid off. I propose we start ignoring the random `io.Reader` parameter of the following APIs, and always use the system random source (`crypto/internal/sysrand.Read`, *not* `crypto/rand.Reader` which may be overridden by the application).
* `rsa.GenerateKey` and `rsa.GenerateMultiPrimeKey`
* `rsa.EncryptPKCS1v15`
* `ecdsa.GenerateKey`
* `ecdsa.SignASN1`, `ecdsa.Sign`, and `ecdsa.PrivateKey.Sign`
* `ecdh.Curve.GenerateKey`
Using `GODEBUG=cryptocustomrand=1` restores the old behavior. (Suggestions for a better name welcome.) This is a GODEBUG that I would like to remove in a few releases.
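To make the change concrete, here is a minimal sketch of an affected call site (the curve and the use of `crypto/rand.Reader` are just an example): today the reader argument is consumed, while under this proposal it would be ignored in favor of the system random source unless the GODEBUG above is set.
```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
	"io"
)

func main() {
	// Any io.Reader can be passed here today; under the proposal this
	// argument would be ignored and the system random source used instead,
	// unless GODEBUG=cryptocustomrand=1 is set.
	var r io.Reader = rand.Reader
	key, err := ecdsa.GenerateKey(elliptic.P256(), r)
	if err != nil {
		panic(err)
	}
	fmt.Println("generated key on", key.Params().Name)
}
```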
`rsa.SignPKCS1v15` is not randomized, while `rsa.SignPSS` and `rsa.EncryptOAEP` have a fairly well-specified way to use random bytes. Aside from those and `ed25519.GenerateKey` (see below), I think I listed all APIs in non-deprecated packages that take a random `io.Reader`.
This might be an issue for the crypto/tls tests, which defuse `MaybeReadByte` by producing a stream of identical bytes. That's an abuse of GenerateKey anyway, because there is no guarantee that algorithms that expect random inputs will work with constant repeating streams. See for example #70643.
`ed25519.GenerateKey` is a weird exception in that it is well defined, but also documented to use `crypto/rand.Reader` if nil is passed. This is annoying because it forces a dependency on `crypto/rand` and therefore on `math/big`. We can't just use `crypto/internal/sysrand.Read` because the user might have overridden `crypto/rand.Reader`. I am tempted to also propose replacing "`crypto/rand.Reader`" with "the system random source" but it's probably not worth the risk.
/cc @golang/security | Proposal,Proposal-Crypto | low | Critical |
2,752,775,969 | ant-design | Typography.Paragraph Ellipsis not consistent when using icon for expand/contract when within Row | ### Reproduction link
[antd reproduction template on CodeSandbox](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-jyh2k9)
### Steps to reproduce
Set the ellipsis property on a Typography.Paragraph element, with symbol set to provide icons for expanding/contracting the text, and place the element within a Row tag.
### What is expected?
It is expected that the element only ever takes up the number of rows specified and expands and contracts consistently.
### What is actually happening?
The shortened text is displayed correctly on initial render, with the icon appearing at the end of the row. After expanding and then contracting the text, the contracted version changes to include more text and pushes the icon onto the next row.
| Environment | Info |
| --- | --- |
| antd | 5.22.5 |
| React | 18.3.1 |
| System | Windows 10 Enterprise 19045.5131 |
| Browser | Google Chrome 131.0.6778.205 |
---
The functionality is correct and consistent when no symbol is used (i.e. the default text is displayed to expand/contract)
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,752,793,507 | deno | VSCode LSP Panic in Unsaved Jupyter Notebook | Version: Deno 2.1.4
When I open a new Jupyter Notebook in VSCode by doing the following
- Open Command Palette (`Ctrl + Shift + P`)
- Click `Create: New Jupyter Notebook`
- Select the `Deno` kernel
I get the following error messages

Here are the Deno Language Server logs
```
Starting Deno language server...
version: 2.1.4 (release, x86_64-pc-windows-msvc)
executable: C:\Users\Student\.deno\bin\deno.EXE
Connected to "Visual Studio Code" 1.96.2
Enabling import suggestions for: https://deno.land
Refreshing configuration tree...
Resolved Deno configuration file: "file:///E:/Documents/AdventOfCode2024/deno.json"
Resolved lockfile: "file:///E:/Documents/AdventOfCode2024/deno.lock"
============================================================
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: windows x86_64
Version: 2.1.4
Args: ["C:\\Users\\Student\\.deno\\bin\\deno.EXE", "lsp"]
thread 'main' panicked at C:\a\deno\deno\resolvers\node\package_json.rs:88:41:
called `Option::unwrap()` on a `None` value
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
[Error - 1:50:08 PM] Client Deno Language Server: connection to server is erroring. Shutting down server.
[Error - 1:50:08 PM] Connection to server got closed. Server will not be restarted.
[Error - 1:50:08 PM] Stopping server failed
Message: Pending response rejected since connection got disposed
Code: -32097
[Error - 1:50:08 PM] Stopping server failed
Message: Pending response rejected since connection got disposed
Code: -32097
Error retrieving config tasks: Error: Client is not running
```
**NB: This only happens when the Notebook is UNSAVED. It works correctly on saved files when there is a deno.json file in the same directory.** | bug,deno jupyter,panic | low | Critical |
2,752,800,942 | ant-design | Make XXXL breakpoint available from Grid.useBreakpoint | ### What problem does this feature solve?
The XXXL breakpoint was introduced for the Grid in 2022 (see the commit below), but it isn't available from the Grid.useBreakpoint hook.
The commit that introduced XXXL breakpoint https://github.com/ant-design/ant-design/pull/39105/commits/ee3fb62f9b5df949a898eb96db7c7f6cf5daeb1b
### What does the proposed API look like?
As with the existing breakpoints, `useBreakpoint()` should also return `xxxl` as a boolean, e.g.:
```ts
const { xxxl } = useBreakpoint();
```
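For illustration, a rough sketch of how a component might consume the new flag (the `xxxl` value is the proposed addition; everything else is the existing antd API):
```tsx
import React from 'react';
import { Col, Grid } from 'antd';

const ResponsiveCell: React.FC = () => {
  // `xxl` exists today; `xxxl` is the flag this request proposes.
  const { xxl, xxxl } = Grid.useBreakpoint();
  return <Col span={xxxl ? 4 : xxl ? 6 : 12}>content</Col>;
};

export default ResponsiveCell;
```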
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Major |
2,752,816,284 | pytorch | Floating Point Exception (core dumped) when running floordiv/remainder/fmod under torch.compile | ### 🐛 Describe the bug
It is likely a division-by-zero problem.
Under eager mode, these APIs raise a catchable `ZeroDivisionError` exception instead of crashing with a floating point error.
Here is the code to reproduce:
```
import torch
@torch.compile
def div(input,value):
    return torch.Tensor.floor_divide_(input,value) # changing the API to torch.fmod or torch.remainder also leads to the same error
input = torch.tensor([2,5])
value = torch.tensor([0])
div(input, value)
```
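For comparison, a sketch of the same call without `torch.compile` (assuming eager mode surfaces the error as a catchable Python exception rather than aborting the process):
```python
import torch

input = torch.tensor([2, 5])
value = torch.tensor([0])

try:
    # Same operation in eager mode: the integer division by zero is reported
    # as a Python-level exception instead of crashing the interpreter.
    torch.Tensor.floor_divide_(input, value)
except RuntimeError as e:
    print("eager mode raised:", e)
```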
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gitdeb1da1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 4368.1641
CPU min MHz: 2200.0000
BogoMIPS: 7000.73
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 16 MiB (32 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gitdeb1da1
[pip3] torchaudio==2.5.1+cu124
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.20.1+cu124
[pip3] triton==3.1.0
[conda] magma-cuda124 2.6.1 1 pytorch
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 2.1.2 py311h71ddf71_0 conda-forge
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gitdeb1da1 pypi_0 pypi
[conda] torchaudio 2.5.1+cu124 pypi_0 pypi
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchvision 0.20.1+cu124 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu | module: crash,triaged,oncall: pt2,oncall: cpu inductor | low | Critical |
2,752,817,045 | vscode | Let's stop using Icons for Secondary Sidebar views | Icons make the Copilot Edits view not discoverable.
Right now

My proposal

Code pointer to change (`true` to `false`) https://github.com/microsoft/vscode/blob/main/src/vs/workbench/browser/parts/auxiliarybar/auxiliaryBarPart.ts#L162
What do you think? I am happy to provide a PR to change this.
Along with this, it would be good to change "Chat" -> "Copilot Chat".
fyi @roblourens @pierceboggan | under-discussion,layout,workbench-views,workbench-auxsidebar,workbench-copilot | low | Major |
2,752,830,963 | angular | Export @defer typescript primitives for use in component codebehind | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
The `@defer` syntax is great, but some of the triggers would also be useful in TypeScript, such as via a decorator on a TypeScript function.
I have a lazy-loaded component that is fetched via the `async import` in my typescript, which is fed into a dynamic dialog service. This should follow the same logic/heuristics as the template-based defer, and I would love to be able to reuse:
* [`IdleScheduler`](https://github.com/angular/angular/blob/main/packages/core/src/defer/idle_scheduler.ts)
* [`TimerScheduler`](https://github.com/angular/angular/blob/main/packages/core/src/defer/timer_scheduler.ts)
It'd be great if the DOM triggers could somehow also be considered.
### Proposed solution
Minimal: export `IdleScheduler` and `TimerScheduler` from `@angular/core`
Best: create and provide a `@defer` decorator that can be used from TypeScript code that works with all the same functionality as the `@defer` block in templates
### Alternatives considered
I can re-implement what I need in user-land, but would prefer to reuse Angular internals for consistency. | area: core,core: defer | low | Minor |
2,752,880,136 | material-ui | [docs][Alert] Missing root slot for style customization | ### Summary
Currently, the `MuiAlert` component does not support the `paper` slot for direct styling customization in the theme file. Other MUI components, such as `MuiAutocomplete`, use the `slotProps` API to allow passing specific props to subcomponents like `paper`. Adding support for this in `MuiAlert` would simplify the application of custom `Paper` variants and other styles directly to the alert’s underlying `Paper` element.
### Examples
```
MuiAlert: {
defaultProps: {
slotProps: {
paper: {
variant: 'overlay', // Apply the overlay variant directly to the paper slot
},
},
},
styleOverrides: {
// Default styles for MuiAlert can still be used
},
}
```
### Motivation
1. Simplified Styling: Currently, developers need to override `styleOverrides` to apply specific styles such as custom `Paper` variants. Introducing the `paper` slot would allow this styling to be applied more easily through the `slotProps` API, similar to other components like `MuiAutocomplete`. This would streamline styling customization and reduce the need for manual overrides. (A sketch of the current workaround is shown after this list.)
2. Consistency with Other Components: Other components in MUI, such as `MuiAutocomplete`, support the `slotProps` API to customize internal elements (e.g., `paper`). Adding this capability to `MuiAlert` would align the API across different MUI components, enhancing consistency and developer experience.
3. Improved Maintainability: By allowing developers to pass props directly to the `paper` element (e.g., `variant`: 'overlay'), this approach reduces the need for duplicate code and manual style overrides, making the codebase easier to maintain and less prone to errors.
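As a rough sketch of the current workaround mentioned in point 1 (the selector and style values are only an example):
```
MuiAlert: {
  styleOverrides: {
    // Approximate the desired Paper "overlay" look by hand on the root slot.
    root: ({ theme }) => ({
      boxShadow: theme.shadows[8],
      backgroundImage: 'none',
    }),
  },
},
```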
**Search keywords**: alert, slot, defaultProps, paper variant | docs,component: alert | low | Critical |
2,752,889,957 | pytorch | `MemoryDenyWriteExecute` in systemd service causes `RuntimeError: could not create a primitive` | `MemoryDenyWriteExecute` would be nice to use, but when PyTorch is run in this context, it throws the error below, likely because it generates assembly at runtime into pages that are both +w and +x, which is generally a security issue. The generated code should have write permission removed (`-w`'d) once generation is done.
https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#MemoryDenyWriteExecute=
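For reference, a minimal sketch of the service configuration in question (the unit name and ExecStart path are made up; only the hardening option matters):
```ini
# Illustrative unit, e.g. /etc/systemd/system/comfyui.service
[Service]
ExecStart=/usr/bin/python3 /opt/comfyui/main.py
# With this set, mappings that are both writable and executable (and mprotect
# calls that would make writable pages executable) are refused, so the
# JIT-generated kernels cannot be created, which is presumably what surfaces
# as "could not create a primitive".
MemoryDenyWriteExecute=yes
```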
```
File "/nix/store/4b0mw59pv52w2kvli1hraqcybww0yy0z-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 549, in _conv_forward
return F.conv2d(
^^^^^^^^^
RuntimeError: could not create a primitive
```
A larger trace is below, for the application I'm running in a systemd service (comfyui)
```
model weight dtype torch.float32, manual cast: None
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SD1ClipModel
loaded completely 9.5367431640625e+25 235.84423828125 True
Requested to load SD1ClipModel
loaded completely 9.5367431640625e+25 235.84423828125 True
Requested to load BaseModel
loaded completely 9.5367431640625e+25 3278.812271118164 True
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
0%| | 0/1 [00:00<?, ?it/s] 0%| | 0/1 [00:00<?, ?it/s]
!!! Exception during processing !!! could not create a primitive
Traceback (most recent call last):
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/execution.py", line 324, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/execution.py", line 199, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/execution.py", line 170, in _map_node_over_list
process_inputs(input_dict, i)
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/execution.py", line 159, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/nodes.py", line 1467, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/nodes.py", line 1434, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 1020, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 918, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 904, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 873, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 857, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 714, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/k_diffusion/sampling.py", line 155, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 384, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 839, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 842, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 364, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 200, in calc_cond_batch
return executor.execute(model, conds, x_in, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/samplers.py", line 313, in _calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/model_base.py", line 128, in apply_model
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/model_base.py", line 157, in _apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 832, in forward
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 874, in _forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 39, in forward_timestep_embed
x = layer(x, emb)
^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 240, in forward
return checkpoint(
^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/util.py", line 191, in checkpoint
return func(*inputs)
^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 253, in _forward
h = self.in_layers(x)
^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfh
packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/css11jdci8dblbzwskpxi47ln5m8iw1x-comfyui-0.3.7/lib/python3.12/site-packages/comfy/ops.py", line 98, in forward
return super().forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/minbaa1r1viqhnfhq9gna38fgmj5psxc-python3.12-torch-2.5.1/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 549, in _conv_forward
return F.conv2d(
```
cc @malfet @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @seemethere @pytorch/pytorch-dev-infra | needs reproduction,module: error checking,module: convolution,triaged,module: mkldnn,security | low | Critical |
2,752,908,127 | godot | BoneAttachment3D doesn't update position / rotation of attachment correctly | ### Tested versions
Reproducible in 4.4Dev-5, 4.4Dev-7
### System information
Godot v4.4.dev7 - Manjaro Linux #1 SMP PREEMPT_DYNAMIC Tue Oct 8 03:24:49 UTC 2024 on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Mobile) - integrated Intel(R) HD Graphics 4600 (HSW GT2) - Intel(R) Core(TM) i7-4700HQ CPU @ 2.40GHz (8 threads)
### Issue description
If I move around or look around, the attachment doesn't follow correctly. If I force it, it looks correctly attached, but slightly off from how it should be.
### Steps to reproduce
Simply play the project and look around and move around.
The arrow will not move as it should.
If you comment out the else statement in IProjectile, it moves even more strangely.
### Minimal reproduction project (MRP)
I couldn't get the MRP under 10 MB because the FBX files alone are large enough to exceed that,
and they are essential for the MRP.
[MRP](https://drive.proton.me/urls/CT1T16RYM4#K1FdBXektiPR) | bug,needs testing,topic:animation | low | Minor |
2,752,927,665 | rust | `raw-dylib` usage in std broke thumbv7a-*-windows-msvc targets | Starting from `nightly-2024-02-26` rust version `1.78.0-nightly (0ecbd0605 2024-02-25)`, `std` for `thumbv7a-uwp-windows-msvc` no longer builds:
```bash
cargo +nightly-2024-02-26 build -Z build-std=std,panic_abort --target thumbv7a-uwp-windows-msvc --release
```
<details>
<summary>Output:</summary>
```
...
error: could not compile `std` (lib)
Caused by:
process didn't exit successfully: `C:\Users\bdbai\.rustup\toolchains\nightly-2024-02-26-x86_64-pc-windows-msvc\bin\rustc.exe --crate-name std --edition=2021 C:\Users\bdbai\.rustup\toolchains\nightly-2024-02-26-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\std\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=158 --crate-type rlib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg "feature=\"addr2line\"" --cfg "feature=\"backtrace\"" --cfg "feature=\"gimli-symbolize\"" --cfg "feature=\"miniz_oxide\"" --cfg "feature=\"object\"" --cfg "feature=\"panic_unwind\"" --cfg "feature=\"std_detect_dlsym_getauxval\"" --cfg "feature=\"std_detect_file_io\"" -C metadata=e432070f6e8a70e3 -C extra-filename=-e432070f6e8a70e3 --out-dir C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps --target thumbv7a-uwp-windows-msvc -Z force-unstable-if-unmarked -L dependency=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps -L dependency=C:\project_dir\target\release\deps --extern priv:addr2line=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libaddr2line-08740835d495a638.rmeta --extern alloc=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\liballoc-ffcd7e68b1b4d768.rmeta --extern priv:cfg_if=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libcfg_if-ed9aa18cde72cb83.rmeta --extern priv:compiler_builtins=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libcompiler_builtins-4dc2d4fb24785f49.rmeta --extern core=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libcore-e9d96bbb4564a7c8.rmeta --extern priv:hashbrown=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libhashbrown-bbafedfdf9ba97f2.rmeta --extern libc=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\liblibc-b57e343fcebc03aa.rmeta --extern priv:miniz_oxide=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libminiz_oxide-58abf01af28732ef.rmeta --extern priv:object=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libobject-d7dbd142c5e57f98.rmeta --extern priv:panic_abort=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libpanic_abort-d509d9c2350c464a.rmeta --extern priv:panic_unwind=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libpanic_unwind-ba05a49921f1f9c0.rmeta --extern priv:rustc_demangle=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\librustc_demangle-d8165352be6fe153.rmeta --extern priv:std_detect=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libstd_detect-937e46cacab5d26e.rmeta --extern priv:unwind=C:\project_dir\target\thumbv7a-uwp-windows-msvc\release\deps\libunwind-bd6619bcb4fe0878.rmeta -Z unstable-options --cap-lints allow --cfg backtrace_in_libstd` (exit code: 0xc000001d, STATUS_ILLEGAL_INSTRUCTION)
warning: build failed, waiting for other jobs to finish...
...
```
</details>
This looks very similar to the error reported in #120921 , which indicates somewhere in std is using `raw-dylib` already and triggers the same issue within llvm. Looks like #121337 is the first PR that introduces `raw-dylib` and so far there are a handful of other places (like https://github.com/rust-lang/rust/commit/f68529f2cf364469575b61fdeb3b901eb3d86328) using it as well. Given llvm is having issues for this particular platform, can we consider excluding `thumbv7a` from using `raw-dylib` in `std`, or at least there would be a way to workaround this?
Ping @ChrisDenton @bjorn3 | I-crash,A-LLVM,A-cross,T-compiler,O-windows-msvc,C-bug,-Zbuild-std | low | Critical |
2,752,934,748 | PowerToys | PowerToys 0.87.1-x64 does not install | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Installer
### Steps to reproduce
Attempted to install PowerToys as Admin and received a popup message that it could not be installed as the previous version must be uninstalled first. I tried to uninstall PowerToys with Control Panel - Programs and Features and got the same popup. I then used Iobit Uninstaller to remove it and all residual files, then rebooted the PC. The same error window came up when attempting to install PowerToys 0.87.1 and 0.87
### ✔️ Expected Behavior
Expected PowerToys to update as usual.
### ❌ Actual Behavior
Unable to update or install PowerToys
### Other Software
_No response_ | Issue-Bug,Area-Setup/Install,Needs-Triage,Needs-Team-Response | low | Critical |
2,752,946,979 | svelte | Inline Svelte components for tests | ### Describe the problem
I'll start off by saying I don't know whether this would be a Svelte feature, or maybe part of a Vite plugin, or even if this should be in another library.
When I'm writing component tests using something like `testing-library`, I'll need to write some code like this (which requires jumping from file to file):
```ts
import {render} from '@testing-library/svelte'
import MyComponent from './MyComponent.svelte'
const result = render(MyComponent, componentOptions, renderOptions)
```
This leaves me jealous of frameworks like React where you can inline JSX and see a lot more of your component code (all of it, if you really wanted to):
```jsx
import {render, screen} from '@testing-library/react'
import Fetch from './fetch'
test('loads and displays greeting', async () => {
render(<Fetch url="/greeting" />)
```
I'm attempting to port Tailwind's HeadlessUI-React library to Svelte 5, and in the process of porting, there are hundreds (thousands?) of inline JSX examples to test components in different ways, eg:
https://github.com/tailwindlabs/headlessui/blob/main/packages/%40headlessui-react/src/components/switch/switch.test.tsx
```jsx
it('should be possible to render an (on) Switch using a render prop', () => {
render(
<Switch checked={true} onChange={console.log}>
{({ checked }) => <span>{checked ? 'On' : 'Off'}</span>}
</Switch>
)
...
```
Writing a custom Svelte file per test would be annoying, but moreso, just trying to debug that becomes a nightmare very quickly (I know, I've tried).
### Describe the proposed solution
I have a partial solution that I'm kinda meh about, but think this might be a better first-party plugin or solution. Or maybe I can directly use the compiler or `createRawSnippet` to facilitate this, but I honestly am not sure about any of that.
Using a hacked together Vite plugin, I - on-the-fly - compile Svelte template strings into virtual modules, and async import them into my test methods.
https://github.com/RobotPajamas/headlessui-svelte/blob/main/src/lib/switch/switch.dom.test.ts
```ts
it("should be possible to use in an uncontrolled way", async () => {
let handleSubmission = vi.fn();
const component = await sveltify(`
<script>
import Switch from "$lib/switch/Switch.svelte";
let { handleSubmission } = $props();
</script>
<form
onsubmit={(e) => {
e.preventDefault();
handleSubmission(Object.fromEntries(new FormData(e.target)));
}}
>
<Switch name="notifications" />
<button id="submit">submit</button>
</form>
`);
render(component, { handleSubmission });
```
The plugin is here, just a couple dozen lines of code - cobbled together from other similar plugins I'd seen in the past (and some guesswork) when this library was originally intended for Svelte 3 or 4: https://github.com/RobotPajamas/headlessui-svelte/blob/24ef0380838e50241add1a86dc6aaaad89a8d21b/vite.config.ts#L27-L94
Some things I currently run into (without having spent much time to look into them, as this is "good enough" when compared against my overall goal):
- Hot-reload isn't working when you change string component content, need to re-run tests from scratch
- No type safety, until you run tests and see what fails
- All imports must be explicit, but it would be nice to ambiently `import Switch` just once in the test file to reduce total lines of code (e.g. 35 lines of `import Switch` in that one test file)
- Need to ensure my tests are all asynchronous in order to import the virtual component
- Requires type hinting via `types.d.ts` or a do-nothing function in the file to satisfy the type checker (`function sveltify(input: string): Promise<typeof SvelteComponent>`)
The feature idea would be some way to "natively" (or "pluginly") have a similar result as above. In my ideal world, it wouldn't be a bag of strings without type checking, but something ... else... that someone smarter than me can come up with 😄
### Importance
nice to have | feature request | low | Critical |
2,752,963,355 | flutter | WidgetsBinding.instance.platformDispatcher.displays.first.size returns Size(0.0, 0.0) zero on Windows | ### Steps to reproduce
Open a new Flutter project (default increment counter example)
Add the following code:
```
Size _screenSize = WidgetsBinding.instance.platformDispatcher.displays.first.size;
print("display: ${_screenSize}");
```
Run the project on Windows 11.
Only one monitor is connected to the device, with a resolution of 3440x1440 pixels and a refresh rate of 100 Hz.
### Expected results
`display: Size(3440.0, 1440.0)`
Output on Chrome
### Actual results
`display: Size(0.0, 0.0)`
Output on Windows 11
Version 10.0.22631 Build 22631
### Code sample
```
void _incrementCounter() {
setState(() {
_counter++;
Size _screenSize = WidgetsBinding.instance.platformDispatcher.displays.first.size;
print("display: ${_screenSize}");
});
}
```
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
Additional logs printed by:
```
Size _screenSize = WidgetsBinding.instance.platformDispatcher.displays.first.size;
print("display: ${_screenSize}");
print("display: ${WidgetsBinding.instance.platformDispatcher.displays}");
print("display: ${WidgetsBinding.instance.platformDispatcher.displays.first}");
```
```
flutter: display: Size(0.0, 0.0)
flutter: display: (Display(id: 0, size: Size(0.0, 0.0), devicePixelRatio: 0.0, refreshRate: 99.9750062484379))
flutter: display: Display(id: 0, size: Size(0.0, 0.0), devicePixelRatio: 0.0, refreshRate: 99.9750062484379)
```
The actual refresh rate of the monitor is 100 Hz.
### Flutter Doctor output
```
> flutter doctor -v
[√] Flutter (Channel stable, 3.22.3, on Microsoft Windows [Version 10.0.22631.4602], locale en-DE)
• Flutter version 3.22.3 on channel stable at C:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision b0850beeb2 (5 months ago), 2024-07-16 21:43:41 -0700
• Engine revision 235db911ba
• Dart version 3.4.4
• DevTools version 2.34.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
• Android SDK at C:\Users\user\AppData\Local\Android\sdk
• Platform android-33-ext5, build-tools 33.0.2
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 11.0.15+0-b2043.56-9505619)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.9.0)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.9.34607.119
• Windows 10 SDK version 10.0.22000.0
[√] Android Studio (version 2022.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 11.0.15+0-b2043.56-9505619)
[√] IntelliJ IDEA Community Edition (version 2022.1)
• IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.1
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
[√] VS Code (version 1.95.3)
• VS Code at C:\Users\user\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4602]
• Chrome (web) • chrome • web-javascript • Google Chrome 124.0.6367.207
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.99
[√] Network resources
• All expected network resources are available.
• No issues found!
``` | framework,platform-windows,has reproducible steps,P3,team-windows,triaged-windows,found in release: 3.27,found in release: 3.28 | low | Minor |
2,752,976,174 | angular | The `load` event on image elements is triggered twice with event replay | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
When using `withEventReplay()`, the `(load)` event listener is called twice: once by `invokeRegisteredReplayListeners -> invokeListeners`, and again by the `listener` instruction, which also sets up the `load` event listener.

### Please provide a link to a minimal reproduction of the bug
https://github.com/arturovt/ng-issue-event-replay-twice
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.0.4
Node: 20.11.1
Package Manager: yarn 1.22.22
OS: linux x64
Angular: 19.0.4
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, platform-server
... router, ssr
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.4
@angular-devkit/build-angular 19.0.4
@angular-devkit/core 19.0.4
@angular-devkit/schematics 19.0.4
@schematics/angular 19.0.4
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
_No response_ | area: core,core: event dispatch | low | Critical |
2,752,978,774 | pytorch | Pytorch 3.13t wheels for release 2.6 - triton dependency | ### 🐛 Describe the bug
While testing, I noticed that the wheel constraint does not work:
```
Requires-Dist: pytorch-triton==3.2.0+git35c6c7c6; platform_system == "Linux" and platform_machine == "x86_64" and python_version != "3.13t"
```
Workflow:
https://github.com/pytorch/pytorch/actions/runs/12427438642/job/34700799523#step:15:533
It looks like this is currently not supported:
Related Doc: https://packaging.python.org/en/latest/specifications/dependency-specifiers/
Discussion: https://discuss.python.org/t/environment-marker-for-free-threading/60007/4
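To make the limitation concrete, here is a small illustration (a sketch; the printed values are examples) of why a `python_version != "3.13t"` marker can never distinguish free-threaded builds: the marker's `python_version` is always just the major.minor string, and the free-threaded flag is only visible through the build configuration.
```python
import platform
import sysconfig

print(platform.python_version())                      # e.g. "3.13.1"
print(".".join(platform.python_version_tuple()[:2]))  # "3.13" <- what the `python_version` marker compares against
# The only hint that this is a free-threaded ("t") build is the build config:
print(sysconfig.get_config_var("Py_GIL_DISABLED"))    # 1 on free-threaded builds, 0/None otherwise
```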
Hence I propose the following:
For release and nightly builds, remove the triton constraint from the 3.13t wheels' METADATA:
```
Requires-Dist: pytorch-triton==3.2.0+git35c6c7c6; platform_system == "Linux" and platform_machine == "x86_64" and python_version != "3.13t"
```
When publishing these wheels to PyPI, publish them after the Linux 3.9-3.13 wheels are uploaded, as a separate step, to avoid possible issues with poetry.
### Versions
2.6.0
cc @seemethere @malfet @osalpekar | module: binaries,triaged | low | Critical |
2,752,984,436 | langchain | ChatOpenAI: bind_tools not callable after with_structured_output | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from langchain.tools import StructuredTool
class ResponseModel(BaseModel):
a_value:str = Field(description="This doesn't matter much")
def a_func(val: int):
return True
a_tool = StructuredTool.from_function(
func=a_func,
name="A func",
description="A function you will need",
)
llm = ChatOpenAI(model="gpt-4o-mini",temperature=0)
structured_llm = llm.with_structured_output(ResponseModel)
llm_with_tools = structured_llm.bind_tools([a_tool])  # <----- not available
```
### Error Message and Stack Trace (if applicable)
'RunnableSequence' object has no attribute 'bind_tools'
### Description
I am attempting to retrieve structured output in a JSON format (to pass via an API to a frontend), and I also require calling out to tools. I cannot figure out how to combine the two, or there is an issue with the code to do so.
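One possible workaround (a hedged sketch, not necessarily the intended API usage): bind both the function tool and the response schema as tools on the base model, rather than chaining `bind_tools` after `with_structured_output`. `a_tool` and `ResponseModel` below refer to the definitions in the example code above.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Bind the regular tool plus the response schema; the model can either call the
# function tool or "call" ResponseModel to emit the final structured answer.
llm_with_tools = llm.bind_tools([a_tool, ResponseModel])

msg = llm_with_tools.invoke("Do the task, then answer in the structured format")
for call in msg.tool_calls:
    if call["name"] == "ResponseModel":
        structured = ResponseModel(**call["args"])  # parsed structured output
```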
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64
> Python Version: 3.13.1 (main, Dec 3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)]
Package Information
-------------------
> langchain_core: 0.3.28
> langchain: 0.3.13
> langchain_community: 0.3.13
> langsmith: 0.2.4
> langchain_experimental: 0.3.4
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.4
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.58.1
> orjson: 3.10.9
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | Ɑ: models,investigate | low | Critical |
2,752,986,137 | material-ui | [docs][Select] `placeholder` prop is omitted | ### Related page
https://mui.com/material-ui/api/select/
### Kind of issue
Missing information
### Issue description
Following issue https://github.com/mui/material-ui/pull/44502, the `placeholder` prop has been removed from the `Select` component for having no effect. While I'm inclined to disagree (MUI *could* render a placeholder, and the [Select documentation](https://mui.com/material-ui/react-select/) even suggests how to render one), the API documentation should at least reflect that decision.
As of the time of writing, the [Select API documentation](https://mui.com/material-ui/api/select/) makes no mention of the omission of the `placeholder` prop. In fact, it refers to `OutlinedInput`:
> Props of the [OutlinedInput](https://mui.com/material-ui/api/outlined-input/) component are also available.
Which is where it is rightfully listed, leading developers to believe it should be available in `Select`.
I was ready to file a bug report, but having found that issue, I discovered `placeholder` was omitted intentionally. At first I went to the documentation to figure out what's what with this prop, but no mention of it was made, which led me to create the issue you're reading now.
### Context
_No response_
**Search keywords**: select placeholder | docs,component: select,scope: docs-infra,support: docs-feedback | low | Critical |
2,752,994,072 | flutter | Using Apple Pencil on a text field makes the full size keyboard unavailable until the textfield is tapped multiple times | ### Steps to reproduce
1. Press text field with finger, full size keyboard appears.
2. Dismiss the keyboard.
3. Press text field with Pencil, a menu appears in the corner with option to show mini keyboard.
4. Dismiss the menu.
5. Press text field with finger again, now you get the menu or mini keyboard instead of the full keyboard.
This seems to have something to do with the Pencil's Scribble feature, even though stylusHandwritingEnabled or scribbleEnabled is set to false on the text field. It only happens when Scribble is turned on in the iPad's Apple Pencil settings - if it's turned off, you always get the full size keyboard.
### Expected results
Compare to native behavior in for example Safari's search bar:
1. Press with finger, full size keyboard
2. Dismiss keyboard
3. Press with Pencil, menu in the corner
4. Dismiss menu
5. Press with finger, full size keyboard again
### Actual results
.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
void main() async {
WidgetsFlutterBinding.ensureInitialized();
runApp(const MainApp2());
}
class MainApp2 extends StatefulWidget {
const MainApp2({super.key});
@override
State<MainApp2> createState() => _MainApp2State();
}
class _MainApp2State extends State<MainApp2> {
@override
Widget build(BuildContext context) {
return CupertinoApp(home: DefaultTextStyle(style: TextStyle(), child: Container(color: Color(0xffaaaaaa), padding: EdgeInsets.all(50), child: Center(child:
CupertinoTextField(
stylusHandwritingEnabled: false
)
))));
}
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel master, 3.28.0-2.0.pre.38551, on macOS 13.6.7 22G720 darwin-arm64, locale en-US)
• Flutter version 3.28.0-2.0.pre.38551 on channel master at /Users/weirdhat/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 188d1e1999 (6 hours ago), 2024-12-20 17:13:10 +0800
• Engine revision 188d1e1999
• Dart version 3.7.0 (build 3.7.0-260.0.dev)
• DevTools version 2.41.0
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/weirdhat/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[!] Xcode - develop for iOS and macOS (Xcode 15.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15A240d
! CocoaPods 1.12.1 out of date (1.16.2 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.96.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• JK iPad 2024 (mobile) • 00008103-001659302668801E • ios • iOS 17.6.1 21G93
• macOS (desktop) • macos • darwin-arm64 • macOS 13.6.7 22G720 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 13.6.7 22G720 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
! Error: Browsing on the local area network for JACOB’s iPad (2). Ensure the device is unlocked and attached with a cable or associated with the same local area
network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
| a: text input,platform-ios,a: tablet,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.28 | low | Critical |
2,752,995,767 | angular | Event replay memory leak | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
Elements are not being garbage collected properly when the app is destroyed because they are captured inside the early event contract event info list. When calling `appRef.destroy()`, we can capture the heap snapshot and observe the detached `<app-root />`, which cannot be garbage collected:

### Please provide a link to a minimal reproduction of the bug
https://github.com/arturovt/ng-issue-event-replay-leak
### Please provide the exception or error you saw
Elements are captured in that part of the code:
https://github.com/angular/angular/blob/2c3630aafdf0f375e1e5fc45f9068a274b46b7c7/packages/core/primitives/event-dispatch/src/earlyeventcontract.ts#L69-L83
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.0.4
Node: 20.11.1
Package Manager: yarn 1.22.22
OS: linux x64
Angular: 19.0.4
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, platform-server
... router, ssr
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.4
@angular-devkit/build-angular 19.0.4
@angular-devkit/core 19.0.4
@angular-devkit/schematics 19.0.4
@schematics/angular 19.0.4
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
_No response_ | memory leak,area: core,P3,core: event dispatch | low | Critical |
2,753,024,551 | vscode | Git - Trial GIT_NO_OPT_LOCKS environment variable to avoid conflicts between git extension and git in terminal | null | plan-item,git,terminal | low | Minor |
2,753,036,339 | godot | [3.6] Copy Property Does Not Work | ### Tested versions
v3.6.stable.official [de2f0f147]
### System information
w11 64
### Issue description
When clicking "Copy Property," nothing is copied, and pasting still uses the previously stored content in the clipboard.
https://github.com/user-attachments/assets/b9aa25d3-f810-413b-8680-55a26b672692
### Steps to reproduce
See the video.
### Minimal reproduction project (MRP)
Any project. | bug,topic:editor | low | Minor |
2,753,043,413 | pytorch | Backward pass of reflection padding has no deterministic implementation | ### 🐛 Describe the bug
UserWarning: reflection_pad2d_backward_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True, warn_only=True)'. You can file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. (Triggered internally at ../aten/src/ATen/Context.cpp:91.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
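For context, a minimal sketch (assuming a CUDA device is available) that reproduces the warning:
```python
import torch
import torch.nn as nn

torch.use_deterministic_algorithms(True, warn_only=True)

pad = nn.ReflectionPad2d(1).cuda()
x = torch.randn(1, 3, 8, 8, device="cuda", requires_grad=True)
pad(x).sum().backward()  # reflection_pad2d_backward_cuda is nondeterministic -> UserWarning
```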
### Versions
Python-3.12.7 torch-2.5.1+cu118
cc @mruberry @kurtamohler | triaged,module: determinism | low | Critical |
2,753,045,748 | flutter | Add more platforms to MediaQueryData.alwaysUse24HourFormat's documentation | Currently `alwaysUse24HourFormat`'s documentation only discusses what happens on iOS and Android. This should also include desktop platforms (Windows, macOS, Linux) as well as whatever happens on Web. Looking through the engine code for the web, it seems like it's always set to `false` unless I'm missing something.
For reference, here is the current documentation:
> Whether to use 24-hour format when formatting time.
>
> The behavior of this flag is different across platforms:
>
> - On Android this flag is reported directly from the user settings called "Use 24-hour format". It applies to any locale used by the application, whether it is the system-wide locale, or the custom locale set by the application.
> - On iOS this flag is set to true when the user setting called "24-Hour Time" is set or the system-wide locale's default uses 24-hour formatting. | a: fidelity,d: api docs,P2,a: adaptivity,team-framework,triaged-framework | low | Minor |
2,753,054,741 | PowerToys | Change the name of the system-wide installer to indicate it's a system-wide installer | ### Description of the new feature / enhancement
**Issue:**
The global, system-wide installer does not specifically indicate that the installer is for the global, system-wide installation package.
In the same way that the per-user installation package is named "PowerToysUserSetup-0.87.1-x64.exe", the global installer should have its name indicate it is the global, system-wide installer like "PowerToysSystemSetup-0.87.1-x64.exe" or "PowerToysAllUsersSetup-0.87.1-x64.exe".
### Scenario when this would be used?
**Why this is important:**
I can't speak for others, but in my case I often download both versions of an installation package (and save them to my local file server) so that I can use the package most appropriate to a particular installation use case.
Having the installers self-identify as either per-user or system-wide is extremely useful. As it is, I am forced to rename the system-wide installer to make it self-identifying.
### Supporting information
**Severity:** Minimal.
This feature request is not for something critical or of high importance. It is simply there to make life easier for the end user and avoid confusion.
**Priority:** Low.
Though it would be desirable to have this done quickly, it is not critically important and should not take priority over more important work.
**Impact:** Low.
It does not impact product performance as it is simply a rename of the final executable.
**Possible impact scenarios:**
Possible impact scenarios are unit and/or build testing if they have a hard-coded product-name prefix. If this is true, the test suite would need to be searched for all instances of the old name and have the new name substituted. This should be a low-impact change. | Idea-Enhancement,Area-Setup/Install,Needs-Triage | low | Major |
2,753,064,201 | kubernetes | Prevent dueling during upgrade when `--requestheader-uid-headers` is only set on some servers | /assign stlaz
/milestone v1.33
xref: #115834 #129081
```
@stlaz / @enj, can you make sure an issue exists to track resolving dueling during upgrade when we *do* populate a default value for this in the future, and make sure that is planned for 1.33?
```
_Originally posted by @liggitt in https://github.com/kubernetes/kubernetes/issues/128986#issuecomment-2528720969_
| sig/auth,triage/accepted | low | Minor |
2,753,108,276 | pytorch | Log more contextual data when nan is detected under the anomaly mode | ### 🚀 The feature, motivation and pitch
Currently the anomaly mode only reports a nan is found at a particular op, without showing the full input/output tensors as well as their sizes. This makes it difficult to narrow down the root cause of nan.
PR #143633 addresses this.
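For reference, a minimal sketch of how the anomaly mode reports a NaN today (the exact backward node named in the error may vary):
```python
import torch

with torch.autograd.detect_anomaly():
    x = torch.zeros(1, requires_grad=True)
    y = x / x     # forward already produces nan (0/0)
    y.backward()  # DivBackward0 returns nan -> RuntimeError from the anomaly check
```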
### Alternatives
Log the problematic (nan) tensors in the python interface. This seems unnecessary and inefficient given the c++ anomaly mode code is already available.
### Additional context
We are in the process of debugging a nan issue caused by the dummy tensor in the All2All_Seq_Req_Wait backward pass here
https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/comm_ops.py#L110
With PR #143633, it was immediately clear we were looking at the dummy_tensor, which has dimension [1].
```
RuntimeError: Function 'All2All_Seq_Req_WaitBackward' returned nan values in its 0th output; num_outputs = 1; num_inputs = 0; outputs[0].shape = [1, ]; outputs[i] = nan
[ torch.cuda.FloatTensor{1} ]
```
This is the fix PR for dummy_tensor: https://github.com/pytorch/torchrec/pull/2648
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan | module: autograd,triaged,module: NaNs and Infs,actionable | low | Critical |
2,753,116,888 | PowerToys | Support for markdown formatting in PowerToys RUN utility | ### Description of the new feature / enhancement
Currently, when developing custom plugins for PowerToys Run, the output is limited to using the Result class (wox.Plugin.Result), which makes the plugin look outdated and not very customizable.
It would be great if Markdown formatting could be supported for plugin outputs, allowing for more modern and flexible presentation.
### Scenario when this would be used?
Many more complex plugins require more than just a single line of plain text.
### Supporting information
_No response_ | Idea-Enhancement,Product-PowerToys Run,Run-Plugin | low | Minor |
2,753,120,295 | vscode | Document how to use VS Code Tunnels reliably for more than 2 or 3 days without disconnection | Many vscode users have reported that it's difficult or impossible to keep vscode tunnels running for multiple days.
This an ongoing problem with no clear solution.
Maybe what is needed is a suggestion for some kind of hack that will send a ping or data every few minutes, to keep the connection alive?
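For illustration only, a hedged sketch of that kind of keep-alive hack (the path is hypothetical, and whether this actually prevents the disconnects is exactly what needs documenting):
```python
# Periodically touch a file inside the tunnelled workspace so the connection
# sees some activity every few minutes. Purely a guess at a mitigation.
import pathlib
import time

marker = pathlib.Path("/tmp/vscode-tunnel-keepalive")  # hypothetical location
while True:
    marker.write_text(str(time.time()))
    time.sleep(300)  # every 5 minutes
```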
Various issues about this have been created and then closed by the vscode developers, most recently here: https://github.com/microsoft/vscode/issues/230058#issuecomment-2393103391
Before this issue was closed, @connor4312 suggested that logging in with a Microsoft account might make VS Code tunnels more reliable.
We've done extensive testing of this over the last week, using Microsoft accounts instead of GitHub, and found that this did not fix the problem.
Typically a VS Code tunnel will stay online for roughly 2 to 5 days, and then fail.
Now that we've switched to MS accounts, we reliably get a failure message like this when the tunnel disconnects:
```
Dec 20 08:55:08 code-tunnel[3683048]: [2024-12-20 08:55:08] warn Error refreshing access token, will retry: failed to lookup tunnel for host token: response error: HTTP status 401 Unauthorized from https://usw3.rel.tunnels.api.visualstudio.com/tunnels/REDACTED-94lzwh5?tokenScopes=host&api-version=2023-09-27-preview (request ID fb9d5243-5aa8-48fd-aace-71f027a4fee8):
```
To fix this, we have found we need to follow these steps on Linux. It seems to require opening the VS Code GUI application; we have found no reliable way to do this from the CLI.
1. Close VS Code if it is running.
2. Re-open VS Code.
3. Stop the tunnel if it is running.
4. Start the tunnel.
5. Log in with Microsoft.
| bug | low | Critical |
2,753,125,877 | PowerToys | monaco preview tool is very slow for plain text | ### Description of the new feature / enhancement
Since the Monaco editor is very slow, it should be possible to not use it for plain text!
### Scenario when this would be used?
Monaco would still be used for formats other than plain text; plain-text files would skip it.
### Supporting information
_No response_ | Idea-Enhancement,Area-Quality,Product-File Explorer,Needs-Triage | low | Major |
2,753,148,152 | PowerToys | copy using CTRL-C from file preview (monaco) : format not supported by Office Clipboard | ### Microsoft PowerToys version
0.87.0
### Installation method
GitHub
### Running as admin
None
### Area(s) with issue?
File Explorer: Preview Pane, File Explorer: Thumbnail preview
### Steps to reproduce
1/ file preview of a txt file
2/ select a part of the document
3/ CTRL-C
4/ message: **Clipboard** item not collected: Format not supported by Office Clipboard

### ✔️ Expected Behavior
When copying using the Copy item in the context menu, it works well!

### ❌ Actual Behavior
see step to reproduce
### Other Software
monaco and WebView2. | Issue-Bug,Product-File Explorer,Needs-Triage | low | Minor |
2,753,156,878 | flutter | [Flutter Analyze] - Running `flutter analyze` leads to a Exception and crash | ### Steps to reproduce
As part of our CICD process, we are running `flutter analyze .` across our suite of flutter packages. Occasionally, the CICD process fails with the following kind of error
```
[app_foundation_account]: Exception: analysis server exited with code -9 and output:
21416info11/22/2024, 8:40:45 AM[app_foundation_account]: [stdout] {"event":"server.connected","params":{"version":"1.38.0","pid":9084}}
21417info11/22/2024, 8:40:45 AM[app_foundation_account]: [stdout] {"id":"1"}
21418info11/22/2024, 8:40:45 AM[app_foundation_account]: [stdout] {"event":"analysis.errors","params":{"file":"/harness/packages/layer2_app_foundation/app_foundation_account/analysis_options.yaml","errors":[]}}
21419info11/22/2024, 8:40:45 AM[app_foundation_account]: [stdout] {"event":"analysis.errors","params":{"file":"/harness/packages/layer2_app_foundation/app_foundation_account/pubspec.yaml","errors":[]}}
21420info11/22/2024, 8:40:45 AM
```
Our CICD system uses Harness. Our developers are able to run `flutter analyze .` on the same packages in their local environments without issue. We are currently using Flutter 3.24.3.
### Expected results
`flutter analyze .` runs successfully on our packages without throwing a code -9 error and exiting.
### Actual results
`flutter analyze .` throws a code -9 error and exits causing our CICD pipeline to fail.
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
Since this issue is with `flutter analyze .` failing, I'm not able to provide a specific reproducible code sample.
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
```
</details>
[harness cicd console log_20241125.txt](https://github.com/user-attachments/files/18213526/harness.cicd.console.log_20241125.txt)
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
Since this issue is happening in our CICD environment, I'm unable to capture the output of running `flutter doctor -v` there.
</details>
| tool,team-tool | low | Critical |
2,753,194,099 | transformers | 'do_sample' model default cannot be overridden | ### System Info
transformers 4.47.1, python 3.10.
Basically, while using Qwen2-VL-Instruct (whose default config sets do_sample=True), if I set do_sample=False in the model kwargs, I am unable to override the model config. I had to make changes to generation/utils.py to override it...
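One possible workaround sketch (not a confirmed fix; the model id is the public Qwen2-VL checkpoint, and `inputs` stands for whatever the processor produced): override the loaded generation config explicitly instead of relying on the `do_sample` kwarg being merged.
```python
from transformers import GenerationConfig, Qwen2VLForConditionalGeneration

model = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# Option 1: mutate the checkpoint's generation config so greedy decoding is the default.
model.generation_config.do_sample = False

# Option 2: pass an explicit GenerationConfig per call.
gen_cfg = GenerationConfig(do_sample=False, max_new_tokens=64)
# output_ids = model.generate(**inputs, generation_config=gen_cfg)
```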
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Use a model like Qwen2-VL-Instruct with default config using do_sample = True
2. Try to override the behavior through kwargs used during model.generate
### Expected behavior
If I set do_sample=False during generation, the default config of the model should be overridden | bug | low | Minor |
2,753,218,389 | vscode | easily accessible "Open Editors" dropdown menu | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
There should be an option to view the "Open Editors" view in a dropdown menu from the editor tab bar. This would be similar to the dropdown menu that appears at the end of the tab bar in both Firefox and Chrome. This would make this functionality more easily accessible since currently the open editors view can only be opened in a sidebar which can require multiple clicks to access depending on which view was last open in that sidebar. Additionally a sidebar needs a final click to dismiss while a menu closes immediately after taking an action such as switching to or closing an editor. Alternatives such as `workbench.action.showAllEditors` are not equivalent since they cannot be activated by the mouse and because quick-pick panels do not support features such as reordering editors or closing them via middle-click. | feature-request,open-editors | low | Minor |
2,753,236,415 | tauri | [bug] Android use convertFileSrc() with customized protocol to load local video source not work and trigger so many same range request header | ### Describe the bug
On the Android platform, when using the convertFileSrc method with a customized protocol to load a local file as a video src, the request header looks like `range: bytes=0-` when the video loads. When loading a large video file, this triggers an OOM and a crash. If the response does not contain the whole file, it triggers many identical range requests, such as `range: bytes=123-`, and the video cannot play. However, it works well with an http src.
### Reproduction
// customized response
http::Response::builder()
.status(code)
.header("Content-Type", get_mime_type(&path))
.header("Connection", "Keep-Alive")
.header("Keep-Alive", "timeout=58")
.header("Accept-Ranges", "bytes")
.header("Last-Modified", last_modified_str)
.header("Content-Length", (byte_range.2).to_string())
.header(
"Content-Range",
format!("bytes {}-{}/{}", byte_range.0, byte_range.1, file_size),
)
.body(data)
.unwrap();
### Expected behavior
I expect the customized protocol to behave the same as a normal http src, because the same customized protocol works well on iOS. Does the Android WebView load video requests in the same way as iOS?
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.17763 x86_64 (X64)
✔ WebView2: 131.0.2903.99
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.79.0 (129f3b996 2024-06-10)
✔ cargo: 1.79.0 (ffa9cf99a 2024-06-03)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.12.2
- npm: 10.5.0
[-] Packages
- tauri : 2.1.1
- tauri-build : 2.0.3
- wry : 0.47.2
- tao : 0.30.8
- tauri-cli : 1.6.0
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-fs : 2.0.3
- @tauri-apps/plugin-fs : 2.0.3 (outdated, latest: 2.2.0)
- tauri-plugin-localhost : 2.0.1
- @tauri-apps/plugin-localhost : not installed!
- tauri-plugin-persisted-scope : 2.0.3
- @tauri-apps/plugin-persisted-scope : not installed!
- tauri-plugin-sql : 2.0.2
- @tauri-apps/plugin-sql : not installed!
- tauri-plugin-http : 2.0.3
- @tauri-apps/plugin-http : not installed!
- tauri-plugin-upload : 2.1.0
- @tauri-apps/plugin-upload : 2.2.1
- tauri-plugin-os : 2.0.1
- @tauri-apps/plugin-os : 2.0.0 (outdated, latest: 2.2.0)
- tauri-plugin-store : 2.1.0
- @tauri-apps/plugin-store : 2.1.0 (outdated, latest: 2.2.0)
- tauri-plugin-barcode-scanner : 2.0.1
- @tauri-apps/plugin-barcode-scanner : not installed!
- tauri-plugin-shell : 2.0.2
- @tauri-apps/plugin-shell : 2.0.1 (outdated, latest: 2.2.0)
- tauri-plugin-log : 2.0.2
- @tauri-apps/plugin-log : 2.0.1 (outdated, latest: 2.2.0)
```
### Stack trace
```text
12-20 17:42:59.596 21235 21290 I RustStdoutStderr: headerName:"origin",headerValue:Ok("http://tauri.localhost")
12-20 17:42:59.596 21235 21290 I RustStdoutStderr: headerName:"referer",headerValue:Ok("http://tauri.localhost/")
12-20 17:42:59.597 21235 21290 I RustStdoutStderr: headerName:"accept-encoding",headerValue:Ok("identity;q=1, *;q=0")
12-20 17:42:59.597 21235 21290 I RustStdoutStderr: headerName:"range",headerValue:Ok("bytes=753664-")
12-20 17:42:59.597 21235 21290 I RustStdoutStderr: headerName:"accept",headerValue:Ok("*/*")
12-20 17:43:01.720 21235 21290 I RustStdoutStderr: headerName:"origin",headerValue:Ok("http://tauri.localhost")
12-20 17:43:01.720 21235 21290 I RustStdoutStderr: headerName:"referer",headerValue:Ok("http://tauri.localhost/")
12-20 17:43:01.721 21235 21290 I RustStdoutStderr: headerName:"accept-encoding",headerValue:Ok("identity;q=1, *;q=0")
12-20 17:43:01.721 21235 21290 I RustStdoutStderr: headerName:"range",headerValue:Ok("bytes=753664-")
12-20 17:43:01.721 21235 21290 I RustStdoutStderr: headerName:"accept",headerValue:Ok("*/*")
```
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,753,294,424 | rust | auto trait leakage can be used to leak arbitrary types | When leaking the hidden type of opaques when proving auto traits, we can leak their hidden type to the caller by relying on type inference. See https://github.com/lcnr/random-rust-snippets/issues/13 for minimized examples.
Note that this allows us to access foreign closures during typeck/borrowck. This very easily results in an ICE, as this code expects all encountered closures to be local:
```rust
// crate dep
struct WaddupGamers<T, U>(Option<T>, U);
impl<T: Leak<Assoc = U>, U> Unpin for WaddupGamers<T, U> {}
pub trait Leak {
type Assoc;
}
impl<T> Leak for T {
type Assoc = T;
}
pub fn define<T>() -> impl Sized {
WaddupGamers(None::<T>, || ())
}
// root
#![feature(type_alias_impl_trait)]
#![allow(unused)]
#![crate_type = "rlib"]
use dep::*;
fn require_auto<T: Unpin>(x: T) -> T { x }
type NameMe<T> = impl Sized;
fn leak<T>() -> NameMe<T>
where
T: Leak<Assoc = NameMe<T>>,
{
// Proving `impl Sized: Unpin` constrains `NameMe<T>` to
// the closure of `define`.
let opaque = require_auto(define::<T>());
let closure;
loop {}
return closure; // This constrains this infer var to that closure
}
```
results in
```
thread 'rustc' panicked at compiler/rustc_hir_typeck/src/coercion.rs:1202:62:
DefId::expect_local: `DefId(20:20 ~ dep[6a51]::define::{closure#0})` isn't local
```
https://github.com/rust-lang/rust/blob/fcc1615e47100b376d0a6166faccdd4a8253c314/compiler/rustc_hir_typeck/src/coercion.rs#L1202
There are a lot of such uses; this was simply the first one I've triggered.
2,753,300,281 | pytorch | String representation of nn.MultiheadAttention should contain arguments | ### 🐛 Describe the bug
Following the recommendation of the [python docs](https://docs.python.org/3/reference/datamodel.html#object.__repr__), the string representation of an object should contain enough information to be reconstructable. Hence, the `nn.MultiheadAttention` class should follow this, like other modules do, cf. `nn.Linear`, `nn.RNN`, etc.
Current behaviour:
```
import torch.nn as nn
repr(nn.MultiheadAttention(num_heads=2, embed_dim=4)) = 'MultiheadAttention( (out_proj): NonDynamicallyQuantizableLinear(in_features=4, out_features=4, bias=True))'
```
Expected behaviour:
```
repr(nn.MultiheadAttention(num_heads=2, embed_dim=4)) = 'MultiheadAttention(num_heads=2, embed_dim=4)'
```
For example the string representation of a linear layer allows you to reconstruct it:
```
repr(nn.Linear(in_features=3, out_features=5)) = 'Linear(in_features=3, out_features=5, bias=True)'
```
as it contains all the information about the arguments passed. Modules with more parameters, such as `nn.RNN`, solve this by repeating all arguments that were passed.
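For illustration, a hedged sketch of what an `extra_repr` for `nn.MultiheadAttention` could look like, monkey-patched from user code (the attribute names exist on the module today, but this is not the upstream implementation, and the printed output is approximate):
```python
import torch.nn as nn

def _mha_extra_repr(self) -> str:
    return (f"embed_dim={self.embed_dim}, num_heads={self.num_heads}, "
            f"dropout={self.dropout}, batch_first={self.batch_first}")

nn.MultiheadAttention.extra_repr = _mha_extra_repr

print(nn.MultiheadAttention(embed_dim=4, num_heads=2))
# MultiheadAttention(
#   embed_dim=4, num_heads=2, dropout=0.0, batch_first=False
#   (out_proj): NonDynamicallyQuantizableLinear(in_features=4, out_features=4, bias=True)
# )
```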
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.29.6
Libc version: N/A
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 10:07:17) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.3.3
[pip3] torch==2.5.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchtext==0.17.2
[pip3] torchvision==0.20.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 2.3.3 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchmetrics 1.4.0.post0 pypi_0 pypi
[conda] torchtext 0.17.2 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,753,304,541 | vscode | Hover dialog no longer closes when pressing Esc, must be focused |
Type: <b>Bug</b>
I used to press `ctrl+k ctrl+i` to open the hover menu, and press `esc` to close it.
Now, the hover menu is no longer focused when opened, and in order to close it, it is necessary to click it with the mouse or use the very long shortcut sequence `ctrl+k ctrl+i` (open) + `ctrl+k ctrl+i` (again, to focus) `esc` (close).
Could you please revert to when the hover was focused on open? I'm not sure the current behavior is accessible either.
VS Code version: Code 1.96.0 (138f619c86f1199955d53b4166bef66ef252935c, 2024-12-11T02:29:09.626Z)
OS version: Linux x64 6.8.0-49-generic
Modes:
<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | info-needed,editor-hover | low | Critical |
2,753,305,219 | flutter | `flutter test --platform chrome` on plugins supporting `ffi` should skip `@TestOn('vm')` | ### Use case
Some plugins support both web and native interop via `dart:ffi`. If you write a unit test that imports directly from the source tree without a conditional import (because it's guarded much higher in the import tree when actually used as a library), tests fail even though they're marked `@TestOn('vm')`, because imports are evaluated regardless of the annotation.
```
flutter test --platform chrome --test-randomize-ordering-seed=random --exclude-tags canvasKit
org-dartlang-app:///native_memory_test.dart:4:8: Error: Dart library 'dart:ffi' is not available on this platform.
import 'dart:ffi';
^
Context: The unavailable library 'dart:ffi' is imported through these packages:
native_memory_test.dart => dart:ffi
native_memory_test.dart => package:sentry_flutter => dart:ffi
native_memory_test.dart => package:sentry_flutter => package:ffi => dart:ffi
replay/replay_native_test.dart => package:sentry_flutter => dart:ffi
replay/replay_native_test.dart => package:sentry_flutter => package:ffi => dart:ffi
```
Writing conditional imports just to be able to test stuff is rather cumbersome.
### Proposal
I have a couple of alternative proposals for how running `flutter test` for non-ffi-compatible targets could be improved:
1. add an exclude filter based on files to `flutter test` command
2. exclude files automatically if they contain `@TestOn('vm')`
I'm willing to submit a PR if we agree on a proposal (be it one from above or something else) | a: tests,tool,platform-web,c: proposal,team-tool | low | Critical |
2,753,319,835 | pytorch | Potential rooms for fewer recompilations by introducing higher-level guards | ### 🐛 Describe the bug
I encountered this while investigating recompilations in #128071. The relevant model code is [here](https://github.com/HazyResearch/based/blob/5cee0bf62be1582580d073af069b96f7fb8dc6b2/based/models/mixers/convolution.py#L131-L146).
## Repro
```python
import torch
@torch.compile(backend="eager")
def f(x, int_dict, n):
if n in int_dict:
return x + 1
return x + 2
x = torch.ones(2)
f(x, {1 : '1'}, 1)
f(x, {1 : '1', 2 : '2'}, 1)
f(x, {2 : '2'}, 2)
```
Running `TORCH_LOGS="recompiles" repro.py` gives 2 recompilations:
```
V1220 10:52:51.577000 12464 torch/_dynamo/guards.py:2817] [0/1] [__recompiles] Recompiling function f in /Users/ryanguo99/Documents/work/scratch/test-dict-contains-guards.py:3
V1220 10:52:51.577000 12464 torch/_dynamo/guards.py:2817] [0/1] [__recompiles] triggered by the following guard failure(s):
V1220 10:52:51.577000 12464 torch/_dynamo/guards.py:2817] [0/1] [__recompiles] - 0/0: len(L['int_dict']) == 1
V1220 10:52:51.592000 12464 torch/_dynamo/guards.py:2817] [0/2] [__recompiles] Recompiling function f in /Users/ryanguo99/Documents/work/scratch/test-dict-contains-guards.py:3
V1220 10:52:51.592000 12464 torch/_dynamo/guards.py:2817] [0/2] [__recompiles] triggered by the following guard failure(s):
V1220 10:52:51.592000 12464 torch/_dynamo/guards.py:2817] [0/2] [__recompiles] - 0/1: L['n'] == 1
V1220 10:52:51.592000 12464 torch/_dynamo/guards.py:2817] [0/2] [__recompiles] - 0/0: KeyError on L['int_dict'][1]
```
Relevant guards by running `TORCH_LOGS="guards" python repro.py`:
- For `f(x, {1 : '1'}, 1)`
```
[__guards] TREE_GUARD_MANAGER:
[__guards] +- RootGuardManager
[__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:493 in init_ambient_guards
[__guards] | +- GLOBAL_STATE: ___check_global_state()
[__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
[__guards] | +- GuardManager: source=L['n'], accessed_by=FrameLocalsGuardAccessor(key='n', framelocals_idx=2)
[__guards] | | +- EQUALS_MATCH: L['n'] == 1 # if n in int_dict: # scratch/test.py:22 in f
[__guards] | +- GuardManager: source=L['x'], accessed_by=FrameLocalsGuardAccessor(key='x', framelocals_idx=0)
[__guards] | | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[2], stride=[1]) # return x + 1 # scratch/test.py:23 in f
[__guards] | | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # return x + 1 # scratch/test.py:23 in f
[__guards] | +- GuardManager: source=L['int_dict'], accessed_by=FrameLocalsGuardAccessor(key='int_dict', framelocals_idx=1)
[__guards] | | +- DICT_LENGTH: len(L['int_dict']) == 1 # if n in int_dict: # scratch/test.py:22 in f
[__guards] | | +- GuardManager: source=L['int_dict'][1], accessed_by=DictGetItemGuardAccessor(1)
[__guards] | | | +- EQUALS_MATCH: L['int_dict'][1] == '1' # if n in int_dict: # scratch/test.py:22 in f
```
- For `f(x, {2 : '2'}, 2)`
```
[__guards] TREE_GUARD_MANAGER:
[__guards] +- RootGuardManager
[__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:493 in init_ambient_guards
[__guards] | +- GLOBAL_STATE: ___check_global_state()
[__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
[__guards] | +- GuardManager: source=L['n'], accessed_by=FrameLocalsGuardAccessor(key='n', framelocals_idx=2)
[__guards] | | +- TYPE_MATCH: ___check_type_id(L['n'], 4308563856) # if n in int_dict: # scratch/test.py:22 in f
[__guards] | +- GuardManager: source=L['x'], accessed_by=FrameLocalsGuardAccessor(key='x', framelocals_idx=0)
[__guards] | | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.float32, device=None, requires_grad=False, size=[2], stride=[1]) # return x + 2 # scratch/test.py:24 in f
[__guards] | | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # return x + 2 # scratch/test.py:24 in f
[__guards] | +- GuardManager: source=L['int_dict'], accessed_by=FrameLocalsGuardAccessor(key='int_dict', framelocals_idx=1)
[__guards] | | +- DICT_LENGTH: len(L['int_dict']) == 1 # if n in int_dict: # scratch/test.py:22 in f
[__guards] | | +- GuardManager: source=L['int_dict'][1], accessed_by=DictGetItemGuardAccessor(1)
[__guards] | | | +- EQUALS_MATCH: L['int_dict'][2] == '2'
[__guards] +- LAMBDA_GUARD: L['n'] == 2 # if n in int_dict: # scratch/test.py:22 in f (_dynamo/variables/tensor.py:1200 in evaluate_expr)
```
## Thoughts
As shown above, currently Dynamo specializes pretty hard on `int_dict` and `n`, when processing the expression `n in int_dict`; this causes a lot of recompilations both in this contrived example and #128071.
However, in theory we don't care about the specifics of `int_dict` and `n`, rather we just care about whether `int_dict` contains `n`. Thus, we could emit a more general and higher level guard `DICT_CONTAINS` that's parameterized over both the dictionary source and integer source (the current `DICT_CONTAINS` still specializes over the integer source, as we only allow [1 source](https://github.com/pytorch/pytorch/blob/3ee029d4020c40a07c7e20d4f36f08d9697f8d8f/torch/_guards.py#L217) for each guard).
Is this a big problem? For #128071 we could circumvent it by fixing the graph breaks; in other words, this rare-looking scenario is exposed by graph breaks in the wild.
Fixing this feels like a non-trivial undertaking, and it's unclear what the ROI is, so I'm creating this issue to track some findings for now.
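As a user-side illustration (a sketch, not a fix for the guard system itself), hoisting the containment check out of the compiled region sidesteps the specialization, since Dynamo then only guards on a single bool:
```python
import torch

@torch.compile(backend="eager")
def f(x, contains):
    # `contains` is computed outside the compiled region, so the guard is just
    # on its boolean value rather than on the dict contents and the key.
    if contains:
        return x + 1
    return x + 2

x = torch.ones(2)
f(x, 1 in {1: '1'})
f(x, 1 in {1: '1', 2: '2'})  # no recompilation
f(x, 2 in {2: '2'})          # still no recompilation
```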
### Error logs
_No response_
### Versions
main d8ea4ce63, python 3.12
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,753,323,149 | godot | PathFollow3D Ratio w/ Loop Enabled Incorrectly Sets to 1 When FPS is Capped to 60 | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Ubuntu 24.04.1 LTS 24.04 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 Ti (nvidia; 535.183.01) - 12th Gen Intel(R) Core(TM) i9-12900F (24 Threads)
### Issue description
When capturing a video using Godot's movie writer option, I noticed that a platform I animate using a `Path3D` + `RemoteTransform3D` would seemingly disappear on the first frame that it would start to move; but never during normal gameplay without movie writer enabled.
After some digging, the issue seems to be a combination of a few things:
1. A frame rate cap of 60fps (which movie writer enforces, or that can be set using the `application/run/max_fps` setting)
2. The `loop` property of the `PathFollow3D` node being set to `true` (which it is by default)
When both these conditions are met and the value of the `progress_ratio` property is monitored, you can see that on the first frame on which it begins to be animated from `0.0` to `1.0`, it jumps to a value of `1.0` before going back down to `~0` and animating as expected:
```
0
0
0
1
0.00666666636243
0.01333333272487
0.01999999955297
0.02666666544974
```
If either of the prerequisites are not met (by either disabling `loop` or removing the FPS cap), the log will instead read as you'd expect, without the jump to `1.0`:
```
0
0
0
0
0.00537359947339
0.01092915516347
0.01928600110114
0.02484155632555
0.03317489102483
```
When testing this in my main project it consistently happened every time, whereas in the MRP it sporadically behaves as expected instead, so there may be an element of chance to it too.
My assumption, based on the fact that disabling `loop` prevents it from happening, is that when the value is set at the start of the animation it is somehow getting a value slightly below `0` that then loops back to `1.0`; which may mean the issue is with `AnimationPlayer` rather than `PathFollow3D`.
I feel as though it may have some relation to https://github.com/godotengine/godot/issues/98602 as setting the initial value in the animation player to `0.001` seems to prevent the erroneous value of `1.0` appearing; however, unlike that issue it's not affecting the initial value when the scene loads, it is only later in execution when the ratio is being animated that it happens.
### Steps to reproduce
- Run the MRP and take note of the values logged to the console
- Optionally, if testing with movie writer enabled, move frame by frame through the video and you should see the mesh disappear on the first frame that it begins to animate and immediately return on the next one
### Minimal reproduction project (MRP)
[path_follow_3d_60fps_bug.zip](https://github.com/user-attachments/files/18214320/path_follow_3d_60fps_bug.zip)
| bug,topic:animation,topic:3d | low | Critical |
2,753,335,229 | rust | Bad parse error on token sequences `safe unsafe` and `unsafe safe` | ### Code
```Rust
// #1
unsafe extern {
safe unsafe fn foo();
unsafe safe fn bar();
}
// #2
unsafe safe fn baz() {}
safe unsafe fn qux() {}
// #3
unsafe safe extern "C" fn ham() {}
safe unsafe extern "C" fn egg() {}
```
### Current output
```Shell
# example #1:
`safe` must come before `unsafe`: `safe unsafe`
`unsafe` must come before `safe`: `unsafe safe`
# Example #2
items outside an `unsafe extern {...} block may not be annotated with `safe``
`safe` must come before `unsafe`: `safe unsafe`
`unsafe` must come before `safe`: `unsafe safe`
# example #3 is the same as #2
```
### Desired output
```Shell
# Example #1
error: `safe` and `unsafe` are incompatible
note: extern functions are unsafe by default
help: if you want to make this function safe, remove `unsafe`
# Example #2
error: items outside an `unsafe extern {...}` block may not be annotated with `safe`
help: remove `safe`
# example #3
error: items outside an `unsafe extern {...}` block may not be annotated with `safe`
note: extern functions are safe by default
help: remove `safe`
```
### Rationale and extra context
look at the errors for #2 and #3:
```
`safe` must come before `unsafe`: `safe unsafe`
`unsafe` must come before `safe`: `unsafe safe`
```
this means that it *thinks* that the order is incorrect
which is likely because they are both included in the check for order
### Other cases
```Rust
// #1
unsafe safe static FOO: i32 = 42;
unsafe safe const BAR: i32 = 42;
unsafe safe trait Baz {}
unsafe trait _X {}
unsafe safe impl _X for i32 {}
// #2
unsafe extern {
unsafe safe static QUX: usize = 42;
// the rest of the examples just copy pasted
}
```
Output:
```shell
# #1
items outside an `unsafe extern` blocks cannot be `safe`
# shuffle safe and unsafe like usual
# #2
# same as 1 but without the 'items outside `unsafe extern` ...'
```
### Rust Version
```Shell
$ rustc --version --verbose
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
### Anything else?
This is a follow up to [#133586](https://github.com/rust-lang/rust/issues/133586) and [#133631](https://github.com/rust-lang/rust/issues/133630)
[#133586](https://github.com/rust-lang/rust/issues/133586) has the PR [#133618](https://github.com/rust-lang/rust/pull/133618)
(all of the above are closed) | A-diagnostics,A-parser,T-compiler,D-confusing,D-invalid-suggestion,D-incorrect,D-terse | medium | Critical |
2,753,343,535 | PowerToys | New+ creates files/folders shifted to the left of the cursor position, often on the wrong monitor in multi-display setups | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
New+
### Steps to reproduce
1. Enable the "new+" feature in PowerToys.
2. Use "new+" to create a new file or folder with a template.
3. Observe that the file or folder is created to the left of the mouse cursor position, often ending up on the adjacent monitor if you have multiple screens.
### ✔️ Expected Behavior
The new file or folder should be created at the exact position of the mouse cursor when the "new+" command is used.
### ❌ Actual Behavior
The new file or folder is created to the left of the mouse cursor's position, and on multi-monitor setups, it may end up on an adjacent screen. This can create confusion as it feels like no file or folder has been created.
### Other Software
Windows 10 | Issue-Bug,Needs-Triage,Product-New+ | low | Minor |
2,753,407,936 | TypeScript | TypeScript slow project load times for large projects on MacOS | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: v1.96.2
- OS Version: MacOS
- TypeScript: v5.7.2 (workspace version)
- Node.JS: v20.18.0
Steps to Reproduce:
1. Add the following to `.vscode/settings.json`
```jsonc
"typescript.tsserver.log": "verbose",
"files.watcherExclude": {
"**/node_modules": true
},
```
This causes:
```
2024-12-20 13:49:42.785 [info] <syntax> Falling back to legacy node.js based file watching because of user settings.
```
2. Open a TS source file belonging to a large TS project with many project references in VSCode running on MacOS. For example, our project references a total of 25k+ files (transitively through project references):
```
Info 57414[13:12:03.001] Project '.../web/src/pages/editor/tsconfig.json' (Configured)
Info 57414[13:12:03.001] Files (26167)
```
3. Notice that the tsserver logs are littered with the following log lines:
```
Info 57094[13:34:37.437] DirectoryWatcher:: Triggered with .../web/src/pages/editor :: WatchInfo: .../web/src/pages/editor 1 {"watchFile":2,"watchDirectory":2,"excludeDirectories":["/Users/mjames/work/canva/web/**/node_modules",".../web/**/target",".../web/**/bazel-bin",".../web/**/bazel-out",".../web/**/bazel-canva"]} Config: /Users/mjames/work/<redacted>/tsconfig.json WatchType: Wild card directory
Info 57095[13:34:37.437] Invoking sourceFileChange on .../web/src/pages/editor/editor_bootstrap_proto.ts:: 1
Info 57096[13:34:37.438] Scheduled: .../web/src/pages/editor/tsconfig.json
Info 57097[13:34:37.438] Scheduled: *ensureProjectForOpenFiles*
Info 57098[13:34:37.438] Invoking sourceFileChange on .../web/src/pages/<redacted>:: 1
Info 57099[13:34:37.438] Scheduled: .../web/src/pages/editor/tsconfig.json, Cancelled earlier one
Info 57100[13:34:37.438] Scheduled: *ensureProjectForOpenFiles*, Cancelled earlier one
Info 57101[13:34:37.438] Invoking sourceFileChange on .../web/src/pages/editor/<redacted>:: 1
Info 57102[13:34:37.438] Scheduled: .../web/src/pages/editor/tsconfig.json, Cancelled earlier one
Info 57103[13:34:37.438] Scheduled: *ensureProjectForOpenFiles*, Cancelled earlier one
Info 57104[13:34:37.438] Invoking sourceFileChange on .../web/src/pages/editor/<redacted>:: 1
Info 57105[13:34:37.438] Scheduled: .../web/src/pages/editor/tsconfig.json, Cancelled earlier one
Info 57106[13:34:37.442] Scheduled: *ensureProjectForOpenFiles*, Cancelled earlier one
Info 57107[13:34:37.442] Invoking sourceFileChange on .../web/src/pages/editor/<redacted>:: 1
Info 57108[13:34:37.442] Scheduled: .../web/src/pages/editor/tsconfig.json, Cancelled earlier one
Info 57109[13:34:37.442] Scheduled: *ensureProjectForOpenFiles*, Cancelled earlier one
Info 57110[13:34:37.442] Invoking sourceFileChange on .../web/src/pages/editor/<redacted>:: 1
Info 57111[13:34:37.443] Scheduled: .../web/src/pages/editor/tsconfig.json, Cancelled earlier one
Info 57112[13:34:37.443] Scheduled: *ensureProjectForOpenFiles*, Cancelled earlier one
Info 57113[13:34:37.443] Invoking sourceFileChange on .../web/src/pages/editor/editing/<redacted>:: 1
Info 57114[13:34:37.443] Scheduled: .../web/src/pages/editor/tsconfig.json, Cancelled earlier one
Info 57115[13:34:37.443] Scheduled: *ensureProjectForOpenFiles*, Cancelled earlier one
Info 57116[13:34:37.443] Invoking sourceFileChange on .../web/src/pages/editor/<redacted>:: 1
Info 57117[13:34:37.443] Scheduled: .../web/src/pages/editor/tsconfig.json, Cancelled earlier one
Info 57118[13:34:37.443] Scheduled: *ensureProjectForOpenFiles*, Cancelled earlier one
Info 57119[13:34:37.443] Invoking sourceFileChange on .../web/src/pages/editor/<redacted>:: 1
Info 57120[13:34:37.443] Scheduled: .../web/src/pages/editor/tsconfig.json, Cancelled earlier one
Info 57121[13:34:37.443] Scheduled: *ensureProjectForOpenFiles*, Cancelled earlier one
Info 57122[13:34:37.443] Invoking sourceFileChange on .../web/src/pages/editor/<redacted>:: 1
Info 57123[13:34:37.443] Scheduled: .../web/src/pages/editor/tsconfig.json, Cancelled earlier
...
```
The strange thing is that this is NOT a problem with `.vscode-server` running on Linux.
None of the 20k+ `Invoking sourceFileChange on...` log lines appear in the tsserver logs when running in our Linux based remote dev envs and **the load time for this project is ~52s vs 3m45s!**
Interestingly the problem also does NOT occur on MacOS when tsserver reuses VSCode's file watcher (i.e. [the `node_modules` excludes aren't in the `files.watcherExclude` hash](https://github.com/microsoft/vscode/blob/acd32b17b837b05a64275c297949753df46dbe6d/extensions/typescript-language-features/src/configuration/configuration.ts#L230-L238)) | Needs Investigation | low | Critical |
2,753,431,019 | svelte | Allow snippet to use `animate` directive | ### Describe the problem
This is currently not allowed:
```html
{#snippet test(item)}
<div animate:flip>{item}</div>
{/snippet}
{#each arr as item (item)}
{@render test(item)}
{/each}
```
This would be useful if you, for instance, make a snippet that renders a list of items:
```html
{#snippet test(item)}
<div animate:flip>{item}</div>
{/snippet}
{#snippet list(array, itemSnippet)}
{#each array as item (item)}
{@render itemSnippet(item)}
{/each}
{/snippet}
{@render list(arr, test)}
```
As a practical example, I'm working on a drag-and-drop library [runic-reorder](https://github.com/Refzlund/runic-reorder?tab=readme-ov-file) which utilises the above strategy.
This way of working with snippets expands the ease of use for design choices when creating a library.
I bring this up, as I noticed on [Svelte Playground](https://svelte.dev/playground/9936f34beb2e4809b9cb734f0fadfb22?version=5.15.0) that snippets are a lot like regular elements, and attaching `$.animation(div_1, () => flip, null);` within a snippet might be feasable🙏
### Describe the proposed solution
```html
{#snippet test(item)}
<div animate:flip>{item}</div>
{/snippet}
{#each arr as item (item)}
{@render test(item)}
{/each}
```
or, in terms of the compiled output:
```js
// snippet
const test = ($$anchor, item = $.noop) => {
var div = root_1();
var text = $.child(div, true);
$.reset(div);
$.template_effect(() => $.set_text(text, item()));
$.animation(div, () => flip, null); // this
$.append($$anchor, div);
};
```
### Importance
would make my life easier | transition/animation | low | Minor |
2,753,445,855 | kubernetes | [Flaking Test] k8s.io/client-go/tools: cache | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-unit/1870087703664529408
### Which tests are flaking?
k8s.io/client-go/tools: cache
### Since when has it been flaking?
I see one failure instance today https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-unit/1870087703664529408 but haven't been able to find others in k8s-triage
### Testgrid link
testgrid.k8s.io/sig-release-master-blocking#ci-kubernetes-unit
### Reason for failure (if possible)
_No response_
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig api-machinery | priority/important-soon,sig/api-machinery,kind/flake,triage/accepted | low | Critical |
2,753,453,668 | PowerToys | When app makes itself full screen, behavior is full screen and window is full screen, but offset is at the zone | ### Microsoft PowerToys version
0.86.0
### Installation method
Other (please specify in "Steps to Reproduce")
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
I don't know if this is specific to the [Windows App](https://apps.microsoft.com/detail/9N1F85V9T8BN?hl=en-us&gl=US&ocid=pdpshare), but that's where I'm seeing it.
When you connect to a cloud PC with this app, the window makes itself full screen. The behavior is correctly full screen (for example, Alt+Tab switches between apps in the VM), and the RDP window size is correctly the size of the screen. But the _offset_ of the window is set to the offset of the zone I last had this window in, so I'm seeing the client's desktop, and the window gets clipped by the edges of the monitor.
The fix is to right-click the window on the client desktop's taskbar and click "Restore".
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Product-FancyZones,Needs-Triage | low | Minor |
2,753,454,698 | flutter | `CarouselView` throws `AssertionError` when parent size is 0 | ### Steps to reproduce
1. Run the code sample
2. Note that there is an error in the console
### Expected results
`CarouselView` should not throw when parent size is 0.
### Actual results
`CarouselView` throws an `AssertionError` when parent size is 0.
The documentation for the `itemExtent` property mentions that _the item extent should not exceed the available space that the carousel view occupies to ensure at least one item is fully visible_. However, the same error is thrown when both the `SizedBox` size and `itemExtent` are set to 0. I would also argue that the widget should gracefully handle having 0 size, since this can happen indirectly through the parent widget's size.
Additionally, an `UnsupportedError` (`Unsupported operation: Infinity or NaN toInt`) is thrown when the `SizedBox` size is set to 100 and `itemExtent` is set to 0.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: SizedBox(
width: 0,
child: CarouselView(
itemExtent: 100,
children: [
Container(
color: Colors.red,
width: 100,
height: 100,
),
],
),
),
),
);
}
}
```
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/main.dart on iPhone 15 Pro in debug mode...
Xcode build done. 14.2s
Connecting to VM Service at ws://127.0.0.1:64008/DOq4En-aJYE=/ws
Connected to the VM Service.
════════ Exception caught by rendering library ═════════════════════════════════
The following assertion was thrown during performLayout():
'package:flutter/src/rendering/viewport.dart': Failed assertion: line 1515 pos 12: 'correctedOffset.isFinite': is not true.
Either the assertion indicates an error in the framework itself, or we should provide substantially more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new?template=2_bug.yml
The relevant error-causing widget was:
CarouselView CarouselView
When the exception was thrown, this was the stack:
#2 RenderViewport._attemptLayout (package:flutter/src/rendering/viewport.dart:1515:12)
#3 RenderViewport.performLayout (package:flutter/src/rendering/viewport.dart:1467:20)
#4 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#5 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#6 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#7 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#8 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#9 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#10 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#11 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#12 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#13 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#14 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#15 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#16 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#17 _RenderLayoutBuilder.performLayout (package:flutter/src/widgets/layout_builder.dart:393:14)
#18 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#19 RenderConstrainedBox.performLayout (package:flutter/src/rendering/proxy_box.dart:293:14)
#20 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#21 MultiChildLayoutDelegate.layoutChild (package:flutter/src/rendering/custom_layout.dart:180:12)
#22 _ScaffoldLayout.performLayout (package:flutter/src/material/scaffold.dart:1118:7)
#23 MultiChildLayoutDelegate._callPerformLayout (package:flutter/src/rendering/custom_layout.dart:249:7)
#24 RenderCustomMultiChildLayoutBox.performLayout (package:flutter/src/rendering/custom_layout.dart:419:14)
#25 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#26 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#27 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#28 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#29 _RenderCustomClip.performLayout (package:flutter/src/rendering/proxy_box.dart:1483:11)
#30 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#31 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#32 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#33 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#34 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#35 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#36 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#37 ChildLayoutHelper.layoutChild (package:flutter/src/rendering/layout_helper.dart:62:11)
#38 RenderStack._computeSize (package:flutter/src/rendering/stack.dart:646:43)
#39 RenderStack.performLayout (package:flutter/src/rendering/stack.dart:673:12)
#40 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#41 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#42 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#43 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#44 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#45 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#46 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#47 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#48 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#49 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#50 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#51 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#52 RenderOffstage.performLayout (package:flutter/src/rendering/proxy_box.dart:3750:13)
#53 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#54 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#55 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#56 _RenderTheaterMixin.layoutChild (package:flutter/src/widgets/overlay.dart:1076:13)
#57 _RenderTheater.performLayout (package:flutter/src/widgets/overlay.dart:1422:9)
#58 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#59 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#60 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#61 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#62 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#63 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#64 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#65 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#66 RenderCustomPaint.performLayout (package:flutter/src/rendering/custom_paint.dart:574:11)
#67 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#68 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#69 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#70 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#71 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#72 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#73 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#74 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#75 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#76 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#77 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#78 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#79 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#80 RenderProxyBoxMixin.performLayout (package:flutter/src/rendering/proxy_box.dart:115:18)
#81 RenderObject.layout (package:flutter/src/rendering/object.dart:2715:7)
#82 RenderView.performLayout (package:flutter/src/rendering/view.dart:294:12)
#83 RenderObject._layoutWithoutResize (package:flutter/src/rendering/object.dart:2548:7)
#84 PipelineOwner.flushLayout (package:flutter/src/rendering/object.dart:1112:18)
#85 PipelineOwner.flushLayout (package:flutter/src/rendering/object.dart:1125:15)
#86 RendererBinding.drawFrame (package:flutter/src/rendering/binding.dart:617:23)
#87 WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:1231:13)
#88 RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:483:5)
#89 SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:1442:15)
#90 SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:1355:9)
#91 SchedulerBinding.scheduleWarmUpFrame.<anonymous closure> (package:flutter/src/scheduler/binding.dart:1064:9)
#92 PlatformDispatcher.scheduleWarmUpFrame.<anonymous closure> (dart:ui/platform_dispatcher.dart:873:16)
#96 _RawReceivePort._handleMessage (dart:isolate-patch/isolate_patch.dart:194:12)
(elided 5 frames from class _AssertionError, class _Timer, and dart:async-patch)
The following RenderObject was being processed when the exception was fired: RenderViewport#74279 NEEDS-LAYOUT NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE
needs compositing
parentData: <none> (can use size)
constraints: BoxConstraints(w=0.0, 0.0<=h<=852.0)
size: Size(0.0, 852.0)
axisDirection: right
crossAxisDirection: down
offset: _CarouselPosition#7e9dc(offset: NaN, range: null..null, viewport: 0.0, ScrollableState, BouncingScrollPhysics -> RangeMaintainingScrollPhysics -> BouncingScrollPhysics -> RangeMaintainingScrollPhysics, IdleScrollActivity#a8b19, ScrollDirection.idle)
anchor: 0.0
center child: _RenderSliverFixedExtentCarousel#5c1a8 NEEDS-LAYOUT NEEDS-PAINT
parentData: paintOffset=Offset(0.0, 0.0)
constraints: MISSING
geometry: null
no children current live
RenderObject: RenderViewport#74279 NEEDS-LAYOUT NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE
needs compositing
parentData: <none> (can use size)
constraints: BoxConstraints(w=0.0, 0.0<=h<=852.0)
size: Size(0.0, 852.0)
axisDirection: right
crossAxisDirection: down
offset: _CarouselPosition#7e9dc(offset: NaN, range: null..null, viewport: 0.0, ScrollableState, BouncingScrollPhysics -> RangeMaintainingScrollPhysics -> BouncingScrollPhysics -> RangeMaintainingScrollPhysics, IdleScrollActivity#a8b19, ScrollDirection.idle)
anchor: 0.0
center child: _RenderSliverFixedExtentCarousel#5c1a8 NEEDS-LAYOUT NEEDS-PAINT
parentData: paintOffset=Offset(0.0, 0.0)
constraints: MISSING
geometry: null
no children current live
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel main, 3.28.0-2.0.pre.38555, on macOS 14.4.1 23E224 darwin-arm64, locale en-SE)
• Flutter version 3.28.0-2.0.pre.38555 on channel main
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 181f4244b4 (3 hours ago), 2024-12-20 18:46:55 +0100
• Engine revision 181f4244b4
• Dart version 3.7.0 (build 3.7.0-266.0.dev)
• DevTools version 2.41.0
```
</details>
| c: crash,framework,f: material design,waiting for PR to land (fixed),has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28 | low | Critical |
2,753,479,074 | vscode | Dev notes on using extension services | Inspired by https://github.com/microsoft/vscode/issues/236537
I've found it a little confusing to find the right extension service to work with, so here are some random notes on things that have tripped me up.
- There are many different services with "extension" in the name, and when I need to do something, it's hard to know where to start.
- Probably the most common ones are `IExtensionService`, `IExtensionsWorkbenchService`, `IExtensionManagementService`, and I don't fully understand the distinction
- In the above issue, we used `IExtensionManagementService#getInstalled` https://github.com/microsoft/vscode/blob/820447acdc9c613e8c02f74a4c278ec8be5a29f5/src/vs/platform/extensionManagement/common/extensionManagement.ts#L576 to check whether the Copilot extension was installed. This seemed reasonable, but it actually returns all local and remote extensions, whether enabled or disabled due to being in the wrong EH. And it returns the two sets in an order that depends on a race. I think this is a serious footgun (see the hedged sketch after this list).
- And I think we've made the same mistake in other places. If I look at references to `getInstalled` I see a lot of usages, e.g. I get `@ext:` completions in the settings editor for workspace extensions in a remote window that are only installed locally and not on the remote.
- I think `IExtensionsWorkbenchService#queryLocal` was the right thing to use, but there's also `IExtensionService#extensions` and I'm not sure what the difference is.
- There are two different types called `IExtension` at https://github.com/microsoft/vscode/blob/820447acdc9c613e8c02f74a4c278ec8be5a29f5/src/vs/workbench/contrib/extensions/common/extensions.ts#L56 and https://github.com/microsoft/vscode/blob/820447acdc9c613e8c02f74a4c278ec8be5a29f5/src/vs/platform/extensions/common/extensions.ts#L309
- The names `queryLocal` and `ILocalExtension` are a little confusing now that "remote extensions" exist.
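To make the `getInstalled` vs `queryLocal` distinction above concrete, here is a minimal, hedged sketch of an "is this extension installed and usable here?" check. The interfaces are trimmed stand-ins declared locally, not the real service definitions, and their exact shapes are assumptions based on the links above:
```ts
// Hedged sketch only: trimmed stand-ins for the real services, keeping just
// the members discussed in the notes above.
interface IExtensionIdentifier { id: string }
interface ILocalExtension { identifier: IExtensionIdentifier }
interface IExtensionManagementService {
  // Per the notes above: returns local AND remote installs, enabled or not,
  // with the two sets ordered by whichever query resolves first.
  getInstalled(): Promise<ILocalExtension[]>;
}
interface IWorkbenchExtension { identifier: IExtensionIdentifier }
interface IExtensionsWorkbenchService {
  // Workbench-level view of what the current window actually has locally.
  queryLocal(): Promise<IWorkbenchExtension[]>;
}

async function checkInstalled(
  managementService: IExtensionManagementService,
  workbenchService: IExtensionsWorkbenchService,
  id: string,
): Promise<{ viaManagement: boolean; viaWorkbench: boolean }> {
  const installed = await managementService.getInstalled();
  const local = await workbenchService.queryLocal();
  return {
    // Footgun: true even for extensions that are disabled or only installed
    // for the other (local/remote) extension host.
    viaManagement: installed.some(e => e.identifier.id === id),
    // Usually closer to what callers like the Copilot check actually want.
    viaWorkbench: local.some(e => e.identifier.id === id),
  };
}
```
If the tsdoc on the real services spelled out this difference, checks like the one in the linked issue would be much easier to get right.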
Some things that might help could be tsdoc on the services and methods that explains when to use them or what exactly they return, or a .md doc file with a "guide to extensions services" that lays this stuff out. | debt,extensions | low | Minor |
2,753,514,876 | PowerToys | Pressing Alt Gr affects shortcut | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
1. Remap a shortcut of "`Ctrl(left)` `J` to `Left`"
2. Press `Alt Gr` and `Q` on a German keyboard to input "@"
### ✔️ Expected Behavior
The shortcut should stay effective
### ❌ Actual Behavior
The shortcut becomes ineffective
### Other Software
_No response_ | Issue-Bug,Product-Keyboard Shortcut Manager,Needs-Triage | low | Minor |
2,753,518,692 | PowerToys | New+ is hard to search for in GitHub | ### Provide a description of requested docs changes
I think New+ is a great name. Very representative of the features in the application.
However, it is terrible to search for on GitHub. When you try searching for `is:issue state:open new+`, the search results are the same as for `is:issue state:open new`. Putting it in brackets makes no difference. Most of the issues returned are about novel features, not this particular extension.
Would changing the name to NewPlus or something like it be an improvement? Or is this minor enough not to warrant a change, in the face of an otherwise great name? | Issue-Docs,Needs-Triage,Product-New+ | low | Minor |
2,753,538,893 | rust | MaybeUninit::uninit_array still refers to const as future | ### Location
https://doc.rust-lang.org/std/mem/union.MaybeUninit.html#method.uninit_array
### Summary
The documentation didn't catch up on `const {}` blocks being stabilized. It might now follow up, announcing that this method will be deprecated in edition 2027, or whatever the intention was. | A-docs,T-libs | low | Minor |
2,753,540,516 | react | [DevTools Bug]: Profiling not supported error points to stale documentation link. | ### Website or app
https://vercel.com/login
### Repro steps
I opened developer tools and went to the react dev tools profiling tab. There it says "Profiling not supported. Profiling support requires either a development or profiling build of React v16.5+. Learn more at [reactjs.org/link/profiling](https://fb.me/react-devtools-profiling)."
When I follow this URL, the top of the page says, "This site is no longer updated. [Go to react.dev](https://react.dev/blog/2023/03/16/introducing-react-dev)"
### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
_No response_
### Error component stack (automated)
_No response_
### GitHub query string (automated)
_No response_ | Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,753,545,726 | react | [DevTools Bug]: Profiling not supported error with up-to-date React. | ### Website or app
https://vercel.com/login
### Repro steps
When I open React Developer Tools profiler page, I get the following error:
```
Profiling not supported.
Profiling support requires either a development or profiling build of React v16.5+.
Learn more at [reactjs.org/link/profiling](https://fb.me/react-devtools-profiling).
```
I first saw the problem on my own app, which is running React 18 and Next.js 15, but I was able to reproduce the problem on every other Next.js React page I could find.
I am running Google Chrome 131.0.6778.204 on Fedora 40. My version of React Developer Tools is 6.0.1 (10/15/2024).
If I test the same conditions, but on Firefox, profiling works, suggesting it is a problem with React Dev Tools.
### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
_No response_
### Error component stack (automated)
_No response_
### GitHub query string (automated)
_No response_ | Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,753,555,011 | next.js | next build isn't detecting generateStaticParams returning empty array in page.tsx | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/intelligent-lichterman-wjpvwn
### To Reproduce
1. Build the application.
### Current vs. Expected behavior
According to the documentation, the expected behavior is "To statically render all paths the first time they're visited, return an empty array (no paths will be rendered at build time)", so it should build and run.
Current behavior: the build fails and doesn't recognize the `generateStaticParams` solution.
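For reference, here is a minimal sketch of the kind of page this report is about. The route path and slug name are hypothetical, not taken from the reproduction, and the reproduction additionally sets `output: export` (see the environment info below):
```tsx
// app/items/[slug]/page.tsx (hypothetical route, used only for illustration)
type Params = { slug: string };

// Per the quoted docs, returning an empty array should mean "render every
// path the first time it is visited, none at build time"; this is the setup
// the build rejects here.
export async function generateStaticParams(): Promise<Params[]> {
  return [];
}

export default async function Page({ params }: { params: Promise<Params> }) {
  const { slug } = await params; // params is a Promise in Next.js 15
  return <h1>{slug}</h1>;
}
```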
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.18.1
npm: 10.8.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.1.2 // Latest available version is detected (15.1.2).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.2.0
typescript: 5.6.3
Next.js Config:
output: export
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Output (export/standalone), Runtime, TypeScript, Webpack
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
If I made `generateStaticParams` return a non-empty array, it seemed to work. However, that stops me from navigating to routes that aren't statically defined. These slugs can have a million different combinations, so I don't want to write them all out; I want them generated dynamically at runtime. The issue is that I need a static export for my ASP.NET Core middleware to serve the files properly. If anyone could help me, that would be great. | Output (export/standalone),Webpack,TypeScript,Runtime | low | Minor |
2,753,558,885 | flutter | [camera] Audit camera plugin for deprecated iOS API usage | https://github.com/flutter/flutter/issues/156373 reported usage of several deprecated APIs, including captureToFileWithCompletion and highResolutionPhotoEnabled. Audit the package and replace deprecated APIs, with fallbacks for older versions as needed. | platform-ios,p: camera,package,P2,team-ios,triaged-ios | low | Minor |
2,753,568,558 | PowerToys | Remove "prerelease: true" from WinGet Configuration | ### Description of the new feature / enhancement
The Microsoft.WinGet.DSC module is GA. There may be conflicts introduced with preview versions of the module.
The "prerelease: true" directive should be removed to avoid problems.
> [!NOTE]
> I'll make a PR shortly
### Scenario when this would be used?
When a developer uses the WinGet Configuration File to set up their development environment, it should use stable modules if they are available. Other modules may still be in preview, so those should still have the prerelease directive.
### Supporting information
* https://github.com/microsoft/winget-cli/issues/5069 | Resolution-Fix Committed | low | Minor |
2,753,583,454 | flutter | bash_entrypoint_test.dart breaking after change to shared.sh | Broke on: https://ci.chromium.org/ui/p/flutter/builders/prod/Linux%20tool_integration_tests_5_6/452/infra
After https://github.com/flutter/flutter/pull/160668/files
```
06:12 +12 ~11 -1: test/integration.shard/bash_entrypoint_test.dart: shared.sh does not compile flutter tool if PROG_NAME=dart [E]
Expected: a process with exit code 0
Actual: <Instance of 'ProcessResult'>
Which: Actual exitCode was 127
Actual stderr:
/b/s/w/ir/x/t/bash_entrypoint_testGDPYBE/bin/internal/shared.sh: line 117: /b/s/w/ir/x/t/bash_entrypoint_testGDPYBE/bin/internal/update_engine_version.sh: No such file or directory
package:matcher expect
test/integration.shard/bash_entrypoint_test.dart 111:7 main.<fn>
``` | a: tests,team-tool | low | Minor |
2,753,589,467 | flutter | [Android] Consider configuring network access on a per-test basis | https://flutter-review.googlesource.com/c/recipes/+/61660 will add the `--enable-network` flag that gives Android virtual devices access to the network when they run tests in CI. This is needed to run tests like [this one](https://github.com/flutter/packages/blob/3515abab07d0bb2441277f43c2411c9b5e4ecf94/packages/video_player/video_player/example/integration_test/video_player_test.dart#L57) in video player that access resources hosted by Flutter on the web.
We should consider enabling network access on a per-test basis instead of globally allowing it. | a: tests,platform-android,c: proposal,P2,team-android,triaged-android | low | Minor |
2,753,604,001 | pytorch | [ROCm] Inductor CK GEMM backend very slow | ### 🐛 Describe the bug
When using the CK backend via `TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS="CK,ATEN,TRITON,CPP"`, compilation of CK kernels is very slow (over one minute per file in some cases).
It looks like some very long symbol names in these files are making compilation slower, because LLVM uses `SmallString<128>` buffers to build up symbol names and now has to allocate a bunch in places that were otherwise allocation-free. From some `perf` sampling it looks like this is causing LLVM to spend a lot more time in `TargetMachine::getSymbol`.
https://github.com/LunNova/llvm-project-rocm/blob/5a9ddc6f57430d5e8c5154779c647219c8e7cb99/llvm/lib/Target/TargetMachine.cpp#L283-L292
It's possible I've misdiagnosed this: the long-symbol allocations might mostly be inside the CK code with the top-level long names not mattering much, or the slow compilation might be entirely unrelated to the symbol names. In any case, it's very slow.
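As a quick, hedged illustration of the hypothesis above (standalone toy code, not taken from the LLVM or CK sources): `llvm::SmallString<128>` keeps up to 128 bytes inline, so any name-building step that appends one of the 1000+ character kernel symbols from this report has to fall back to a heap allocation every time.
```cpp
#include "llvm/ADT/SmallString.h"
#include <cassert>
#include <string>

int main() {
  // Stand-in for a generated CK kernel symbol; the real names in this report
  // are well over 1000 characters long.
  std::string kernelName(1200, 'K');

  llvm::SmallString<128> buf; // 128 bytes of inline storage
  buf += "prefix_";           // still fits inline
  buf += kernelName;          // exceeds the inline capacity, so it heap-allocates

  assert(buf.size() > 128);
  return 0;
}
```
This only illustrates the allocation pattern; whether those allocations are actually the dominant cost is what the `perf` sampling mentioned above was pointing at.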
<details>
<summary>Long logs/code on torch 20241218 nightly</summary>
```
[torch/_inductor/codecache.py:3202] Compilation took 70.78182601928711 seconds. Compile command: /nix/store/baxdbiqlbq3xgfrycxz0l3lhgqr30gpg-rocmcxx/bin/clang -O3 -x hip -std=c++17 --offload-arch=gfx908 -fno-gpu-rdc -fPIC -mllvm -amdgpu-early-inline-all=true -mllvm -amdgpu-function-calls=false -mllvm -enable-post-misched=0 -DNDEBUG -DCK_TILE_FMHA_FWD_FAST_EXP2=1 -fgpu-flush-denormals-to-zero -ffast-math -I/nix/store/jx8h8d85ymv8l950czxz8wpvd562mblx-unpack-composable_kernel-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor/include -I/nix/store/jx8h8d85ymv8l950czxz8wpvd562mblx-unpack-composable_kernel-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor/library/include -I/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/include -include __clang_hip_runtime_wrapper.h -L/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/lib -L/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/hip/lib -lamdhip64 -shared -o ~/ml-cache/torchinductor/yo/cyoxkhocdaqxywmu537oo7n6qsdsnn4hk6n75y3lg5sknrsei4ol.so ~/ml-cache/torchinductor/yo/cyoxkhocdaqxywmu537oo7n6qsdsnn4hk6n75y3lg5sknrsei4ol.cpp
$ cat ~/ml-cache/torchinductor/yo/cyoxkhocdaqxywmu537oo7n6qsdsnn4hk6n75y3lg5sknrsei4ol.cpp
/**
* Generated code for CK inductor backend
* See torch._inductor.codegen.rocm.ck_universal_gemm_template.CKGemmTemplate
*
* Template instance CKGemmOperation(a_layout='Row', b_layout='Col', ds_layouts=(), c_layout='Row', a_element_dtype='BF16', b_element_dtype='BF16', ds_element_dtypes=(), c_element_dtype='BF16', acc_dtype='F32', c_shuffle_dtype='BF16', a_elementwise_op='PassThrough', b_elementwise_op='PassThrough', c_elementwise_op='PassThrough', gemm_specialization='GemmSpecialization::NPadding', block_size=256, m_per_block=128, n_per_block=128, k_per_block=64, a_k1=8, b_k1=8, m_per_xdl=32, n_per_xdl=32, m_xdl_per_wave=2, n_xdl_per_wave=2, a_block_transfer_thread_cluster_lengths_ak0_m_ak1=(8, 32, 1), a_block_transfer_thread_cluster_arrange_order=(1, 0, 2), a_block_transfer_src_access_order=(1, 0, 2), a_block_transfer_src_vector_dim=2, a_block_transfer_src_scalar_per_vector=8, a_block_transfer_dst_scalar_per_vector_ak1=8, a_block_lds_extra_m=0, b_block_transfer_thread_cluster_lengths_bk0_n_bk1=(8, 32, 1), b_block_transfer_thread_cluster_arrange_order=(1, 0, 2), b_block_transfer_src_access_order=(1, 0, 2), b_block_transfer_src_vector_dim=2, b_block_transfer_src_scalar_per_vector=8, b_block_transfer_dst_scalar_per_vector_bk1=8, b_block_lds_extra_n=0, c_shuffle_m_xdl_per_wave_per_shuffle=1, c_shuffle_n_xdl_per_wave_per_shuffle=1, c_shuffle_block_transfer_cluster_lengths_m_block_m_per_block_n_block_n_per_block=(1, 16, 1, 16), c_shuffle_block_transfer_scalar_per_vector_n_per_block=(4,), block_gemm_pipeline_scheduler='BlockGemmPipelineScheduler::Intrawave', block_gemm_pipeline_version='BlockGemmPipelineVersion::v3', a_compute_dtype=None, b_compute_dtype=None)
*
* torch.__version__='2.6.0a.post20241218'
* torch.version.git_version=Unknown
*/
#include <exception>
#include <iostream>
#include <memory>
#include <random>
#include <vector>
// CK headers
#ifdef DEBUG_LOG
#define DEBUG_LOG_TMP DEBUG_LOG
#undef DEBUG_LOG
#else
#define DEBUG_LOG_TMP 0
#endif
#include "ck/ck.hpp"
#undef DEBUG_LOG
#define DEBUG_LOG DEBUG_LOG_TMP
#include "ck/utility/data_type.hpp"
#include "ck/library/utility/check_err.hpp"
#include "ck/library/utility/device_memory.hpp"
#include "ck/library/utility/fill.hpp"
#include "ck/library/utility/host_tensor.hpp"
#include "ck/library/utility/host_tensor_generator.hpp"
#include "ck/library/utility/literals.hpp"
// CK GEMM header(s)
#include "ck/tensor_operation/gpu/device/impl/device_gemm_multiple_d_xdl_cshuffle_v3.hpp"
// We compile all models with -fvisibility=hidden. Any symbols that need to be
// exposed in the final shared library must be declared with PT_EXPORT to make
// them visible.
#ifdef __GNUC__ // Applies to any compiler with GNU extensions (clang and g++)
#define PT_EXPORT __attribute__((__visibility__("default")))
#else
#ifdef _WIN32
#define PT_EXPORT __declspec(dllexport)
#else
#define PT_EXPORT
#endif
#endif
// as long as there is no custom arithmetic it's fine
using bfloat16 = uint16_t;
using float8_e4m3fnuz = uint8_t;
using float8_e5m2fnuz = uint8_t;
// CK globals
template <ck::index_t... Is>
using S = ck::Sequence<Is...>;
template<typename... Ts>
using Tuple = ck::Tuple<Ts...>;
using PassThrough = ck::tensor_operation::element_wise::PassThrough;
using Bilinear = ck::tensor_operation::element_wise::Bilinear;
using Scale = ck::tensor_operation::element_wise::Scale;
using ScaleAdd = ck::tensor_operation::element_wise::ScaleAdd;
using MultiplyMultiply = ck::tensor_operation::element_wise::MultiplyMultiply;
// see "composable_kernel/include/ck/utility/data_type.hpp"
using F8 = ck::f8_t;
using BF8 = ck::bf8_t;
using F16 = ck::half_t;
using F32 = float;
// using F64 = double;
using BF16 = ck::bhalf_t;
// using I32 = int32_t;
// using I8 = int8_t;
// using I4 = ck::int4_t;
#if DEBUG_LOG
static constexpr auto kDEBUG_LOG = 1;
#else
static constexpr auto kDEBUG_LOG = 0;
#endif
// CK GEMM globals
using Row = ck::tensor_layout::gemm::RowMajor;
using Col = ck::tensor_layout::gemm::ColumnMajor;
using BlockGemmPipelineScheduler = ck::BlockGemmPipelineScheduler;
using GemmSpecialization = ck::tensor_operation::device::GemmSpecialization;
using BlockGemmPipelineVersion = ck::BlockGemmPipelineVersion;
struct MultiplyMultiplyAdd {
template <typename E, typename C, typename D0, typename D1, typename D2>
__host__ __device__ constexpr void
operator()(E& e, const C& c, const D0& d0, const D1& d1, const D2& d2) const {
e = ck::type_convert<E>(
ck::type_convert<float>(c)
* ck::type_convert<float>(d0)
* ck::type_convert<float>(d1)
+ ck::type_convert<float>(d2)
);
}
};
// Gemm operator ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVRow_KblayoutVCol_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationNPadding_KblocksizeV256_KmperblockV128_KnperblockV128_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV32_KnperxdlV32_KmxdlperwaveV2_KnxdlperwaveV2_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV1x0x2_KablocktransfersrcaccessorderV1x0x2_KablocktransfersrcvectordimV2_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV1x0x2_KbblocktransfersrcaccessorderV1x0x2_KbblocktransfersrcvectordimV2_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV1_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x16x1x16_KcshuffleblocktransferscalarpervectornperblockV4_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone
using Operation_ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVRow_KblayoutVCol_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationNPadding_KblocksizeV256_KmperblockV128_KnperblockV128_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV32_KnperxdlV32_KmxdlperwaveV2_KnxdlperwaveV2_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV1x0x2_KablocktransfersrcaccessorderV1x0x2_KablocktransfersrcvectordimV2_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV1x0x2_KbblocktransfersrcaccessorderV1x0x2_KbblocktransfersrcvectordimV2_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV1_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x16x1x16_KcshuffleblocktransferscalarpervectornperblockV4_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone =
ck::tensor_operation::device::DeviceGemmMultiD_Xdl_CShuffle_V3<
/* a_layout */ Row,
/* b_layout */ Col,
/* ds_layouts */ Tuple<>,
/* c_layout */ Row,
/* a_element_dtype */ BF16,
/* b_element_dtype */ BF16,
/* ds_element_dtypes */ Tuple<>,
/* c_element_dtype */ BF16,
/* acc_dtype */ F32,
/* c_shuffle_dtype */ BF16,
/* a_elementwise_op */ PassThrough,
/* b_elementwise_op */ PassThrough,
/* c_elementwise_op */ PassThrough,
/* gemm_specialization */ GemmSpecialization::NPadding,
/* block_size */ 256,
/* m_per_block */ 128,
/* n_per_block */ 128,
/* k_per_block */ 64,
/* a_k1 */ 8,
/* b_k1 */ 8,
/* m_per_xdl */ 32,
/* n_per_xdl */ 32,
/* m_xdl_per_wave */ 2,
/* n_xdl_per_wave */ 2,
/* a_block_transfer_thread_cluster_lengths_ak0_m_ak1 */ S<8, 32, 1>,
/* a_block_transfer_thread_cluster_arrange_order */ S<1, 0, 2>,
/* a_block_transfer_src_access_order */ S<1, 0, 2>,
/* a_block_transfer_src_vector_dim */ 2,
/* a_block_transfer_src_scalar_per_vector */ 8,
/* a_block_transfer_dst_scalar_per_vector_ak1 */ 8,
/* a_block_lds_extra_m */ 0,
/* b_block_transfer_thread_cluster_lengths_bk0_n_bk1 */ S<8, 32, 1>,
/* b_block_transfer_thread_cluster_arrange_order */ S<1, 0, 2>,
/* b_block_transfer_src_access_order */ S<1, 0, 2>,
/* b_block_transfer_src_vector_dim */ 2,
/* b_block_transfer_src_scalar_per_vector */ 8,
/* b_block_transfer_dst_scalar_per_vector_bk1 */ 8,
/* b_block_lds_extra_n */ 0,
/* c_shuffle_m_xdl_per_wave_per_shuffle */ 1,
/* c_shuffle_n_xdl_per_wave_per_shuffle */ 1,
/* c_shuffle_block_transfer_cluster_lengths_m_block_m_per_block_n_block_n_per_block */ S<1, 16, 1, 16>,
/* c_shuffle_block_transfer_scalar_per_vector_n_per_block */ S<4>,
/* block_gemm_pipeline_scheduler */ BlockGemmPipelineScheduler::Intrawave,
/* block_gemm_pipeline_version */ BlockGemmPipelineVersion::v3>;
extern "C" {
PT_EXPORT int rocm_fused_1(const bfloat16* X, const bfloat16* W, bfloat16* Y, int32_t M, int32_t N, int32_t K, int32_t LDA, int32_t LDB, int32_t LDC, int32_t LDD, size_t* workspace_size, uint8_t* workspace, hipStream_t stream) {
auto gemm =
Operation_ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVRow_KblayoutVCol_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationNPadding_KblocksizeV256_KmperblockV128_KnperblockV128_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV32_KnperxdlV32_KmxdlperwaveV2_KnxdlperwaveV2_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV1x0x2_KablocktransfersrcaccessorderV1x0x2_KablocktransfersrcvectordimV2_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV1x0x2_KbblocktransfersrcaccessorderV1x0x2_KbblocktransfersrcvectordimV2_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV1_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x16x1x16_KcshuffleblocktransferscalarpervectornperblockV4_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone {};
auto invoker = gemm.MakeInvoker();
auto argument = gemm.MakeArgument(
reinterpret_cast<const BF16*>(X),
reinterpret_cast<const BF16*>(W),
std::array<const void*, 0>{ },
reinterpret_cast<BF16*>(Y),
M,
N,
K,
LDA,
LDB,
std::array<ck::index_t, 0>{ },
LDC,
1, // kBatch
PassThrough {},
PassThrough {},
PassThrough {} // c_elementwise_op
);
if (!gemm.IsSupportedArgument(argument)) {
// we do our best to statically avoid this case in `filter_op`
std::cerr << "invalid argument for gemm instance " << gemm.GetTypeString() << std::endl;
argument.Print();
return -23;
}
if (workspace_size) {
*workspace_size = gemm.GetWorkSpaceSize(&argument);
return 0;
}
// run the kernel
#ifdef GENERATE_CK_STANDALONE_RUNNER
const auto stream_config = StreamConfig{
stream,
/* time kernel */ 1,
/* log level */ 1,
/* n_cold_iter */ 100,
/* n_hot_iter */ 100,
/* flush_l2_cache */ 1,
/* rotate_count */ 5};
#else
const auto stream_config = StreamConfig{stream, /* time kernel */ false, /* log level */ 0};
#endif
const float elapsed_time = invoker.Run(argument, stream_config);
#ifdef GENERATE_CK_STANDALONE_RUNNER
std::cout << "elapsed time: " << elapsed_time << " ms" << std::endl;
#else
(void)elapsed_time;
#endif
return 0;
} // kernel definition
} // extern C
```
Also seeing some errors:
```
rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1] CUDA compilation error during autotuning:
[rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1] C++ compile error
[rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1]
[rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1] Command:
[rank1]:E1220 13:17:04.168000 677380 torch/_inductor/select_algorithm.py:2003] [3/1] /nix/store/baxdbiqlbq3xgfrycxz0l3lhgqr30gpg-rocmcxx/bin/clang -O3 -x hip -std=c++17 --offload-arch=gfx908 -fno-gpu-rdc -fPIC -mllvm -amdgpu-early-inline-all=true -mllvm -amdgpu-function-calls=false -mllvm -enable-post-misched=0 -DNDEBUG -DCK_TILE_FMHA_FWD_FAST_EXP2=1 -fgpu-flush-denormals-to-zero -ffast-math -I/nix/store/jx8h8d85ymv8l950czxz8wpvd562mblx-unpack-composable_kernel-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor/include -I/nix/store/jx8h8d85ymv8l950czxz8wpvd562mblx-unpack-composable_kernel-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor/library/include -I/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/include -include __clang_hip_runtime_wrapper.h -L/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/lib -L/nix/store/qp3zdqix9pwcc505jgmlf27xcdq9v8xq-rocm-hip-libraries-meta/hip/lib -lamdhip64 -shared -o ~/ml-cache/torchinductor/q3/cq3qdsttyk3ptde6djpzxm46uboeevoyov3pe365w5q5xy36a525.so ~/ml-cache/torchinductor/q3/cq3qdsttyk3ptde6djpzxm46uboeevoyov3pe365w5q5xy36a525.cpp
$ cat ~/ml-cache/torchinductor/q3/cq3qdsttyk3ptde6djpzxm46uboeevoyov3pe365w5q5xy36a525.cpp
/**
* Generated code for CK inductor backend
* See torch._inductor.codegen.rocm.ck_universal_gemm_template.CKGemmTemplate
*
* Template instance CKGemmOperation(a_layout='Col', b_layout='Row', ds_layouts=(), c_layout='Row', a_element_dtype='BF16', b_element_dtype='BF16', ds_element_dtypes=(), c_element_dtype='BF16', acc_dtype='F32', c_shuffle_dtype='BF16', a_elementwise_op='PassThrough', b_elementwise_op='PassThrough', c_elementwise_op='PassThrough', gemm_specialization='GemmSpecialization::MNPadding', block_size=256, m_per_block=224, n_per_block=256, k_per_block=64, a_k1=8, b_k1=8, m_per_xdl=16, n_per_xdl=16, m_xdl_per_wave=7, n_xdl_per_wave=8, a_block_transfer_thread_cluster_lengths_ak0_m_ak1=(8, 32, 1), a_block_transfer_thread_cluster_arrange_order=(0, 2, 1), a_block_transfer_src_access_order=(0, 2, 1), a_block_transfer_src_vector_dim=1, a_block_transfer_src_scalar_per_vector=8, a_block_transfer_dst_scalar_per_vector_ak1=8, a_block_lds_extra_m=0, b_block_transfer_thread_cluster_lengths_bk0_n_bk1=(8, 32, 1), b_block_transfer_thread_cluster_arrange_order=(0, 2, 1), b_block_transfer_src_access_order=(0, 2, 1), b_block_transfer_src_vector_dim=1, b_block_transfer_src_scalar_per_vector=8, b_block_transfer_dst_scalar_per_vector_bk1=8, b_block_lds_extra_n=0, c_shuffle_m_xdl_per_wave_per_shuffle=1, c_shuffle_n_xdl_per_wave_per_shuffle=2, c_shuffle_block_transfer_cluster_lengths_m_block_m_per_block_n_block_n_per_block=(1, 32, 1, 8), c_shuffle_block_transfer_scalar_per_vector_n_per_block=(8,), block_gemm_pipeline_scheduler='BlockGemmPipelineScheduler::Intrawave', block_gemm_pipeline_version='BlockGemmPipelineVersion::v3', a_compute_dtype=None, b_compute_dtype=None)
*
* torch.__version__='2.6.0a.post20241218'
* torch.version.git_version=Unknown
*/
#include <exception>
#include <iostream>
#include <memory>
#include <random>
#include <vector>
// CK headers
#ifdef DEBUG_LOG
#define DEBUG_LOG_TMP DEBUG_LOG
#undef DEBUG_LOG
#else
#define DEBUG_LOG_TMP 0
#endif
#include "ck/ck.hpp"
#undef DEBUG_LOG
#define DEBUG_LOG DEBUG_LOG_TMP
#include "ck/utility/data_type.hpp"
#include "ck/library/utility/check_err.hpp"
#include "ck/library/utility/device_memory.hpp"
#include "ck/library/utility/fill.hpp"
#include "ck/library/utility/host_tensor.hpp"
#include "ck/library/utility/host_tensor_generator.hpp"
#include "ck/library/utility/literals.hpp"
// CK GEMM header(s)
#include "ck/tensor_operation/gpu/device/impl/device_gemm_multiple_d_xdl_cshuffle_v3.hpp"
// We compile all models with -fvisibility=hidden. Any symbols that need to be
// exposed in the final shared library must be declared with PT_EXPORT to make
// them visible.
#ifdef __GNUC__ // Applies to any compiler with GNU extensions (clang and g++)
#define PT_EXPORT __attribute__((__visibility__("default")))
#else
#ifdef _WIN32
#define PT_EXPORT __declspec(dllexport)
#else
#define PT_EXPORT
#endif
#endif
// as long as there is no custom arithmetic it's fine
using bfloat16 = uint16_t;
using float8_e4m3fnuz = uint8_t;
using float8_e5m2fnuz = uint8_t;
// CK globals
template <ck::index_t... Is>
using S = ck::Sequence<Is...>;
template<typename... Ts>
using Tuple = ck::Tuple<Ts...>;
using PassThrough = ck::tensor_operation::element_wise::PassThrough;
using Bilinear = ck::tensor_operation::element_wise::Bilinear;
using Scale = ck::tensor_operation::element_wise::Scale;
using ScaleAdd = ck::tensor_operation::element_wise::ScaleAdd;
using MultiplyMultiply = ck::tensor_operation::element_wise::MultiplyMultiply;
// see "composable_kernel/include/ck/utility/data_type.hpp"
using F8 = ck::f8_t;
using BF8 = ck::bf8_t;
using F16 = ck::half_t;
using F32 = float;
// using F64 = double;
using BF16 = ck::bhalf_t;
// using I32 = int32_t;
// using I8 = int8_t;
// using I4 = ck::int4_t;
#if DEBUG_LOG
static constexpr auto kDEBUG_LOG = 1;
#else
static constexpr auto kDEBUG_LOG = 0;
#endif
// CK GEMM globals
using Row = ck::tensor_layout::gemm::RowMajor;
using Col = ck::tensor_layout::gemm::ColumnMajor;
using BlockGemmPipelineScheduler = ck::BlockGemmPipelineScheduler;
using GemmSpecialization = ck::tensor_operation::device::GemmSpecialization;
using BlockGemmPipelineVersion = ck::BlockGemmPipelineVersion;
struct MultiplyMultiplyAdd {
template <typename E, typename C, typename D0, typename D1, typename D2>
__host__ __device__ constexpr void
operator()(E& e, const C& c, const D0& d0, const D1& d1, const D2& d2) const {
e = ck::type_convert<E>(
ck::type_convert<float>(c)
* ck::type_convert<float>(d0)
* ck::type_convert<float>(d1)
+ ck::type_convert<float>(d2)
);
}
};
// Gemm operator ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVCol_KblayoutVRow_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationMNPadding_KblocksizeV256_KmperblockV224_KnperblockV256_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV16_KnperxdlV16_KmxdlperwaveV7_KnxdlperwaveV8_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV0x2x1_KablocktransfersrcaccessorderV0x2x1_KablocktransfersrcvectordimV1_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV0x2x1_KbblocktransfersrcaccessorderV0x2x1_KbblocktransfersrcvectordimV1_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV2_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x32x1x8_KcshuffleblocktransferscalarpervectornperblockV8_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone
using Operation_ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVCol_KblayoutVRow_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationMNPadding_KblocksizeV256_KmperblockV224_KnperblockV256_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV16_KnperxdlV16_KmxdlperwaveV7_KnxdlperwaveV8_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV0x2x1_KablocktransfersrcaccessorderV0x2x1_KablocktransfersrcvectordimV1_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV0x2x1_KbblocktransfersrcaccessorderV0x2x1_KbblocktransfersrcvectordimV1_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV2_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x32x1x8_KcshuffleblocktransferscalarpervectornperblockV8_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone =
ck::tensor_operation::device::DeviceGemmMultiD_Xdl_CShuffle_V3<
/* a_layout */ Col,
/* b_layout */ Row,
/* ds_layouts */ Tuple<>,
/* c_layout */ Row,
/* a_element_dtype */ BF16,
/* b_element_dtype */ BF16,
/* ds_element_dtypes */ Tuple<>,
/* c_element_dtype */ BF16,
/* acc_dtype */ F32,
/* c_shuffle_dtype */ BF16,
/* a_elementwise_op */ PassThrough,
/* b_elementwise_op */ PassThrough,
/* c_elementwise_op */ PassThrough,
/* gemm_specialization */ GemmSpecialization::MNPadding,
/* block_size */ 256,
/* m_per_block */ 224,
/* n_per_block */ 256,
/* k_per_block */ 64,
/* a_k1 */ 8,
/* b_k1 */ 8,
/* m_per_xdl */ 16,
/* n_per_xdl */ 16,
/* m_xdl_per_wave */ 7,
/* n_xdl_per_wave */ 8,
/* a_block_transfer_thread_cluster_lengths_ak0_m_ak1 */ S<8, 32, 1>,
/* a_block_transfer_thread_cluster_arrange_order */ S<0, 2, 1>,
/* a_block_transfer_src_access_order */ S<0, 2, 1>,
/* a_block_transfer_src_vector_dim */ 1,
/* a_block_transfer_src_scalar_per_vector */ 8,
/* a_block_transfer_dst_scalar_per_vector_ak1 */ 8,
/* a_block_lds_extra_m */ 0,
/* b_block_transfer_thread_cluster_lengths_bk0_n_bk1 */ S<8, 32, 1>,
/* b_block_transfer_thread_cluster_arrange_order */ S<0, 2, 1>,
/* b_block_transfer_src_access_order */ S<0, 2, 1>,
/* b_block_transfer_src_vector_dim */ 1,
/* b_block_transfer_src_scalar_per_vector */ 8,
/* b_block_transfer_dst_scalar_per_vector_bk1 */ 8,
/* b_block_lds_extra_n */ 0,
/* c_shuffle_m_xdl_per_wave_per_shuffle */ 1,
/* c_shuffle_n_xdl_per_wave_per_shuffle */ 2,
/* c_shuffle_block_transfer_cluster_lengths_m_block_m_per_block_n_block_n_per_block */ S<1, 32, 1, 8>,
/* c_shuffle_block_transfer_scalar_per_vector_n_per_block */ S<8>,
/* block_gemm_pipeline_scheduler */ BlockGemmPipelineScheduler::Intrawave,
/* block_gemm_pipeline_version */ BlockGemmPipelineVersion::v3>;
extern "C" {
PT_EXPORT int rocm_ck_gemm_template(const bfloat16* X, const bfloat16* W, bfloat16* Y, int32_t M, int32_t N, int32_t K, int32_t LDA, int32_t LDB, int32_t LDC, int32_t LDD, size_t* workspace_size, uint8_t* workspace, hipStream_t stream) {
auto gemm =
Operation_ck_devicegemm_multid_xdl_shuffle_v3_KalayoutVCol_KblayoutVRow_KdslayoutsV_KclayoutVRow_KaelementdtypeVBF16_KbelementdtypeVBF16_KdselementdtypesV_KcelementdtypeVBF16_KaccdtypeVF32_KcshuffledtypeVBF16_KaelementwiseopVPassThrough_KbelementwiseopVPassThrough_KcelementwiseopVPassThrough_KgemmspecializationVGemmSpecializationMNPadding_KblocksizeV256_KmperblockV224_KnperblockV256_KkperblockV64_Kak1V8_Kbk1V8_KmperxdlV16_KnperxdlV16_KmxdlperwaveV7_KnxdlperwaveV8_Kablocktransferthreadclusterlengthsak0mak1V8x32x1_KablocktransferthreadclusterarrangeorderV0x2x1_KablocktransfersrcaccessorderV0x2x1_KablocktransfersrcvectordimV1_KablocktransfersrcscalarpervectorV8_Kablocktransferdstscalarpervectorak1V8_KablockldsextramV0_Kbblocktransferthreadclusterlengthsbk0nbk1V8x32x1_KbblocktransferthreadclusterarrangeorderV0x2x1_KbblocktransfersrcaccessorderV0x2x1_KbblocktransfersrcvectordimV1_KbblocktransfersrcscalarpervectorV8_Kbblocktransferdstscalarpervectorbk1V8_KbblockldsextranV0_KcshufflemxdlperwavepershuffleV1_KcshufflenxdlperwavepershuffleV2_KcshuffleblocktransferclusterlengthsmblockmperblocknblocknperblockV1x32x1x8_KcshuffleblocktransferscalarpervectornperblockV8_KblockgemmpipelineschedulerVBlockGemmPipelineSchedulerIntrawave_KblockgemmpipelineversionVBlockGemmPipelineVersionv3_KacomputedtypeVNone_KbcomputedtypeVNone {};
auto invoker = gemm.MakeInvoker();
auto argument = gemm.MakeArgument(
reinterpret_cast<const BF16*>(X),
reinterpret_cast<const BF16*>(W),
std::array<const void*, 0>{ },
reinterpret_cast<BF16*>(Y),
M,
N,
K,
LDA,
LDB,
std::array<ck::index_t, 0>{ },
LDC,
1, // kBatch
PassThrough {},
PassThrough {},
PassThrough {} // c_elementwise_op
);
if (!gemm.IsSupportedArgument(argument)) {
// we do our best to statically avoid this case in `filter_op`
std::cerr << "invalid argument for gemm instance " << gemm.GetTypeString() << std::endl;
argument.Print();
return -23;
}
if (workspace_size) {
*workspace_size = gemm.GetWorkSpaceSize(&argument);
return 0;
}
// run the kernel
#ifdef GENERATE_CK_STANDALONE_RUNNER
const auto stream_config = StreamConfig{
stream,
/* time kernel */ 1,
/* log level */ 1,
/* n_cold_iter */ 100,
/* n_hot_iter */ 100,
/* flush_l2_cache */ 1,
/* rotate_count */ 5};
#else
const auto stream_config = StreamConfig{stream, /* time kernel */ false, /* log level */ 0};
#endif
const float elapsed_time = invoker.Run(argument, stream_config);
#ifdef GENERATE_CK_STANDALONE_RUNNER
std::cout << "elapsed time: " << elapsed_time << " ms" << std::endl;
#else
(void)elapsed_time;
#endif
return 0;
} // kernel definition
} // extern C
⏎
```
</details>
### Versions
<details>
<summary>Very long version info on python3.12-torch-2.6.0a-nightly-20241218:</summary>
```
env | grep -E -i '(torch|hsa|rocm|rocr|ccl).*='
TORCHINDUCTOR_CK_DIR=/nix/store/8wrnjabmr02rhknibrjr6qya3fimml2f-python3.12-ck4inductor-6.4.0a20241217/lib/python3.12/site-packages/ck4inductor
TORCHINDUCTOR_CACHE_DIR=~/ml-cache/torchinductor
TORCHINDUCTOR_AUTOGRAD_CACHE=1
ROCM_BUILD_ID=release-nixos-60300
TORCHINDUCTOR_MAX_AUTOTUNE_GEMM_BACKENDS=CK,ATEN,TRITON,CPP
ROCM_LIBPATCH_VERSION=60300
ROCM_PATH=/nix/store/8zk35m6vnbcf339zi9k6jra4xs4ipd49-rocm-hip-libraries-meta
TORCHINDUCTOR_FX_GRAPH_CACHE=1
TORCH_ROCM_FA_PREFER_CK=1
acl-2.3.2
aotriton-unstable-20241122
attr-2.5.2
bash-5.2p37
binutils-2.43.1
binutils-2.43.1-lib
binutils-wrapper-2.43.1
blas-3
blas-3-dev
brotli-1.1.0-lib
bzip2-1.0.8
bzip2-1.0.8-bin
bzip2-1.0.8-dev
clang-rocm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
clang-rocm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
clang-rocm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-lib
clr-6.3.0
clr-6.3.0-icd
cmake-3.30.5
compiler-rt-libc-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
compiler-rt-libc-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
coreutils-9.5
curl-8.11.0
elfutils-0.191
expand-response-params
expat-2.6.4
expat-2.6.4-dev
find-xml-catalogs-hook
gcc-13.3.0
gcc-13.3.0-lib
gcc-13.3.0-libgcc
gcc-prefix
gcc-wrapper-13.3.0
gdbm-1.24
gdbm-1.24-dev
gdbm-1.24-lib
getopt-1.1.6
gfortran-13.3.0
gfortran-13.3.0-lib
gfortran-13.3.0-libgcc
gfortran-wrapper-13.3.0
glibc-2.40-36
glibc-2.40-36-bin
glibc-2.40-36-dev
gmp-6.3.0
gmp-with-cxx-6.3.0
gnugrep-3.11
hipblas-6.3.0
hipblas-common-unstable
hipblaslt-6.3.0
hipcub-6.3.0
hipfft-6.3.0
hipfort-6.3.0
hipify-6.3.0
hiprand-6.3.0
hipsolver-6.3.0
hipsparse-6.3.0
hwdata-0.388
hwloc-2.11.2-lib
isl-0.20
keyutils-1.6.3-lib
krb5-1.21.3-lib
libarchive-3.7.7-lib
libcap-2.70-lib
libdrm-2.4.123
libevent-2.1.12
libfabric-1.22.0
libffi-3.4.6
libffi-3.4.6-dev
libgcrypt-1.10.3-lib
libglvnd-1.7.0
libgpg-error-1.50
libidn2-2.3.7
libmpc-1.3.1
libnl-3.10.0
libpciaccess-0.18.1
libpfm-4.13.0
libpsl-0.21.5
libpsm2-12.0.1
libsodium-1.0.20
libssh2-1.11.1
libunistring-1.2
libunwind-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
libunwind-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
libuv-1.48.0
libX11-1.8.10
libXau-1.0.11
libxcb-1.17.0
libxcrypt-4.4.36
libXdmcp-1.1.5
libXext-1.3.6
libxml2-2.13.4
libxml2-2.13.4-bin
libxml2-2.13.4-dev
libyaml-0.2.5
linux-headers-6.10
lld-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
lld-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
lld-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-lib
llhttp-9.2.1
llvm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
llvm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-dev
llvm-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec-lib
llvm-binutils-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
llvm-binutils-wrapper-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
lsb_release
mailcap-2.1.54
miopen-6.3.0
miopen-gfx1030.kdb
miopen-gfx900.kdb
miopen-gfx906.kdb
miopen-gfx908.kdb
miopen-gfx90a.kdb
mpdecimal-4.0.0
mpdecimal-4.0.0-cxx
mpdecimal-4.0.0-dev
mpfr-4.2.1
mpich-4.2.3
mpich-4.2.3-doc
mpich-4.2.3-man
munge-0.5.16
ncurses-6.4.20221231
ncurses-6.4.20221231-dev
ncurses-6.4.20221231-man
nghttp2-1.64.0-lib
nss-cacert-3.104
numactl-2.0.18
openblas-0.3.28
openmp-18.0.0-d6e55e17f328a495bc32fddb7826e673ac9766ec
openmpi-5.0.6
openmpi-5.0.6-man
openssl-3.3.2
openssl-3.3.2-bin
openssl-3.3.2-dev
pcre2-10.44
perl-5.40.0
pkg-config-0.29.2
pkg-config-wrapper-0.29.2
pmix-5.0.4
prrte-3.0.7
publicsuffix-list-0-unstable-2024-10-25
python3.12-aiodns-3.2.0
python3.12-aiohappyeyeballs-2.4.2
python3.12-aiohttp-3.10.10
python3.12-aiosignal-1.3.1
python3.12-async-timeout-4.0.3
python3.12-attrs-24.2.0
python3.12-bcrypt-4.2.0
python3.12-brotli-1.1.0
python3.12-brotlicffi-1.1.0.0
python3.12-certifi-2024.08.30
python3.12-cffi-1.17.1
python3.12-charset-normalizer-3.3.2
python3.12-ck4inductor-6.4.0a20241217
python3.12-cryptography-43.0.1
python3.12-filelock-3.16.1
python3.12-frozenlist-1.4.1
python3.12-fsspec-2024.3.0
python3.12-huggingface-hub-0.26.2
python3.12-idna-3.10
python3.12-joblib-1.4.2
python3.12-lz4-4.3.3
python3.12-markdown-it-py-3.0.0
python3.12-mdurl-0.1.2
python3.12-msgpack-1.1.0
python3.12-multidict-6.1.0
python3.12-numpy-1.26.4
python3.12-orjson-3.10.7
python3.12-packaging-24.1
python3.12-pandas-2.2.3
python3.12-paramiko-3.5.0
python3.12-pip-24.0
python3.12-psutil-6.0.0
python3.12-pycares-4.4.0
python3.12-pycparser-2.22
python3.12-pygments-2.18.0
python3.12-pynacl-1.5.0
python3.12-pyspnego-0.11.1
python3.12-python-dateutil-2.9.0.post0
python3.12-pytz-2024.2
python3.12-pyyaml-6.0.2
python3.12-requests-2.32.3
python3.12-rich-13.8.1
python3.12-simplejson-3.19.3
python3.12-six-1.16.0
python3.12-smbprotocol-1.14.0
python3.12-tensile-6.3.0
python3.12-tensilelite-6.3.0
python3.12-torch-2.6.0a-nightly-20241218
python3.12-torch-2.6.0a-nightly-20241218-lib
python3.12-tqdm-4.66.5
python3.12-typing-extensions-4.12.2
python3.12-tzdata-2024.2
python3.12-ujson-5.10.0
python3.12-urllib3-2.2.3
python3.12-yarl-1.13.1
python3.12-zstd-1.5.5.1
python3-3.12.7
rccl-6.3.0
rdma-core-54.0
readline-8.2p13
readline-8.2p13-dev
rhash-1.4.4
rocalution-6.3.0
rocblas-6.3.0
rocfft-6.3.0
rocm-comgr-6.3.0
rocm-core-6.3.0
rocmcxx
rocm-device-libs-6.3.0
rocminfo-6.3.0
rocm-llvm-merge
rocm-merged
rocm-runtime-6.3.0
rocm-smi-6.3.0
rocprim-6.3.0
rocprofiler-register-6.3.0
rocrand-6.3.0
rocsolver-6.3.0
rocsparse-6.3.0
rocthrust-6.3.0
roctracer-6.3.0
shell-deps
source
sqlite-3.46.1
sqlite-3.46.1-bin
sqlite-3.46.1-dev
strip.sh
systemd-minimal-libs-256.7
tzdata-2024b
ucc-1.3.0
ucx-1.17.0
unpack-composable_kernel-6.4.0a20241217
util-linux-minimal-2.39.4-lib
xgcc-13.3.0-libgcc
xz-5.6.3
xz-5.6.3-bin
xz-5.6.3-dev
zlib-1.3.1
zlib-1.3.1-dev
zstd-1.5.6
zstd-1.5.6-bin
zstd-1.5.6-dev
Collecting environment information...
PyTorch version: 2.6.0a.post20241218
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-0
OS: NixOS 25.05 (Warbler) (x86_64)
GCC version: Could not collect
Clang version: 18.0.0git
CMake version: version 3.30.5
Libc version: glibc-2.40
Python version: 3.12.7 (main, Oct 1 2024, 02:05:46) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.12.4-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI100 (gfx908:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD Eng Sample: 100-000000425_37/24_N
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 35%
CPU max MHz: 5616.0000
CPU min MHz: 400.0000
BogoMIPS: 7400.01
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.6.0a0.post20241218
[pip3] triton==3.2.0
[conda] Could not collect
```
</details>
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @chauhang @penguinwu | module: rocm,triaged,oncall: pt2 | low | Critical |
2,753,609,087 | PowerToys | Crop and Lock broken for multiple screens. | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Crop and Lock
### Steps to reproduce
Crop a window and then move it to a screen with a different resolution. The window becomes blank until it is moved back to the previous screen. Only tested with a 1920x1080 and a 3440x1440 screen.
### ✔️ Expected Behavior
To have the same cropped window open when moving between the two screens.
### ❌ Actual Behavior
The cropped window becomes blank until it is moved back to the previous screen.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-CropAndLock | low | Critical |
2,753,610,677 | rust | ICE (delayed): broken MIR in DefId(...) (...): bad assignment (...): NoSolution | ### Code
(one file: "ice-mwe.rs")
```Rust
use std::ops::Add;
pub trait TimesTwo
where *const Self: Add<*const Self>,
{
extern "C" fn t2_ptr(slf: *const Self)
-> <*const Self as Add<*const Self>>::Output {
slf + slf
}
}
fn main(){}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (9e136a30a 2024-12-19)
binary: rustc
commit-hash: 9e136a30a965bf4e63f03095c57df7257bf96fd6
commit-date: 2024-12-19
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.6
```
### Error output
(exact command used: `rustc path/to/ice-mwe.rs`)
```
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: broken MIR in DefId(0:5 ~ ice_mwe[fdb9]::TimesTwo::t2_ptr) (_0 = Add(move _2, move _3)): bad assignment (Alias(Projection, AliasTy { args: [*const Self/#0, *const Self/#0], def_id: DefId(2:3495 ~ core[57d6]::ops::arith::Add::Output), .. }) = *const Self/#0): NoSolution
--> src/ice-mwe.rs:8:8
|
8 | slf + slf
| ^^^^^^^^^
|
note: delayed at compiler/rustc_borrowck/src/type_check/mod.rs:1157:21 - disabled backtrace
--> src/ice-mwe.rs:8:8
|
8 | slf + slf
| ^^^^^^^^^
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/niac/tmp/mwe/rustc-ice-2024-12-20T22_57_23-3180020.txt` to your bug report
query stack during panic:
end of query stack
```
<details><summary><strong>Backtrace</strong></summary>
<p>
(exact command used: `RUST_BACKTRACE=1 rustc path/to/ice-mwe.rs`)
(also downloadable as [rustc-ice-2024-12-20T22_54_18-3179078.txt](https://github.com/user-attachments/files/18216114/rustc-ice-2024-12-20T22_54_18-3179078.txt))
```
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: broken MIR in DefId(0:5 ~ ice_mwe[fdb9]::TimesTwo::t2_ptr) (_0 = Add(move _2, move _3)): bad assignment (Alias(Projection, AliasTy { args: [*const Self/#0, *const Self/#0], def_id: DefId(2:3495 ~ core[57d6]::ops::arith::Add::Output), .. }) = *const Self/#0): NoSolution
--> src/ice-mwe.rs:8:8
|
8 | slf + slf
| ^^^^^^^^^
|
note: delayed at compiler/rustc_borrowck/src/type_check/mod.rs:1157:21
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::span_delayed_bug::<rustc_span::span_encoding::Span, alloc::string::String>
4: <rustc_borrowck::type_check::TypeChecker>::typeck_mir
5: rustc_borrowck::type_check::type_check
6: rustc_borrowck::nll::compute_regions
7: rustc_borrowck::do_mir_borrowck
8: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
9: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_data_structures::vec_cache::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>, rustc_query_system::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
10: rustc_query_impl::query_impl::mir_borrowck::get_query_non_incr::__rust_end_short_backtrace
11: rustc_interface::passes::run_required_analyses
12: rustc_interface::passes::analysis
13: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 0]>>
14: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 0]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
15: rustc_query_impl::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
16: rustc_interface::passes::create_and_enter_global_ctxt::<core::option::Option<rustc_interface::queries::Linker>, rustc_driver_impl::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
17: rustc_interface::interface::run_compiler::<(), rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
18: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<(), rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
19: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<(), rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
20: std::sys::pal::unix::thread::Thread::new::thread_start
21: start_thread
22: clone3
--> src/ice-mwe.rs:8:8
|
8 | slf + slf
| ^^^^^^^^^
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/niac/tmp/mwe/rustc-ice-2024-12-20T22_54_18-3179078.txt` to your bug report
query stack during panic:
end of query stack
```
</p>
</details>
| I-ICE,T-compiler,C-bug,S-bug-has-test,T-types | low | Critical |
2,753,613,703 | react-native | My app crashes on TestFlight but works fine on the simulator: facebook::yoga::measureNodeWithMeasureFunc(facebook::yoga::Node* | See the crash log from TestFlight below.
**Expected Behaviour**
I am trying to show some data from a dummy JSON file using a FlatList. It works absolutely fine on the simulator, but when the app is uploaded to TestFlight it crashes on that screen.
The crash points to this frame:
CybaProject 0x10132f360 facebook::yoga::measureNodeWithMeasureFunc(facebook::yoga::Node*, float, float, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, facebook::yoga::LayoutData&, facebook::yoga::La... + 244 (CalculateLayout.cpp:311)
Below is the full crash log:
Incident Identifier: 5600E386-14C9-44A2-9381-2723CC6CC193
Distributor ID: com.apple.TestFlight
Hardware Model: iPhone17,1
Process: CybaProject [66305]
Path: /private/var/containers/Bundle/Application/FA653CA2-8512-4B00-95FD-E293558F00C7/CybaProject.app/CybaProject
Identifier: io.syba.syba
Version: 1.37.5 (111)
AppStoreTools: 16C5031b
AppVariant: 1:iPhone17,1:18
Beta: YES
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Coalition: io.syba.syba [13929]
Date/Time: 2024-12-20 13:29:28.7535 +0100
Launch Time: 2024-12-20 13:29:27.1328 +0100
OS Version: iPhone OS 18.1.1 (22B91)
Release Type: User
Baseband Version: 1.11.01
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Reason: -[__NSFrozenDictionaryM isEqualToString:]: unrecognized selector sent to instance 0x303691da0
Termination Reason: SIGNAL 6 Abort trap: 6
Terminating Process: CybaProject [66305]
Triggered by Thread: 8
Last Exception Backtrace:
0 CoreFoundation 0x19ee3c7cc __exceptionPreprocess + 164 (NSException.m:249)
1 libobjc.A.dylib 0x19c10f2e4 objc_exception_throw + 88 (objc-exception.mm:356)
2 CoreFoundation 0x19ef428c8 -[NSObject(NSObject) doesNotRecognizeSelector:] + 364 (NSObject.m:162)
3 CoreFoundation 0x19eddab08 ___forwarding___ + 1560 (NSForwarding.m:3612)
4 CoreFoundation 0x19edda430 _CF_forwarding_prep_0 + 96 (:-1)
5 CybaProject 0x10113dd58 +[RCTFont updateFont:withFamily:size:weight:style:variant:scaleMultiplier:] + 560 (RCTFont.mm:426)
6 CybaProject 0x1012c7f1c -[RCTTextAttributes effectiveFont] + 128 (RCTTextAttributes.mm:220)
7 CybaProject 0x1012c7b00 -[RCTTextAttributes effectiveTextAttributes] + 64 (RCTTextAttributes.mm:150)
8 CybaProject 0x1012c3cc0 -[RCTBaseTextShadowView attributedTextWithBaseTextAttributes:] + 472 (RCTBaseTextShadowView.mm:101)
9 CybaProject 0x1012c9938 -[RCTTextShadowView attributedTextWithMeasuredAttachmentsThatFitSize:] + 80 (RCTTextShadowView.mm:179)
10 CybaProject 0x1012c9cb8 -[RCTTextShadowView textStorageAndLayoutManagerThatFitsSize:exclusiveOwnership:] + 312 (RCTTextShadowView.mm:227)
11 CybaProject 0x1012c8d2c RCTTextShadowViewMeasure(YGNode const*, float, YGMeasureMode, float, YGMeasureMode) + 132 (RCTTextShadowView.mm:385)
12 CybaProject 0x10132f360 facebook::yoga::measureNodeWithMeasureFunc(facebook::yoga::Node*, float, float, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, facebook::yoga::LayoutData&, facebook::yoga::La... + 244 (CalculateLayout.cpp:311)
13 CybaProject 0x10132f360 facebook::yoga::calculateLayoutImpl(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::Layou... + 836 (CalculateLayout.cpp:1273)
14 CybaProject 0x10132f360 facebook::yoga::calculateLayoutInternal(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::L... + 1984 (CalculateLayout.cpp:2219)
15 CybaProject 0x101330880 facebook::yoga::computeFlexBasisForChild(facebook::yoga::Node const*, facebook::yoga::Node*, float, facebook::yoga::SizingMode, float, float, float, facebook::yoga::SizingMode, facebook::yoga::Dire... + 1924 (CalculateLayout.cpp:232)
16 CybaProject 0x101330880 facebook::yoga::computeFlexBasisForChildren(facebook::yoga::Node*, float, float, facebook::yoga::SizingMode, facebook::yoga::SizingMode, facebook::yoga::Direction, facebook::yoga::FlexDirection, bo... + 2228 (CalculateLayout.cpp:543)
17 CybaProject 0x101330880 facebook::yoga::calculateLayoutImpl(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::Layou... + 3568 (CalculateLayout.cpp:1371)
18 CybaProject 0x101330880 facebook::yoga::calculateLayoutInternal(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::L... + 7392 (CalculateLayout.cpp:2219)
19 CybaProject 0x1013321bc facebook::yoga::calculateLayoutImpl(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::Layou... + 6200 (CalculateLayout.cpp:1679)
20 CybaProject 0x1013321bc facebook::yoga::calculateLayoutInternal(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::L... + 13852 (CalculateLayout.cpp:2219)
21 CybaProject 0x10132c9cc facebook::yoga::layoutAbsoluteChild(facebook::yoga::Node const*, facebook::yoga::Node const*, facebook::yoga::Node*, float, float, facebook::yoga::SizingMode, facebook::yoga::Direction, facebook::y... + 2332 (AbsoluteLayout.cpp:440)
22 CybaProject 0x10132d730 facebook::yoga::layoutAbsoluteDescendants(facebook::yoga::Node*, facebook::yoga::Node*, facebook::yoga::SizingMode, facebook::yoga::Direction, facebook::yoga::LayoutData&, unsigned int, unsigned in... + 320 (AbsoluteLayout.cpp:503)
23 CybaProject 0x101333254 facebook::yoga::calculateLayoutImpl(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::Layou... + 10448 (CalculateLayout.cpp:2076)
24 CybaProject 0x101333254 facebook::yoga::calculateLayoutInternal(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::L... + 18100 (CalculateLayout.cpp:2219)
25 CybaProject 0x10132c9cc facebook::yoga::layoutAbsoluteChild(facebook::yoga::Node const*, facebook::yoga::Node const*, facebook::yoga::Node*, float, float, facebook::yoga::SizingMode, facebook::yoga::Direction, facebook::y... + 2332 (AbsoluteLayout.cpp:440)
26 CybaProject 0x10132d730 facebook::yoga::layoutAbsoluteDescendants(facebook::yoga::Node*, facebook::yoga::Node*, facebook::yoga::SizingMode, facebook::yoga::Direction, facebook::yoga::LayoutData&, unsigned int, unsigned in... + 320 (AbsoluteLayout.cpp:503)
27 CybaProject 0x101333254 facebook::yoga::calculateLayoutImpl(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::Layou... + 10448 (CalculateLayout.cpp:2076)
28 CybaProject 0x101333254 facebook::yoga::calculateLayoutInternal(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::L... + 18100 (CalculateLayout.cpp:2219)
29 CybaProject 0x1013321bc facebook::yoga::calculateLayoutImpl(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::Layou... + 6200 (CalculateLayout.cpp:1679)
30 CybaProject 0x1013321bc facebook::yoga::calculateLayoutInternal(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::L... + 13852 (CalculateLayout.cpp:2219)
31 CybaProject 0x10132c9cc facebook::yoga::layoutAbsoluteChild(facebook::yoga::Node const*, facebook::yoga::Node const*, facebook::yoga::Node*, float, float, facebook::yoga::SizingMode, facebook::yoga::Direction, facebook::y... + 2332 (AbsoluteLayout.cpp:440)
32 CybaProject 0x10132d730 facebook::yoga::layoutAbsoluteDescendants(facebook::yoga::Node*, facebook::yoga::Node*, facebook::yoga::SizingMode, facebook::yoga::Direction, facebook::yoga::LayoutData&, unsigned int, unsigned in... + 320 (AbsoluteLayout.cpp:503)
33 CybaProject 0x101333254 facebook::yoga::calculateLayoutImpl(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::Layou... + 10448 (CalculateLayout.cpp:2076)
34 CybaProject 0x101333254 facebook::yoga::calculateLayoutInternal(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::L... + 18100 (CalculateLayout.cpp:2219)
35 CybaProject 0x1013321bc facebook::yoga::calculateLayoutImpl(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::Layou... + 6200 (CalculateLayout.cpp:1679)
36 CybaProject 0x1013321bc facebook::yoga::calculateLayoutInternal(facebook::yoga::Node*, float, float, facebook::yoga::Direction, facebook::yoga::SizingMode, facebook::yoga::SizingMode, float, float, bool, facebook::yoga::L... + 13852 (CalculateLayout.cpp:2219)
37 CybaProject 0x1013338b4 facebook::yoga::calculateLayout(facebook::yoga::Node*, float, float, facebook::yoga::Direction) + 1084 (CalculateLayout.cpp:2350)
38 CybaProject 0x10115d8ac -[RCTShadowView layoutWithMinimumSize:maximumSize:layoutDirection:layoutContext:] + 200 (RCTShadowView.m:272)
39 CybaProject 0x101153c28 -[RCTRootShadowView layoutWithAffectedShadowViews:] + 132 (RCTRootShadowView.m:35)
40 CybaProject 0x10116715c -[RCTUIManager uiBlockWithLayoutUpdateForRootView:] + 112 (RCTUIManager.m:549)
41 CybaProject 0x10116a594 -[RCTUIManager _layoutAndMount] + 196 (RCTUIManager.m:1126)
42 CybaProject 0x1011346d0 __32-[RCTCxxBridge batchDidComplete]_block_invoke + 36 (RCTCxxBridge.mm:1514)
43 libdispatch.dylib 0x1a6b10370 _dispatch_call_block_and_release + 32 (init.c:1549)
44 libdispatch.dylib 0x1a6b120d0 _dispatch_client_callout + 20 (object.m:576)
45 libdispatch.dylib 0x1a6b196d8 _dispatch_lane_serial_drain + 744 (queue.c:3934)
46 libdispatch.dylib 0x1a6b1a1e0 _dispatch_lane_invoke + 380 (queue.c:4025)
47 libdispatch.dylib 0x1a6b25258 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:7193)
48 libdispatch.dylib 0x1a6b24aa4 _dispatch_workloop_worker_thread + 540 (queue.c:6787)
49 libsystem_pthread.dylib 0x227347c7c _pthread_wqthread + 288 (pthread.c:2696)
50 libsystem_pthread.dylib 0x227344488 start_wqthread + 8 (:-1)
Thread 0 name:
Thread 0:
0 libsystem_kernel.dylib 0x00000001ef1ce688 mach_msg2_trap + 8 (:-1)
1 libsystem_kernel.dylib 0x00000001ef1d1d98 mach_msg2_internal + 80 (mach_msg.c:201)
2 libsystem_kernel.dylib 0x00000001ef1d1cb0 mach_msg_overwrite + 424 (mach_msg.c:0)
3 libsystem_kernel.dylib 0x00000001ef1d1afc mach_msg + 24 (mach_msg.c:323)
4 CoreFoundation 0x000000019ee0da84 __CFRunLoopServiceMachPort + 160 (CFRunLoop.c:2637)
5 CoreFoundation 0x000000019ee0d130 __CFRunLoopRun + 1212 (CFRunLoop.c:3021)
6 CoreFoundation 0x000000019ee0c830 CFRunLoopRunSpecific + 588 (CFRunLoop.c:3434)
7 GraphicsServices 0x00000001eadec1c4 GSEventRunModal + 164 (GSEvent.c:2196)
8 UIKitCore 0x00000001a1972eb0 -[UIApplication _run] + 816 (UIApplication.m:3844)
9 UIKitCore 0x00000001a1a215b4 UIApplicationMain + 340 (UIApplication.m:5496)
10 CybaProject 0x0000000100fac180 main + 76 (main.m:8)
11 dyld 0x00000001c47faec8 start + 2724 (dyldMain.cpp:1334)
Thread 1:
0 libsystem_pthread.dylib 0x0000000227344480 start_wqthread + 0 (:-1)
Thread 2:
0 libsystem_pthread.dylib 0x0000000227344480 start_wqthread + 0 (:-1)
Thread 3:
0 libsystem_pthread.dylib 0x0000000227344480 start_wqthread + 0 (:-1)
Thread 4:
0 libsystem_pthread.dylib 0x0000000227344480 start_wqthread + 0 (:-1)
Thread 5:
0 libsystem_pthread.dylib 0x0000000227344480 start_wqthread + 0 (:-1)
Thread 6:
0 libsystem_pthread.dylib 0x0000000227344480 start_wqthread + 0 (:-1)
Thread 7:
0 libsystem_pthread.dylib 0x0000000227344480 start_wqthread + 0 (:-1)
Thread 8 name:
Thread 8 Crashed:
0 libsystem_kernel.dylib 0x00000001ef1d91d4 __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x000000022734aef8 pthread_kill + 268 (pthread.c:1721)
2 libsystem_c.dylib 0x00000001a6bcbad8 abort + 128 (abort.c:122)
3 libc++abi.dylib 0x00000002271595b8 abort_message + 132 (abort_message.cpp:78)
4 libc++abi.dylib 0x0000000227147bac demangling_terminate_handler() + 348 (cxa_default_handlers.cpp:77)
5 libobjc.A.dylib 0x000000019c12ae14 _objc_terminate() + 156 (objc-exception.mm:496)
6 libc++abi.dylib 0x000000022715887c std::__terminate(void (*)()) + 16 (cxa_handlers.cpp:59)
7 libc++abi.dylib 0x0000000227158820 std::terminate() + 108 (cxa_handlers.cpp:88)
8 libdispatch.dylib 0x00000001a6b120e4 _dispatch_client_callout + 40 (object.m:579)
9 libdispatch.dylib 0x00000001a6b196d8 _dispatch_lane_serial_drain + 744 (queue.c:3934)
10 libdispatch.dylib 0x00000001a6b1a1e0 _dispatch_lane_invoke + 380 (queue.c:4025)
11 libdispatch.dylib 0x00000001a6b25258 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:7193)
12 libdispatch.dylib 0x00000001a6b24aa4 _dispatch_workloop_worker_thread + 540 (queue.c:6787)
13 libsystem_pthread.dylib 0x0000000227347c7c _pthread_wqthread + 288 (pthread.c:2696)
14 libsystem_pthread.dylib 0x0000000227344488 start_wqthread + 8 (:-1)
Thread 9 name:
Thread 9:
0 libsystem_kernel.dylib 0x00000001ef1ce688 mach_msg2_trap + 8 (:-1)
1 libsystem_kernel.dylib 0x00000001ef1d1d98 mach_msg2_internal + 80 (mach_msg.c:201)
2 libsystem_kernel.dylib 0x00000001ef1d1cb0 mach_msg_overwrite + 424 (mach_msg.c:0)
3 libsystem_kernel.dylib 0x00000001ef1d1afc mach_msg + 24 (mach_msg.c:323)
4 CoreFoundation 0x000000019ee0da84 __CFRunLoopServiceMachPort + 160 (CFRunLoop.c:2637)
5 CoreFoundation 0x000000019ee0d130 __CFRunLoopRun + 1212 (CFRunLoop.c:3021)
6 CoreFoundation 0x000000019ee0c830 CFRunLoopRunSpecific + 588 (CFRunLoop.c:3434)
7 Foundation 0x000000019dab4500 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212 (NSRunLoop.m:373)
8 Foundation 0x000000019dab4350 -[NSRunLoop(NSRunLoop) runUntilDate:] + 64 (NSRunLoop.m:420)
9 UIKitCore 0x00000001a1986358 -[UIEventFetcher threadMain] + 420 (UIEventFetcher.m:1241)
10 Foundation 0x000000019dac56c8 __NSThread__start__ + 724 (NSThread.m:991)
11 libsystem_pthread.dylib 0x000000022734937c _pthread_start + 136 (pthread.c:931)
12 libsystem_pthread.dylib 0x0000000227344494 thread_start + 8 (:-1)
Thread 10:
0 libsystem_pthread.dylib 0x0000000227344480 start_wqthread + 0 (:-1)
Thread 11:
0 libsystem_pthread.dylib 0x0000000227344480 start_wqthread + 0 (:-1)
Thread 12 name:
Thread 12:
0 hermes 0x000000010255b658 hermes::hbc::RuntimeFunctionHeader::isLarge() const + 0 (BytecodeDataProvider.h:80)
1 hermes 0x000000010255b658 hermes::hbc::RuntimeFunctionHeader::frameSize() const + 0 (BytecodeDataProvider.h:70)
2 hermes 0x000000010255b658 hermes::vm::CodeBlock::getFrameSize() const + 0 (CodeBlock.h:137)
3 hermes 0x000000010255b658 hermes::vm::CallResult<hermes::vm::HermesValue, (hermes::vm::detail::CallResultSpecialize)2> hermes::vm::Interpreter::interpretFunction<false, false>(hermes::vm::Runtime&, hermes::vm::InterpreterSt... + 408 (Interpreter.cpp:1024)
4 hermes 0x000000010255b498 hermes::vm::Runtime::interpretFunctionImpl(hermes::vm::CodeBlock*) + 52 (Interpreter.cpp:825)
5 hermes 0x000000010254e524 hermes::vm::JSFunction::_callImpl(hermes::vm::Handle<hermes::vm::Callable>, hermes::vm::Runtime&) + 40 (Callable.cpp:1123)
6 hermes 0x000000010254d730 hermes::vm::Callable::call(hermes::vm::Handle<hermes::vm::Callable>, hermes::vm::Runtime&) + 44 (Callable.h:253)
7 hermes 0x000000010254d730 hermes::vm::Callable::executeCall(hermes::vm::Handle<hermes::vm::Callable>, hermes::vm::Runtime&, hermes::vm::Handle<hermes::vm::HermesValue>, hermes::vm::Handle<hermes::vm::HermesValue>, hermes::v... + 1004 (Callable.cpp:357)
8 hermes 0x00000001025de098 hermes::vm::functionPrototypeApply(void*, hermes::vm::Runtime&, hermes::vm::NativeArgs) + 344 (:-1)
9 hermes 0x000000010254e42c hermes::vm::NativeFunction::_nativeCall(hermes::vm::NativeFunction*, hermes::vm::Runtime&) + 144 (Callable.h:507)
10 hermes 0x000000010255a4a0 hermes::vm::Interpreter::handleCallSlowPath(hermes::vm::Runtime&, hermes::vm::PinnedHermesValue*) + 60 (Interpreter.cpp:274)
11 hermes 0x000000010255bea8 hermes::vm::CallResult<hermes::vm::HermesValue, (hermes::vm::detail::CallResultSpecialize)2> hermes::vm::Interpreter::interpretFunction<false, false>(hermes::vm::Runtime&, hermes::vm::InterpreterSt... + 2536 (Interpreter.cpp:1620)
12 hermes 0x000000010255b498 hermes::vm::Runtime::interpretFunctionImpl(hermes::vm::CodeBlock*) + 52 (Interpreter.cpp:825)
13 hermes 0x000000010254e524 hermes::vm::JSFunction::_callImpl(hermes::vm::Handle<hermes::vm::Callable>, hermes::vm::Runtime&) + 40 (Callable.cpp:1123)
14 hermes 0x000000010254e114 hermes::vm::Callable::call(hermes::vm::Handle<hermes::vm::Callable>, hermes::vm::Runtime&) + 40 (Callable.h:253)
15 hermes 0x000000010254e114 hermes::vm::BoundFunction::_boundCall(hermes::vm::BoundFunction*, hermes::inst::Inst const*, hermes::vm::Runtime&) + 412 (Callable.cpp:779)
16 hermes 0x0000000102537d50 hermes::vm::Callable::call(hermes::vm::Handle<hermes::vm::Callable>, hermes::vm::Runtime&) + 44 (Callable.h:253)
17 hermes 0x0000000102537d50 facebook::hermes::HermesRuntimeImpl::call(facebook::jsi::Function const&, facebook::jsi::Value const&, facebook::jsi::Value const*, unsigned long) + 284 (hermes.cpp:2198)
18 CybaProject 0x000000010130814c facebook::jsi::Function::call(facebook::jsi::Runtime&, facebook::jsi::Value const*, unsigned long) const + 44 (jsi-inl.h:264)
19 CybaProject 0x000000010130814c facebook::jsi::Function::call(facebook::jsi::Runtime&, std::initializer_list<facebook::jsi::Value>) const + 44 (jsi-inl.h:269)
20 CybaProject 0x000000010130814c facebook::jsi::Value facebook::jsi::Function::call<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<ch... + 244 (jsi-inl.h:277)
21 CybaProject 0x0000000101307fb4 facebook::react::JSIExecutor::callFunction(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std... + 52 (JSIExecutor.cpp:234)
22 CybaProject 0x0000000101307fb4 decltype(std::declval<facebook::react::JSIExecutor::callFunction(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::... + 52 (invoke.h:344)
23 CybaProject 0x0000000101307fb4 void std::__1::__invoke_void_return_wrapper<void, true>::__call[abi:ne180100]<facebook::react::JSIExecutor::callFunction(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocato... + 52 (invoke.h:419)
24 CybaProject 0x0000000101307fb4 std::__1::__function::__alloc_func<facebook::react::JSIExecutor::callFunction(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<cha... + 52 (function.h:169)
25 CybaProject 0x0000000101307fb4 std::__1::__function::__func<facebook::react::JSIExecutor::callFunction(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std... + 84 (function.h:311)
26 CybaProject 0x0000000101135ae4 decltype(std::declval<void (*&)(std::__1::function<void ()> const&, std::__1::function<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> ()>)>()(std::declval<std:... + 32 (invoke.h:344)
27 CybaProject 0x0000000101135ae4 void std::__1::__invoke_void_return_wrapper<void, true>::__call[abi:ne180100]<void (*&)(std::__1::function<void ()> const&, std::__1::function<std::__1::basic_string<char, std::__1::char_traits<cha... + 72 (invoke.h:419)
28 CybaProject 0x0000000101305a88 std::__1::__function::__value_func<void (std::__1::function<void ()> const&, std::__1::function<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> ()>)>::operator(... + 32 (function.h:428)
29 CybaProject 0x0000000101305a88 std::__1::function<void (std::__1::function<void ()> const&, std::__1::function<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> ()>)>::operator()(std::__1::func... + 32 (function.h:981)
30 CybaProject 0x0000000101305a88 facebook::react::JSIExecutor::callFunction(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std... + 440 (JSIExecutor.cpp:232)
31 CybaProject 0x00000001012ef880 std::__1::__function::__value_func<void (facebook::react::JSExecutor*)>::operator()[abi:ne180100](facebook::react::JSExecutor*&&) const + 24 (function.h:428)
32 CybaProject 0x00000001012ef880 std::__1::function<void (facebook::react::JSExecutor*)>::operator()(facebook::react::JSExecutor*) const + 24 (function.h:981)
33 CybaProject 0x00000001012ef880 facebook::react::NativeToJsBridge::runOnExecutorQueue(std::__1::function<void (facebook::react::JSExecutor*)>&&)::$_0::operator()() const + 48 (NativeToJsBridge.cpp:308)
34 CybaProject 0x00000001012ef880 decltype(std::declval<facebook::react::NativeToJsBridge::runOnExecutorQueue(std::__1::function<void (facebook::react::JSExecutor*)>&&)::$_0&>()()) std::__1::__invoke[abi:ne180100]<facebook::react::... + 48 (invoke.h:344)
35 CybaProject 0x00000001012ef880 void std::__1::__invoke_void_return_wrapper<void, true>::__call[abi:ne180100]<facebook::react::NativeToJsBridge::runOnExecutorQueue(std::__1::function<void (facebook::react::JSExecutor*)>&&)::$_0&>... + 48 (invoke.h:419)
36 CybaProject 0x00000001012ef880 std::__1::__function::__alloc_func<facebook::react::NativeToJsBridge::runOnExecutorQueue(std::__1::function<void (facebook::react::JSExecutor*)>&&)::$_0, std::__1::allocator<facebook::react::Native... + 48 (function.h:169)
37 CybaProject 0x00000001012ef880 std::__1::__function::__func<facebook::react::NativeToJsBridge::runOnExecutorQueue(std::__1::function<void (facebook::react::JSExecutor*)>&&)::$_0, std::__1::allocator<facebook::react::NativeToJsBr... + 60 (function.h:311)
38 CybaProject 0x0000000101138494 std::__1::__function::__value_func<void ()>::operator()[abi:ne180100]() const + 20 (function.h:428)
39 CybaProject 0x0000000101138494 std::__1::function<void ()>::operator()() const + 20 (function.h:981)
40 CybaProject 0x0000000101138494 facebook::react::tryAndReturnError(std::__1::function<void ()> const&) + 32 (RCTCxxUtils.mm:73)
41 CybaProject 0x0000000101145064 facebook::react::RCTMessageThread::tryFunc(std::__1::function<void ()> const&) + 24 (RCTMessageThread.mm:68)
42 CybaProject 0x0000000101144e68 std::__1::__function::__value_func<void ()>::operator()[abi:ne180100]() const + 20 (function.h:428)
43 CybaProject 0x0000000101144e68 std::__1::function<void ()>::operator()() const + 20 (function.h:981)
44 CybaProject 0x0000000101144e68 invocation function for block in facebook::react::RCTMessageThread::runAsync(std::__1::function<void ()>) + 44 (RCTMessageThread.mm:44)
45 CoreFoundation 0x000000019ee1f6e4 __CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__ + 28 (CFRunLoop.c:1818)
46 CoreFoundation 0x000000019ee0d910 __CFRunLoopDoBlocks + 356 (CFRunLoop.c:1860)
47 CoreFoundation 0x000000019ee0cfd4 __CFRunLoopRun + 864 (CFRunLoop.c:2971)
48 CoreFoundation 0x000000019ee0c830 CFRunLoopRunSpecific + 588 (CFRunLoop.c:3434)
49 CybaProject 0x000000010112e414 +[RCTCxxBridge runRunLoop] + 212 (RCTCxxBridge.mm:326)
50 Foundation 0x000000019dac56c8 __NSThread__start__ + 724 (NSThread.m:991)
51 libsystem_pthread.dylib 0x000000022734937c _pthread_start + 136 (pthread.c:931)
52 libsystem_pthread.dylib 0x0000000227344494 thread_start + 8 (:-1)
Thread 13 name:
Thread 13:
0 libsystem_kernel.dylib 0x00000001ef1d3f90 __psynch_cvwait + 8 (:-1)
1 libsystem_pthread.dylib 0x0000000227346a50 _pthread_cond_wait + 1204 (pthread_cond.c:862)
2 libc++.1.dylib 0x00000001af3df584 std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) + 28 (condition_variable.cpp:30)
3 hermes 0x00000001025f7f60 hermes::vm::HadesGC::Executor::worker() + 116 (:-1)
4 hermes 0x00000001025f7ec8 void* std::__1::__thread_proxy[abi:v160006]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, hermes::vm::HadesGC::Executor::Exec... + 44 (:-1)
5 libsystem_pthread.dylib 0x000000022734937c _pthread_start + 136 (pthread.c:931)
6 libsystem_pthread.dylib 0x0000000227344494 thread_start + 8 (:-1)
Thread 8 crashed with ARM Thread State (64-bit):
x0: 0x0000000000000000 x1: 0x0000000000000000 x2: 0x0000000000000000 x3: 0x0000000000000000
x4: 0x000000022715df3b x5: 0x000000016f2b6800 x6: 0x000000000000006e x7: 0x0000000000000000
x8: 0x5091d50d56dafcfc x9: 0x5091d50c39f18cfc x10: 0x0000000000000051 x11: 0x000000000000000b
x12: 0x000000000000000b x13: 0x000000019f266bbc x14: 0x00000000001ff800 x15: 0x00000000000007fb
x16: 0x0000000000000148 x17: 0x000000016f2b7000 x18: 0x0000000000000000 x19: 0x0000000000000006
x20: 0x0000000000003607 x21: 0x000000016f2b70e0 x22: 0x0000000000000114 x23: 0x000000016f2b70e0
x24: 0x00000003018d29e8 x25: 0x000000030238f700 x26: 0x0000000000000000 x27: 0x000000030238f700
x28: 0x0000000000000000 fp: 0x000000016f2b6770 lr: 0x000000022734aef8
sp: 0x000000016f2b6750 pc: 0x00000001ef1d91d4 cpsr: 0x40000000
esr: 0x56000080 Address size fault
Binary Images:
0x100fa4000 - 0x101ad7fff CybaProject arm64 <e9948a80aa393fa891d28b77c816d855> /private/var/containers/Bundle/Application/FA653CA2-8512-4B00-95FD-E293558F00C7/CybaProject.app/CybaProject
0x101edc000 - 0x101ee7fff libobjc-trampolines.dylib arm64e <35a44678195b39c2bdd7072893564b45> /private/preboot/Cryptexes/OS/usr/lib/libobjc-trampolines.dylib
0x10252c000 - 0x1026fffff hermes arm64 <a56cdaaf202733d2a806ba11e8c1b3ef> /private/var/containers/Bundle/Application/FA653CA2-8512-4B00-95FD-E293558F00C7/CybaProject.app/Frameworks/hermes.framework/hermes
0x19c0f8000 - 0x19c148d5f libobjc.A.dylib arm64e <1608892e67db3f949fc291492b86c95f> /usr/lib/libobjc.A.dylib
0x19d9fd000 - 0x19e70afff Foundation arm64e <6d0212cc3b9e32c9be2072989ce3acb8> /System/Library/Frameworks/Foundation.framework/Foundation
0x19edba000 - 0x19f2fcfff CoreFoundation arm64e <1532d3d89b3b3f2fb35f55a20ddf411b> /System/Library/Frameworks/CoreFoundation.framework/CoreFoundation
0x1a15a0000 - 0x1a3473fff UIKitCore arm64e <575e5140fa6a37c2b00ba4eacedfda53> /System/Library/PrivateFrameworks/UIKitCore.framework/UIKitCore
0x1a6b0e000 - 0x1a6b53fff libdispatch.dylib arm64e <7de7ec03cfb7349d9b9e8782b38f231d> /usr/lib/system/libdispatch.dylib
0x1a6b54000 - 0x1a6bd3ff3 libsystem_c.dylib arm64e <0150f750db0a3f54b23ad21c55af8824> /usr/lib/system/libsystem_c.dylib
0x1af3be000 - 0x1af44bffb libc++.1.dylib arm64e <491f481bd014381c904eaed69c09f984> /usr/lib/libc++.1.dylib
0x1c47c7000 - 0x1c484a99f dyld arm64e <3060d36a16ce3c3a92583881459f5714> /usr/lib/dyld
0x1eadeb000 - 0x1eadf3fff GraphicsServices arm64e <8425ea11000e3e5e8abcbddf3ff3fa32> /System/Library/PrivateFrameworks/GraphicsServices.framework/GraphicsServices
0x1ef1cd000 - 0x1ef206ff3 libsystem_kernel.dylib arm64e <b9618c71c0cb31b6825f92a4737c890e> /usr/lib/system/libsystem_kernel.dylib
0x227146000 - 0x227160fff libc++abi.dylib arm64e <5e1a37143fad3ad7a23d61c4be170233> /usr/lib/libc++abi.dylib
0x227343000 - 0x22734fff3 libsystem_pthread.dylib arm64e <3ca98e388eee3c269862c5f66aad93c0> /usr/lib/system/libsystem_pthread.dylib
EOF
| Needs: Author Feedback,Needs: Repro | low | Critical |
2,753,617,516 | frp | [Feature Request] Add configurable port range for P2P mode | ### Describe the feature request
I would like to request a feature that allows explicitly defining a range of ports for P2P mode.
Currently, ports are selected using:
```
port := rand.IntN(65535-1024) + 1024
```
[source](https://github.com/fatedier/frp/blob/dev/pkg/nathole/nathole.go#L401)
This approach may result in conflicts if some ports in this range are already in use. Being able to specify a custom port range would provide more control and prevent such issues.
**Why this feature should be added**
1. Avoid port conflicts: users can prevent runtime issues by limiting the range to known available ports.
2. Flexibility: allows adaptation to specific environments or network policies.
**Proposed Solution**
Introduce a configuration option, such as p2pPortRange, where users can specify the allowed range of ports:
```
p2pPortRange = "29160-29200"
```
If not set, the default behavior remains unchanged (random selection from 1024–65535), ensuring backward compatibility.
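For illustration only, here is a minimal sketch of how the bounded selection could look. It is not frp's actual code: the `parsePortRange`/`pickPort` helpers and the `p2pPortRange` "min-max" format are just this proposal's assumptions.
```go
// Hypothetical sketch only: the helper names and the "min-max" string
// format are assumptions of this proposal, not existing frp APIs.
package main

import (
    "fmt"
    "math/rand/v2"
    "strconv"
    "strings"
)

// parsePortRange parses a range such as "29160-29200" into its bounds.
func parsePortRange(s string) (int, int, error) {
    parts := strings.SplitN(s, "-", 2)
    if len(parts) != 2 {
        return 0, 0, fmt.Errorf("invalid port range %q", s)
    }
    minPort, err := strconv.Atoi(strings.TrimSpace(parts[0]))
    if err != nil {
        return 0, 0, err
    }
    maxPort, err := strconv.Atoi(strings.TrimSpace(parts[1]))
    if err != nil {
        return 0, 0, err
    }
    if minPort < 1 || maxPort > 65535 || minPort > maxPort {
        return 0, 0, fmt.Errorf("port range %q out of bounds", s)
    }
    return minPort, maxPort, nil
}

// pickPort chooses a random port inside [minPort, maxPort]; when no
// range is configured it falls back to the current 1024-65535 behavior.
func pickPort(minPort, maxPort int) int {
    if minPort == 0 && maxPort == 0 {
        return rand.IntN(65535-1024) + 1024 // today's selection
    }
    return rand.IntN(maxPort-minPort+1) + minPort
}

func main() {
    minPort, maxPort, err := parsePortRange("29160-29200")
    if err != nil {
        panic(err)
    }
    fmt.Println(pickPort(minPort, maxPort))
}
```
A real implementation would presumably also need to handle the case where every port in the configured range is already bound (retry a few times or return an error).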
### Describe alternatives you've considered
_No response_
### Affected area
- [X] Docs
- [ ] Installation
- [ ] Performance and Scalability
- [ ] Security
- [X] User Experience
- [ ] Test and Release
- [X] Developer Infrastructure
- [ ] Client Plugin
- [ ] Server Plugin
- [ ] Extensions
- [ ] Others | lifecycle/stale | low | Major |
2,753,625,596 | rust | derive(PartialEq) should not prevent "field is never read" warnings | ### Code
```Rust
#[derive(PartialEq)]
struct MyStruct {
x: i32,
y: i32, // no unused field warning, unfortunately
}
struct MyStruct2 {
x: i32,
y: i32, // warning today
}
pub fn use_struct() {
let ms = MyStruct { x: 1, y: 2 };
let _ = ms.x;
let ms = MyStruct2 { x: 1, y: 2 };
let _ = ms.x;
}
```
### Current output
```Shell
warning: field `y` is never read
--> src/lib.rs:9:5
|
7 | struct MyStruct2 {
| --------- field in this struct
8 | x: i32,
9 | y: i32, // warning today
| ^
|
= note: `#[warn(dead_code)]` on by default
```
### Desired output
A warning for both MyStruct and MyStruct2.
### Rationale and extra context
This was originally discussed on #84647 and #85200, although the conclusion there was to only exclude Debug and Clone.
However, it's really common to derive PartialEq on a type, especially when writing tests.
This means that adding tests can subtly stop this warning from catching issues. As far as I can see, there isn't a way to opt in to stricter behaviour with either rustc or clippy here. There's no equivalent of `must_use` for struct fields, for example.
This issue was the root of a nasty bug for me. Would you be open to making this diagnostic fire for more automatically derived traits?
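One partial workaround (only a sketch, not an existing rustc or clippy opt-in) is to derive `PartialEq` just for test builds, so that in an ordinary build the field really is unread and the warning still fires:
```Rust
// Sketch of a workaround: only test builds derive PartialEq, so a plain
// `cargo build` still warns that `y` is never read.
#[cfg_attr(test, derive(PartialEq))]
struct MyStruct {
    x: i32,
    y: i32, // dead_code warning restored outside of test builds
}

pub fn use_struct() {
    let ms = MyStruct { x: 1, y: 2 };
    let _ = ms.x;
}

#[cfg(test)]
mod tests {
    use super::MyStruct;

    #[test]
    fn partial_eq_available_in_tests() {
        assert!(MyStruct { x: 1, y: 2 } != MyStruct { x: 1, y: 3 });
    }
}
```
That keeps the comparisons available to tests, but it does not make the lint itself any stricter, which is why a compiler-side opt-in would still be useful.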
### Other cases
```Rust
```
### Rust Version
```Shell
Reproduced on rustc 1.83 on the playground.
```
### Anything else?
_No response_ | A-lints,A-diagnostics,T-compiler | low | Critical |
2,753,629,767 | rust | Rust reproducibility issue - Finding the proper fix | We are building Rust for a custom target in the Yocto framework. Here we run a test called _reproducibility_ to ensure the rust binaries are identical between the builds.
We do that by building and comparing the rust binaries & libs in two different build directories -
1. _reproducibleA_ &
2. _reproducibleB-extended_
With every Rust release we have seen several issues that make the Rust binaries non-reproducible. The failures have had several causes, for example:
1. Regressions caused by rust updates (absolute path in proc-macro)
https://github.com/rust-lang/rust/pull/121959/commits/a9a979839bbdfec48c75d618ab0dce8a953589b8
2. Incorrect/missing Rust config settings (_remap-debuginfo_ is not set by default)
https://lists.openembedded.org/g/openembedded-core/message/188940
3. Compiler options and settings - _rustdoc_ failed with _codegen-units_ option setting
https://lists.openembedded.org/g/openembedded-core/message/202540
4. Regressions with Crate updates - CC crate appending a unique hash to the object files
https://github.com/rust-lang/cc-rs/pull/1277
https://github.com/rust-lang/cc-rs/pull/1270
There are a few other failure cases we have analyzed and fixed. This keeps happening for multiple reasons, and these failures result in a lot of debugging effort and project delays.
This could be avoided by having such a reproducibility test in _rust_ itself.
Any comments or inputs on how to detect and fix this during Rust development itself (via a test case or any other means)? | T-compiler,A-reproducibility,C-discussion | low | Critical |
2,753,630,718 | flutter | [go_router]: Bad state: Future already completed on context.pop() | ### Steps to reproduce
1. Go to this [dartpad](https://dartpad.dev/?id=2608e901f8ca64058881594d9066284f) and run the code locally. (Go_router version 14.6.2)
2. Click on the Button
3. Click on the cancel button
### Expected results
Loading animation stops
### Actual results
The loading animation does not stop because of some kind of race condition: the `CancelScreen` is popped shortly before the loading-animation dialog is popped, which causes the error. If I put a wait in between (see my comment in the code), it works fine.
Not sure if it is linked to this older issue: https://github.com/flutter/flutter/issues/123369
### Code sample
[Sample dartpad](https://dartpad.dev/?id=2608e901f8ca64058881594d9066284f)
<details open><summary>Code sample ()</summary>
```dart
import 'package:flutter/material.dart';
import 'dart:async';
import 'package:go_router/go_router.dart';
void main() => runApp(const MyApp());
final pageShellNavigatorKey = GlobalKey<NavigatorState>();
final rootNavigatorKey = GlobalKey<NavigatorState>();
@TypedGoRoute<HomeRoute>(
path: '/',
routes: [
TypedGoRoute<CancelScreenRoute>(path: 'cancel'),
],
)
@immutable
class HomeRoute extends GoRouteData {
@override
NoTransitionPage<void> buildPage(BuildContext context, GoRouterState state) {
return NoTransitionPage(
child: MyHomePage(title: 'title'),
);
}
}
@immutable
class CancelScreenRoute extends GoRouteData {
static final GlobalKey<NavigatorState> $parentNavigatorKey =
pageShellNavigatorKey;
@override
NoTransitionPage<void> buildPage(BuildContext context, GoRouterState state) {
return NoTransitionPage(
child: CancelScreen(),
);
}
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
static CancelScreenRoute cancelScreenFactory(GoRouterState state) =>
CancelScreenRoute();
static HomeRoute homeFactory(GoRouterState state) => HomeRoute();
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: GoRouter(
navigatorKey: rootNavigatorKey,
routes: [
ShellRoute(
navigatorKey: pageShellNavigatorKey,
builder: (context, GoRouterState state, child) {
return child;
},
routes: [
GoRouteData.$route(path: '/', factory: homeFactory, routes: [
GoRouteData.$route(
path: 'cancel',
parentNavigatorKey: pageShellNavigatorKey,
factory: cancelScreenFactory,
),
]),
],
),
],
),
);
}
}
class MyHomePage extends StatefulWidget {
final String title;
const MyHomePage({
super.key,
required this.title,
});
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
Future<bool> showEditDialog() async {
final partialJson = await GoRouter.of(context).push(
'/cancel',
);
// waiting here fixes it
/*await Future.delayed(const Duration(seconds: 1), () {
print('One second has passed.');
});*/
return true;
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
ElevatedButton(
child: Text('button'),
onPressed: () => wrapEitherWithLoadingAndError(
context: context,
eitherFunction: showEditDialog,
onSuccess: (_, __) {
// maybe show success dialog here
},
)),
],
),
),
);
}
}
class CancelScreen extends StatelessWidget {
const CancelScreen({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
body: ElevatedButton(
child: Text('cancel'),
onPressed: GoRouter.of(context).pop,
),
);
}
}
class LoadingModal {
static void showLoadingDialog(
BuildContext context, {
bool useRootNavigator = true,
}) async {
await showDialog<void>(
context: context,
barrierDismissible: false,
useRootNavigator: useRootNavigator,
builder: (BuildContext context) {
return const Dialog(
backgroundColor: Colors.transparent,
elevation: 0,
child: Center(
child: CircularProgressIndicator(
valueColor: AlwaysStoppedAnimation<Color>(Colors.black),
),
),
);
},
);
}
}
Future<void> wrapEitherWithLoadingAndError<T extends dynamic>({
required BuildContext context,
required FutureOr<bool> Function() eitherFunction,
required FutureOr<void> Function(BuildContext, bool) onSuccess,
}) async {
LoadingModal.showLoadingDialog(
context,
useRootNavigator: false,
);
final result = await eitherFunction();
if (context.mounted) {
if (GoRouter.of(context).canPop()) {
GoRouter.of(context).pop(result);
}
if (result) {
onSuccess(context, result);
}
return;
}
}
```
</details>
### Logs
<details open><summary>Logs</summary>
```
[ERROR:flutter/runtime/dart_vm_initializer.cc(40)] Unhandled Exception: Bad state: Future already completed
#0 _AsyncCompleter.complete (dart:async/future_impl.dart:84:31)
#1 ImperativeRouteMatch.complete (package:go_router/src/match.dart:456:15)
#2 GoRouterDelegate._completeRouteMatch (package:go_router/src/delegate.dart:171:14)
#3 GoRouterDelegate._handlePopPageWithRouteMatch.<anonymous closure> (package:go_router/src/delegate.dart:151:9)
<asynchronous suspension>
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on Manjaro Linux 6.1.119-1-MANJARO, locale en_US.UTF-8)
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.1)
[✓] Chrome - develop for the web
[✓] Linux toolchain - develop for Linux desktop
[✓] Android Studio (version 2024.2)
[✓] Connected device (2 available)
[✓] Network resources
```
</details>
| package,a: error message,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.27,found in release: 3.28 | low | Critical |
2,753,643,259 | godot | Having rename open on a node in scene tree and switching scene wrongly changes name of selected node in the newly opened scene | ### Tested versions
Godot v4.4.dev7
### System information
Fedora Linux 40 (KDE Plasma) on Wayland - X11 display driver, Single-window
### Issue description
Using single window mode.
Having a rename open on a node in the scene tree and then switching scenes wrongly changes the name of the selected node in the newly opened scene.
It can also throw this error instead:
```
ERROR: Node not found: "/root/@EditorNode@18613/@Panel@13/@VBoxContainer@14/DockHSplitLeftL/DockHSplitLeftR/DockHSplitMain/@VBoxContainer@25/DockVSplitCenter/@VSplitContainer@52/@VBoxContainer@53/@EditorMainScreen@98/MainScreen/@CanvasItemEditor@10371/@VSplitContainer@10195/@HSplitContainer@10197/@HSplitContainer@10199/@Control@10200/@SubViewportContainer@10201/@SubViewport@10202/Control/Label2" (absolute path attempted from "/root/@EditorNode@18613/@Panel@13/@VBoxContainer@14/DockHSplitLeftL/DockHSplitLeftR/DockVSplitLeftR/DockSlotLeftUR/Scene/@SceneTreeEditor@4803").
ERROR: editor/gui/scene_tree_editor.cpp:1477 - Parameter "n" is null.
```
What exactly happens seems to depend on the node setup in both scenes and on which node is selected.
https://github.com/user-attachments/assets/a1f5922f-b098-4ef5-9b47-6c91da982687
### Steps to reproduce
See video for steps
### Minimal reproduction project (MRP)
[rename.zip](https://github.com/user-attachments/files/18216420/rename.zip)
| bug,topic:editor | low | Critical |
2,753,643,761 | godot | Strange behavior of RigidBody2D instances and PackedScene | ### Tested versions
Godot 4.0, 4.1.1, 4.2, Godot v4.3.stable -- Issue 1 occurs, Issue 2 occurs
Godot 4.4-dev3 through 4.4-dev7 -- Issue 1 occurs, **Issue 2 does not occur**
Godot 4.4-dev2 -- Issue 1 occurs, Issue 2 occurs
(I could not find where, but the fix for Issue 2 seems to be somewhere in 4.4-dev3.)
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4070 Ti (NVIDIA; 32.0.15.6614) - AMD Ryzen 9 7950X 16-Core Processor (32 Threads)
### Issue description
Possibly related to:
https://github.com/godotengine/godot/issues/45886
Other references:
https://forum.godotengine.org/t/how-do-i-save-a-node-and-its-children/1276/3
https://docs.godotengine.org/en/4.3/classes/class_packedscene.html
I've been having some issues in my project with RigidBody2D that are generated by code and added at runtime.
Some issues are likely due to my own incompetence.
This might need to be split into two issues, but we'll see.
**Issue 1: RigidBodies moving on their own; strange gravity behavior**
I've noticed this with the default physics engine; however since I switched to Rapier2D I've kind of ignored it since the issue went away.
RigidBody2D nodes that I create from code always behave strangely, and I have no idea why.
They move around randomly, and seem to rotate/move in weird ways, despite nothing happening to them.
I thought maybe it was due to setting the positions, but I've tried not setting the positions at all and the same thing occurred.
Maybe it is a simple fix?
See example below in the MRP.
**Issue 2: PackedScenes instanced from the PolyShader class have duplicate child polygons.**
### Steps to reproduce
Open the MRP.
**Issue 1: RigidBodies moving on their own; strange gravity behavior**
Run the project and click the "Make" button. This will spawn some cubes.
Click the button 3-6 times, creating a tower of cubes.
Observe that eventually, the cube tower will seem to topple on its own.
Even if the cubes are stacked perfect, they will start rotating to the left and fall over.
They will also continue rotating to the left as if being pulled by some leftwards, rotating gravity; rather than resting flat on the ground.
Some may even rotate back the other way and rest like a coin that somehow landed perfectly on its side.
**Issue 2: PackedScenes instanced from the PolyShader class have duplicate child polygons.**
Click the Reset button to reset the scene.
Enable the PolyShader class by clicking the CheckButton.
Click Make. This does the same thing as before, except a custom class is used to generate Polygon2D nodes to color the object, instead of using the visible collision shapes property.
Observe the remote scene tree (under dynamic_level node.) Two Polygon2D nodes are created (one is hidden.)
Click Save Last to save the last object that was made as a packed scene under res://saves/test.tscn
Click Load to create a new instance of the scene
Notice that the color of the new instance is different (brighter.)
Go to the remote scene tree and observe that the polygons were duplicated (4 polygons instead of 2.)
(You can also try disabling PolyShader and save/load objects; and notice that no nodes are duplicated.)
Bonus:
Go to the asset store.
Install Rapier Physics 2D - Fast Version with Parallel SIMD Solver.
Restart the project.
Go to Project Settings > Physics > 2D > Physics Engine and select Rapier2D.
Save and restart the project.
Follow the steps again for Issue 1. Notice that Issue 1 is fixed (boxes no longer topple over themselves or move weirdly.)
(Issue 2 still occurs as before.)
### Minimal reproduction project (MRP)
[packed_scene_duplicate_test.zip](https://github.com/user-attachments/files/18216352/packed_scene_duplicate_test.zip)
| bug,topic:physics,needs testing | low | Minor |
2,753,644,022 | angular | Calling resource.set(value) does not abort in-progress loaders if the given value is equal to the value before the resource was reloaded | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
I'm not sure whether this is a bug or whether this is intended, but it feels unintuitive to me. Therefore "expected" below is what _I_ would have expected.
**Expected behaviour**
Calling `resource.set(value)` should always abort any in-progress async loading.
**Actual behaviour**
Calling `resource.reload()` and then calling `resource.set(value)` _before_ the async reload is complete, with the same `value` that the resource held before `reload` was called, does not abort the reload operation.
This creates an inconsistent behaviour. E.g.:
```ts
resource.set({});
resource.reload();
resource.set(undefined); // cancels the reload
resource.set(undefined);
resource.reload();
resource.set(undefined); // does not cancel the reload
```
**Example reproduction**
In the following component, click the "Load it" button to trigger the reload, then click the "Cancel it" button to set the resource explicitly to `undefined` before the async load operation has completed.
```ts
import {Component, resource, ResourceStatus} from '@angular/core';
import {JsonPipe} from '@angular/common';
@Component({
selector: 'app-root',
imports: [JsonPipe],
template: `
<p>
@if (something.value()) {
{{something.value() | json}}
} @else if (something.isLoading()) {
Loading...
} @else {
Load it?
}
</p>
@if (something.isLoading()) {
<p><button type="button" (click)="cancelIt()">Cancel it</button></p>
} @else {
<p><button type="button" (click)="loadIt()">Load it</button></p>
}
`
})
export class AppComponent {
readonly something = resource({
loader: async ({previous, abortSignal}) => {
if (previous.status === ResourceStatus.Idle) {
return undefined;
}
return await someLongRunningTask(abortSignal);
}
});
loadIt() {
this.something.reload();
}
cancelIt() {
// Setting a value after reload is called that is the same as the value
// which was set before reload was called does not cancel the reload action
this.something.set(undefined);
// whereas setting a different value to the previous one always aborts in-progress reload operations
// this.something.set({});
}
}
function someLongRunningTask(abortSignal: AbortSignal) {
return new Promise<unknown>((resolve, reject) => {
if (abortSignal.aborted) {
reject(abortSignal.reason);
return;
}
let timeout: number | undefined = undefined;
const abort = () => {
clearTimeout(timeout);
reject(abortSignal.reason);
};
abortSignal.addEventListener('abort', abort);
timeout = setTimeout(() => {
abortSignal.removeEventListener('abort', abort);
resolve({success: true});
}, 2_000)
})
}
```
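Based on the observation in `cancelIt()` above that setting a value different from the pre-reload one does abort the loader, one possible (unverified) workaround is to route through an intermediate value before applying the intended one, e.g. replacing `cancelIt()` in the component above with:
```ts
cancelIt() {
  // Workaround sketch, not an endorsed pattern: first set a value that is
  // guaranteed to differ from the pre-reload value so the in-flight load is
  // aborted, then set the value we actually want.
  this.something.set({});
  this.something.set(undefined);
}
```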
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.0.6
Node: 20.18.1
Package Manager: npm 10.8.2
OS: darwin arm64
Angular: 19.0.5
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.6
@angular-devkit/build-angular 19.0.6
@angular-devkit/core 19.0.6
@angular-devkit/schematics 19.0.6
@angular/cli 19.0.6
@schematics/angular 19.0.6
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
_No response_ | area: core,state: has PR,bug,cross-cutting: signals | low | Critical |
2,753,677,746 | vscode | Accessibility: navigating / reading comments treeview is overly verbose and inefficient with screen readers |
Type: <b>Bug</b>
* Open a file associated with a Copilot-enabled GitHub repository.
* Select some or all of the file
* Choose Copilot review / comment from the Command Palette.
* Move focus to the comments treeview by pressing F6 one or more times.
* Arrow up and down the comments treeview.
* Take note of how a screen reader (such as NVDA) reports each treeview item. Or use an accessibility inspection tool and look at what is exposed via the treeview item's name property.
* Note that each comment starts with the text: "Inspect this in the accessible view (Alt+F2).", before the actual comment. Thus the user hears for example: "Inspect this in the accessible view (Alt+F2). GitHub Copilot at line 8 column 1 in adventure_singleKey.py Comment: The wildcard import `from kid import *` can lead to namespace pollution and potential conflicts..."
To increase efficiency, it would be much better if the text about inspection in the accessible view were included after the comment itself, or perhaps exposed via the description (accDescription) property rather than the name (accName).
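As a rough illustration of the general ARIA pattern being requested (this is not VS Code's actual tree implementation; the ids and strings below are made up): keep the comment itself in the accessible name and expose the static hint as the accessible description, so it is announced after the content rather than before.
```ts
// Hypothetical DOM sketch; ids and text are illustrative only.
const hint = document.createElement('span');
hint.id = 'accessible-view-hint';
hint.textContent = 'Inspect this in the accessible view (Alt+F2).';
hint.hidden = true; // description text does not need to be rendered visibly

const row = document.createElement('div');
row.setAttribute('role', 'treeitem');
// Accessible name: the comment content comes first...
row.setAttribute(
  'aria-label',
  'GitHub Copilot at line 8 column 1 in adventure_singleKey.py Comment: ...'
);
// ...and the hint is exposed as the description, announced after the name.
row.setAttribute('aria-describedby', hint.id);

document.body.append(hint, row);
```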
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.27764
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz (12 x 2592)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.72GB (5.85GB free)|
|Process Argv|. --crash-reporter-id 3ccf12dd-ee3d-4980-bdfc-6a5f9d3079ed|
|Screen Reader|yes|
|VM|0%|
</details><details><summary>Extensions (22)</summary>
Extension|Author (truncated)|Version
---|---|---
pythoncpp-debug|ben|0.3.0
ruff|cha|2024.56.0
doxdocgen|csc|1.4.0
vscode-markdownlint|Dav|0.57.0
EditorConfig|Edi|0.16.4
copilot|Git|1.254.0
copilot-chat|Git|0.23.2
vscode-pull-request-github|Git|0.102.0
vscode-docker|ms-|1.29.3
vscode-kubernetes-tools|ms-|1.3.18
debugpy|ms-|2024.14.0
isort|ms-|2023.10.1
python|ms-|2024.22.0
vscode-pylance|ms-|2024.12.1
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
powershell|ms-|2024.4.0
remote-explorer|ms-|0.4.3
vscode-yaml|red|1.15.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,accessibility,polish,comments | low | Critical |
2,753,681,657 | PowerToys | Feedback github landing page | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce

Reproduce:
* Open PowerToys, click »Feedback«, and let it open the browser.
* Right-click PowerToys in the tray and choose Report Bug.
Issue: The Feedback button sends people directly to a page for creating a new issue, rather than to the issues page where they can check whether the issue has already been reported and what progress has been made. If you have problems with the same issue being reported by several different people within a short time, this is probably why.
### ✔️ Expected Behavior
1) Being sent to https://github.com/microsoft/PowerToys/issues where I can see and search the list of current issues, to see if my issue has already been reported, before submitting it.
2) Also adding a link at the top of the »New issue« page to https://github.com/microsoft/PowerToys/issues and encouraging people to search and check there, before submitting a new one, would be a good idea.
Both would give people a chance to check whether the issue has already been submitted, see its progress, and add their thoughts in the same place, rather than everyone creating new issue reports for the same problem, which in turn makes the issue tracker hard to navigate and get an overview of.
### ❌ Actual Behavior
Getting sent directly to the »Create new issue« page.
### Other Software
_No response_
btw the New issue reporter you have is really nice and well organised! Others should take inspiration from it. Great work! | Issue-Bug,Needs-Triage | low | Critical |
2,753,683,443 | rust | `impl<T> PointerLike for {Rc,Arc,Weak}<T>` can't exist but should | The unstable trait `PointerLike` must be implemented for any type which is to be coerced to a [`dyn*` type](https://github.com/rust-lang/rust/issues/102425). However, the only built-in smart pointer type it is currently implemented for is `Box`, and this is a special case in the compiler.
I believe it would be useful for `PointerLike` to be implemented for, among other things, `Rc`, `Arc`, and their corresponding `Weak` types. This would enable use cases such as polymorphic collections like `Vec<dyn* SomeTrait>` where each element consists of a (strong or weak) reference-counted pointer to other data structures, whereas the current available option is `Vec<Box<dyn SomeTrait>>` which results in an inefficient extra allocation and double-indirection for each item.
However, it is currently impossible even to modify the standard library to meet this requirement. This is because the compiler requires that the type implementing `PointerLike` be either primitive or `repr(transparent)`, and types like `Rc` cannot be `repr(transparent)` because they have an allocator field:
```rust
pub struct Rc<
T: ?Sized,
#[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global,
> {
ptr: NonNull<RcInner<T>>,
phantom: PhantomData<RcInner<T>>,
alloc: A,
}
```
The only non-zero-sized field of `Rc<T>` is `ptr`, but in `Rc<T, SomeNonZstAllocator>` the `alloc` field is also non-zero-sized, which disqualifies the entire `Rc` type from being able to have `repr(transparent)`, and therefore means you can't even implement `PointerLike` for `Rc<T>` = `Rc<T, Global>`.
It seems to me that, therefore, it would be useful to adjust the design of `PointerLike`’s implementation restriction — or of `repr(transparent)`, or something — so that `impl<T> PointerLike for Rc<T>` is possible. `Box` manages to implement `PointerLike` only by way of having been granted a unique “is `Box`” exception. This exception could be generalized to other standard library types, and that would address being able to use `Rc` and friends in `dyn*`, but it would privilege the standard library in a way that has no path to stabilization.
(If `dyn*` settles into being solely an async trait implementation detail and not a language feature, then this entire issue is arguably moot since users won't be getting to write `Vec<dyn* Trait>` at all — but `PointerLike` would still be potentially useful as a trait bound to people trying to solve similar problems in libraries.)
---
For an example of the kind of code I’m interested in writing:
```rust
#![allow(incomplete_features)]
#![feature(pointer_like_trait)]
#![feature(dyn_star)]
use std::cell::Cell;
use std::rc::{Rc, Weak};
pub trait Watcher<T> {
fn notify(&self, value: &T);
}
pub struct Watched<T> {
value: T,
watchers: Vec<dyn* Watcher<T>>,
}
impl<T> Watched<T> {
pub fn set(&mut self, value: T) {
self.value = value;
for watcher in self.watchers.iter() {
watcher.notify(&self.value);
}
}
}
#[repr(transparent)]
struct CopyToCell<T>(Weak<Cell<T>>);
impl<T> core::marker::PointerLike for CopyToCell<T> {}
impl<T: Copy + 'static> Watcher<T> for CopyToCell<T> {
fn notify(&self, value: &T) {
if let Some(cell) = self.0.upgrade() {
cell.set(*value);
}
}
}
fn main() {
let c = Rc::new(Cell::new(0));
let mut w = Watched {
value: 0,
watchers: vec![
CopyToCell(Rc::downgrade(&c))
],
};
w.set(1);
assert_eq!(c.get(), 1);
}
```
This code does not compile, but the only reason it does not is that the `PointerLike` implementation is rejected. You can prove this by wrapping the `Weak` in an extra `Box`, after which it will compile and run on current nightly.
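For reference, here is how I read that Box-wrapping workaround, sketched against the example above (names and structure reused from it); it is accepted because `Box` already carries the blessed `PointerLike` impl, at the cost of exactly the extra allocation and indirection this issue is about avoiding:
```rust
// Drop-in replacement for the CopyToCell parts of the example above.
// The transparent wrapper now holds a Box, which the compiler already treats
// as pointer-like, so the PointerLike impl is accepted.
#[repr(transparent)]
struct CopyToCell<T>(Box<Weak<Cell<T>>>);

impl<T> core::marker::PointerLike for CopyToCell<T> {}

impl<T: Copy + 'static> Watcher<T> for CopyToCell<T> {
    fn notify(&self, value: &T) {
        if let Some(cell) = self.0.upgrade() {
            cell.set(*value);
        }
    }
}

// In main(), the construction becomes:
//     CopyToCell(Box::new(Rc::downgrade(&c)))
```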
---
@rustbot label +F-dyn_star | C-feature-request,F-dyn_star | low | Minor |
2,753,686,258 | rust | `-musl` platforms do not include unwind tables for libc | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
In certain cases, it may be desirable to capture a backtrace from a signal handler which includes stack frames prior to signal handling.
Consider this program, which attempts to demonstrate the problem in a concise way - please note that the actual implementation is more complex, in order to deal with signal safety:
```rust
use libc::{
c_int, c_void, getpid, getuid, pid_t, sigaction, sigemptyset, sigset_t, size_t, syscall, uid_t,
SYS_rt_tgsigqueueinfo, SA_SIGINFO, SIGRTMIN,
};
use std::{
backtrace::Backtrace,
mem::MaybeUninit,
ptr::{addr_of, addr_of_mut, null_mut},
};
// please do not use this definition of siginfo_t in live code
#[repr(C)]
struct siginfo_t {
si_signo: c_int,
_si_errno: c_int,
si_code: c_int,
si_pid: pid_t,
si_uid: uid_t,
si_ptr: *mut c_void,
_si_pad: [c_int; (128 / size_of::<c_int>()) - 3],
}
unsafe extern "C" fn handle_signal(_signo: c_int, _info: *mut siginfo_t, _ucontext: *const c_void) {
// neither of these operations are signal safe
let bt = Backtrace::force_capture();
dbg!(bt);
}
fn main() {
let sa_mask = unsafe {
let mut sa_mask = MaybeUninit::<sigset_t>::uninit();
sigemptyset(sa_mask.as_mut_ptr());
sa_mask.assume_init()
};
let sa = sigaction {
sa_sigaction: handle_signal as size_t,
sa_mask,
sa_flags: SA_SIGINFO,
sa_restorer: None,
};
// please do not blindly copy this code for use in real world applications, it is meant to be
// brief and functional, not strictly correct.
unsafe {
let _ = sigaction(SIGRTMIN(), addr_of!(sa), null_mut());
let mut si: siginfo_t = MaybeUninit::zeroed().assume_init();
si.si_signo = SIGRTMIN();
si.si_code = -1; // SI_QUEUE
si.si_pid = getpid();
si.si_uid = getuid();
let _ = syscall(
SYS_rt_tgsigqueueinfo,
getpid(),
getpid(),
SIGRTMIN(),
addr_of_mut!(si),
);
}
}
```
When built for `x86_64-unknown-linux-gnu`, it produces the following output:
```
[src/main.rs:24:5] bt = Backtrace [
{ fn: "sigtest::handle_signal", file: "./src/main.rs", line: 23 },
{ fn: "syscall" },
{ fn: "sigtest::main", file: "./src/main.rs", line: 51 },
{ fn: "core::ops::function::FnOnce::call_once", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs", line: 250 },
{ fn: "std::sys::backtrace::__rust_begin_short_backtrace", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/sys/backtrace.rs", line: 154 },
{ fn: "std::rt::lang_start::{{closure}}", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/rt.rs", line: 164 },
{ fn: "core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs", line: 284 },
{ fn: "std::panicking::try::do_call", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs", line: 554 },
{ fn: "std::panicking::try", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs", line: 518 },
{ fn: "std::panic::catch_unwind", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panic.rs", line: 345 },
{ fn: "std::rt::lang_start_internal::{{closure}}", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/rt.rs", line: 143 },
{ fn: "std::panicking::try::do_call", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs", line: 554 },
{ fn: "std::panicking::try", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs", line: 518 },
{ fn: "std::panic::catch_unwind", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panic.rs", line: 345 },
{ fn: "std::rt::lang_start_internal", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/rt.rs", line: 143 },
{ fn: "std::rt::lang_start", file: "/rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/rt.rs", line: 163 },
{ fn: "main" },
{ fn: "__libc_start_main" },
{ fn: "_start" },
]
```
When built for `x86_64-unknown-linux-musl`, this is the output:
```
[src/main.rs:24:5] bt = Backtrace [
{ fn: "sigtest::handle_signal", file: "./src/main.rs", line: 23 },
]
```
Observe that at one point the backtrace walks through `syscall` - while in this instance this is because the same thread issuing the syscall receives the signal, it is conceivable - and quite likely - that any arbitrary thread may be caught making a syscall or otherwise inside of libc if some other thread were to send a signal for such purpose.
This is caused by a combination of two similar but technically distinct problems:
1. musl's signal trampoline does not have CFI annotations - this can be worked around by writing one's own trampoline and making the `rt_sigaction` syscall directly.
2. musl, by default, [does not include exception handling information in non-debug builds](https://git.musl-libc.org/cgit/musl/tree/configure?id=61399d4bd02ae1ec03068445aa7ffe9174466bfd#n504):
```bash
#
# Modern GCC wants to put DWARF tables (used for debugging and
# unwinding) in the loaded part of the program where they are
# unstrippable. These options force them back to debug sections (and
# cause them not to get generated at all if debugging is off).
#
tryflag CFLAGS_AUTO -fno-unwind-tables
tryflag CFLAGS_AUTO -fno-asynchronous-unwind-tables
```
I have not tracked down the build configuration for the musl platform; however, the second issue should be easily addressable in one of two ways:
1. If musl is already being built, change the build flags for musl such that it always includes unwind tables
2. Otherwise, begin building the copy of musl which is to be redistributed such that it may include unwind tables
Unwind tables are not included in musl because - _as best I can guess_ - there is concern over memory utilization in extreme environments, and because it is further assumed that unwinding through libc is an unlikely case: it will most likely occur when unwinding from a signal handler - already uncommon - and only when the application is one that _does_ unwinding, which, while common, is still a subset of libc's consumers.
As backtraces are a first-class component of Rust's error handling design, and because this functionality works as intended and expected on `-gnu` - a Tier 1 platform - it seems reasonable to correct the behavior on `-musl`, particularly if it is indeed as straightforward as building without those two flags. For users who need to exclude as much as possible from compiled artifacts, including unwind tables and the unwinder, it is still possible to strip the exception handling information at a later time - `strip` just needs to be configured to remove `.eh_frame` or `.debug_info`.
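As a hedged sketch of what option 1 or 2 could look like when building musl from source - the flags are simply the positive counterparts of the ones shown in the configure excerpt above, and the exact shape of the Rust CI build scripts is an assumption on my part:
```bash
# Sketch only, not the actual build configuration used for the Rust musl targets:
# keep unwind tables in the loaded image, assuming user-supplied CFLAGS are
# applied after the auto-detected CFLAGS_AUTO flags (so the positive flags
# override the -fno-* ones added by configure).
CFLAGS="-funwind-tables -fasynchronous-unwind-tables" ./configure --prefix="$PWD/out"
make -j"$(nproc)"
make install
# Consumers who still need the smaller artifacts can strip .eh_frame / .debug_info
# from their final binaries afterwards, as noted above.
```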
I can do the work if someone could point me in the direction of build configuration.
| T-compiler,T-bootstrap,O-musl,C-bug,A-backtrace | low | Critical |