id (int64, 393k-2.82B) | repo (string, 68 classes) | title (string, 1-936 chars) | body (string, 0-256k chars, may be null) | labels (string, 2-508 chars) | priority (string, 3 classes) | severity (string, 3 classes)
---|---|---|---|---|---|---
2,788,082,174 | flutter | Reflecting Updates in the Changelog File (3.27.2) | ### Steps to reproduce
Thank you very much for regularly releasing updates and addressing the existing issues. It seems that the [Changelog file](https://github.com/flutter/flutter/blob/stable/CHANGELOG.md) does not include the recent changes made in version 3.27.2 on the stable channel. Please include the list of these changes in the file if possible.
### Expected results
N/A
### Actual results
N/A
### Code sample
N/A
### Screenshots or Video
### Logs
### Flutter Doctor output
[✓] Flutter (Channel stable, 3.27.2, on macOS 15.2 24C101 darwin-arm64, locale en-CA)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[✓] VS Code (version 1.96.3)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found! | team-release | low | Minor |
2,788,089,915 | go | build: build failure on x_tools-go1.23-linux-arm64_c4as16-perf_vs_gopls_0_11 | ```
#!watchflakes
default <- builder == "x_tools-go1.23-linux-arm64_c4as16-perf_vs_gopls_0_11" && repo == "tools" && mode == "build"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8725813899705243009)):
go: downloading github.com/BurntSushi/toml v1.0.0
go: downloading github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
2025/01/13 23:53:56 Load average: 0.71 0.67 0.28 1/400 1322
2025/01/13 23:53:56 Waiting for load average to drop below 0.20...
2025/01/13 23:54:26 Load average: 0.43 0.60 0.27 1/402 1323
2025/01/13 23:54:26 Waiting for load average to drop below 0.20...
2025/01/13 23:54:56 Load average: 0.26 0.55 0.26 1/402 1323
2025/01/13 23:54:56 Waiting for load average to drop below 0.20...
2025/01/13 23:55:26 Load average: 0.16 0.49 0.25 1/402 1323
2025/01/13 23:55:26 Running sub-repo benchmarks for tools
...
go: downloading golang.org/x/vuln v1.1.3
go: downloading honnef.co/go/tools v0.5.1
go: downloading mvdan.cc/xurls/v2 v2.5.0
go: downloading golang.org/x/text v0.21.0
go: downloading mvdan.cc/gofumpt v0.7.0
go: downloading golang.org/x/exp/typeparams v0.0.0-20241210194714-1829a127f884
go: downloading github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c
2025/01/13 23:56:57 Error running subrepo tests: error running sub-repo tools benchmark "baseline" with toolchain baseline in dir /home/swarming/.swarming/w/ir/x/w/tools/gopls/internal/test/integration/bench: exit status 1
2025/01/13 23:56:57 FAIL
exit status 1
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,788,098,867 | godot | First use of shader with refraction_enabled=true causes stutter | ### Tested versions
- Reproducible in 4.3.stable, 4.4.beta[4ce466d]
### System information
Godot v4.4.dev.mono (611d6e8da) - Pop!_OS 22.04 LTS on Wayland - X11 display driver, Multi-window, 3 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (nvidia; 565.77) - AMD Ryzen 7 5800X 8-Core Processor (16 threads)
### Issue description
This was discussed in depth in the Godot rocket chat here: https://chat.godotengine.org/channel/vfx-tech-art/thread/3w7r2PGyTifFXPv84
I will provide a quick summary followed by more details.
### Quick Summary
When a shader that uses ``refraction_enabled=true`` first becomes visible and in view, there is a brief stutter while Godot allocates the necessary resources for that shader.
### Details
I'm going to quote @DarioSamo, as he did the digging into these details after we narrowed down the cause to the refraction setting.
> This might not be that hard to fix. There's a lot of stuff that is only queried based on what surfaces are visible on the scene:
```cpp
if (uses_lightmap) {
surf->sort.uses_lightmap = 1;
scene_state.used_lightmap = true;
}
if (surf->flags & GeometryInstanceSurfaceDataCache::FLAG_USES_SUBSURFACE_SCATTERING) {
scene_state.used_sss = true;
}
if (surf->flags & GeometryInstanceSurfaceDataCache::FLAG_USES_SCREEN_TEXTURE) {
scene_state.used_screen_texture = true;
}
if (surf->flags & GeometryInstanceSurfaceDataCache::FLAG_USES_NORMAL_TEXTURE) {
scene_state.used_normal_texture = true;
}
if (surf->flags & GeometryInstanceSurfaceDataCache::FLAG_USES_DEPTH_TEXTURE) {
scene_state.used_depth_texture = true;
}
```
> if these pass through, then it does whatever it needs to create these buffers, but the existence of these textures depends on the viewport itself
> I wanted to bring attention to this in particular
```cpp
if (scene_state.used_screen_texture) {
RENDER_TIMESTAMP("Copy Screen Texture");
// Copy screen texture to backbuffer so we can read from it
_render_buffers_copy_screen_texture(p_render_data);
}
```
> this is only triggered if the boolean is true
And then @Calinou pointed out:
> refraction requires screen and depth texture (depth since 4.4), so changing it will cause recompilation
In response, @DarioSamo noted that we can expand our approach to ubershaders to take into account the different textures needed on scene load:
> Not much different than what I described. We have a surface cache construction step. The idea would be we check for these flags and keep a counter for each time a surface is added and removed that uses the feature. Then we default scene state to true if the counter > 0.
```cpp
if (surf->flags & GeometryInstanceSurfaceDataCache::FLAG_USES_SCREEN_TEXTURE) {
default_scene_state.surfaces_using_screen_texture++;
}
if (surf->flags & GeometryInstanceSurfaceDataCache::FLAG_USES_NORMAL_TEXTURE) {
default_scene_state.surfaces_using_normal_texture++;
}
if (surf->flags & GeometryInstanceSurfaceDataCache::FLAG_USES_DEPTH_TEXTURE) {
default_scene_state.surfaces_using_depth_texture++;
}
```
> Something along those lines, then decrease the counter on surface destruction
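To make the proposed counting scheme concrete, here is a standalone toy sketch (not engine code; the struct and field names only mirror the quoted proposal and do not exist in Godot today). It keeps a per-feature counter updated on surface add/remove and derives the per-frame booleans from counter > 0 instead of from what happens to be visible:
```cpp
// Toy illustration of the counter-based proposal above; not Godot source.
#include <cstdint>
#include <iostream>

struct DefaultSceneState {
	uint32_t surfaces_using_screen_texture = 0;
	uint32_t surfaces_using_depth_texture = 0;
};

struct SceneState {
	bool used_screen_texture = false;
	bool used_depth_texture = false;
};

void on_surface_added(DefaultSceneState &s, bool uses_screen, bool uses_depth) {
	if (uses_screen) s.surfaces_using_screen_texture++;
	if (uses_depth) s.surfaces_using_depth_texture++;
}

void on_surface_removed(DefaultSceneState &s, bool uses_screen, bool uses_depth) {
	if (uses_screen) s.surfaces_using_screen_texture--;
	if (uses_depth) s.surfaces_using_depth_texture--;
}

// Default the per-frame flags to true whenever any loaded surface uses the
// feature, even if none of those surfaces is visible yet this frame.
void prepare_frame(SceneState &frame, const DefaultSceneState &s) {
	frame.used_screen_texture = s.surfaces_using_screen_texture > 0;
	frame.used_depth_texture = s.surfaces_using_depth_texture > 0;
}

int main() {
	DefaultSceneState defaults;
	SceneState frame;
	on_surface_added(defaults, /*uses_screen=*/true, /*uses_depth=*/true); // refractive surface loaded
	prepare_frame(frame, defaults);
	std::cout << frame.used_screen_texture << " " << frame.used_depth_texture << "\n"; // 1 1
	on_surface_removed(defaults, /*uses_screen=*/true, /*uses_depth=*/true);
	prepare_frame(frame, defaults);
	std::cout << frame.used_screen_texture << " " << frame.used_depth_texture << "\n"; // 0 0
	return 0;
}
```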
With that said, it sounds like the scope of this issue may extend beyond just refraction; I suspect it applies to anything that needs to allocate a texture. I'll let @DarioSamo confirm that, and if so, we may want to update the title of this issue to reflect it.
### Workaround
Create a quad with a simple shader attached that makes use of the resources you need allocated. In my case for example, I would create a quad with a simple StandardMaterial3D that has ``refraction_enabled=true`` visible to the camera, but hidden from the player behind a loading scene as the rest of the scene loads.
### Steps to reproduce
1. Get a lower end device. In my case it is an Android device. Details:
Model Name: Galaxy S10e
Model number: SM-G970U1
This will not reproduce with a high-end CPU, as it will create the resources before the next frame refresh occurs and you will not notice a stutter.
2. Have something moving in the background so you notice the stutter.
3. Place the quad with a refractive material in view of the camera but leave it invisible.
4. Set it to become visible while the scene is playing and notice a brief freeze on its first draw.
The attached MRP is slightly more involved than necessary because I originally thought the issue was related to GPUParticles. Nonetheless, I attached it in case anyone finds it useful, as the stutter does occur with it on my Android device.
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/18415153/mrp.zip)
NOTE:
This particular MRP uses GPUParticles3D. The first particle does not use refraction and hence doesn't cause a stutter. The sub-emitter (shockwave) emits particles with refraction and causes this stutter. Just press the "emit" button and see it play. | topic:rendering,topic:3d,performance | low | Minor |
2,788,106,345 | vscode | Tools enabled later via `when` clause don't show up in vscode.lm.tools API | The tool is available for `#referencing` in chat, but it's not available in the API if the `when` clause is not initially true.
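A minimal, hypothetical extension snippet (not from the report) that shows how the gap can be observed by listing what the `vscode.lm.tools` API currently exposes:
```ts
// Hypothetical sketch: log the tools the language model API currently exposes.
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  // A tool contributed behind a `when` clause that only becomes true later
  // may still be missing from this list, which is the reported behavior.
  console.log(vscode.lm.tools.map((tool) => tool.name));
}
```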
| bug,chat,chat-tools | low | Minor |
2,788,121,136 | ant-design | Support only numeric input for InputNumber | ### Reproduction link
https://codesandbox.io/p/sandbox/basic-antd-5-23-1-forked-8jy68s
### Steps to reproduce
1. Type any letter.
2. Notice that the letter is typed even though the input field should only allow numbers.
### What is expected?
That it works like Chrome's native number input, which only accepts numbers and no letters.
### What is actually happening?
Letters that should not be allowed are shown in the field.
| Environment | Info |
| --- | --- |
| antd | 5.23.1 |
| React | 18.0.0 |
| System | Sequoia 15.1.1 |
| Browser | 18.1.1 |
---
I commented the problem here
https://github.com/ant-design/ant-design/issues/49049#issuecomment-2590926724
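In the meantime, a minimal workaround sketch (not the built-in behavior being requested; it assumes `onKeyDown` is forwarded to the underlying input element) that rejects letter keystrokes before they reach the field:
```tsx
import React from 'react';
import { InputNumber } from 'antd';

// Workaround sketch: block single-character keys that are not digits, a
// decimal point, or a minus sign; editing/navigation keys pass through.
const NumericOnlyInput: React.FC = () => (
  <InputNumber
    style={{ width: 200 }}
    onKeyDown={(e) => {
      const isChar = e.key.length === 1;
      const isAllowedChar = /[0-9.\-]/.test(e.key);
      if (isChar && !isAllowedChar && !e.ctrlKey && !e.metaKey) {
        e.preventDefault(); // swallow letters and other symbols
      }
    }}
  />
);

export default NumericOnlyInput;
```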
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Major |
2,788,167,554 | kubernetes | Improve performance of Service account token creation code path at scale | ### What would you like to be added?
Currently, the ServiceAccount token creation code path issues GET calls directly to etcd [here](https://github.com/kubernetes/kubernetes/blob/6fdacf04117cef54a0babd0945e8ef87d0f9461d/pkg/registry/core/serviceaccount/storage/token.go#L99-L100) to validate that the ServiceAccount exists before a token gets created, instead of leveraging the APIServer cache and then falling back to etcd.
The ask is to leverage the APIServer cache for ServiceAccount GET calls during the validation phase of ServiceAccount token creation, and to fall back to etcd if/when the cache bails out.
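To illustrate the shape of the change being asked for, here is a small, self-contained Go sketch of a cache-first lookup with an etcd fallback. The `getter` type and both backends are hypothetical stand-ins, so this only shows the fallback pattern, not the actual kube-apiserver registry plumbing:
```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// ServiceAccount is a stand-in for the real API type.
type ServiceAccount struct {
	Namespace string
	Name      string
}

// getter is a hypothetical lookup function; in the real code path one would be
// backed by the apiserver watch cache and the other by etcd-backed storage.
type getter func(ctx context.Context, namespace, name string) (*ServiceAccount, error)

// getServiceAccountCacheFirst tries the cache-backed getter first and falls
// back to the etcd-backed getter on a miss or error.
func getServiceAccountCacheFirst(ctx context.Context, fromCache, fromEtcd getter, namespace, name string) (*ServiceAccount, error) {
	if sa, err := fromCache(ctx, namespace, name); err == nil && sa != nil {
		return sa, nil
	}
	return fromEtcd(ctx, namespace, name)
}

func main() {
	cache := func(ctx context.Context, namespace, name string) (*ServiceAccount, error) {
		return nil, errors.New("cache miss") // simulate a cache miss
	}
	etcd := func(ctx context.Context, namespace, name string) (*ServiceAccount, error) {
		return &ServiceAccount{Namespace: namespace, Name: name}, nil
	}
	sa, err := getServiceAccountCacheFirst(context.Background(), cache, etcd, "test-d7w9e9-1", "default")
	fmt.Println(sa, err)
}
```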
### Why is this needed?
At scale, i.e. at higher pod churn (around 600-800 QPS), I was able to see this becoming a bottleneck for pod-startup SLOs.
I have seen cases where just the kubelet token creation took `O(seconds)` because of the latencies from etcd.
See below audit log snippet:
example 1 log entry:
```
annotations.apiserver.latency.k8s.io/apf-queue-wait : 26.072µs
annotations.apiserver.latency.k8s.io/etcd : 3.950256834s
annotations.apiserver.latency.k8s.io/response-write : 1.393µs
annotations.apiserver.latency.k8s.io/serialize-response-object : 5.418656ms
annotations.apiserver.latency.k8s.io/total : 3.996358921s
```
example 2 log entry:
```
annotations.apiserver.latency.k8s.io/apf-queue-wait : 14.56416759s
annotations.apiserver.latency.k8s.io/etcd : 33.853028992s
annotations.apiserver.latency.k8s.io/response-write : 789ns
annotations.apiserver.latency.k8s.io/serialize-response-object :95.365µs
annotations.apiserver.latency.k8s.io/total : 48.457248047s
```
example 3 log entry:
I patched APIServer to add some more detailed logs:
```
I0114 04:05:22.398438 11 token.go:84] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna Token creation started for name: default
2025-01-14T04:05:22.000Z
I0114 04:05:22.398447 11 token.go:92] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna Namespace: test-d7w9e9-1
2025-01-14T04:05:22.000Z
I0114 04:05:22.398451 11 token.go:104] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna Looking up ServiceAccount: default
2025-01-14T04:05:22.000Z
I0114 04:05:23.433142 11 token.go:113] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna ServiceAccount lookup completed in 1.034684445s
2025-01-14T04:05:23.000Z
I0114 04:05:23.437082 11 token.go:164] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna PodObjectRef Get call in 3.87446ms
2025-01-14T04:05:23.000Z
I0114 04:05:23.437153 11 token.go:178] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna Node ObjectRef for Pod Get call from cache in 3.945812ms
2025-01-14T04:05:23.000Z
I0114 04:05:23.437163 11 token.go:226] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna BoundObjectRef processed in 3.95784ms
2025-01-14T04:05:23.000Z
I0114 04:05:23.437176 11 token.go:255] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna jwt token claims processed 1.038734165s
2025-01-14T04:05:23.000Z
I0114 04:05:23.440265 11 token.go:266] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna Token generated in 3.077087ms
2025-01-14T04:05:23.000Z
........
I0114 04:05:23.440291 11 token.go:277] [RequestID: de602c51-8531-49e3-9ff3-94373cda648b] Hakuna Token creation completed in 1.04184648s for JTI id 666734ee-f6a0-4bf8-bf56-50eb23c20161
2025-01-14T04:05:23.000Z
```
You can see that in both log entries just the ServiceAccount lookup from etcd took `O(seconds)`.
While I understand there could be some cache staleness, maybe on the order of O(100ms), I'm assuming that shouldn't be a huge concern given the performance gain at scale (I can post numbers later if needed, with and without leveraging the cache at 800 QPS).
Also, my understanding is that the token is invalidated anyway when the associated object (i.e. secret/pod/node) is deleted, so I don't see much of a security risk in relying on the cache for validating ServiceAccount existence. I would like to hear from the community what they think.
I also see that we already do a similar thing for the ServiceAccountTokenPodNode info logic for Pods [here](https://github.com/kubernetes/kubernetes/blob/6fdacf04117cef54a0babd0945e8ef87d0f9461d/pkg/registry/core/serviceaccount/storage/token.go#L162). | sig/scalability,kind/feature,sig/auth,needs-triage | medium | Major |
2,788,178,044 | flutter | [webview_flutter] webview does not work properly in Huawei P30 after hot-restarting many times. | ### What package does this bug report belong to?
webview_flutter
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
async:
dependency: transitive
description:
name: async
sha256: d2872f9c19731c2e5f10444b14686eb7cc85c76274bd6c16e1816bff9a3bab63
url: "https://pub.dev"
source: hosted
version: "2.12.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "8aab1771e1243a5063b8b0ff68042d67334e3feab9e95b9490f9a6ebf73b42ea"
url: "https://pub.dev"
source: hosted
version: "2.1.2"
characters:
dependency: transitive
description:
name: characters
sha256: f71061c654a3380576a52b451dd5532377954cf9dbd272a78fc8479606670803
url: "https://pub.dev"
source: hosted
version: "1.4.0"
clock:
dependency: transitive
description:
name: clock
sha256: fddb70d9b5277016c77a80201021d40a2247104d9f4aa7bab7157b7e3f05b84b
url: "https://pub.dev"
source: hosted
version: "1.1.2"
collection:
dependency: transitive
description:
name: collection
sha256: "2f5709ae4d3d59dd8f7cd309b4e023046b57d8a6c82130785d2b0e5868084e76"
url: "https://pub.dev"
source: hosted
version: "1.19.1"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "6a95e56b2449df2273fd8c45a662d6947ce1ebb7aafe80e550a3f68297f3cacc"
url: "https://pub.dev"
source: hosted
version: "1.3.2"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_inappwebview:
dependency: "direct main"
description:
name: flutter_inappwebview
sha256: "80092d13d3e29b6227e25b67973c67c7210bd5e35c4b747ca908e31eb71a46d5"
url: "https://pub.dev"
source: hosted
version: "6.1.5"
flutter_inappwebview_android:
dependency: transitive
description:
name: flutter_inappwebview_android
sha256: "62557c15a5c2db5d195cb3892aab74fcaec266d7b86d59a6f0027abd672cddba"
url: "https://pub.dev"
source: hosted
version: "1.1.3"
flutter_inappwebview_internal_annotations:
dependency: transitive
description:
name: flutter_inappwebview_internal_annotations
sha256: "787171d43f8af67864740b6f04166c13190aa74a1468a1f1f1e9ee5b90c359cd"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
flutter_inappwebview_ios:
dependency: transitive
description:
name: flutter_inappwebview_ios
sha256: "5818cf9b26cf0cbb0f62ff50772217d41ea8d3d9cc00279c45f8aabaa1b4025d"
url: "https://pub.dev"
source: hosted
version: "1.1.2"
flutter_inappwebview_macos:
dependency: transitive
description:
name: flutter_inappwebview_macos
sha256: c1fbb86af1a3738e3541364d7d1866315ffb0468a1a77e34198c9be571287da1
url: "https://pub.dev"
source: hosted
version: "1.1.2"
flutter_inappwebview_platform_interface:
dependency: transitive
description:
name: flutter_inappwebview_platform_interface
sha256: cf5323e194096b6ede7a1ca808c3e0a078e4b33cc3f6338977d75b4024ba2500
url: "https://pub.dev"
source: hosted
version: "1.3.0+1"
flutter_inappwebview_web:
dependency: transitive
description:
name: flutter_inappwebview_web
sha256: "55f89c83b0a0d3b7893306b3bb545ba4770a4df018204917148ebb42dc14a598"
url: "https://pub.dev"
source: hosted
version: "1.1.2"
flutter_inappwebview_windows:
dependency: transitive
description:
name: flutter_inappwebview_windows
sha256: "8b4d3a46078a2cdc636c4a3d10d10f2a16882f6be607962dbfff8874d1642055"
url: "https://pub.dev"
source: hosted
version: "0.6.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: c35baad643ba394b40aac41080300150a4f08fd0fd6a10378f8f7c6bc161acec
url: "https://pub.dev"
source: hosted
version: "10.0.8"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: f8b613e7e6a13ec79cfdc0e97638fddb3ab848452eff057653abd3edba760573
url: "https://pub.dev"
source: hosted
version: "3.0.9"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
matcher:
dependency: transitive
description:
name: matcher
sha256: dc58c723c3c24bf8d3e2d3ad3f2f9d7bd9cf43ec6feaa64181775e60190153f2
url: "https://pub.dev"
source: hosted
version: "0.12.17"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: e3641ec5d63ebf0d9b41bd43201a66e3fc79a65db5f61fc181f04cd27aab950c
url: "https://pub.dev"
source: hosted
version: "1.16.0"
path:
dependency: transitive
description:
name: path
sha256: "75cca69d1490965be98c73ceaea117e8a04dd21217b37b292c9ddbec0d955bc5"
url: "https://pub.dev"
source: hosted
version: "1.9.1"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
source_span:
dependency: transitive
description:
name: source_span
sha256: "254ee5351d6cb365c859e20ee823c3bb479bf4a293c22d17a9f1bf144ce86f7c"
url: "https://pub.dev"
source: hosted
version: "1.10.1"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "8b27215b45d22309b5cddda1aa2b19bdfec9df0e765f2de506401c071d38d1b1"
url: "https://pub.dev"
source: hosted
version: "1.12.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: "4ac0537115a24d772c408a2520ecd0abb99bca2ea9c4e634ccbdbfae64fe17ec"
url: "https://pub.dev"
source: hosted
version: "2.1.3"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "921cd31725b72fe181906c6a94d987c78e3b98c2e205b397ea399d4054872b43"
url: "https://pub.dev"
source: hosted
version: "1.4.1"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: "7f554798625ea768a7518313e58f83891c7f5024f88e46e7182a4558850a4b8e"
url: "https://pub.dev"
source: hosted
version: "1.2.2"
test_api:
dependency: transitive
description:
name: test_api
sha256: fb31f383e2ee25fbbfe06b40fe21e1e458d14080e3c67e7ba0acfde4df4e0bbd
url: "https://pub.dev"
source: hosted
version: "0.7.4"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "0968250880a6c5fe7edc067ed0a13d4bae1577fe2771dcf3010d52c4a9d3ca14"
url: "https://pub.dev"
source: hosted
version: "14.3.1"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
webview_flutter:
dependency: "direct main"
description:
name: webview_flutter
sha256: "889a0a678e7c793c308c68739996227c9661590605e70b1f6cf6b9a6634f7aec"
url: "https://pub.dev"
source: hosted
version: "4.10.0"
webview_flutter_android:
dependency: transitive
description:
name: webview_flutter_android
sha256: "285cedfd9441267f6cca8843458620b5fda1af75b04f5818d0441acda5d7df19"
url: "https://pub.dev"
source: hosted
version: "4.1.0"
webview_flutter_platform_interface:
dependency: transitive
description:
name: webview_flutter_platform_interface
sha256: d937581d6e558908d7ae3dc1989c4f87b786891ab47bb9df7de548a151779d8d
url: "https://pub.dev"
source: hosted
version: "2.10.0"
webview_flutter_wkwebview:
dependency: transitive
description:
name: webview_flutter_wkwebview
sha256: b7e92f129482460951d96ef9a46b49db34bd2e1621685de26e9eaafd9674e7eb
url: "https://pub.dev"
source: hosted
version: "3.16.3"
sdks:
dart: ">=3.7.0-0 <4.0.0"
flutter: ">=3.24.0"
```
</details>
### Steps to reproduce
This behavior was originally observed in #159688 but the fix only addressed the pixelation issue
1. run the example in a Huawei P30 (ELE-L04) with EMUI 12.0.0 (latest update)
2. Hot restart the app many times to see the different error states
### Expected results
The webview displays the webpage normally, as any web browser or the flutter_inappwebview package does, and you can scroll.
### Actual results
The webview sometimes displays a blank screen, sometimes the page partially loads but I can't scroll, and sometimes the page layout is broken.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter_inappwebview/flutter_inappwebview.dart';
import 'package:webview_flutter/webview_flutter.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: Scaffold(
appBar: AppBar(title: Text('test')),
body: OfficialWebview(),
),
);
}
}
class OfficialWebview extends StatefulWidget {
const OfficialWebview({super.key});
@override
State<OfficialWebview> createState() => _OfficialWebviewState();
}
class _OfficialWebviewState extends State<OfficialWebview> {
late final WebViewController _controller;
@override
void initState() {
super.initState();
_controller = WebViewController()
..setJavaScriptMode(JavaScriptMode.unrestricted)
..loadRequest(Uri.parse('https://flutter.dev/'));
}
@override
Widget build(BuildContext context) {
return WebViewWidget(controller: _controller);
}
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>


</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/main.dart on ELE L04 in debug mode...
Your project is configured with Android NDK 26.3.11579264, but the following plugin(s) depend on a different Android NDK version:
- flutter_inappwebview_android requires Android NDK 27.0.12077973
- webview_flutter_android requires Android NDK 27.0.12077973
Fix this issue by using the highest Android NDK version (they are backward compatible).
Add the following to /Users/victorcatano/development/p30webview/android/app/build.gradle.kts:
android {
ndkVersion = "27.0.12077973"
...
}
✓ Built build/app/outputs/flutter-apk/app-debug.apk
I/flutter (24700): [IMPORTANT:flutter/shell/platform/android/android_context_vk_impeller.cc(60)] Using the Impeller rendering backend (Vulkan).
Connecting to VM Service at ws://127.0.0.1:60255/WnPcWjBrlJY=/ws
Connected to the VM Service.
I/Choreographer(24700): Skipped 57 frames! The application may be doing too much work on its main thread.
D/ApplicationLoaders(24700): createClassLoader zip: /data/app/com.google.android.trichromelibrary_677820033-h_MTD1WOdwlyswolmFAhvw==/base.apk librarySearchPath: /data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/lib/arm64:/data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/base.apk!/lib/arm64-v8a:/data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/split_config.en.apk!/lib/arm64-v8a:/data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/split_config.es.apk!/lib/arm64-v8a:/data/app/com.google.android.trichromelibrary_677820033-h_MTD1WOdwlyswolmFAhvw==/base.apk!/lib/arm64-v8a libraryPermittedPath: parent: java.lang.BootClassLoader@554d021 targetSdkVersion: 34 isBundled: false classLoaderName: null sharedLibraries: null
D/ApplicationLoaders(24700): createClassLoader zip: /data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/base.apk librarySearchPath: /data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/lib/arm64:/data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/base.apk!/lib/arm64-v8a:/data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/split_config.en.apk!/lib/arm64-v8a:/data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/split_config.es.apk!/lib/arm64-v8a:/data/app/com.google.android.trichromelibrary_677820033-h_MTD1WOdwlyswolmFAhvw==/base.apk!/lib/arm64-v8a libraryPermittedPath: parent: java.lang.BootClassLoader@554d021 targetSdkVersion: 34 isBundled: false classLoaderName: null sharedLibraries: [dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/com.google.android.trichromelibrary_677820033-h_MTD1WOdwlyswolmFAhvw==/base.apk"],nativeLibraryDirectories=[/data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/lib/arm64, /data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/base.apk!/lib/arm64-v8a, /data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/split_config.en.apk!/lib/arm64-v8a, /data/app/com.google.android.webview-5v6ABbFqtHSZzOuqwII55Q==/split_config.es.apk!/lib/arm64-v8a, /data/app/com.google.android.trichromelibrary_677820033-h_MTD1WOdwlyswolmFAhvw==/base.apk!/lib/arm64-v8a, /system/lib64, /hw_product/lib64, /system/product/lib64, /prets/lib64]]]]
I/WebViewFactory(24700): Loading com.google.android.webview version 131.0.6778.200 (code 677820033)
W/linker (24700): Warning: "/data/app/com.google.android.trichromelibrary_677820033-h_MTD1WOdwlyswolmFAhvw==/base.apk!/lib/arm64-v8a/libmonochrome_64.so" unused DT entry: unknown processor-specific (type 0x70000001 arg 0x0) (ignoring)
I/cr_WVCFactoryProvider(24700): version=131.0.6778.200 (677820033) minSdkVersion=29 isBundle=true multiprocess=true packageId=3
I/cr_LibraryLoader(24700): Successfully loaded native library
I/cr_CachingUmaRecorder(24700): Flushed 5 samples from 5 histograms, 0 samples were dropped.
I/cr_CombinedPProvider(24700): #registerProvider() provider:WV.y8@cea229a isPolicyCacheEnabled:false policyProvidersSize:0
I/cr_PolicyProvider(24700): #setManagerAndSource() 0
I/cr_CombinedPProvider(24700): #linkNativeInternal() 1
I/cr_AppResProvider(24700): #getApplicationRestrictionsFromUserManager() Bundle[EMPTY_PARCEL]
I/cr_PolicyProvider(24700): #notifySettingsAvailable() 0
I/cr_CombinedPProvider(24700): #onSettingsAvailable() 0
I/cr_CombinedPProvider(24700): #flushPolicies()
W/chromium(24700): [WARNING:dns_config_service_android.cc(81)] Failed to read DnsConfig.
E/chromium(24700): [ERROR:simple_index_file.cc(307)] Could not create a directory to hold the index file
E/chromium(24700): [ERROR:simple_file_enumerator.cc(21)] opendir /data/user/0/com.example.p30webview/cache/WebView/Default/HTTP Cache/Code Cache/wasm: No such file or directory (2)
E/chromium(24700): [ERROR:simple_index_file.cc(620)] Could not reconstruct index from disk
I/HwViewRootImpl(24700): removeInvalidNode jank list is null
W/cr_media(24700): getBluetoothAdapter() requires BLUETOOTH permission
W/cr_media(24700): registerBluetoothIntentsIfNeeded: Requires BLUETOOTH permission
I/PlatformViewsController(24700): Hosting view in view hierarchy for platform view: 0
I/flutter (24700): [IMPORTANT:flutter/shell/platform/android/platform_view_android.cc(308)] Flutter recommends migrating plugins that create and register surface textures to the new surface producer API. See https://docs.flutter.dev/release/breaking-changes/android-surface-plugins
W/chromium(24700): [WARNING:viz_main_impl.cc(85)] VizNullHypothesis is disabled (not a warning)
I/PlatformViewsController(24700): PlatformView is using SurfaceProducer backend
D/mali_winsys(24700): EGLint new_window_surface(egl_winsys_display *, void *, EGLSurface, EGLConfig, egl_winsys_surface **, EGLBoolean) returns 0x3000
W/Gralloc3(24700): allocator 3.x is not supported
W/VideoCapabilities(24700): Unrecognized profile/level 0/0 for video/mpeg2
W/VideoCapabilities(24700): Unrecognized profile/level 0/2 for video/mpeg2
W/VideoCapabilities(24700): Unrecognized profile/level 0/3 for video/mpeg2
I/VideoCapabilities(24700): Unsupported profile 5 for video/mpeg2
I/chatty (24700): uid=10542(com.example.p30webview) ThreadPoolSingl identical 2 lines
I/VideoCapabilities(24700): Unsupported profile 5 for video/mpeg2
W/VideoCapabilities(24700): Unrecognized profile/level 1/32 for video/mp4v-es
W/VideoCapabilities(24700): Unrecognized profile/level 32768/2 for video/mp4v-es
W/VideoCapabilities(24700): Unrecognized profile/level 32768/64 for video/mp4v-es
D/SensorManager(24700): 0x7251cb8dc0 addFd fd=280
I/DecorView[](24700): pkgName:com.example.p30webview old windowMode:1 new windoMode:1, isFixedSize:false
W/OpenGLRenderer(24700): dequeueBuffer failed, error = -110; switching to fallback
D/NetworkSecurityConfig(24700): No Network Security Config specified, using platform default
W/mple.p30webvie(24700): Accessing hidden field Landroid/content/pm/ApplicationInfo;->primaryCpuAbi:Ljava/lang/String; (greylist, reflection, allowed)
D/HwGalleryCacheManagerImpl(24700): mIsEffect:false
I/Choreographer(24700): Skipped 31 frames! The application may be doing too much work on its main thread.
I/cr_MediaCodecBridge(24700): create MediaCodec video decoder, mime video/avc, decoder name OMX.hisi.video.decoder.avc
W/OpenGLRenderer(24700): reserveNext failed, error = -2147483648 (Unknown error -2147483648)
W/BufferQueueProducer(24700): [SurfaceTexture-0-24700-0]:1390: disconnect: not connected (req=1)
W/libEGL (24700): EGLNativeWindowType 0x7212f42550 disconnect failed
I/HwViewRootImpl(24700): removeInvalidNode jank list is null
I/OMXClient(24700): IOmx service obtained
I/ACodec (24700): In onAllocateComponent create compenent, codec name: OMX.hisi.video.decoder.avc
I/MediaCodec(24700): MediaCodec will operate in async mode
D/SurfaceUtils(24700): connecting to surface 0x7214db2010, reason connectToSurface
I/MediaCodec(24700): [OMX.hisi.video.decoder.avc] setting surface generation to 25292801
D/SurfaceUtils(24700): disconnecting from surface 0x7214db2010, reason connectToSurface(reconnect)
D/SurfaceUtils(24700): connecting to surface 0x7214db2010, reason connectToSurface(reconnect)
E/ACodec (24700): [OMX.hisi.video.decoder.avc] setPortMode on output to DynamicANWBuffer failed w/ err -2147483648
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] using color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) and dataspace 0x104
I/ACodec (24700): onStart
I/HwExtendedUtils(24700): Set to window composer mode as 2
I/ACodec (24700): gralloc usage: 0(OMX) => 0x2900(ACodec)
D/SurfaceUtils(24700): disconnecting from surface 0x7214db2010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): connecting to surface 0x7214db2010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): set up nativeWindow 0x7214db2010 for 1632x1080, color 0x30d, rotation 0, usage 0x2900
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Allocating 7 buffers from a native window of size 2924544 on output port
I/cr_MediaCodecBridge(24700): create MediaCodec video decoder, mime video/avc, decoder name OMX.hisi.video.decoder.avc
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] using color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) and dataspace 0x104
I/OMXClient(24700): IOmx service obtained
I/ACodec (24700): In onAllocateComponent create compenent, codec name: OMX.hisi.video.decoder.avc
I/MediaCodec(24700): MediaCodec will operate in async mode
D/SurfaceUtils(24700): connecting to surface 0x7214df1010, reason connectToSurface
I/MediaCodec(24700): [OMX.hisi.video.decoder.avc] setting surface generation to 25292802
D/SurfaceUtils(24700): disconnecting from surface 0x7214df1010, reason connectToSurface(reconnect)
D/SurfaceUtils(24700): connecting to surface 0x7214df1010, reason connectToSurface(reconnect)
E/ACodec (24700): [OMX.hisi.video.decoder.avc] setPortMode on output to DynamicANWBuffer failed w/ err -2147483648
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] using color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) and dataspace 0x104
I/ACodec (24700): onStart
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] using color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) and dataspace 0x104
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Now handling output port settings change
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Output port now disabled.
I/HwExtendedUtils(24700): Set to window composer mode as 2
I/HwExtendedUtils(24700): Set to window composer mode as 2
I/ACodec (24700): gralloc usage: 0(OMX) => 0x2900(ACodec)
D/SurfaceUtils(24700): disconnecting from surface 0x7214df1010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): connecting to surface 0x7214df1010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): set up nativeWindow 0x7214df1010 for 1632x1080, color 0x30d, rotation 0, usage 0x2900
I/ACodec (24700): gralloc usage: 0(OMX) => 0x2900(ACodec)
D/SurfaceUtils(24700): disconnecting from surface 0x7214db2010, reason setNativeWindowSizeFormatAndUsage
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Allocating 7 buffers from a native window of size 2924544 on output port
D/SurfaceUtils(24700): connecting to surface 0x7214db2010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): set up nativeWindow 0x7214db2010 for 1632x1088, color 0x30d, rotation 0, usage 0x2900
W/ACodec (24700): [OMX.hisi.video.decoder.avc] setting nBufferCountActual to 13 failed: -1010
W/ACodec (24700): [OMX.hisi.video.decoder.avc] setting nBufferCountActual to 12 failed: -1010
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Allocating 11 buffers from a native window of size 2715648 on output port
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] using color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) and dataspace 0x104
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Output port now reenabled.
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] using color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) and dataspace 0x104
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Now handling output port settings change
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Output port now disabled.
I/HwExtendedUtils(24700): Set to window composer mode as 2
I/ACodec (24700): gralloc usage: 0(OMX) => 0x2900(ACodec)
D/SurfaceUtils(24700): disconnecting from surface 0x7214df1010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): connecting to surface 0x7214df1010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): set up nativeWindow 0x7214df1010 for 1632x1088, color 0x30d, rotation 0, usage 0x2900
W/ACodec (24700): [OMX.hisi.video.decoder.avc] setting nBufferCountActual to 13 failed: -1010
W/ACodec (24700): [OMX.hisi.video.decoder.avc] setting nBufferCountActual to 12 failed: -1010
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Allocating 11 buffers from a native window of size 2715648 on output port
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Output port now reenabled.
D/ACodec (24700): sendVideoFpsDataToiAware time:0 fps:-1 msg:-1
D/ACodec (24700): sendVideoFpsDataToiAware time:0 fps:-1 msg:-1
I/cr_MediaCodecBridge(24700): create MediaCodec video decoder, mime video/avc, decoder name OMX.hisi.video.decoder.avc
I/OMXClient(24700): IOmx service obtained
I/ACodec (24700): In onAllocateComponent create compenent, codec name: OMX.hisi.video.decoder.avc
I/MediaCodec(24700): MediaCodec will operate in async mode
D/SurfaceUtils(24700): connecting to surface 0x72152e3010, reason connectToSurface
I/MediaCodec(24700): [OMX.hisi.video.decoder.avc] setting surface generation to 25292803
D/SurfaceUtils(24700): disconnecting from surface 0x72152e3010, reason connectToSurface(reconnect)
D/SurfaceUtils(24700): connecting to surface 0x72152e3010, reason connectToSurface(reconnect)
E/ACodec (24700): [OMX.hisi.video.decoder.avc] setPortMode on output to DynamicANWBuffer failed w/ err -2147483648
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] using color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) and dataspace 0x104
I/ACodec (24700): onStart
I/HwExtendedUtils(24700): Set to window composer mode as 2
I/ACodec (24700): gralloc usage: 0(OMX) => 0x2900(ACodec)
D/SurfaceUtils(24700): disconnecting from surface 0x72152e3010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): connecting to surface 0x72152e3010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): set up nativeWindow 0x72152e3010 for 1632x1080, color 0x30d, rotation 0, usage 0x2900
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Allocating 7 buffers from a native window of size 2924544 on output port
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] using color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) and dataspace 0x104
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] got color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) err=0(NO_ERROR)
I/ACodec (24700): [OMX.hisi.video.decoder.avc] using color aspects (R:2(Limited), P:1(BT709_5), M:1(BT709_5), T:3(SMPTE170M)) and dataspace 0x104
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Now handling output port settings change
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Output port now disabled.
I/HwExtendedUtils(24700): Set to window composer mode as 2
I/ACodec (24700): gralloc usage: 0(OMX) => 0x2900(ACodec)
D/SurfaceUtils(24700): disconnecting from surface 0x72152e3010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): connecting to surface 0x72152e3010, reason setNativeWindowSizeFormatAndUsage
D/SurfaceUtils(24700): set up nativeWindow 0x72152e3010 for 1632x1088, color 0x30d, rotation 0, usage 0x2900
W/ACodec (24700): [OMX.hisi.video.decoder.avc] setting nBufferCountActual to 13 failed: -1010
W/ACodec (24700): [OMX.hisi.video.decoder.avc] setting nBufferCountActual to 12 failed: -1010
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Allocating 11 buffers from a native window of size 2715648 on output port
I/ACodec (24700): [OMX.hisi.video.decoder.avc] Output port now reenabled.
D/ACodec (24700): sendVideoFpsDataToiAware time:0 fps:-1 msg:-1
I/AwareBitmapCacher(24700): init lrucache size: 2097152 pid=24700
D/ProfileInstaller(24700): Installing profile for com.example.p30webview
W/mple.p30webvie(24700): Accessing hidden method Llibcore/io/Memory;->peekLong(JZ)J (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Llibcore/io/Memory;->pokeLong(JJZ)V (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Llibcore/io/Memory;->pokeInt(JIZ)V (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Llibcore/io/Memory;->peekInt(JZ)I (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Llibcore/io/Memory;->pokeByte(JB)V (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Llibcore/io/Memory;->peekByte(J)B (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Llibcore/io/Memory;->pokeByteArray(J[BII)V (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Llibcore/io/Memory;->peekByteArray(J[BII)V (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->objectFieldOffset(Ljava/lang/reflect/Field;)J (greylist,core-platform-api, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getLong(Ljava/lang/Object;J)J (greylist,core-platform-api, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden field Ljava/nio/Buffer;->address:J (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->arrayBaseOffset(Ljava/lang/Class;)I (greylist,core-platform-api, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->arrayIndexScale(Ljava/lang/Class;)I (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getInt(Ljava/lang/Object;J)I (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->putInt(Ljava/lang/Object;JI)V (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getLong(Ljava/lang/Object;J)J (greylist,core-platform-api, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->putLong(Ljava/lang/Object;JJ)V (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getObject(Ljava/lang/Object;J)Ljava/lang/Object; (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->putObject(Ljava/lang/Object;JLjava/lang/Object;)V (greylist, reflection, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getInt(Ljava/lang/Object;J)I (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->putObject(Ljava/lang/Object;JLjava/lang/Object;)V (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->putInt(Ljava/lang/Object;JI)V (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getObject(Ljava/lang/Object;J)Ljava/lang/Object; (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->putLong(Ljava/lang/Object;JJ)V (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getLong(Ljava/lang/Object;J)J (greylist,core-platform-api, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->putObject(Ljava/lang/Object;JLjava/lang/Object;)V (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->putInt(Ljava/lang/Object;JI)V (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getUnsafe()Lsun/misc/Unsafe; (greylist,core-platform-api, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->compareAndSwapObject(Ljava/lang/Object;JLjava/lang/Object;Ljava/lang/Object;)Z (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getObject(Ljava/lang/Object;J)Ljava/lang/Object; (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->getLong(Ljava/lang/Object;J)J (greylist,core-platform-api, linking, allowed)
W/cr_MediaCodecBridge(24700): Releasing: OMX.hisi.video.decoder.avc
W/ACodec (24700): forcing OMX state to Idle when received shutdown in ExecutingState
D/ACodec (24700): sendVideoFpsDataToiAware time:33333 fps:30 msg:30
D/SurfaceUtils(24700): disconnecting from surface 0x7214db2010, reason disconnectFromSurface
W/cr_MediaCodecBridge(24700): Codec released
W/cr_MediaCodecBridge(24700): Releasing: OMX.hisi.video.decoder.avc
W/ACodec (24700): forcing OMX state to Idle when received shutdown in ExecutingState
D/SurfaceUtils(24700): disconnecting from surface 0x7214df1010, reason disconnectFromSurface
W/cr_MediaCodecBridge(24700): Codec released
W/cr_MediaCodecBridge(24700): Releasing: OMX.hisi.video.decoder.avc
W/ACodec (24700): forcing OMX state to Idle when received shutdown in ExecutingState
D/SurfaceUtils(24700): disconnecting from surface 0x72152e3010, reason disconnectFromSurface
W/cr_MediaCodecBridge(24700): Codec released
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->putObject(Ljava/lang/Object;JLjava/lang/Object;)V (greylist, linking, allowed)
W/mple.p30webvie(24700): Accessing hidden method Lsun/misc/Unsafe;->compareAndSwapObject(Ljava/lang/Object;JLjava/lang/Object;Ljava/lang/Object;)Z (greylist, linking, allowed)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel master, 3.28.0-2.0.pre.38698, on macOS 15.2 24C101 darwin-arm64, locale en-CO)
• Flutter version 3.28.0-2.0.pre.38698 on channel master at /Users/victorcatano/fvm/versions/master
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision bd1ebf2e14 (3 days ago), 2025-01-11 13:16:21 -0800
• Engine revision bd1ebf2e14
• Dart version 3.7.0 (build 3.7.0-312.0.dev)
• DevTools version 2.42.0
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/victorcatano/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] Connected device (5 available)
• ELE L04 (mobile) • JND0219517003286 • android-arm64 • Android 10 (API 29)
```
</details>
| e: device-specific,platform-android,p: webview,package,P3,team-android,triaged-android | low | Critical |
2,788,180,670 | rust | `no super in the root` in non-root module | ### Code
```Rust
#![allow(unused)]
mod foo {
use super::{};
}
```
### Current output
```Shell
error[E0432]: unresolved import `super`
--> src/lib.rs:3:9
|
3 | use super::{};
| ^^^^^^^^^ no `super` in the root
For more information about this error, try `rustc --explain E0432`.
```
### Desired output
```Shell
No such error
```
### Rationale and extra context
_No response_
### Other cases
```Rust
The same error happens with `self::{}`, but not `crate::{}`.
```
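For contrast, a minimal sketch (not from the original report) showing that the empty brace list is accepted when rooted at `crate`, and that a non-empty `super::` import in the same position resolves fine, so the diagnostic seems specific to `super`/`self` with an empty list:
```rust
#![allow(unused)]

mod foo {
    // Accepted: empty brace list rooted at `crate`.
    use crate::{};

    // Accepted: `super` resolves normally when the list is non-empty.
    use super::bar;
}

fn bar() {}
```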
### Rust Version
```Shell
stable 1.84.0
nightly 2025-01-12 48a426eca9df23b24b35
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,788,246,172 | kubernetes | Support spec.MaxUnavailable=0 to use PDB with Jobs and prevent disruptions during node upgrades | ### What would you like to be added?
Add support for the `spec.MaxUnavailable` setting when set to 0 (zero) for PDBs used in combination with Jobs.
### **Context**
Currently, the [documentation](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#arbitrary-controllers-and-selectors) states that `spec.MaxUnavailable` should not be used for resources that do not support the scale subresource, and when deployed, the PDB shows this warning:
> Warning CalculateExpectedPodCountFailed 75s controllermanager Failed to calculate the number of expected pods: jobs.batch does not implement the scale subresource
However, by setting `spec.MaxUnavailable=0` in the PDB and using it in combination with a selector label to point the PDB at the Scaled Jobs, our testing has confirmed that it effectively prevents disruptions of Keda Scaled Jobs during node upgrades.
**The reasoning is that when `spec.MaxUnavailable` is set to 0, there is no need to know the number of Pods running at a given moment.**
Behavior observed during testing:
1. Node is cordoned
2. Kubernetes attempts to evict pods running a job, and eviction is prevented by the PDB.
3. Kubernetes keeps attempting to evict the pods.
4. Jobs finish execution.
5. Kubernetes evicts the pods.
6. The upgrade can proceed.
**Example PDB Used**
```
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: agent-scaledjob-pdb
spec:
maxUnavailable: 0
selector:
matchLabels:
app: agent-scaledjobs
```
The selector label is also included in the ScaledJob
```
..
spec:
jobTargetRef:
activeDeadlineSeconds: 1500
template:
metadata:
name: agent-scaledjob
labels:
app: agent-scaledjobs
..
```
### Why is this needed?
To avoid disruptions of jobs during node image or Kubernetes upgrades | kind/feature,needs-sig,needs-triage | low | Critical |
2,788,249,514 | flutter | Flutter conductor must be updated to handle monorepo releases | null | P1,team-release | medium | Minor |
2,788,257,400 | flutter | [web] Enable invalid_runtime_check_with_js_interop_types lint in web_ui code. | Given the amount of bespoke JS-interop code that we have in the Flutter Web engine, I think it'd be nice to enable the `invalid_runtime_check_with_js_interop_types` lint. [Docs](https://dart.dev/tools/linter-rules/invalid_runtime_check_with_js_interop_types).
It got "disabled" (actually, not activated), here:
* https://github.com/flutter/flutter/pull/161560 | engine,c: tech-debt,team-web | low | Minor |
2,788,257,912 | pytorch | [fsdp2] maybe unreliable `set_unshard_in_backward(False)` | Hey Andrew @awgu,
As a big fan of FSDP2, I keep posting improvements 😄
This flag ([`set_unshard_in_backward(False)`](https://github.com/pytorch/pytorch/blob/aa57f0c6637d4377d2d86d377fdf41840498960a/torch/distributed/fsdp/_fully_shard/_fully_shard.py#L408)) is super helpful for skipping `unshard()` in the backward pass, especially for an `nn.Module` whose backward pass is unused.
For a valid example -- ZeRO3: certain parameters are only used in forward, not in backward (so `unshard()` is skipped there), and we keep those parameters in the sharded state with their gradients left as None during the backward pass, which saves allgather and reduce communication while remaining correct.
But in the case of ZeRO2 or ZeRO++ (`reshard_after_forward=False` or an int), `set_unshard_in_backward(False)` results in misleading logic and a potential bug. For example,
- certain parameters are only used in forward, but not used in backward
- after forward, these parameters stay in the unsharded state (ZeRO2) or the [replicate, shard] state (ZeRO++)
- `set_unshard_in_backward(False)` skips `unshard` in the pre-backward hook
- so in backward, these parameters stay in the ZeRO2/ZeRO++ state, which is misleading (i.e., semantically, `set_unshard_in_backward(False)` should mean the parameters stay in the sharded state, but in fact they are in the ZeRO2/ZeRO++ state)
As a suggestion, we can either ban ZeRO2 and ZeRO++ when using the flag `set_unshard_in_backward(False)`, or figure out a more general way to support all ZeRO states.
What do you think 😁?
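For reference, a minimal sketch of the flag combination discussed above, using a toy model rather than a genuinely forward-only submodule. Assumptions: launched with `torchrun` on GPUs with NCCL, and a recent PyTorch where `fully_shard` is exported from `torch.distributed.fsdp` (older versions expose it from `torch.distributed._composable.fsdp`):
```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import fully_shard


def main():
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    model = nn.Sequential(nn.Linear(16, 16), nn.Linear(16, 16)).cuda()

    # ZeRO-2-like: keep the first submodule's parameters unsharded after forward.
    fully_shard(model[0], reshard_after_forward=False)
    fully_shard(model)

    # Skip unshard in the pre-backward hook for that submodule. As described
    # above, with reshard_after_forward=False its parameters are already
    # unsharded at backward time, so the "stay sharded" reading does not hold.
    model[0].set_unshard_in_backward(False)

    out = model(torch.randn(4, 16, device="cuda"))
    out.sum().backward()
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```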
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,788,286,765 | rust | the rust build fails on FreeBSD/amd64-14 due to not finding libzstd despite it being installed on the build host. | I tried building rust (on FreeBSD/amd64 14.0) from today's clone (HEAD is 8c39ce5b4fb5b61796e5fd8cec56c7b9abd2122b)
```sh
./configure --set install.prefix=$HOME --set install.sysconfdir=$HOME/etc --llvm-root=/usr/local/llvm19
./x.py build
```
It ended unsuccessfully with the following:
```
Compiling rustc_driver v0.0.0 (/home/woods/work/m-rust/compiler/rustc_driver)
error: linking with `cc` failed: exit status: 1
|
= note: LC_ALL="C" PATH="/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-sysroot/lib/rustlib/x86_64-unknown-freebsd/bin:/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0/lib/rustlib/x86_64-unknown-freebsd/bin:/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0/lib/rustlib/x86_64-unknown-freebsd/bin:/home/woods/pkg/bin:/home/woods/bin:/bin:/usr/bin:/usr/local/bin:" VSLANG="1033" "cc" "-Wl,--version-script=/tmp/rustcAGOOXN/list" "-Wl,--no-undefined-version" "-m64" "/tmp/rustcAGOOXN/symbols.o" "<1 object files omitted>" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-rustc/x86_64-unknown-freebsd/release/deps/rustc_driver-678ab8838275942c.0bj53j4cvoxvg8zvx88oek9t7.rcgu.rmeta" "<1 object files omitted>" "-Wl,--as-needed" "-Wl,-Bstatic" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-rustc/x86_64-unknown-freebsd/release/deps/{librustc_driver_impl-4bae2b5e07e44ec5.rlib,libctrlc-71f931fb16bec6af.rlib,libnix-1442dacdc2c23b79.rlib,librustc_log-c0d6dbecb9cd6dd4.rlib,libtracing_tree-df3639e168c3645e.rlib,libtracing_log-f99def6756aafc8e.rlib,libnu_ansi_term-a98b4f33bd82980a.rlib,libtracing_subscriber-4af61ac18f717d43.rlib,libnu_ansi_term-3b384a8bc7193777.rlib,liboverload-8fc38ace4bb4d4d5.rlib,libsharded_slab-a2f9aa0cd3f25a45.rlib,liblazy_static-914814e95529954b.rlib,libmatchers-067e388262b33701.rlib,libregex_automata-4adedaf6ee0fa036.rlib,libregex_syntax-4c39ade0ad89ae6d.rlib,libthread_local-c4030bc03cb07099.rlib,librustc_smir-94aa48ecbee0c0d4.rlib,libstable_mir-0621e0b5a08f8701.rlib,libtime-d1304a4bb48a1848.rlib,libtime_core-b734096f6ecb5eb5.rlib,libnum_conv-3e0c51cc5fed482c.rlib,libderanged-008beceb12e7b50a.rlib,libpowerfmt-3e9933fe193a019f.rlib,librustc_interface-61ed7d2625417c15.rlib,librustc_codegen_llvm-25c7a8ed8c5465b5.rlib,librustc_llvm-274993179981bcc4.rlib,librustc_sanitizers-d264b09a88f43596.rlib,librustc_hir_typeck-b03ff094e3a2218b.rlib,librustc_hir_analysis-d234234108a003c5.rlib,librustc_monomorphize-c6e8114b8224cda7.rlib,librustc_mir_transform-4e37530aca707852.rlib,librustc_mir_build-9ea12534cbd67e09.rlib,librustc_pattern_analysis-1339abb6211c245e.rlib,librustc_borrowck-9d8433099891f68b.rlib,librustc_traits-958b3148185c2a3c.rlib,librustc_const_eval-c2146265b7408c4d.rlib,librustc_mir_dataflow-2a0f908c13915c17.rlib,librustc_ast_lowering-67063c831bd5d3ab.rlib,librustc_builtin_macros-d22d1eb4f2c29dd5.rlib,librustc_resolve-e53c128ec5f51623.rlib,libpulldown_cmark-68b5c2f2c499d835.rlib,libunicase-8acba8277f961efd.rlib,libpulldown_cmark_escape-723191d07bc8e169.rlib,librustc_passes-319ce4cdcd47d95d.rlib,librustc_privacy-187c6eae831ed696.rlib,librustc_ty_utils-7b4363f085be8f28.rlib,librustc_query_impl-a5f444cae3f476ff.rlib,librustc_lint-45b4eac1ae9f7f25.rlib,libunicode_security-2305730dac9c3b6b.rlib,libunicode_script-ae2a107fc491f12d.rlib,librustc_codegen_ssa-9170e61d173a58b9.rlib,libwasm_encoder-c1005a5681831cf0.rlib,libleb128-4ea5827ad7ab45ca.rlib,libthorin-4836e2f36c09df26.rlib,libhashbrown-acb128d3aafd8e04.rlib,libahash-ca3b6da3e6d427e9.rlib,liballocator_api2-65278355585bae51.rlib,libgimli-754ad356442ad26e.rlib,libfallible_iterator-7b97c71b81546f0e.rlib,librustc_symbol_mangling-aa81523337a2e783.rlib,librustc_demangle-063ce806a6dfa9ed.rlib,libpunycode-9d3c60cfc702bf02.rlib,librustc_trait_selection-b13e11ce3ba168f1.rlib,librustc_next_trait_solver-c705d25318f657cc.rlib,librustc_parse_format-1b833b63b367e737.rlib,librustc_transmute-0a9bc7d98bae3125.rlib,librustc_infer-3f52bd30dec15574.rlib,librustc_incremental-a9515441c3ab6d5c.rlib,libpathdiff-
e9ecf6c05ed4f809.rlib,librustc_metadata-3015d800c2ea1fdf.rlib,liblibloading-5e7d99a01c821283.rlib,librustc_expand-595875b815d92e43.rlib,librustc_ast_passes-d41e714d79deacb3.rlib,librustc_parse-e92fc18d31ec22b9.rlib,libunicode_normalization-66e2a5fd6170449c.rlib,libtinyvec-9943206937676f25.rlib,libtinyvec_macros-37be9b660011dd03.rlib}" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-sysroot/lib/rustlib/x86_64-unknown-freebsd/lib/{libproc_macro-96ee6d83eae46b34.rlib}" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-rustc/x86_64-unknown-freebsd/release/deps/{libregex-2bea9ac640f180bf.rlib,libregex_automata-647e0511891b8f0f.rlib,libaho_corasick-f23555996dc4619d.rlib,libregex_syntax-64253bda1bda70db.rlib,libcc-6e9b4ff5d6443c9f.rlib,libshlex-be60fb23f5fe6fec.rlib,libar_archive_writer-2a8b4e1afb1943a4.rlib,librustc_middle-484b1881b6f3c05c.rlib,libfield_offset-593055940b88c5e9.rlib,libmemoffset-d9b0c3210aca29f5.rlib,librustc_apfloat-4e7872f929d403e9.rlib,libgsgdt-564226be9f7ca74a.rlib,libpolonius_engine-bb3a22c6e2244dca.rlib,libdatafrog-135888ec959203c9.rlib,librustc_query_system-d4ea58f31d4db76b.rlib,librustc_attr_parsing-38c8b27306098089.rlib,librustc_attr_data_structures-65c40a1a6cda555b.rlib,librustc_session-0e9871ae42652270.rlib,libgetopts-27aa0516e322f727.rlib,libunicode_width-bc149f880e63bd63.rlib,librustc_hir_pretty-206eb5f8dd19d9b1.rlib,librustc_errors-f946bf2f8e700c80.rlib,libtermize-ce3ec6ba467a42cd.rlib,librustc_error_codes-b9d8df4e46351aac.rlib,librustc_type_ir-f54c70b1dfc386e8.rlib,librustc_ast_pretty-12b7ff69144951bd.rlib,libitertools-3e3ff0b1c8d4bf49.rlib,libannotate_snippets-9b20301b1b3d9a87.rlib,libanstyle-84c992b12f50dac3.rlib,libtermcolor-654ba8bbb92e5984.rlib,librustc_lint_defs-ea7ebf96a83bb970.rlib,librustc_error_messages-2356bf78fa6678d6.rlib,librustc_baked_icu_data-a98d4ebe25fa9af6.rlib,libicu_list-ac9b7ea4ce1ab578.rlib,libicu_list_data-8ee642e65c15914c.rlib,libregex_automata-cc7a3a7523062e6e.rlib,libicu_provider_adapters-38158b6002458522.rlib,libicu_locid_transform-eec6a847293ac9fc.rlib,libicu_locid_transform_data-825f90b22a433841.rlib,libicu_provider-df9a8027f2c847be.rlib,libicu_locid-78a1a244a3a8ac5b.rlib,liblitemap-93d975e2cd116bb1.rlib,libwriteable-5ae134a171443db4.rlib,libfluent_bundle-4aa5dfb4a552ec40.rlib,libfluent_langneg-7bfa05aa09c9f74a.rlib,libintl_pluralrules-4c1d35599ac487d0.rlib,libself_cell-5d8fd69232569989.rlib,libself_cell-b0ff3ef64a40c698.rlib,libintl_memoizer-2d9750366ebb1874.rlib,libtype_map-3cc803b01bff4da3.rlib,libunic_langid-dceccb47dd44dd89.rlib,libunic_langid_macros-4640b415f304e02a.rlib,libunic_langid_impl-d794a6b3c4630e82.rlib,libtinystr-a57119c43811c860.rlib,libzerovec-beda1b95e521d919.rlib,libyoke-6f7618785e8dbfa2.rlib,libzerofrom-4812fbb32d1507f6.rlib,libfluent_syntax-392311bd6333574a.rlib,libthiserror-37aa34d6eddcecf9.rlib,librustc_hir-71b742c708031fd7.rlib,libodht-ea3cfec055e891b6.rlib,librustc_target-224f6d47fb75a45e.rlib,libobject-39e2d0036a658f7a.rlib,libruzstd-50a940a4324ec02c.rlib,libtwox_hash-82a4ca5450c07ed6.rlib,libstatic_assertions-e68e80149a9f58f3.rlib,libflate2-406a8709afe5980d.rlib,libminiz_oxide-a648d51f161daac9.rlib,libadler2-fb1fb62726045bf1.rlib,libcrc32fast-00805bfffac5de92.rlib,libwasmparser-fdf733c5576d8c4f.rlib,librustc_fs_util-36e75bf255966d88.rlib,librustc_abi-aee6187d68fa8f5c.rlib,librustc_feature-892a64a2c7f3bc8f.rlib,libserde_json-aadd8bc2513e3f81.rlib,libryu-c464a3fbe63b3bcf.rlib,libserde-8d95b7475b55d2a4.rlib,librand_xoshiro-27378eabb6871455.rlib,librand-52d716baae3ac197.rlib,librand_chacha
-2b6daeb70e843955.rlib,libppv_lite86-04fe3b6371f86655.rlib,libzerocopy-828797d8768a7475.rlib,libbyteorder-5283af83e54dc47e.rlib,librand_core-d38b5d372c69d418.rlib,librustc_ast-08a251c6832351a6.rlib,libmemchr-0534aa5fd4e2537b.rlib,librustc_ast_ir-382b2f0be9d1f26b.rlib,librustc_lexer-4788e64988e83250.rlib,libunicode_xid-63b77864dd1deb85.rlib,libunicode_properties-954574c360b909b3.rlib,librustc_span-adb841bb52e9683a.rlib,libunicode_width-6c56bd6c6c225ae3.rlib,libblake3-ef07d89ededadc67.rlib,libconstant_time_eq-8211d5e9dcefa3db.rlib,libarrayref-4110a6e4434e2df8.rlib,libitoa-97de34720279d13b.rlib,libscoped_tls-28b3ced1ae35c388.rlib,libsha2-751b38e1ee150a21.rlib,libsha1-73755c23bdb322cf.rlib,libcpufeatures-8f745d695829d317.rlib,libmd5-5d6eead03ad06579.rlib,libdigest-66f3498185594879.rlib,libblock_buffer-7d595291c034ede8.rlib,libcrypto_common-a57783a659a1aff5.rlib,libgeneric_array-353d10e05a4d71a9.rlib,libtypenum-760c1e2613e03e75.rlib,librustc_data_structures-1053647515096a54.rlib,libelsa-8096b935d4697546.rlib,libstable_deref_trait-9712baa2c8bc7388.rlib,libstacker-27c30d4695daed8a.rlib,libpsm-64b545ee7521e23d.rlib,libmemmap2-6adc1efc15a27a67.rlib,librustc_arena-5db0bf02ab8f9233.rlib,libtempfile-bfdefe5410259fda.rlib,libgetrandom-721e4f81a8dd825a.rlib,libfastrand-1fd238d90c73318f.rlib,librustix-44eaff68ea54eb49.rlib,libbitflags-f03bd5a3a2eadeab.rlib,liberrno-e4500660b0f7bbb6.rlib,librustc_stable_hash-b2fef44724b745b4.rlib,libarrayvec-3df6e03caacc1602.rlib,libmeasureme-a11b9e6d02c245b1.rlib,librustc_hash-3a80cf31c5eb998f.rlib,libparking_lot-3f85b0f41caa1d58.rlib,libparking_lot_core-9a1ca066262f7d18.rlib,liblock_api-7b110236eee6ec51.rlib,libscopeguard-dd9f46b9888d511a.rlib,librustc_graphviz-bce1cb9bd7e0b48c.rlib,libjobserver-0906cfd33a10f04e.rlib,libtracing-05e162d36b3dc762.rlib,libcfg_if-93f68ca2cfeae092.rlib,libpin_project_lite-d5321628bd2b0cc8.rlib,libtracing_core-cb13cd587f3cea2d.rlib,libonce_cell-edfc948814f25d31.rlib,librustc_hash-3383be06cb568c81.rlib,librustc_index-3db69d9a816b3639.rlib,librustc_serialize-0839f2b3f7ac629f.rlib,libindexmap-50c421918c915529.rlib,libequivalent-d5de69d1a0ce348b.rlib,librayon-455e3a2ba8168a6c.rlib,librayon_core-617cec76f7e4e92e.rlib,libnum_cpus-0ccb12dbc54ca1bf.rlib,liblibc-9ba6b2a659063125.rlib,libcrossbeam_deque-0ddec9b29b4d9196.rlib,libcrossbeam_epoch-0101bfe3c3ae7290.rlib,libcrossbeam_channel-5e671f5d1760fba6.rlib,libcrossbeam_utils-e6d36c606961ae0b.rlib,libeither-34c50a8e6c5afbec.rlib,libhashbrown-999d4dc558f71ad2.rlib,libfoldhash-c70e8f802b9f757b.rlib,libthin_vec-98e722f8714e6247.rlib,libsmallvec-dc0381fe1ecd023f.rlib,libena-5f114bda1d55ee57.rlib,liblog-bf4c16985b990e5e.rlib}" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-sysroot/lib/rustlib/x86_64-unknown-freebsd/lib/{libstd-9922f423544429c4.rlib,libpanic_unwind-a2e9435a70d003be.rlib,libobject-a2cbacc1d9b0cbb6.rlib,libmemchr-abfb5a964f1f530a.rlib,libaddr2line-5fafdcabda5449e2.rlib,libgimli-38f8fa3b8a8c44ee.rlib,librustc_demangle-9295d63a05b32ed3.rlib,libstd_detect-562bc339c6ae89c6.rlib,libhashbrown-94276790940a08b7.rlib,librustc_std_workspace_alloc-d5e9c95677475f2d.rlib,libminiz_oxide-fe15f28d5b78702e.rlib,libadler2-602fd801b8e4808c.rlib,libunwind-557a67c374cbc641.rlib,libcfg_if-d243372002117ff8.rlib,liblibc-7cddd3f0c774adf1.rlib,liballoc-18859d5d405dba87.rlib,librustc_std_workspace_core-7fc511c2f3ba90f3.rlib,libcore-e20f11d3e16bf07e.rlib,libcompiler_builtins-fd24d3e3bbb46650.rlib}" "-Wl,-Bdynamic" "-lPolly" "-lPollyISL" "-lrt" "-lexecinfo" "-lpthread" "-lm" "-lz" "-lzstd" "-lc++" "-lc" 
"-lrt" "-lutil" "-lexecinfo" "-lkvm" "-lmemstat" "-lkvm" "-lutil" "-lprocstat" "-lrt" "-ldevstat" "-lexecinfo" "-lpthread" "-lgcc_s" "-lc" "-lm" "-lrt" "-lpthread" "-lrt" "-lutil" "-lexecinfo" "-lkvm" "-lmemstat" "-lkvm" "-lutil" "-lprocstat" "-lrt" "-ldevstat" "-Wl,--eh-frame-hdr" "-Wl,-z,noexecstack" "-L" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-rustc/x86_64-unknown-freebsd/release/build/psm-436e38c2f952a9a4/out" "-L" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-rustc/x86_64-unknown-freebsd/release/build/blake3-c3090fa4edb10df8/out" "-L" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-rustc/x86_64-unknown-freebsd/release/build/blake3-c3090fa4edb10df8/out" "-L" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-rustc/x86_64-unknown-freebsd/release/build/rustc_llvm-df31628df9f34818/out" "-L" "/usr/local/llvm19/lib" "-o" "/home/woods/work/m-rust/build/x86_64-unknown-freebsd/stage0-rustc/x86_64-unknown-freebsd/release/deps/librustc_driver-678ab8838275942c.so" "-shared" "-Wl,-soname=librustc_driver-678ab8838275942c.so" "-Wl,-z,relro,-z,now" "-Wl,-O1" "-nodefaultlibs" "-Wl,-z,origin" "-Wl,-rpath,$ORIGIN/../lib"
= note: some arguments are omitted. use `--verbose` to show all linker arguments
= note: ld: error: unable to find library -lzstd
cc: error: linker command failed with exit code 1 (use -v to see invocation)
```
Note that zstd is installed:
```sh
# pkg info | grep zstd
zstd-1.5.6 Fast real-time compression algorithm
```
Obviously the build isn't looking in `/usr/local` for installed libraries, and I don't see any obvious option in the configure script to tell it either to look there generically by default, or even to look there explicitly for libzstd.
```
# pkg shlib libzstd.so.1
libzstd.so.1 is provided by the following packages:
zstd-1.5.6
libzstd.so.1 is linked to by the following packages:
llvm15-15.0.7_10
mesa-dri-24.0.7
rsync-3.3.0
tiff-4.4.0_3
binutils-2.43.1,1
llvm19-19.1.6
```
Perhaps this is one of those intermediate library rpath problems, but anyway, how do I tell `x.py` to always look in `/usr/local` for more libraries?
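The kind of thing I would expect to be able to do (an untested guess on my part; I don't know whether bootstrap actually forwards these flags to every stage) is to point the linker at the pkg library path via the environment:
```sh
# Untested workaround attempt: add /usr/local/lib to the library search path
# and rpath for every rustc link step.
RUSTFLAGS="-L native=/usr/local/lib -C link-arg=-Wl,-rpath,/usr/local/lib" ./x.py build
```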
System info:
```
$ uname -a
FreeBSD fezzik 14.0-RELEASE-p4 FreeBSD 14.0-RELEASE-p4 #0 releng/14.0-4edf3b807: Sat Dec 30 17:43:31 EST 2023 root@worm:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-ZFS amd64
```
| O-freebsd,T-bootstrap,C-bug | low | Critical |
2,788,302,095 | terminal | Ctrl+Break can leave the WSL shell in an unusable state | ### Windows Terminal version
1.22.2702.0
### Windows build number
10.0.19045.5247
### Other Software
VIM 8.2.4919 (inside WSL)
### Steps to reproduce
1. Open the Settings, and set the *Profile termination behavior* to *Close only when process exits successfully*.
2. Start a WSL shell and run `vim`.
3. Enter `:set mouse=a` to enable mouse mode.
4. Press <kbd>Ctrl</kbd>+<kbd>Break</kbd> (possibly <kbd>Ctrl</kbd>+<kbd>ScrLock</kbd> depending on your keyboard).
5. When you see the `press Enter to restart` message, press <kbd>Enter</kbd>.
6. Once the shell restarts, try clicking in the window to select some text.
### Expected Behavior
I'd expect mouse selection to work.
### Actual Behavior
When you click in the window, nothing can be selected, but a bunch of "random" characters get entered at the prompt. It seems that <kbd>Ctrl</kbd>+<kbd>Break</kbd> kills the session without giving the app a chance to exit gracefully, and then when we restart the session we don't reset any of the VT state either. As a result we're left with a mouse mode enabled which the shell wasn't expecting.
This mouse mode example was just the easiest to demonstrate, but you can see how it could be even more annoying if the app had enabled something like win32 input mode, or had selected a graphic character set.
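(As an aside, the stuck mouse mode can be cleared by hand from the restarted shell; this is only a workaround, and the sequences below just cover the common mouse reporting modes.)
```sh
# Turn off the usual mouse tracking modes, or reinitialize the terminal entirely.
printf '\033[?1000l\033[?1002l\033[?1003l\033[?1006l'
# or simply:
reset
```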
Note that for Windows console apps this is less of a problem, because they can trap the break and block it, or at least shut down more gracefully. But I don't think there is anything that Linux apps can do about this, because it's the WSL session itself that is being aborted (at least that's what appears to be happening). | Issue-Bug,Area-TerminalConnection,Product-Terminal | low | Major |
2,788,306,233 | godot | AgX has low dynamic range output with Mobile renderer | ### Tested versions
All Godot versions with the AgX tonemapper ( 084e84be7813e55acd3d3e909bd66229834bedb1 onward)
### System information
Windows 10 (build 19045) - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 980 Ti (NVIDIA; 31.0.15.4665) - Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz (8 threads)
### Issue description
When using the AgX tonemapper with the Mobile renderer, it is not possible to output the full SDR dynamic range without either adjusting the exposure to be very high (and making your scene look very overexposed) or using adjustments such as `Brightness`. In some situations, it appears that the `Glow` effect can also be used to brighten the final output.
This is inconsistent with the existing tonemappers, which use the `White` parameter to enable full SDR dynamic range output, even with the Mobile renderer. For example, Reinhard suffers from the same issue when `White` is set to 16.0, but can be resolved by simply setting `White` to a lower value.
This problem stems from white being hardcoded to around 16 in the AgX tonemapper and the Mobile renderer only providing values up to 2.0 to the tonemapping function.
Forward+ provides values of most any range to the tonemapper, and thus does not exhibit this problem.
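To illustrate the interaction (pseudo-shader only; the function name and the clamp are stand-ins for what the renderers do internally, not the actual Godot source):
```glsl
// Mobile: the tonemapper never sees values brighter than roughly 2.0,
// but the AgX curve only reaches 1.0 for inputs around 16.0,
// so the result is capped well below full white.
vec3 mobile_result = tonemap_agx(min(scene_color, vec3(2.0)));

// Forward+: arbitrarily bright values can reach the curve's white point.
vec3 forward_result = tonemap_agx(scene_color);
```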
Mobile:

Forward+:

The best way to circumvent this issue that I have come up with is to set the `Brightness` adjustment in Godot to `1.17` for Mobile to get full [0.0, 1.0] dynamic range output. This feels wrong to me because it is inconsistent with the typical solution of adjusting the white parameter of the tonemapper.
It also feels wrong because the hardcoded white value of 16 prevents the full and intended behaviour of the AgX tonemapper from being used on the Mobile renderer; colours do not correctly desaturate as they become brighter and the top part of the "S" contrast curve is entirely discarded. (Said differently: AgX behaves incorrectly on the Mobile renderer and adjustments can only mask the problem.)

It appears that the original AgX PR actually had an adjustable `White` parameter, but [it was removed](https://github.com/godotengine/godot/pull/87260#issuecomment-2388786615).
### Compatibility Renderer
I have also sometimes seen the Compatibility renderer output a reduced dynamic range, due to only values in the range [0.0, 1.0] being passed to the tonemapping function. Unfortunately, I haven't been able to reproduce this consistently.
Compatibility (only sometimes??):

### Steps to reproduce
1. Create a project with the Mobile renderer
2. Create a 3D scene with a WorldEnvironment node and set its tonemapper to be AgX
3. Add a very high energy DirectionalLight3D (16.0 energy, for example)
4. Add a MeshInstance3D with a Sphere mesh
5. Add a StandardMaterial3D to the mesh with a white Albedo color
6. Note that no matter how bright the light, the scene will only render up to a value of #dbdbdb (219/255)
7. Switch to Forward+ and note that it is possible to output #ffffff (255/255) with a bright light
This can also be tested using the "Gradients" scene of the [tone mapping and color correction demo from Calinou](https://github.com/Calinou/godot-demo-projects/tree/add-color-correction-demo).
### Minimal reproduction project (MRP)
[agx-low-out-range.zip](https://github.com/user-attachments/files/18427018/agx-low-out-range.zip) | discussion,topic:rendering,topic:3d | low | Minor |
2,788,307,210 | go | x/tools/gopls: CodeAction: an error in one CodeAction provider should not block another | See [CL 640395](https://go.dev/cl/640395) ([https://go-review.googlesource.com/c/tools/+/640395/comment/2b081fe5_fb1b18c3/](https://go-review.googlesource.com/c/tools/+/640395/comment/2b081fe5_fb1b18c3/)) for context.
| FeatureRequest,gopls,Tools | low | Critical |
2,788,308,291 | electron | webContents.startDrag(item) behavior on macOS differs from that on Windows and Linux | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.3.0
### What operating system(s) are you using?
macOS
### Operating System Version
macOS Sequoia 15.2
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
18
### Expected Behavior
On macOS, webContents.startDrag(item) behaves the same as it does on Windows and Linux.
### Actual Behavior
We are using native drag-and-drop in our project. Our code is the same as described in the documentation: [Native File Drag & Drop](https://www.electronjs.org/docs/latest/tutorial/native-file-drag-drop)
The only difference from the docs is that we have custom logic that should run immediately after the drag ends. Here is a code example:
```
ipcMain.on('ondragstart', (event, filePath) => {
console.log('drag start')
event.sender.startDrag({
file: path.join(__dirname, filePath),
icon: iconName
})
console.log('drag end')
// custom logic after drag end
dragEndHandler(filePath)
})
```
On Windows and Linux, `console.log('drag end')` and `dragEndHandler` are not called until the file is dropped. We rely on this behavior because currently, there is no way to listen for the drag-end event when using native drag-and-drop.
Starting from [Electron 18.3.5](https://github.com/electron/electron/releases/tag/v18.3.5), we have observed different behavior on macOS. On macOS, `console.log('drag end')` and `dragEndHandler` are called immediately after `startDrag` and before the file is dropped. This behavior breaks our current logic.
### Testcase Gist URL
_No response_
### Additional Information
For our app, it is critical that `dragEndHandler` is called after the drag ends. The best solution would be to introduce a new drag-end event, similar to the [HTMLElement: dragend event](https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/dragend_event). However, if implementing this is too complex, it would also be acceptable to fix the differing `startDrag` behavior on macOS. | platform/macOS,bug :beetle:,component/drag-and-drop,status/confirmed,has-repro-gist,33-x-y | low | Critical |
2,788,330,529 | electron | desktopCapturer crashing on linux w/XDP when the capture portal is dismissed/closed | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.0.0, 33.0.0, 34.0.0
### What operating system(s) are you using?
Other Linux
### Operating System Version
Linux bakery 6.12.9-zen1-1-zen #1 ZEN SMP PREEMPT_DYNAMIC Fri, 10 Jan 2025 00:39:35 +0000 x86_64 GNU/Linux
### What arch are you using?
x64
### Last Known Working Electron version
31.3.1 - No crashing but cannot open another portal request. 31.4.0 & later crashes
### Expected Behavior
Doesn't crash & allows following requests
### Actual Behavior
When the user closes the portal picker electron will crash.
### Testcase Gist URL
https://www.electronjs.org/docs/latest/api/desktop-capturer the example provided is a simple repro
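In case it helps, the core of the repro is just the usual source enumeration from the main process (trimmed sketch of the docs example):
```js
const { desktopCapturer } = require('electron')

// On Linux with XDG desktop portals this call brings up the portal picker.
// Dismissing/closing that picker crashes the app instead of rejecting the promise.
desktopCapturer.getSources({ types: ['screen', 'window'] })
  .then((sources) => console.log(sources.map((s) => s.name)))
  .catch((err) => console.error('getSources failed:', err))
```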
### Additional Information
When attempting this from Chromium, the same errors are printed, but there is no crash and I can make subsequent pick attempts, so I believe the errors are unrelated. Other than that, there are no logs.
```
[10356:10356:0114/163419.616405:ERROR:screencast_portal.cc(367)] Failed to start the screen cast session.
[10356:10356:0114/163419.616432:ERROR:base_capturer_pipewire.cc(81)] ScreenCastPortal failed: 2
``` | platform/linux,bug :beetle:,32-x-y,33-x-y,34-x-y | low | Critical |
2,788,339,853 | next.js | `'use cache'` and `cacheLife` are not using stale data while revalidating in server-rendered (dynamic) routes | ### Link to the code that reproduces this issue
https://github.com/diego-aquino/next-js-dynamic-io-stale-while-revalidate-bug
### To Reproduce
1. Build the application with `pnpm build`
2. Start the application with `pnpm start`
3. Click the link "Static page" to go to the page `/static`
4. Refresh the page a few times until you see server logs like:
```
2025-01-14T20:52:30.701Z [static] Fetching data...
2025-01-14T20:52:32.703Z [static] Data fetched.
```
5. Click the link "Return"
6. Click the link "Dynamic page" to go to the page `/dynamic`
7. Again, refresh the page a few times until you see server logs like:
```
2025-01-14T20:53:27.647Z [dynamic] Fetching data...
2025-01-14T20:53:29.647Z [dynamic] Data fetched.
```
### Current vs. Expected behavior
Following the steps, I expected the caching behavior to be the same in static or dynamic routes, specifically showing stale data while revalidating in background.
The two pages, `/static` and `/dynamic`, use the same server component [`PageContent`](https://github.com/diego-aquino/next-js-dynamic-io-stale-while-revalidate-bug/blob/main/app/PageContent.tsx), which takes 2 seconds to render, simulating a slow fetching operation. It is cached with:
https://github.com/diego-aquino/next-js-dynamic-io-stale-while-revalidate-bug/blob/3b411bc79339f05041a7336e620470c7b3ee9046/app/PageContent.tsx#L9-L10
After building the application in step 1, `/static` was prerendered as static content and `/dynamic` is server-rendered.
```
Route (app) Size First Load JS
┌ ƒ /dynamic 179 B 112 kB
└ ○ /static 179 B 112 kB
○ (Static) prerendered as static content
ƒ (Dynamic) server-rendered on demand
```
In step 4, the loading component is not shown even if I constantly refresh the page `/static`. I consider this expected because the cached output is stale and being revalidated in the background, as indicated by the server logs "[...] Fetching data..." and "[...] Data fetched".
However, the behavior is different when the same component is mounted in a dynamic route. The page `/dynamic` calls `await headers();` to make it server-rendered. Unlike the static page, refreshing the dynamic page in step 7 shows the loading component after the cached output becomes stale, blocking the page until the component is rendered on the server. I expected the page to show the stale data while revalidating, as with the static page.
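For clarity, the dynamic route is essentially just this (sketch; import names and the exact file are in the linked repro):
```tsx
// app/dynamic/page.tsx
import { headers } from "next/headers";
import PageContent from "../PageContent";

export default async function DynamicPage() {
  await headers(); // opts the route into server rendering on demand
  return <PageContent />;
}
```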
https://github.com/user-attachments/assets/ebbd73a6-a745-4667-ae0c-16eac576505e
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #52~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Dec 9 15:00:52 UTC 2
Available memory (MB): 15910
Available CPU cores: 12
Binaries:
Node: 22.12.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.12.3
Relevant Packages:
next: 15.2.0-canary.9 // Latest available version is detected (15.2.0-canary.9).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
dynamicIO, Runtime
### Which stage(s) are affected? (Select all that apply)
next start (local)
### Additional context
_No response_ | Runtime,dynamicIO | low | Critical |
2,788,354,655 | pytorch | Something is fishy with discard_graph_changes | Discovered with @yanboliang in https://github.com/pytorch/pytorch/pull/142830#discussion_r1913437378
cc @chauhang @penguinwu @ydwu4 @bdhirsh @yf225.
What's going on is:
1) we do a discard_graph_changes
2) then we do a speculate_subgraph, which gives us some lifted_freevars
The lifted_freevars map proxies from the discard_graph_changes's subtracer INSTEAD OF the outer subtracer. This is pretty unexpected and seems like a footgun for using discard_graph_changes.
I'm not sure what to do about this right now. | triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher | low | Minor |
2,788,354,876 | tensorflow | Seg Fault when iterate dataset created from data service | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.18.0
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Segfault when trying to iterate a dataset obtained from the tf.data service.
### Standalone code to reproduce the issue
```shell
# start the data service file start_dataservice.py
import tensorflow as tf
dispatcher = tf.data.experimental.service.DispatchServer(
tf.data.experimental.service.DispatcherConfig(port=50050), start=True
)
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(dispatcher_address=dispatcher_address), start=True
)
print("Starting Worker")
worker.join()
# test file test_dataset_service.py
import tensorflow as tf
import numpy as np
flags = tf.compat.v1.app.flags
flags.DEFINE_bool("local", False, "Run data service in process")
flags.DEFINE_bool("distribute", False, "Run data service in distributed_epoch mode")
FLAGS = flags.FLAGS
def local_service():
print("Starting Local Service")
dispatcher = tf.data.experimental.service.DispatchServer(
tf.data.experimental.service.DispatcherConfig(port=50050), start=True
)
dispatcher_address = dispatcher.target.split("://")[1]
worker = tf.data.experimental.service.WorkerServer(
tf.data.experimental.service.WorkerConfig(dispatcher_address=dispatcher_address), start=True
)
print("Dispatcher target is ", dispatcher.target)
return dispatcher, worker, dispatcher.target
def apply_transformations(ds_train):
ds_train = ds_train.map(normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(60000)
ds_train = ds_train.batch(128)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)
return ds_train
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
def normalize_img(image, label):
"""Normalizes images: `uint8` -> `float32`."""
return tf.cast(image, tf.float32) / 255.0, label
ds_train = apply_transformations(ds_train)
# Create dataset however you were before using the tf.data service.
dataset = ds_train
if FLAGS.local:
dispatcher, worker, service = local_service()
else:
dispatcher_address = "localhost"
dispatcher_port = "50050"
service = "grpc://{}:{}".format(dispatcher_address, dispatcher_port)
if FLAGS.distribute:
processing_mode = "distributed_epoch"
else:
processing_mode = "parallel_epochs"
# This will register the dataset with the tf.data service cluster so that
# tf.data workers can run the dataset to produce elements. The dataset returned
# from applying `distribute` will fetch elements produced by tf.data workers.
dataset = dataset.apply(
tf.data.experimental.service.distribute(processing_mode=processing_mode, service=service)
)
for (x1, y1), (x2, y2) in zip(dataset, ds_train):
np.allclose(x1, x2)
np.allclose(y1, y2)
print("verified mnist dataset locally vs over service")
# script to run
python -m pip install --upgrade pip
python -m pip install tensorflow==2.18.0
python -m pip install 'protobuf<4'
screen -d -m python start_dataservice.py
python3 test_dataset_service.py --local=False
```
### Relevant log output
```shell
2025-01-14 21:56:19.778399: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1736891779.795141 9168 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1736891779.800177 9168 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-14 21:56:19.815971: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
I0000 00:00:1736891783.518634 9168 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 889 MB memory: -> device: 0, name: NVIDIA A100-SXM4-40GB, pci bus id: 0000:10:1c.0, compute capability: 8.0
I0000 00:00:1736891783.520395 9168 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 37945 MB memory: -> device: 1, name: NVIDIA A100-SXM4-40GB, pci bus id: 0000:10:1d.0, compute capability: 8.0
I0000 00:00:1736891783.522012 9168 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 37945 MB memory: -> device: 2, name: NVIDIA A100-SXM4-40GB, pci bus id: 0000:20:1c.0, compute capability: 8.0
I0000 00:00:1736891783.523626 9168 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 37945 MB memory: -> device: 3, name: NVIDIA A100-SXM4-40GB, pci bus id: 0000:20:1d.0, compute capability: 8.0
I0000 00:00:1736891783.525222 9168 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:4 with 37945 MB memory: -> device: 4, name: NVIDIA A100-SXM4-40GB, pci bus id: 0000:90:1c.0, compute capability: 8.0
I0000 00:00:1736891783.526807 9168 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:5 with 37945 MB memory: -> device: 5, name: NVIDIA A100-SXM4-40GB, pci bus id: 0000:90:1d.0, compute capability: 8.0
I0000 00:00:1736891783.528377 9168 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:6 with 37945 MB memory: -> device: 6, name: NVIDIA A100-SXM4-40GB, pci bus id: 0000:a0:1c.0, compute capability: 8.0
I0000 00:00:1736891783.529933 9168 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:7 with 37945 MB memory: -> device: 7, name: NVIDIA A100-SXM4-40GB, pci bus id: 0000:a0:1d.0, compute capability: 8.0
/test/bin/testDataservice: line 5: 9168 Segmentation fault (core dumped) python ${BIN_DIR}/test_dataset_service.py --local=False
``` | type:bug,comp:ops,TF 2.18 | medium | Critical |
2,788,390,379 | tauri | [bug] Build fails if front-end files are collectively too large. | ### Describe the bug
I have been running into an issue where my tauri app will not build if my front-end assets amount to around 1 GB or more of data. If I delete enough assets the project builds. The project I have been testing with is the basic "Getting Started" project using JavaScript, npm, and Vanilla presets. The only thing I've changed is the front-end. Below is a rough tree of my project and it's file sizes.
```
src
|_ index.html (82 KB)
|_ [14 other html files] (979 KB)
|_ assets
   |_ fonts (308 KB)
   |_ gallery (148 MB)
   |_ images (27 MB)
   |_ libraries (928 KB)
   |_ app (337 MB)
   |_ obj (684 MB)
   |_ videos (298 MB)
```
If I delete both the app and video folders, the project builds. Alternatively I can leave those folders and delete the obj folder which will also allow me to build.
If I attempt to build without removing assets, I receive errors stating that seemingly random metadata files are corrupt. Which metafile is reported as corrupt changes depending on what I delete, but it stays consistent for a given set of assets.
I am attempting to use Tauri in place of a current Electron solution, and due to limitations of this project all assets must be bundled with the project.
### Reproduction
My guess is that to reproduce this you need to make a Tauri app based on the quick start project and fill its front-end with more than a gigabyte of assets; one way to generate that much data is sketched below.
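On a Unix-like shell, throwaway files work just as well as real assets for this (sketch; any sufficiently large files should do):
```sh
# Pad the front-end with ~1.2 GB of dummy data before running `npm run tauri build`.
mkdir -p src/assets/padding
for i in $(seq 1 12); do
  dd if=/dev/urandom of="src/assets/padding/blob_$i.bin" bs=1M count=100
done
```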
### Expected behavior
Expected a build of my app to be packaged.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
✔ WebView2: 131.0.2903.112
✔ MSVC: Visual Studio Professional 2022
✔ rustc: 1.84.0 (9fc6b4312 2025-01-07)
✔ cargo: 1.84.0 (66221abde 2024-11-19)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.18.0
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.2.2
- tauri-build 🦀: 2.0.5
- wry 🦀: 0.48.1
- tao 🦀: 0.31.1
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.2.4
[-] Plugins
- tauri-plugin-opener 🦀: 2.2.4
- @tauri-apps/plugin-opener : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../src
```
### Stack trace
```text
PS C:\{USER_DIR}\Documents\TauriTests\Test1\test-app> npm run tauri build
> [email protected] tauri
> tauri build
Compiling test-app v0.1.0 (C:\{USER_DIR}\Documents\TauriTests\Test1\test-app\src-tauri)
error[E0786]: found invalid metadata files for crate `test_app_lib`
--> src\main.rs:5:5
|
5 | test_app_lib::run()
| ^^^^^^^^^^^^
|
= note: corrupt metadata encountered in \\?\C:\ {USER_DIR}\Documents\TauriTests\Test1\test-app\src-tauri\targ
For more information about this error, try `rustc --explain E0786`.
error: could not compile `test-app` (bin "test-app") due to 1 previous error
failed to build app: failed to build app
Error failed to build app: failed to build app
```
### Additional context
One experiment I've tried is creating a bare getting-started app and adding only my asset files; this also caused the build to fail. | type: bug,status: needs triage | low | Critical |
2,788,397,408 | pytorch | "GenericHOPVariable" / abstract out Dynamo support for HOPs | From HOP sync discussion (with @xmfan).
Idea 1: abstract out Dynamo support for HOPs
Some way to create a HOP where:
1) a user defines how to construct the inputs to each subgraph from the (args, kwargs)
2) using this, we can create a GenericHOPVariable that should be able to handle FX graphs as inputs. Dynamo can always speculate_subgraph on the fx graphs using the previous function
Idea 2: Some way to tag a HOP's subgraph as being "already proven safe by Dynamo"
1) If we see it has already been tagged, then there isn't a need to speculate it again.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @ydwu4 @bdhirsh @yf225 | triaged,oncall: pt2,module: dynamo,module: higher order operators,module: pt2-dispatcher | low | Minor |
2,788,422,022 | next.js | `resolveSWCOptions` fails when using next.config.ts with extended tsconfig | ### Link to the code that reproduces this issue
https://github.com/V1RE/repro-next-config-ts
### To Reproduce
1. Make your tsconfig.json extend another tsconfig (a minimal example pair is sketched after this list)
2. Remove the compiler options from your main tsconfig.json so they only live in the extended file
3. Use a `.ts` next config
4. Run `next dev`
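A minimal pair of files matching steps 1 and 2 (sketch; the file name `tsconfig.base.json` is just an example, and the linked repro has the real files):
```jsonc
// tsconfig.json
{
  "extends": "./tsconfig.base.json"
}
```
```jsonc
// tsconfig.base.json
{
  "compilerOptions": {
    "strict": true,
    "moduleResolution": "bundler"
  }
}
```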
### Current vs. Expected behavior
When following the steps above, [`resolveSWCOptions`](https://github.com/vercel/next.js/blob/6616e203054523961ccbc53e841f7b8877025285/packages/next/src/build/next-config-ts/transpile-config.ts#L8) throws an error: the [parsing](https://github.com/vercel/next.js/blob/6616e203054523961ccbc53e841f7b8877025285/packages/next/src/build/next-config-ts/transpile-config.ts#L32) of the tsconfig.json file doesn't throw even though the file has no `compilerOptions` of its own (they only come from the extended config), and `resolveSWCOptions` then fails.
This only happens when using `next.config.ts`; `.mjs` and `.js` configs work as expected.

### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:03:40 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6041
Available memory (MB): 24576
Available CPU cores: 12
Binaries:
Node: 23.6.0
npm: 10.9.2
Yarn: N/A
pnpm: 9.15.0
Relevant Packages:
next: 15.2.0-canary.9 // Latest available version is detected (15.2.0-canary.9).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
TypeScript, Developer Experience
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_ | TypeScript | low | Critical |
2,788,467,604 | node | "What's New" section in API docs | ### What is the problem this feature will solve?
Finding newly added APIs is difficult.
### What is the feature you are proposing to solve the problem?
Since most of our documentation has "Added in" metadata now, it would be helpful if the docs could include a "What's New In This Version" page that is automatically generated from that metadata. For folks reading the docs, it is often difficult to find and identify newly added APIs.
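Roughly the kind of extraction this could be built on (a quick sketch against the existing `added:` YAML metadata blocks in `doc/api/*.md`; the version string is only an example):
```sh
# List every doc entry whose metadata says it was added in the 23.x line.
grep -rn --include='*.md' 'added: v23' doc/api | sort
```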
### What alternatives have you considered?
_No response_ | doc,feature request | low | Minor |
2,788,481,417 | pytorch | Support torch.func.grad for Flex Attention | ### 🚀 The feature, motivation and pitch
Currently, flex attention does not support `torch.func.grad`:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
torch.set_default_device("cuda")
q = torch.randn(1, 1, 1, 16)
k = torch.randn(1, 1, 1, 16)
v = torch.randn(1, 1, 1, 16)
torch.func.grad(flex_attention)(q, k, v)
```
```
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_ops.py", line 471, in wrapper
return self.dispatch(
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_ops.py", line 341, in dispatch
return dispatch_functorch(self, args, kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/pyfunctorch.py", line 294, in dispatch_functorch
return interpreter.process(op, args, kwargs)
File "/home/ubuntu/.local/lib/python3.10/site-packages/torch/_functorch/pyfunctorch.py", line 171, in process
kernel = op.functorch_table[TransformType.Grad]
KeyError: <TransformType.Grad: 2>
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @zou3519 @Chillee @samdow @kshitij12345 @ydwu4 @bdhirsh @yf225 @drisspg @yanboliang @BoyuanFeng | triaged,oncall: pt2,module: functorch,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,788,484,619 | godot | "Snap Controls to Pixel" only partially works | ### Tested versions
v4.4.dev7.mono.official [46c8f8c5c]
### System information
Godot v4.4.dev7.mono - Windows 11 (build 22631) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Ti SUPER (NVIDIA; 32.0.15.6614) - 13th Gen Intel(R) Core(TM) i9-13900K (32 threads)
### Issue description
Snapping controls to pixels only works consistently for the position of the control, not its size.
I pasted a screenshot into Aseprite and zoomed to 25% (my game is scaled by 4x). It should look perfectly scaled, but the TextEdit and OptionButton nodes are blurry due to non-integer sizes:


Note that I had stretch mode set to CanvasItems for this screenshot (so the issue would be visible in Aseprite). Using Viewport stretching obviously renders at the correct pixel resolution, but causes other issues like the bottom row of pixels of some control nodes to occasionally be cut off.
In this screenshot, the first two control nodes have a height of 18.5, causing the bottom edge to blur. The first OptionButton has its text blurred due to the size, but the TextEdit at the top does not have this issue. The second OptionButton has a height of 19, which causes its text to render at a non-integer Y-position (9.5 offset from the top). The Position property of each node is snapped to an integer, but their sizes and internal nodes seem to ignore this.
### Steps to reproduce
In Project Settings:
- gui/common/snap_controls_to_pixels = On
- display/window/size/viewport_width = 640
- display/window/size/viewport_height = 360
- display/window/size/mode = Exclusive Fullscreen
- display/window/stretch/mode = viewport (for native pixel resolution, causes issues with rows of pixels being lost) or canvas_items (to test blurring when downscaling to viewport size)
- display/window/stretch/aspect = expand
- display/window/stretch/scale_mode = integer
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,topic:2d | low | Major |
2,788,506,180 | flutter | [a11y] Form should have correct semantics role | when using the widget https://api.flutter.dev/flutter/widgets/Form-class.html, is should have the following role
| OS | role |
|--------|--------|
| web |form |
| ios | - |
| macos | NSAccessibilityGroupRole, AXLandmarkForm |
| windows | - |
| android | - | | framework,a: accessibility,c: proposal,P2,team-accessibility,triaged-accessibility | low | Minor |
2,788,509,482 | flutter | [a11y] tooltip popup should have correct semantics role | When using https://api.flutter.dev/flutter/material/Tooltip-class.html, the tooltip popup (not the tooltip widget) should have following role
| OS | role |
|--------|--------|
| web | tooltip |
| ios | - |
| macos | NSAccessibilityGroupRole, AXUserInterfaceTooltip |
| windows | ROLE_SYSTEM_TOOLTIP |
| android | - | | framework,f: material design,a: accessibility,c: proposal,P2,team-accessibility,triaged-accessibility | low | Minor |
2,788,517,170 | flutter | [a11y] loading indicator should have correct semantics role | When using https://api.flutter.dev/flutter/material/CircularProgressIndicator-class.html, the loading spinner should have following role
| OS | role |
|--------|--------|
| web |- |
| ios | - |
| macos | NSAccessibilityProgressIndicatorRole |
| windows | - |
| android | - |
When using https://api.flutter.dev/flutter/material/LinearProgressIndicator-class.html, the linear indicator should have the following role
| OS | role |
|--------|--------|
| web |progressbar |
| ios | - |
| macos | NSAccessibilityLevelIndicatorRole |
| windows | ROLE_SYSTEM_PROGRESSBAR |
| android | - | | framework,f: material design,a: accessibility,c: proposal,P3,team-accessibility,triaged-accessibility | low | Minor |
2,788,520,514 | flutter | [pigeon] Support Event Channels in all supported languages | As of https://github.com/flutter/packages/pull/7892 Event Channel support is available in Dart, Kotlin, and Swift.
It would be nice for this feature to work on all platforms/languages supported by pigeon. | c: new feature,team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Minor |
2,788,520,794 | angular | Event coalescing fires events too late for desired browser focus traversal and sometimes fails to fire event listeners | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
We have a use case where we are dynamically creating items based on previous values. In our actual application, we hide or show different components based on certain control values, and we need tabbing to follow the order of the components in the new DOM layout.
With `eventCoalescing` set to `false`, change detection occurs at the end of `keydown` being broadcast and the `keyup` event handler is invoked and focus is set to the new item dynamically rendered as desired.
With `eventCoalescing` set to `true`, focus is set to the wrong item due to the timing of when change detection is run. Also, the `keyup` event handler is never invoked.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-mfnhbni1?file=src%2Fmain.ts
### Please provide the exception or error you saw
```true
Event listener never fired
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
_ _ ____ _ ___
/ \ _ __ __ _ _ _| | __ _ _ __ / ___| | |_ _|
/ △ \ | '_ \ / _` | | | | |/ _` | '__| | | | | | |
/ ___ \| | | | (_| | |_| | | (_| | | | |___| |___ | |
/_/ \_\_| |_|\__, |\__,_|_|\__,_|_| \____|_____|___|
|___/
Angular CLI: 19.0.7
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 19.0.6
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, router
Package Version
------------------------------------------------------
@angular-devkit/architect 0.1900.7
@angular-devkit/core 19.0.7
@angular-devkit/schematics 19.0.7
@angular/build 19.0.7
@angular/cli 19.0.7
@schematics/angular 19.0.7
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
To reproduce:
1. Load stackblitz from https://stackblitz.com/edit/stackblitz-starters-mfnhbni1?file=src%2Fmain.ts
2. Put focus in the text field for `Item 1`
3. Enter any non-blank value
4. Hit tab
5. Focus is set on the `Reset` button
6. `onKeyup` method is never invoked
Then to see the desired behavior (with `eventCoalescing` off):
1. Change `providers: [provideZoneChangeDetection({ eventCoalescing: true })],` to `providers: [provideZoneChangeDetection({ eventCoalescing: false })],` (line 89)
2. Wait for page to reload
3. Put focus in the text field for `Item 1`
4. Enter any non-blank value
5. Hit tab
6. Focus is set on the `Item 2` input element
7. `onKeyup` method is fired as expected
Related issues:
- https://github.com/angular/angular/issues/57460
- https://github.com/angular/angular/issues/57528 | area: core,core: change detection,core: zoneless | low | Critical |
2,788,529,680 | flutter | [pigeon] Separate `PigeonOptions` api from internal usage. | There are a few issues that have been created over the past couple years due to the internal `PigeonOptions` class being used as the API surface for options for users.
There should be a new class for internal use only to avoid these issues and fix the open issues related to this.
related: https://github.com/flutter/flutter/issues/149956 | team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Minor |
2,788,584,818 | rust | Crater runs for 1.85 | Note: Please do not conduct triage on these runs without discussing how to do so with a release team member first. Thanks! | S-waiting-on-review | low | Major |
2,788,588,242 | next.js | Intercepting & Parallel route on vercel is broken | ### Link to the code that reproduces this issue
https://github.com/heecheon92/next15-test
### To Reproduce
Local build:
1. npm i && npm run dev (or npm run start)
2. visit localhost:3000
3. visit the page -> Route Interception (or you can simply visit localhost:3000/playground/route_interception/home)
4. Visit "Progress" tab from "Home" tab. (It displays as normal page)
5. Visit "Progress" tab from "Payment" tab. (**It displays as modal**)
Vercel deployment:
1. Deploy the product on Vercel.
2. visit the page -> Route Interception (or you can simply visit https://next15-test-gilt.vercel.app/playground/route_interception/home)
3. Visit "Progress" tab from "Home" tab. (It should display as normal page)
4. Visit "Progress" tab from "Payment" tab. (It should display as modal **but it displays as normal page**)
### Current vs. Expected behavior
In local development and production build, intercepting & parallel route behaves as expected.
In local builds, regardless of dev or prod, visiting the "Progress" tab from the "Payment" tab displays as a modal.
Once deployed to vercel, visiting "Progress" tab from "Payment" tab displays as normal page.
It should be displayed as modal just like local build.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.10.0
npm: 10.9.0
Yarn: 1.22.21
pnpm: 9.7.1
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
One weird behavior pertaining to this issue: the "prefetch" option on the Link component seems to be related.
From the sample project I provided, if you explicitly set prefetch option to "false" on Link to "Progress" tab, like this
```tsx
<Link
href="/playground/route_interception/progress"
className="p-2 bg-blue-600 text-white rounded-md"
prefetch={false} // <-- this option
>
Progress
</Link>
```
This partially fixes the issue.
In detail, when prefetch is set to false and deployed to vercel, following happens
1. Visit following pages with exact order "Home" -> "Payment" -> "Progress"
2. Progress page displays as modal just like local build
3. Close the modal, visit "Home" again and then visit "Progress"
4. Now, Progress page shows "404 page not found"
Following page is built on exactly same source code but prefetch option for the Link to Progress page is set to false.
https://next15-test-git-prefetchdisabled-heecheon92s-projects.vercel.app/playground/route_interception/home | Parallel & Intercepting Routes | low | Critical |
2,788,596,735 | ollama | Multi GPU, default GPU setting, specific model pin to specific GPU | I have a multi GPU configuration and they are different GPU models with different memory size.
I wish:
1. we could select default GPU for all models (potentially the fastest one with higher memory)
2. we could pin a specific model to a specific GPU, so small models run on the low-VRAM GPU and the fastest/highest-VRAM card is reserved for larger models.
| feature request | low | Minor |
2,788,599,896 | vscode | Code Highlighting | All of my code is white rather than different aspects of the code being highlighted.
Version: 1.96.3
Commit: 91fbdddc47bc9c09064bf7acf133d22631cbf083
User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36
Embedder: codespaces
| info-needed | low | Minor |
2,788,616,482 | electron | If nativeTheme.themeSource is set to “dark” on Windows, a white line is drawn under the menu bar. | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
34.0.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows11 24H2
### What arch are you using?
x64
### Last Known Working Electron version
33.3.1
### Expected Behavior

### Actual Behavior

### Testcase Gist URL
[https://gist.github.com/df43514921aa3386c054acc8721790be](https://gist.github.com/df43514921aa3386c054acc8721790be)
### Additional Information
_No response_ | platform/windows,bug :beetle:,34-x-y | low | Critical |
2,788,642,307 | flutter | Pointer behaving incorrectly when MouseRegion and SelectionArea are combined | Copied from Google internal bug b/389519820
We applied MouseRegion in our Flutter web product to make the pointer become a hand when hovering over a clickable image and return to normal outside the clickable region. However, the cursor doesn't always come back to normal even though the onExit() event was triggered; the mouse sometimes remains a hand when not over the image.
<details>
<summary>Repro code</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
theme: ThemeData(useMaterial3: false),
title: 'Flutter Demo',
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
_counter++;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title, style: TextStyle(fontFamily: 'ProductSans')),
),
body: Center(
child: SelectionArea(
child: Padding(
padding: EdgeInsets.all(16.0),
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
MouseRegion(
cursor: SystemMouseCursors.click,
onEnter: (_) {
debugPrint('onEnter');
},
onExit: (_) {
debugPrint('onExit');
},
child: GestureDetector(
child: Container(
decoration: BoxDecoration(
border: Border.all(color: Colors.black, width: 1),
borderRadius: BorderRadius.circular(10.0),
boxShadow: [
BoxShadow(
color: Colors.black,
blurRadius: 4,
offset: Offset(2, 2),
),
],
),
child: InteractiveViewer(
scaleEnabled: false,
child: ClipRRect(
borderRadius: BorderRadius.circular(10.0),
child: Image.network(
'https://www.kasandbox.org/programming-images/avatars/leaf-blue.png',
),
),
),
),
),
),
],
),
),
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
),
);
}
}
```
</details> | platform-web,customer: google,a: mouse,has reproducible steps,f: selection,team-web,found in release: 3.27,found in release: 3.28 | low | Critical |
2,788,648,887 | pytorch | [dynamo] Do not always skip code objects unconditionally | Currently, when Dynamo determines that a frame should be skipped, we will also skip tracing all future calls to the same code object. This can cause issues when skipping a frame is dependent on inputs to the function:
```python
import torch
@torch.compile(dynamic=False)
def fn(x, n):
if n == 0:
try:
# causes frame to be skipped
torch._dynamo.graph_break()
finally:
pass
if torch.compiler.is_compiling():
return x + 1
return x - 1
print(fn(torch.ones(3), 0)) # skipped
print(fn(torch.ones(3), 1)) # skipped
import torch._dynamo
torch._dynamo.reset()
print(fn(torch.ones(3), 1)) # compiled!
print(fn(torch.ones(3), 0)) # skipped
# Output:
# tensor([0., 0., 0.])
# tensor([0., 0., 0.])
# tensor([2., 2., 2.])
# tensor([0., 0., 0.])
```
We see that whether `fn(torch.ones(3), 1)` gets compiled is dependent on calling order! This makes it more difficult to understand the PT2 programming model. Thus, when skipping a frame is condition-dependent, we shouldn't skip the code object unconditionally - we should instead just skip the current frame and use guards to check if a future call should also skip/fall back to eager.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Minor |
2,788,661,305 | rust | attempting to build on NetBSD/amd64 9.99.81 causes bootstrap/debug/rustc to crash with a SIGSEGV | I've been attempting to build rust from a clone done today (HEAD is 3736b85779d1db4c215b910004d7efcd7aff8408) on NetBSD/amd64 9.99.81 and I'm seeing bootstrap/debug/rustc crash with a SIGSEGV while "Building stage1 library artifacts".
```sh
./configure --set install.prefix=/usr/local --set install.sysconfdir=/usr/local/etc
```
Then because I didn't want to build LLVM I added the following line to the `[llvm]` section of the generated `./config.toml`:
```
download-ci-llvm = true
```
Running `./x.py build` results in:
```
Compiling llvm-bitcode-linker v0.0.1 (/work/woods/m-rust/src/tools/llvm-bitcode-linker)
Finished `release` profile [optimized] target(s) in 26.06s
Building stage1 library artifacts (x86_64-unknown-netbsd)
error: failed to run `rustc` to learn about target-specific information
Caused by:
process didn't exit successfully: `/work/woods/m-rust/build/bootstrap/debug/rustc /work/woods/m-rust/build/bootstrap/debug/rustc - --crate-name ___ --print=file-names -Csymbol-mangling-version=legacy '--check-cfg=cfg(feature,values(any()))' -Zunstable-options '--check-cfg=cfg(bootstrap)' -Zmacro-backtrace -Csplit-debuginfo=off -Cprefer-dynamic -Zinline-mir -Zinline-mir-preserve-debug -Zmir_strip_debuginfo=locals-in-tiny-functions -Clink-args=-Wl,-z,origin '-Clink-args=-Wl,-rpath,$ORIGIN/../lib' -Cembed-bitcode=yes -Cforce-frame-pointers=yes '-Zcrate-attr=doc(html_root_url="https://doc.rust-lang.org/nightly/")' --target x86_64-unknown-netbsd --crate-type bin --crate-type rlib --crate-type dylib --crate-type cdylib --crate-type staticlib --crate-type proc-macro --print=sysroot --print=split-debuginfo --print=crate-name --print=cfg` (exit status: 254)
--- stderr
rustc exited with signal: 11 (SIGSEGV) (core dumped)
Build completed unsuccessfully in 0:18:05
```
GDB 11.0 can't find any symbols to give a reasonable stack backtrace from the resulting core (despite `file` reporting that the binary is "not stripped").
| I-crash,T-compiler,O-netbsd,T-bootstrap,C-bug,E-needs-investigation | low | Critical |
2,788,669,329 | go | proposal: slices: CopyFunc and AppendFunc | ### Proposal Details
The `copy` and `append` builtins are well known and useful. Combining copy and append with elementwise mapping yields two useful functions that are easy to explain in existing terms:
`CopyFunc` is defined similar to `copy` but takes an additional mapping function to apply before assigning the src element to dst.
```go
func CopyFunc[S1 ~[]E1, S2 ~[]E2, E1, E2 any](dst S1, src S2, f func(E2) E1) int
```
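A minimal sketch of the corresponding implementation, mirroring the semantics of the builtin `copy` (element-wise, bounded by the shorter slice; not part of the proposal text above):
```go
func CopyFunc[S1 ~[]E1, S2 ~[]E2, E1, E2 any](dst S1, src S2, f func(E2) E1) int {
    n := min(len(dst), len(src))
    for i := 0; i < n; i++ {
        dst[i] = f(src[i])
    }
    return n
}
```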
The implementation for `AppendFunc` is simple and straightforward enough that I include it in full:
```go
func AppendFunc[S1 ~[]E1, S2 ~[]E2, E1, E2 any](dst S1, src S2, f func(E2) E1) S1 {
dst = Grow(dst, len(src))
for _, v := range src {
dst = append(dst, f(v))
}
return dst
}
```
`AppendFunc` could have the signature `(dst S, f func(E2) E1, src ...E2) S`. Unlike lowercase `append`, it is always for working with a full slice—otherwise you could have just used `append(s, f(v))`. It also makes it easier to pass an anonymous mapping function when it's the last parameter.
Note that in-place map is `CopyFunc(s, s, f)` and new-slice map is `AppendFunc([]T(nil), s, f)`.
`AppendFunc` could be written with existing functions (assuming `xiter`):
```go
slices.AppendSeq(slices.Grow(dst, len(src)), xiter.Map(f, slices.Values(src)))
```
This is more awkward and unlikely to be as efficient. `CopyFunc` would require a corresponding `CopySeq` and would not be able to handle certain overlaps between dst and src correctly. | Proposal,LibraryProposal | low | Major |
2,788,672,010 | pytorch | [MPS] Indexing Returns 0 if OOB | ### 🐛 Describe the bug
Using PyTorch to train a model on macOS worked fine, so I switched to using CUDA, where it would crash. The issue is that CUDA (and CPU) will raise an error if you index out of bounds, while MPS silently returns 0. This causes an inconsistency between backends and results in undefined behavior going unnoticed on MPS. This was tested on PyTorch version `2.1.1+cu121` and the latest version on GitHub main at the time of the issue creation.
The following code was tested on each platform
```python
import torch
import torch.nn as nn
```
```python
# CPU
embed = nn.Embedding(10, 2).to('cpu')
t = torch.tensor(10).to('cpu')
print(embed(t))
"""
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[2], line 3
1 embed = nn.Embedding(10, 2).to('cpu')
2 t = torch.tensor(10).to('cpu')
----> 3 print(embed(t))
...
IndexError: index out of range in self
"""
```
```python
# CUDA
embed = nn.Embedding(10, 2).to('cuda')
t = torch.tensor(10).to('cuda')
print(embed(t))
"""
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[2], line 3
1 embed = nn.Embedding(10, 2).to('cuda')
2 t = torch.tensor(10).to('cuda')
----> 3 print(embed(t))
...
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
"""
```
```python
# MPS
embed = nn.Embedding(10, 2).to('mps')
t = torch.tensor(10).to('mps')
print(embed(t))
"""
tensor([0., 0.], device='mps:0', grad_fn=<EmbeddingBackward0>)
"""
```
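For completeness, a small host-side guard (not part of the original report) that makes an out-of-range index fail loudly on MPS, matching the CPU/CUDA behavior:
```python
import torch
import torch.nn as nn

# Hypothetical guard: validate indices on the host before an MPS embedding lookup,
# so out-of-range values raise instead of silently returning zeros.
def checked_embedding(embed: nn.Embedding, idx: torch.Tensor) -> torch.Tensor:
    if bool((idx < 0).any()) or bool((idx >= embed.num_embeddings).any()):
        raise IndexError("index out of range in self")
    return embed(idx)
```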
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.30.1
Libc version: N/A
Python version: 3.13.1 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 10:38:40) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit-Mach-O
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] optree==0.13.1
[pip3] torch==2.6.0a0+git0431d47
[conda] numpy 2.2.1 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0a0+git0431d47 pypi_0 pypi
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: error checking,triaged,module: mps | low | Critical |
2,788,674,422 | next.js | next/image fails with an Arabic phrase | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/intelligent-waterfall-5htj5v?workspaceId=ws_2L9qrGbsdQSgPmvDvdbcYv
### To Reproduce
1. go to `/og`
### Current vs. Expected behavior
1. Can render the image
2. The package does not surface any error message, so the development experience is poor; next/image must report the issues it finds, and the documentation must be improved. If an image randomly fails, how could we render a fallback image, for example? This and everything related to this module must be documented and covered.
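For reference, a sketch of how a font that covers Arabic glyphs could be supplied through the documented `fonts` option of `ImageResponse` (the font file name/path here is an assumption, and this is not a confirmed fix for the failure above):
```tsx
// Illustrative sketch only: provide a font containing Arabic glyphs via the
// documented `fonts` option. The font file path is an assumption.
import { ImageResponse } from "next/og";

export async function GET() {
  const fontData = await fetch(
    new URL("./NotoNaskhArabic-Regular.ttf", import.meta.url)
  ).then((res) => res.arrayBuffer());

  return new ImageResponse(
    <div lang="ar" style={{ fontSize: 64, display: "flex" }}>نطق الأسماء على المستوى</div>,
    {
      width: 1200,
      height: 600,
      fonts: [{ name: "Noto Naskh Arabic", data: fontData, style: "normal" }],
    }
  );
}
```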
### Provide environment information
```bash
> @ info /project/workspace
> next info
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.2.0-canary.10 // Latest available version is detected (15.2.0-canary.10).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Image (next/image)
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
```tsx
import { ImageResponse } from "next/og";
// import { ImageResponse } from "@vercel/og";
import { NextRequest } from "next/server";
export async function GET(request: NextRequest) {
return new ImageResponse(
(
<div
// lang='en'
lang="ar"
style={{
fontSize: 128,
background: "white",
width: "100%",
height: "100%",
display: "flex",
textAlign: "center",
alignItems: "center",
justifyContent: "center",
}}
>
{/* Hello */}
نطق الأسماء على المستوى
</div>
),
{
width: 1200,
height: 600,
}
);
}
``` | Image (next/image) | low | Critical |
2,788,700,477 | rust | Rust 1.84 sometimes allows overlapping impls in incremental re-builds | This repro uses a bit of lexical comments trickery, but only for convenience. (Quite useful while manually testing multiple rust versions), so the whole change you need to do between incremental compilations is turning
```rust
// /* // <- uncomment this line
```
into
```rust
/* // <- uncomment this line
```
The main relevant change that this entails is
```rust
impl Trait for W {}
```
turning into
```rust
impl Trait for S<W> {}
```
which makes the `Other`-impls overlapping. As an additional effect, the code turning the overlap into UB is also uncommented. (The `const _ …` stuff only “fixes” the `*` symbols left behind by the `*/*` in the middle.)
```rust
trait Trait {}
struct S0<T>(T);
struct S<T>(T);
impl<T> Trait for S<T> where S0<T>: Trait {}
struct W;
trait Other {
type Choose<L, R>;
}
struct A;
struct B;
// first impl
impl<T: Trait> Other for T {
type Choose<L, R> = L;
}
// second impl
impl<T> Other for S<T> {
type Choose<L, R> = R;
}
const _: u8 = 0
// /* // <- uncomment this line
*0;
impl Trait for W {}
pub fn transmute<L, R>(l: L) -> R {
todo!();
}
const _: u8 = 0
*/*
0;
impl Trait for S<W> {}
fn use_first_impl<T: Trait, L, R>(l: L) -> <<T as TyEq>::To as Other>::Choose<L, R> {
l
}
fn use_second_impl<T, L, R>(l: <S<T> as Other>::Choose<L, R>) -> R {
l
}
trait TyEq {
type To;
}
impl<T> TyEq for T {
type To = T;
}
fn transmute_inner<W, T, L, R>(l: L) -> R
where
T: Trait + TyEq<To = S<W>>,
{
use_second_impl::<W, L, R>(use_first_impl::<T, L, R>(l))
}
pub fn transmute<L, R>(l: L) -> R {
transmute_inner::<W, S<W>, L, R>(l)
}
const _: u8 =
// */
0;
fn main() {
let v = vec![65_u8, 66, 67];
let s: String = transmute(v);
println!("{}", s);
}
```
Reproduce
```bash
cargo new repro
cd repro
…write above to src/main…
cargo run
…uncomment the line in question as described…
cargo run
```
```rust
Compiling repro v0.1.0 (/home/frank/repro)
warning: struct `A` is never constructed
--> src/main.rs:14:8
|
14 | struct A;
| ^
|
= note: `#[warn(dead_code)]` on by default
warning: struct `B` is never constructed
--> src/main.rs:15:8
|
15 | struct B;
| ^
warning: `repro` (bin "repro") generated 2 warnings
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.12s
Running `target/debug/repro`
ABC
```
Is safe code that compiles to UB, but *only* in incremental re-builds (compilation error otherwise), considered a soundness issue to be labelled `I-unsound`?
This was already fixed with #133828 (which also explains what was the underlying issue). That PR seems like a fairly small & straightforward fix to me… should it perhaps be considered for backporting to stable? cc @compiler-errors
@rustbot label regression-from-stable-to-stable, T-compiler, A-incr-comp, A-coherence | P-low,T-compiler,regression-from-stable-to-stable,I-unsound,A-incr-comp,C-bug,A-coherence | low | Critical |
2,788,701,729 | godot | grab_focus for window is not working on macOS. | ### Tested versions
godot4.3
Xcode Version 16.1 (16B40)
### System information
2
### Issue description
1. `grab_focus()` for a window does not work on macOS, but it works correctly on Microsoft Windows.
2. The ProjectSettings option "display/window/subwindows/embed_subwindows" does not work on macOS either.
3. How can independent (non-embedded) child windows be created on macOS?
### Steps to reproduce
1
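A minimal sketch of the call pattern being described (not the reporter's project; it assumes Godot 4 with `embed_subwindows` disabled so the child `Window` is a real OS window):
```gdscript
# Minimal sketch: create a non-embedded child Window and try to focus it.
extends Node

func _ready() -> void:
    var win := Window.new()
    win.size = Vector2i(400, 300)
    add_child(win)
    win.popup()        # show the OS-level child window
    win.grab_focus()   # reported to work on Windows but not on macOS
```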
### Minimal reproduction project (MRP)
2 | bug,platform:macos,topic:porting,needs testing | low | Minor |
2,788,712,691 | tauri | [bug] android Failed to assemble APK: i can't use custom tauri plugin | ### Describe the bug
I can't use a custom Tauri plugin.
tauri-cli 2.2.4
create-tauri-app 4.5.9
step1: cargo create-tauri-app
step2: cargo tauri plugin new custom
step3: add the custom plugin to the app and change nothing else
step4: cargo tauri android build
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
FAILURE: Build failed with an exception.
* What went wrong:
Could not determine the dependencies of task ':app:lintVitalReportUniversalRelease'.
> Could not resolve all dependencies for configuration ':app:universalReleaseCompileClasspath'.
> Could not resolve project :tauri-plugin-math.
Required by:
project :app
> No matching variant of project :tauri-plugin-math was found. The consumer was configured to find a library for use during compile-time, preferably optimized for Android, as well as attribute 'com.android.build.api.attributes.AgpVersionAttr' with value '8.5.1', attribute 'com.android.build.api.attributes.BuildTypeAttr' with value 'release', attribute 'com.android.build.api.attributes.ProductFlavor:abi' with value 'universal', attribute 'org.jetbrains.kotlin.platform.type' with value 'androidJvm' but:
- No variants exist.
* Try:
> Creating consumable variants is explained in more detail at https://docs.gradle.org/8.9/userguide/declaring_dependencies.html#sec:resolvable-consumable-configs.
> Review the variant matching algorithm at https://docs.gradle.org/8.9/userguide/variant_attributes.html#sec:abm_algorithm.
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
BUILD FAILED in 6s
Error Failed to assemble APK: command ["/Users/phoenix/Desktop/gogs/tmp/aaa/src-tauri/gen/android/gradlew", "--project-dir", "/Users/phoenix/Desktop/gogs/tmp/aaa/src-tauri/gen/android"] exited with code 1:
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,788,715,708 | deno | Deno 2.1.4 - node:http's http.request socketPath option fails in Deno, but works in Node. | ## Summary:
* Deno's `node:http` module behaves differently than Node.js's `http` module when using `http.request` with the `socketPath` option to connect to the Docker socket.
* Instead of connecting to the Docker socket, Deno appears to attempt a TCP connection to `http://localhost/info`, resulting in an `ECONNREFUSED` error.
* This discrepancy was discovered while troubleshooting a failure using `@testcontainers/postgresql` library and was traced down through its dependencies (`dockerode`, `docker-modem`) to this minimal reproduction.
* The equivalent code works perfectly in node, indicating a failure in Deno.
## **Reproduction Steps:**
**1. Setup:**
* Install Docker Desktop on macOS.
* Ensure the Docker daemon is running and accessible via the default Unix socket: `/Users/<your_username>/.docker/run/docker.sock` (replace `<your_username>` with your actual username).
* Make sure you have "Allow the default Docker socket to be used" enabled under advanced settings in Docker.
**2. Node.js (Correct Behavior):**
Create a file named `node-test.js` with the following content:
Ensure you set `"type": "module"` in `package.json`.
```javascript
import http from "http";
const options = {
socketPath: "/Users/<your_username>/.docker/run/docker.sock",
path: "/info",
method: "GET",
};
const req = http.request(options, () => {
console.log(`Success`);
});
req.on("error", (e) => {
console.error(`Error: ${e.message}`);
});
req.end();
```
Run: `node node-test.js`
* **Expected Output:**
```
Success
```
* **Actual Output:**
```
Success
```
**3. Deno (Incorrect Behavior):**
* Create a file named `deno-test.ts` with the following content:
```typescript
import http from "node:http";
const options = {
socketPath: "/Users/<your_username>/.docker/run/docker.sock",
path: "/info",
method: "GET",
};
const req = http.request(options, () => {
console.log(`Success`);
});
req.on("error", (e) => {
console.error(`Error: ${e.message}`);
});
req.end();
```
Run `deno -A deno-test.ts`
* **Expected Output:**
```
Success
```
* **Actual Output:**
```
Error: error sending request for url (http://localhost/info): client error (Connect): tcp connect error: Connection refused (os error 61): Connection refused (os error 61)
```
**Environment:**
```
deno 2.1.4 (stable, release, x86_64-apple-darwin)
v8 13.0.245.12-rusty
typescript 5.6.2
Docker Desktop: 4.37.2 (179585)
``` | node compat | low | Critical |
2,788,737,471 | ui | [bug]: Shadcn CLI doesn't realize the text-based bun lockfile "bun.lock", always use npm.(Next 15) | ### Describe the bug
I'm using next 15 with bun.
It worked well until I switched Bun to the new text-based lockfile.
After I replaced the binary archive `bun.lockb` with the text-based lockfile `bun.lock` and removed `bun.lockb`, the shadcn CLI always uses **npm** instead of Bun.
I tried re-adding `bun.lockb` and it worked, so I'm guessing that only `bun.lockb` is being recognized and not `bun.lock`
Before adding `bun.lockb`, only `bun.lock`

After adding `bun.lockb`

### Affected component/components
all
### How to reproduce
1. following Bun [text-based lockfile doc](https://bun.sh/docs/install/lockfile#text-based-lockfile) to use text-based lockfile
2. use shadcn cli to add component `bunx --bun shadcn@latest add checkbox ` in Next 15 project.
3. It will show npm error
4. copy bun.lock and rename to bun.lockb
5. add component again `bunx --bun shadcn@latest add checkbox `
6. It worked, and with bun
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
OS: Mac OS 15.0.1(24A348)
Shell: zsh
Runtime: bun 1.1.43
Framework: Next 15 with react 19
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,788,742,700 | rust | Tracking Issue for NVPTX shared memory | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(stdarch_nvptx)]` (probably; see https://github.com/rust-lang/rust/issues/111199)
This is a tracking issue for access to shared memory from NVPTX device code.
There are two flavors of shared memory in NVPTX (see [this NVIDIA Technical Blog (2013)](https://developer.nvidia.com/blog/using-shared-memory-cuda-cc/) for a friendly introduction):
* **static** shared memory provides a fixed size amount of shared memory independent of the block size. There can be multiple static shared arrays (declared in CUDA C++ using the `__shared__` attribute).
* **dynamic** shared memory provides one base pointer, with a length (in bytes) that must be specified in the kernel launch parameters. If multiple arrays are needed, the single buffer must be manually partitioned (in CUDA C++).
In practice, the required amount of shared memory for GPU algorithms will depend on the block size. Since many domains expect to choose the block size at run-time, *static* shared memory is often considered to be of limited use.
<!--
Include a short description of the feature.
-->
### Public API
<!--
For most library features, it'd be useful to include a summarized version of the public API.
(E.g. just the public function signatures without their doc comments or implementation.)
-->
Note that other `core::arch::nvptx` intrinsics are covered by https://github.com/rust-lang/rust/issues/111199.
```rust
use core::arch::nvptx;
#[no_mangle]
pub unsafe extern "ptx-kernel" fn reverse_kernel(n: u32, a: *mut f32) {
let t = nvptx::_thread_idx_x() as usize;
let tr = n as usize - t - 1;
let (s, sn) = nvptx::_dynamic_smem(); // <--- this issue
let s = s as *mut f32;
assert_eq!(sn as u32, n * 4); // requirement for the algorithm below to be correct
*s.add(t) = *a.add(t);
nvptx::_syncthreads();
*a.add(t) = *s.add(tr);
}
```
#### A possible implementation for dynamic shared memory
Shared memory can be exposed using inline assembly ([reference](https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#special-registers-dynamic-smem-size)).
```rs
core::arch::global_asm!(".extern .shared .align 16 .b8 _shared_data[];");
use core::arch::asm;

#[inline(always)]
pub fn _dynamic_smem() -> (*mut u8, u32) {
// Dynamic shared memory.
let size: u32;
let saddr: u64;
let ptr: *mut u8;
unsafe {
asm!("mov.u32 {}, %dynamic_smem_size;", out(reg32) size);
asm!("mov.u64 {}, _shared_data;", out(reg64) saddr);
asm!("cvta.shared.u64 {ptr}, {saddr};", ptr = out(reg64) ptr, saddr = in(reg64) saddr);
}
(ptr, size)
}
```
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [ ] Implementation: #...
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- Do we prefer the `_dynamic_smem() -> (*mut u8, u32)` or should we have two separate intrinsics for accessing the base pointer and the size?
- Do we want to use `usize` for the length even though NVPTX does not (yet, and maybe ever) support that? I'm not sure about other vendors, if we're looking for the intrinsics to be as close as possible.
- Is it desirable to expose static shared memory or shall we focus on dynamic (which is less intrusive to implement, but may require more thinking to launch kernels with valid parameters)?
- It has also been suggested that this be implemented as an LLVM intrinsic by introducing `llvm.nvvm.read.ptx.sreg.dynamic_smem_size`. That could in principle allow accessing the shared memory base pointer without introducing a new symbol (`_shared_data above`). I lean toward using the inline assembly for now and possibly migrating it later, but no other inline assembly is used in `core::arch::nvptx` and I don't know if it should be kept that way.
- Do we want to tackle launch bounds in this issue? If so, it should not be under the `stdarch_nvptx` feature gate since it would need to verify launch parameters on the host (better user experience) or handle failure on the device (i.e., prevent UB if an insufficient amount of shared memory is provided). I prefer to handle only the unsafe primitives here since there are many open questions about such ergonomics.
- Should this issue consider partitioning dynamic shared memory into parts (e.g., multiple arrays or more general data structures, parametrized by block size)? This interacts with more types and would also make it inappropriate for the `stdarch_nvptx` feature gate.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,O-NVPTX,C-tracking-issue | low | Critical |
2,788,748,468 | ant-design | Why doesn't the antd Table component provide a seamless (continuous loop) scrolling option? | ### What problem does this feature solve?
This feature would solve the need for seamless, continuously looping scrolling in tables.
### What does the proposed API look like?
I would like antd Table to support a seamless-scrolling option; when enabled, the table would scroll through its rows in a continuous loop.
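For context, a rough workaround sketch with the current API (clearly not an official antd feature; it assumes the Table is rendered with `scroll={{ y: ... }}` so that `.ant-table-body` is the scrollable container):
```tsx
// Rough workaround sketch: auto-scroll the table body and jump back to the top
// when the end is reached. Attach the returned ref to a <div> wrapping <Table />.
import { useEffect, useRef } from 'react';

export function useSeamlessTableScroll(step = 1, intervalMs = 50) {
  const wrapperRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const body = wrapperRef.current?.querySelector<HTMLElement>('.ant-table-body');
    if (!body) return;
    const id = window.setInterval(() => {
      if (body.scrollTop + body.clientHeight >= body.scrollHeight) {
        body.scrollTop = 0;       // loop back to the first row
      } else {
        body.scrollTop += step;   // keep scrolling down
      }
    }, intervalMs);
    return () => window.clearInterval(id);
  }, [step, intervalMs]);

  return wrapperRef;
}
```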
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,788,763,109 | rust | Tracking issue for release notes of #134143: Convert `struct FromBytesWithNulError` into enum |
This issue tracks the release notes text for #134143.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Convert `struct FromBytesWithNulError` into enum](https://github.com/rust-lang/rust/pull/134143)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @nyurik, @Amanieu -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,needs-triage,relnotes-tracking-issue | low | Critical |
2,788,801,603 | PowerToys | Advanced Paste - Sticky Shift key | ### Provide a description of requested docs changes
Keys like Shift or Alt behave as if they are stuck, without any indication. It is definitely caused by Advanced Paste, after pressing its key combination such as Win+Shift+V. | Issue-Docs,Needs-Triage | low | Minor |
2,788,809,458 | langchain | PydanticUserError: SQLDatabaseToolkit is not fully defined; you should define Callbacks, then call SQLDatabaseToolkit.model_rebuild(). | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
pip install "autogen-ext[azure]" python-dotenv langchain-community psycopg2 langchain_openai langchain_experimental "autogen-core" "autogen-agentchat"
from dotenv import load_dotenv
import os
from langchain_community.utilities.sql_database import SQLDatabase
import psycopg2
from langchain_openai import AzureChatOpenAI
from langchain_experimental.sql import SQLDatabaseChain
from langchain_experimental.sql.base import SQLDatabaseChain
from langchain.cache import InMemoryCache
# Load environment variables from the .env file from the same directory as notebook
load_dotenv()
# Retrieve environment variables
POSTGRES_USER = os.getenv('POSTGRES_USER')
POSTGRES_PASSWORD = os.getenv('POSTGRES_PASSWORD')
POSTGRES_HOST = os.getenv('POSTGRES_HOST')
POSTGRES_PORT = os.getenv('POSTGRES_PORT')
POSTGRES_DB = os.getenv('POSTGRES_DB')
AZURE_OPENAI_KEY = os.getenv('AZURE_OPENAI_KEY')
AZURE_OPENAI_ENDPOINT = os.getenv('AZURE_OPENAI_ENDPOINT')
AZURE_OPENAI_DEPLOYMENT = os.getenv('AZURE_OPENAI_DEPLOYMENT')
llm_config = {
"config_list": [
{
"model": AZURE_OPENAI_DEPLOYMENT,
"temperature": 0.7,
"api_key": AZURE_OPENAI_KEY,
"azure_endpoint": AZURE_OPENAI_ENDPOINT,
"api_type": "azure",
"api_version": "2024-10-21"
}]}
shipment_db_uri = f"postgresql+psycopg2://{POSTGRES_USER}:{POSTGRES_PASSWORD}@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DB}"
# The following two definitions were referenced but missing from the snippet (assumed setup)
shipment_db = SQLDatabase.from_uri(shipment_db_uri)
azure_llm = AzureChatOpenAI(
    azure_endpoint=AZURE_OPENAI_ENDPOINT,
    api_key=AZURE_OPENAI_KEY,
    azure_deployment=AZURE_OPENAI_DEPLOYMENT,
    api_version="2024-10-21",
)
shipment_chain = SQLDatabaseChain(llm=azure_llm, database=shipment_db, verbose=True)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
PydanticUserError Traceback (most recent call last)
Cell In[22], [line 116](vscode-notebook-cell:?execution_count=22&line=116)
[2](vscode-notebook-cell:?execution_count=22&line=2) llm_config = {
[3](vscode-notebook-cell:?execution_count=22&line=3) "config_list": [
[4](vscode-notebook-cell:?execution_count=22&line=4) {
(...)
[111](vscode-notebook-cell:?execution_count=22&line=111) ]
[112](vscode-notebook-cell:?execution_count=22&line=112) }
[115](vscode-notebook-cell:?execution_count=22&line=115) # Initialize the database chains
--> [116](vscode-notebook-cell:?execution_count=22&line=116) shipment_chain = SQLDatabaseChain(llm=azure_llm, database=shipment_db, verbose=True)
[117](vscode-notebook-cell:?execution_count=22&line=117) crm_chain = SQLDatabaseChain(llm=azure_llm, database=crm_db, verbose=True)
[119](vscode-notebook-cell:?execution_count=22&line=119) # Create assistant agents
File ~/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/new_env_0.4/lib/python3.12/site-packages/langchain_core/load/serializable.py:125, in Serializable.__init__(self, *args, **kwargs)
[123](https://file+.vscode-resource.vscode-cdn.net/Users/arnaudcomet/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/~/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/new_env_0.4/lib/python3.12/site-packages/langchain_core/load/serializable.py:123) def __init__(self, *args: Any, **kwargs: Any) -> None:
[124](https://file+.vscode-resource.vscode-cdn.net/Users/arnaudcomet/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/~/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/new_env_0.4/lib/python3.12/site-packages/langchain_core/load/serializable.py:124) """"""
--> [125](https://file+.vscode-resource.vscode-cdn.net/Users/arnaudcomet/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/~/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/new_env_0.4/lib/python3.12/site-packages/langchain_core/load/serializable.py:125) super().__init__(*args, **kwargs)
[... skipping hidden 1 frame]
File ~/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/new_env_0.4/lib/python3.12/site-packages/pydantic/_internal/_mock_val_ser.py:100, in MockValSer.__getattr__(self, item)
[98](https://file+.vscode-resource.vscode-cdn.net/Users/arnaudcomet/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/~/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/new_env_0.4/lib/python3.12/site-packages/pydantic/_internal/_mock_val_ser.py:98) # raise an AttributeError if `item` doesn't exist
[99](https://file+.vscode-resource.vscode-cdn.net/Users/arnaudcomet/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/~/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/new_env_0.4/lib/python3.12/site-packages/pydantic/_internal/_mock_val_ser.py:99) getattr(self._val_or_ser, item)
--> [100](https://file+.vscode-resource.vscode-cdn.net/Users/arnaudcomet/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/~/Projects/multi-AI-test/azure-postgresql-openai-langchain-autogen-demo/new_env_0.4/lib/python3.12/site-packages/pydantic/_internal/_mock_val_ser.py:100) raise PydanticUserError(self._error_message, code=self._code)
PydanticUserError: `SQLDatabaseChain` is not fully defined; you should define `BaseCache`, then call `SQLDatabaseChain.model_rebuild()`.
### Description
I saw this issue: [https://github.com/langchain-ai/langchain/pull/28297](https://github.com/langchain-ai/langchain/issues/28284), and that PR ([https://github.com/langchain-ai/langchain/pull/28297](https://github.com/langchain-ai/langchain/pull/28297)) looked like it solved the problem, but it hasn't for me. I have to go back to 2.9.2, but that version is not compatible with autogen 0.4, and I need to be on the latest version.
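A workaround commonly suggested for this class of error (not verified on this exact setup) is to make the forward-referenced types importable and rebuild the pydantic model before constructing the chain:
```python
# Suggested workaround sketch (untested here): import the forward-referenced
# types, then rebuild the model so pydantic can resolve them.
from langchain_core.caches import BaseCache  # noqa: F401
from langchain_core.callbacks import Callbacks  # noqa: F401
from langchain_experimental.sql import SQLDatabaseChain

SQLDatabaseChain.model_rebuild()
```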
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:02:41 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6030
> Python Version: 3.12.8 (main, Dec 3 2024, 18:42:41) [Clang 16.0.0 (clang-1600.0.26.4)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_experimental: 0.3.4
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 2.2.1
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.0
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available. | investigate,Ɑ: core | low | Critical |
2,788,810,269 | flutter | Map not loaded: `PlatformException(recreating_view, trying to create an already created view, view id: '0', null)` | ### Steps to reproduce
[ERROR:flutter/runtime/dart_vm_initializer.cc(40)] Unhandled Exception: PlatformException(recreating_view, trying to create an already created view, view id: '0', null)
#0 StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:646:7)
#1 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:334:18)
<asynchronous suspension>
#2 PlatformViewsService.initUiKitView (package:flutter/src/services/platform_views.dart:248:5)
<asynchronous suspension>
#3 _DarwinViewState._createNewUiKitView (package:flutter/src/widgets/platform_view.dart:921:36)
<asynchronous suspension>
### Expected results
Map Load on UI
### Actual results
[ERROR:flutter/runtime/dart_vm_initializer.cc(40)] Unhandled Exception: PlatformException(recreating_view, trying to create an already created view, view id: '0', null)
#0 StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:646:7)
#1 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:334:18)
<asynchronous suspension>
#2 PlatformViewsService.initUiKitView (package:flutter/src/services/platform_views.dart:248:5)
<asynchronous suspension>
#3 _DarwinViewState._createNewUiKitView (package:flutter/src/widgets/platform_view.dart:921:36)
<asynchronous suspension>
### Code sample
<details open><summary>Code sample</summary>
```dart
GoogleMap(
mapType: MapType.normal,
initialCameraPosition: _kGooglePlex,
onMapCreated: (GoogleMapController controller) {
_controller.complete(controller);
},
markers: Set<Marker>.of(markers.values),
myLocationEnabled : true// YOUR MARKS IN MAP
)
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[ERROR:flutter/runtime/dart_vm_initializer.cc(40)] Unhandled Exception: PlatformException(recreating_view, trying to create an already created view, view id: '0', null)
#0 StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:646:7)
#1 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:334:18)
<asynchronous suspension>
#2 PlatformViewsService.initUiKitView (package:flutter/src/services/platform_views.dart:248:5)
<asynchronous suspension>
#3 _DarwinViewState._createNewUiKitView (package:flutter/src/widgets/platform_view.dart:921:36)
<asynchronous suspension>
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| waiting for customer response,in triage | low | Critical |
2,788,840,802 | flutter | Bug Report: Scroll Becomes Unresponsive and Lags Under Multi-Finger Rapid Usage | <!-- Failed to upload "ScreenRecording_01-15-2025 11-47-58_1.mp4" -->
### Steps to reproduce
Description: When continuously scrolling with repeated and rapid gestures (e.g., using multiple fingers to scroll back and forth rapidly), the scroll view becomes unresponsive and lags severely. The issue occurs consistently under high-frequency interactions involving multiple fingers on the screen.
Steps to Reproduce:
1. Open the application and navigate to a screen with a scrollable widget.
2. Begin scrolling in any direction.
3. Rapidly and repeatedly interact with the scroll view using multiple fingers (e.g., two or more fingers tapping and scrolling simultaneously).
4. Observe the behavior of the scroll view.
Environment:
Dart SDK: 3.3.2
Flutter SDK: 3.19.4
Device: [Specify the device and OS version]
Scrollable Widget: [Provide details if it's a specific widget like ListView, SingleChildScrollView, etc.]
Additional Notes: The issue appears to be related to simultaneous or conflicting scroll inputs being processed during rapid multi-finger interactions.
### Expected results
The scroll view should remain smooth and responsive, regardless of the frequency or intensity of scrolling interactions.
### Actual results
The scroll view becomes laggy.
Scrolling responsiveness deteriorates significantly.
In some cases, the scroll view stops responding altogether.
### Code sample
```dart
import 'package:flutter/material.dart';
import 'package:sliver_tools/sliver_tools.dart';
class SliverGridExample extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('SliverTools Grid Example'),
),
body: CustomScrollView(
slivers: [
          // MultiSliver combines multiple slivers inside one ScrollView
MultiSliver(
children: [
SliverPadding(
padding: EdgeInsets.all(8.0),
sliver: SliverGrid(
gridDelegate: SliverGridDelegateWithFixedCrossAxisCount(
                    crossAxisCount: 3, // number of columns in the grid
mainAxisSpacing: 8,
crossAxisSpacing: 8,
childAspectRatio: 1,
),
delegate: SliverChildBuilderDelegate(
(BuildContext context, int index) {
return Container(
color: Colors.blueAccent,
child: Center(
child: Text(
'Item $index',
style: TextStyle(color: Colors.white),
),
),
);
},
                    childCount: 30, // number of items in the grid
),
),
),
SliverToBoxAdapter(
child: Padding(
padding: const EdgeInsets.all(16.0),
child: Text(
'This is a SliverToBoxAdapter after the Grid.',
style: TextStyle(fontSize: 16),
),
),
),
],
),
],
),
);
}
}
void main() => runApp(MaterialApp(home: SliverGridExample()));
```
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[[Upload media here]](https://drive.google.com/file/d/1YVeG9ErhQ4q5OJFlm2fbx7_dk7NiKMD4/view?usp=sharing)
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.19.4, on macOS 15.1.1 24B91 darwin-arm64, locale en-TH)
• Flutter version 3.19.4 on channel stable at /Users/watcharatep.lnk/fvm/versions/3.19.4
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68bfaea224 (10 months ago), 2024-03-20 15:36:31 -0700
• Engine revision a5c24f538d
• Dart version 3.3.2
• DevTools version 2.31.1
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/watcharatep.lnk/Library/Android/sdk/
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /Users/watcharatep.lnk/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.96.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
```
</details>
| waiting for customer response,in triage | low | Critical |
2,788,848,937 | pytorch | [inductor] `MaxUnpool` crash when meeting out-of-bound value on inductor | ### 🐛 Describe the bug
**symptom**: when input tensor shape is too small (e.g., [1, 1, 1]) and kernel size >1, eager will return an empty tensor while inductor crashes.
**device**: both on CPU and cuda.
**exposed area**: `MaxUnpool1d`, `MaxUnpool2d`, and `MaxUnpool3d`
```python
import torch
import torch.nn as nn
from torch._inductor import config
config.fallback_random = True
torch.manual_seed(0)
torch.set_grad_enabled(False)
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.unpool = nn.MaxUnpool1d(kernel_size=2, stride=2, padding=1)
def forward(self, x):
x = self.unpool(x, x.long())
return x
model = Model()
x = torch.randn(1, 1, 1)
inputs = [x]
try:
output = model(*inputs)
print(f"succeed on eager: {output}")
except Exception as e:
print(e)
try:
c_model = torch.compile(model)
c_output = c_model(*inputs)
print(f"succeed on inductor: {c_output}")
except Exception as e:
print(e)
```
error log
CPU
```
succeed on eager: tensor([], size=(1, 1, 0))
kernel, /tmp/torchinductor_root/jy/cjypsx3k535vxaiglqu75ffcicjwpqokk6hiqhrtdzzwznugpdmm.cpp:20, index out of bounds: 0 <= tmp10 < 0L
```
cuda
```
succeed on eager: tensor([], device='cuda:0', size=(1, 1, 0))
/pytorch/aten/src/ATen/native/cuda/MaxUnpooling.cu:47: max_unpooling2d_forward_kernel: block: [0,0,0], thread: [0,0,0] Assertion `maxind >= 0 && maxind < outputImageSize` failed.
RuntimeError: CUDA error: device-side assert triggered
```
### Versions
PyTorch version: 2.7.0.dev20250112+cu124
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: Tesla V100-SXM2-32GB
<details>
<summary>click here for detailed env</summary>
```
PyTorch version: 2.7.0.dev20250112+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 550.142
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250112+cu124
[pip3] torchaudio==2.6.0.dev20250112+cu124
[pip3] torchvision==0.22.0.dev20250112+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250112+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250112+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250112+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @SherlockNoMad @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: decompositions,module: inductor | low | Critical |
2,788,849,050 | tensorflow | tensorflow cuda Unable to register cuDNN factory error in wsl2 with tf 2.17,18 | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.18
### Custom code
No
### OS platform and distribution
wsl2 ubuntu 24.04lts
### Mobile device
windows 11 x86
### Python version
3.12
### Bazel version
_No response_
### GCC/compiler version
gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
### CUDA/cuDNN version
12.5/9.3
### GPU model and memory
rtx 4060 laptop gpu/8gb vram/16gb ram
### Current behavior?
python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"
2025-01-15 05:03:52.570454: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-01-15 05:03:52.577748: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1736917432.586305 920 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1736917432.588797 920 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-15 05:03:52.597990: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
v2.18.0-rc2-4-g6550e4bd802 2.18.0
### Standalone code to reproduce the issue
steps:
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#verify-you-have-a-cuda-capable-gpu
use wsl method
```shell
python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"
```
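As an optional sanity check (not part of the original report): the registration messages above are warnings, so whether the GPU is actually usable can be confirmed with the standard TensorFlow API:
```shell
# Hypothetical follow-up check: list the GPUs TensorFlow can actually see
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```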
### Relevant log output
```shell
2025-01-15 05:03:52.570454: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-01-15 05:03:52.577748: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1736917432.586305 920 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1736917432.588797 920 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-15 05:03:52.597990: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
v2.18.0-rc2-4-g6550e4bd802 2.18.0
``` | stat:awaiting response,type:build/install,wsl2,TF 2.18 | medium | Critical |
2,788,911,051 | neovim | Terminal colors 16-255 are not the expected ones when using a GUI | ### Problem
This was originally reported here https://github.com/neovide/neovide/issues/2937
This is a continuation of https://github.com/neovim/neovim/issues/7018, which was never fixed for GUI clients.
When using a GUI, the terminal colors 16-255 do not match the xterm256 color palette, which as far as I am aware is used in all modern terminal emulators, including the Windows terminal. Furthermore, it's not possible to override those colors.
Here's one example, one obvious difference is the upper right corner of the second big box.

### Steps to reproduce
On Windows
neovide.exe -- --clean
:terminal
type [.\AnsiColors256.ans](https://github.com/Maximus5/ConEmu/blob/master/Release/ConEmu/Addons/AnsiColors256.ans)
### Expected behavior
The default colors should match the xterm256 palette on Windows and Linux. On macOS, I'm not sure which palette is used, but it should match that. Note that Vim also uses this palette.
Alternatively, all colors should be configurable like the first 16 ones.
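For reference, this is how the first 16 terminal colors can already be overridden today (illustrative values); nothing equivalent exists for palette indices 16-255:
```lua
-- Existing mechanism for the first 16 terminal colors (illustrative values only)
vim.g.terminal_color_0 = '#000000'
vim.g.terminal_color_1 = '#cc0000'
-- ...
vim.g.terminal_color_15 = '#ffffff'
```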
### Nvim version (nvim -v)
0.10.3
### Vim (not Nvim) behaves the same?
N/A
### Operating system/version
Windows 11
### Terminal name/version
Neovide 0.14.0
### $TERM environment variable
Doesn't have any effect
### Installation
Unknown | bug,gui,terminal,ui-extensibility,highlight | low | Minor |
2,788,941,486 | ant-design | After configuring a theme, the message loading animation stops when message and Modal are used together | ### Reproduction link
[](https://codesandbox.io/p/sandbox/zi-ding-yi-yang-shi-antd-5-23-1-forked-mk5m3y?workspaceId=ws_99LrLMfTCFev5anV7aEji)
### Steps to reproduce
1. In the example, click the loading button first, then click the confirm button.
2. Observe that the message's loading animation has stopped.
### What is expected?
The message's loading animation should keep running (it should not stop).
### What is actually happening?
The message's loading animation stops.
| Environment | Info |
| --- | --- |
| antd | undefined |
| React | - |
| System | - |
| Browser | - |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug | low | Major |
2,788,969,700 | PowerToys | Powertoys run is not working | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
When opening PowerToys, a pop-up showed the following error:
```
Version: 0.87.1.0
OS Version: Microsoft Windows NT 10.0.22631.0
IntPtr Length: 8
x64: True
Date: 15-01-2025 12:11:45
Exception:
System.IO.FileNotFoundException: Could not load file or assembly 'System.Security.Cryptography, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The system cannot find the file specified.
File name: 'System.Security.Cryptography, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
at Wox.Infrastructure.Image.ImageHashGenerator.GetHashFromImage(ImageSource image, String filePath)
at Wox.Infrastructure.Image.ImageLoader.LoadAsync(String path, Boolean generateThumbnailsFromFiles, Boolean loadFullImage)
at Wox.Infrastructure.Image.ImageLoader.<>c.<<Initialize>b__13_1>d.MoveNext()
--- End of stack trace from previous location ---
at System.Threading.Tasks.Task.<>c.<ThrowAsync>b__128_1(Object state)
at System.Threading.QueueUserWorkItemCallback.Execute()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
```
### ✔️ Expected Behavior
PowerToys Run should turn on.
### ❌ Actual Behavior
A crash pop-up is shown and PowerToys Run does not start.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,788,989,541 | ollama | ModernBERT (embeddings) | One of the most downloaded models in Ollama is [nomic-embed-text](https://ollama.com/library/nomic-embed-text), an embedding model. It has more than 10M downloads. However, it's a 10-month-old model, and newer versions (from Nomic and other teams) are already better on almost all metrics.
Last month [AnswerAI released ModernBERT](https://huggingface.co/docs/transformers/main/en/model_doc/modernbert), which already has >4M downloads on HF. The Nomic team [has a version based on it](https://huggingface.co/nomic-ai/modernbert-embed-base) too.
I think the community would benefit quite a lot from a newer embedding model in Ollama. I'm not sure if we can upload it ourselves or whether we need to ask the team for it.
| model request | low | Major |
2,788,996,868 | flutter | [pigeon] Support Swift enums for Dart sealed classes | ### Use case
This is a continuation of #153995.
I've moved it into a new issue, as requested by @tarrinneal.
Support for sealed classes was added in version `22.7.0`.
However, I've noticed that on the `Swift` side protocols are used instead of enums, which means we are not able to use an exhaustive switch/case.
Kotlin code is generated with exhaustive `sealed` classes.
### Proposal
Dart input:
```dart
sealed class A {}
class B extends A {
final int number;
B(this.number);
}
class C extends A {
final bool isTrue;
C(this.isTrue);
}
```
Swift output:
```swift
enum A {
case B(number: Int)
case C(isTrue: Bool)
}
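// With an enum like this, call sites could switch exhaustively, e.g.:
// switch a {
// case .B(let number): print(number)
// case .C(let isTrue): print(isTrue)
// }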
``` | package,c: proposal,team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Minor |
2,788,998,122 | transformers | autocast() got an unexpected keyword argument 'cache_enabled when use trainer.torch_jit_model_eval | ### System Info
- `transformers` version: 4.46.3
- Platform: Linux-4.18.0-147.el8_1.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.10
- Huggingface_hub version: 0.26.5
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@muellerzr
@SunMarc
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When using the `torch_jit_model_eval()` method in trainer, it prompts
> "failed to use PyTorch jit mode due to: autocast() got an unexpected keyword argument 'cache_enabled'."
Looking at the details, I found that the error is caused by the `self.accelerator.autocast(cache_enabled=False)` call. The method is defined as `def autocast(self, autocast_handler: AutocastKwargs = None)`, and it has no `cache_enabled` parameter.
Is this because the code here has not been updated, or because I ignored some settings?
Is there a solution now?
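For reference, a hedged sketch of a call that would match the quoted accelerate signature (untested; `AutocastKwargs` comes from `accelerate.utils`, and `self` refers to the Trainer as in the failing call):
```python
# Hedged sketch: pass cache_enabled through an AutocastKwargs handler rather
# than as a keyword argument, matching the accelerate signature quoted above.
from accelerate.utils import AutocastKwargs

autocast_handler = AutocastKwargs(cache_enabled=False)
# inside Trainer.torch_jit_model_eval, instead of autocast(cache_enabled=False):
# ctx_manager = self.accelerator.autocast(autocast_handler=autocast_handler)
```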
### Expected behavior
Work normally. | bug | low | Critical |
2,789,008,648 | ollama | internlm3-8b-instruct | ### https://huggingface.co/internlm/internlm3-8b-instruct
---
### llama.cpp commit:
### https://github.com/ggerganov/llama.cpp/pull/11233
---
## Introduction
InternLM3 has open-sourced an 8-billion parameter instruction model, InternLM3-8B-Instruct, designed for general-purpose usage and advanced reasoning. This model has the following characteristics:
- **Enhanced performance at reduced cost**:
State-of-the-art performance on reasoning and knowledge-intensive tasks surpass models like Llama3.1-8B and Qwen2.5-7B. Remarkably, InternLM3 is trained on only 4 trillion high-quality tokens, saving more than 75% of the training cost compared to other LLMs of similar scale.
- **Deep thinking capability**:
InternLM3 supports both the deep thinking mode for solving complicated reasoning tasks via the long chain-of-thought and the normal response mode for fluent user interactions.
## InternLM3-8B-Instruct
### Performance Evaluation
We conducted a comprehensive evaluation of InternLM using the open-source evaluation tool [OpenCompass](https://github.com/internLM/OpenCompass/). The evaluation covered five dimensions of capabilities: disciplinary competence, language competence, knowledge competence, inference competence, and comprehension competence. Here are some of the evaluation results, and you can visit the [OpenCompass leaderboard](https://rank.opencompass.org.cn) for more evaluation results.
| Benchmark | | InternLM3-8B-Instruct | Qwen2.5-7B-Instruct | Llama3.1-8B-Instruct | GPT-4o-mini(close source) |
| ------------ | ------------------------------- | --------------------- | ------------------- | -------------------- | ------------------------- |
| General | CMMLU(0-shot) | **83.1** | 75.8 | 53.9 | 66.0 |
| | MMLU(0-shot) | 76.6 | **76.8** | 71.8 | 82.7 |
| | MMLU-Pro(0-shot) | **57.6** | 56.2 | 48.1 | 64.1 |
| Reasoning | GPQA-Diamond(0-shot) | **37.4** | 33.3 | 24.2 | 42.9 |
| | DROP(0-shot) | **83.1** | 80.4 | 81.6 | 85.2 |
| | HellaSwag(10-shot) | **91.2** | 85.3 | 76.7 | 89.5 |
| | KOR-Bench(0-shot) | **56.4** | 44.6 | 47.7 | 58.2 |
| MATH | MATH-500(0-shot) | **83.0*** | 72.4 | 48.4 | 74.0 |
| | AIME2024(0-shot) | **20.0*** | 16.7 | 6.7 | 13.3 |
| Coding | LiveCodeBench(2407-2409 Pass@1) | **17.8** | 16.8 | 12.9 | 21.8 |
| | HumanEval(Pass@1) | 82.3 | **85.4** | 72.0 | 86.6 |
| Instruction | IFEval(Prompt-Strict) | **79.3** | 71.7 | 75.2 | 79.7 |
| Long Context | RULER(4-128K Average) | 87.9 | 81.4 | **88.5** | 90.7 |
| Chat | AlpacaEval 2.0(LC WinRate) | **51.1** | 30.3 | 25.0 | 50.7 |
| | WildBench(Raw Score) | **33.1** | 23.3 | 1.5 | 40.3 |
| | MT-Bench-101(Score 1-10) | **8.59** | 8.49 | 8.37 | 8.87 |
- The evaluation results were obtained from [OpenCompass](https://github.com/internLM/OpenCompass/) (some data marked with *, which means evaluating with Thinking Mode), and evaluation configuration can be found in the configuration files provided by [OpenCompass](https://github.com/internLM/OpenCompass/).
- The evaluation data may have numerical differences due to the version iteration of [OpenCompass](https://github.com/internLM/OpenCompass/), so please refer to the latest evaluation results of [OpenCompass](https://github.com/internLM/OpenCompass/). | model request | low | Major |
2,789,011,188 | flutter | [mobile web]: Flutter freezes while scrolling | ### Steps to reproduce
flutter build web or flutter build web --wasm
In release mode on the web platform, a normal scrolling list freezes and drops frames while scrolling.
### Expected results
Should scroll smoothly
### Actual results
Scrolling freezes and drops frames
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter ListView Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: ColorfulListView(),
);
}
}
class ColorfulListView extends StatelessWidget {
final List<Color> colors = [
Colors.red,
Colors.green,
Colors.blue,
];
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Colorful List'),
),
body: ListView.builder(
itemCount: 50,
itemBuilder: (context, index) {
final color = colors[index % colors.length];
return Container(
color: color,
padding: EdgeInsets.all(16.0),
child: Text(
'Item ${index + 1}',
style: TextStyle(color: Colors.white, fontSize: 18),
),
);
},
),
);
}
}
```
</details>
### Screenshots or Video
### Logs
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.2, on macOS 15.0.1 24A348 darwin-arm64, locale zh-Hans-CN)
• Flutter version 3.27.2 on channel stable at /Users/bob/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (2 days ago), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
• Pub download mirror https://pub.flutter-io.cn
• Flutter download mirror https://storage.flutter-io.cn
```
</details>
| framework,c: performance,f: scrolling,platform-web,browser: chrome-android,team-web | low | Major |
2,789,025,035 | langchain | When running OpenAI Assistant with tools: AttributeError: 'RequiredActionFunctionToolCall' object has no attribute 'tool' | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When running an AgentExecutor with an assistant, some return values are not handled by the agent code.
Here is the code of a Chat agent (which runs) and an Assistant:
```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.agents import Tool
from langchain.tools import StructuredTool, Tool
from langchain.agents import AgentExecutor,create_openai_tools_agent
import os
def gnix(*args, **kwargs):
return f"The gnix is xing.\nReceived args: {args};\nreceived kwargs: {kwargs}"
# from pydantic import BaseModel, Field
# class NoArgs(BaseModel):
# pass
# def gnix_noargs(_: NoArgs) -> str:
# return "The gnix is xing."
tool = Tool.from_function(
func=gnix,
name="get_gnix",
description="get the gnix. This function recieves no input and returns the gnix value.",
# callbacks=[]
)
tools = [tool]
message = "what tools do you have? can you get me the gnix?"
api_key = os.getenv("OPENAI_API_KEY")
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
MessagesPlaceholder("chat_history", optional=True),
("human", "{input}"),
MessagesPlaceholder("agent_scratchpad"),
]
)
model = ChatOpenAI()
agent = create_openai_tools_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
output = agent_executor.invoke({"input": "how is the gnix?"})
print(output["output"])
assistant = OpenAIAssistantRunnable.create_assistant( name="tmp",model="gpt-3.5-turbo",
instructions="You are a helpful helper that can use tools.", tools=tools)
agent_executor = AgentExecutor(agent=assistant, tools=tools)
output = agent_executor.invoke({"content": message})
print(output["output"])
```
### Error Message and Stack Trace (if applicable)
python assistant_test.py
**The Chat output**
The gnix is xing!
**The assistant output**
Traceback (most recent call last):
File "/Users/danielnebenzahl/code/tmp/langchain/assistant_test.py", line 61, in <module>
output = agent_executor.invoke({"content": message})
File "/Users/danielnebenzahl/.pyenv/versions/langchain/lib/python3.10/site-packages/langchain/chains/base.py", line 170, in invoke
raise e
File "/Users/danielnebenzahl/.pyenv/versions/langchain/lib/python3.10/site-packages/langchain/chains/base.py", line 160, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/danielnebenzahl/.pyenv/versions/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 1624, in _call
next_step_output = self._take_next_step(
File "/Users/danielnebenzahl/.pyenv/versions/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 1330, in _take_next_step
[
File "/Users/danielnebenzahl/.pyenv/versions/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 1330, in <listcomp>
[
File "/Users/danielnebenzahl/.pyenv/versions/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 1415, in _iter_next_step
yield self._perform_agent_action(
File "/Users/danielnebenzahl/.pyenv/versions/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 1429, in _perform_agent_action
if agent_action.tool in name_to_tool_map:
File "/Users/danielnebenzahl/.pyenv/versions/langchain/lib/python3.10/site-packages/pydantic/main.py", line 891, in __getattr__
raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
AttributeError: 'RequiredActionFunctionToolCall' object has no attribute 'tool'
### Description
I expect AgentExecutor to work with assistants that use tools.
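If I read the LangChain docs correctly, `OpenAIAssistantRunnable.create_assistant` accepts `as_agent=True`, which makes the runnable return `AgentAction`/`AgentFinish` objects that AgentExecutor can handle instead of raw `RequiredActionFunctionToolCall` objects. A sketch of that variant (untested here; `tools` and `message` are the objects from the example code above):

```python
from langchain.agents import AgentExecutor
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

# Hedged sketch: as_agent=True is what the docs show for AgentExecutor usage.
assistant = OpenAIAssistantRunnable.create_assistant(
    name="tmp",
    model="gpt-3.5-turbo",
    instructions="You are a helpful helper that can use tools.",
    tools=tools,        # defined in the example code above
    as_agent=True,
)
agent_executor = AgentExecutor(agent=assistant, tools=tools)
output = agent_executor.invoke({"content": message})  # message defined above
print(output["output"])
```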
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Thu Sep 12 23:35:29 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_ARM64_T6000
> Python Version: 3.10.10 (main, May 16 2023, 17:41:32) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langsmith: 0.2.10
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: 4.0.3
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.7
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available. | 🤖:bug,investigate | low | Critical |
2,789,035,893 | pytorch | NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'PrivateUse1' backend. This could be because the operator doesn't exist for this backend | ### 🐛 Describe the bug
```
import torch
a = torch.ones((3,3), device='privateuseone')
print(a)
```
```
a = torch.ones((3,3), device='privateuseone')
NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'PrivateUse1' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, CUDA, Meta, QuantizedCPU, QuantizedCUDA, QuantizedMeta, MkldnnCPU, SparseCPU, SparseCUDA, SparseMeta, SparseCsrCPU, SparseCsrCUDA, SparseCsrMeta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
```
### Versions
PyTorch version: 2.5.0a0+gita8d6afb
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] optree==0.13.1
[pip3] torch==2.5.0a0+gita8d6afb
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.1 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.0a0+gita8d6afb dev_0 <develop>
cc @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens | triaged,module: PrivateUse1 | low | Critical |
2,789,043,326 | pytorch | [inductor] [dynamo]index_reduce_ raised AssertionError in assert_functional_graph | ### 🐛 Describe the bug
index_reduce_ raises an AssertionError when the input is a view.
Minimal reproducer:
```python
import torch
class OpWrapperModule(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, ifm, op_inputs_dict):
result = ifm.index_reduce_(**op_inputs_dict)
return result
torch.manual_seed(8450)
ifm_t = torch.randn([4, 34, 64])
ifm = ifm_t[slice(None, None, None), slice(2, None, None), slice(None, None, None)]
index_tensor = torch.randint(low=0, high=34, size=[64])
source_tensor = torch.randn([4, 32, 64])
params = {
"index": index_tensor,
"source": source_tensor,
"dim": 2,
"reduce": "mean",
"include_self": False,
}
model = OpWrapperModule()
model_compiled = torch.compile(model, backend="inductor")
result = model_compiled(ifm, params)
```
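An untested workaround sketch, continuing the reproducer above: materializing the slice so `index_reduce_` no longer mutates a view of a graph input (or, presumably, switching to the out-of-place `Tensor.index_reduce`) may avoid the assertion:

```python
# Untested sketch, reusing ifm_t, params and model_compiled from the reproducer.
ifm_materialized = ifm_t[:, 2:, :].clone()   # no longer a view of the original tensor
result = model_compiled(ifm_materialized, params)
```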
ERROR log and trace:
```
Traceback (most recent call last):
File "/home/zhenzhao/qnpu/sw_214852/src/rep.py", line 27, in <module>
result = model_compiled(ifm, params)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1742, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1753, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1742, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1753, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1380, in __call__
return self._torchdynamo_orig_callable(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 1164, in __call__
result = self._inner_convert(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1361, in transform_code_object
transformations(instructions, code_options)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 2868, in run
super().run()
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 1052, in run
while self.step():
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 962, in step
self.dispatch_table[inst.opcode](self, inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3048, in RETURN_VALUE
self._return(inst)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/symbolic_convert.py", line 3033, in _return
self.output.compile_subgraph(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1101, in compile_subgraph
self.compile_and_call_fx_graph(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1382, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1432, in call_user_compiler
return self._call_user_compiler(gm)
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1483, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/output_graph.py", line 1462, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/__init__.py", line 2314, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_fx.py", line 1863, in compile_fx
return aot_autograd(
File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/backends/common.py", line 83, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1155, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 1131, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 580, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/aot_autograd.py", line 830, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 153, in aot_dispatch_base
fw_module, updated_flat_args, maybe_subclass_meta = aot_dispatch_base_graph( # type: ignore[misc]
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 184, in aot_dispatch_base_graph
copy_count = assert_functional_graph(fw_module.graph)
File "/usr/local/lib/python3.10/dist-packages/torch/_functorch/_aot_autograd/functional_utils.py", line 461, in assert_functional_graph
n.args[0] in placeholders
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
AssertionError: n=copy_, n.args[0]=permute, placeholders={arg2_1, arg0_1, arg1_1}, graph=graph():
%arg0_1 : [num_users=2] = placeholder[target=arg0_1]
%arg1_1 : [num_users=1] = placeholder[target=arg1_1]
%arg2_1 : [num_users=3] = placeholder[target=arg2_1]
%full : [num_users=1] = call_function[target=torch.ops.aten.full.default](args = ([4, 32, 64], 1), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%scalar_tensor : [num_users=1] = call_function[target=torch.ops.aten.scalar_tensor.default](args = (0,), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%expand : [num_users=1] = call_function[target=torch.ops.aten.expand.default](args = (%scalar_tensor, [4, 32, 64]), kwargs = {})
%index_put : [num_users=1] = call_function[target=torch.ops.aten.index_put.default](args = (%arg0_1, [None, None, %arg2_1], %expand), kwargs = {})
%empty : [num_users=1] = call_function[target=torch.ops.aten.empty.memory_format](args = ([4, 32, 64],), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%permute : [num_users=1] = call_function[target=torch.ops.aten.permute.default](args = (%empty, [0, 1, 2]), kwargs = {})
%copy_ : [num_users=1] = call_function[target=torch.ops.aten.copy_.default](args = (%permute, %index_put), kwargs = {})
%full_1 : [num_users=1] = call_function[target=torch.ops.aten.full.default](args = ([4, 32, 64], 0), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu, pin_memory: False})
%index_put_1 : [num_users=2] = call_function[target=torch.ops.aten.index_put.default](args = (%full_1, [None, None, %arg2_1], %full, True), kwargs = {})
%lt : [num_users=1] = call_function[target=torch.ops.aten.lt.Scalar](args = (%index_put_1, 1), kwargs = {})
%scalar_tensor_1 : [num_users=1] = call_function[target=torch.ops.aten.scalar_tensor.default](args = (1.0,), kwargs = {dtype: torch.float32, layout: torch.strided, device: cpu})
%where : [num_users=1] = call_function[target=torch.ops.aten.where.self](args = (%lt, %scalar_tensor_1, %index_put_1), kwargs = {})
%index_put_2 : [num_users=1] = call_function[target=torch.ops.aten.index_put.default](args = (%copy_, [None, None, %arg2_1], %arg1_1, True), kwargs = {})
%div : [num_users=1] = call_function[target=torch.ops.aten.div.Tensor](args = (%index_put_2, %where), kwargs = {})
%copy__1 : [num_users=1] = call_function[target=torch.ops.aten.copy_.default](args = (%arg0_1, %div), kwargs = {})
return (copy__1,)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
PyTorch version: 2.6.0a0+git30ac7fd
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5 (ssh://[email protected]/habana-internal/tpc_llvm10 150d2d7c6a8ff8abf0d8ce194d3fac3986b078e6)
CMake version: version 3.28.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
cc @bdhirsh @ezyang @chauhang @penguinwu @zou3519 @yf225 | triaged,module: functionalization,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,789,046,763 | pytorch | torch.compile() In my use case of calling torch.compile(), I have found that the model's data outputs are inconsistent. I suspect that using Triton for operator fusion may have introduced precision deviations. I am unsure how to locate and fix this issue. | ### 🐛 Describe the bug
My Torch environment is as follows:
2.2.2+cu121
My goal is to use functions related to torch.compile() to optimize the inference time of our model. In fact, it does work and achieves over a 50% reduction in inference time in the default mode.
The model code is as follows:
`"""
copy from https://github.com/alimama-tech/NeurIPS_Auto_Bidding_AIGB_Track_Baseline/blob/main/bidding_train_env/baseline/dd/DFUSER.py
"""
from torch.optim import Adam
import os
from typing import Optional, Tuple, List
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import gin
from .temporal import TemporalUnet
from .basic import (
cosine_beta_schedule,
Losses,
extract,
apply_conditioning,
apply_conditioning_with_fix,
)
class ReduceSum(nn.Module):
def forward(self, x):
return torch.sum(x, dim=-1)
@gin.configurable
class GaussianInvDynDiffusion(nn.Module):
def __init__(self, model, horizon, observation_dim, action_dim, n_timesteps=1000,
clip_denoised=False, predict_epsilon=True, hidden_dim=256,
loss_discount=1.0, returns_condition=False,
condition_guidance_w=0.1,
inv_bias=True,
):
super().__init__()
self.horizon = horizon
self.observation_dim = observation_dim
self.action_dim = action_dim
self.transition_dim = observation_dim + action_dim
self.model = model
self.inv_model = nn.Sequential(
nn.Linear(4 * self.observation_dim, hidden_dim, bias=inv_bias),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim, bias=inv_bias),
nn.ReLU(),
nn.Linear(hidden_dim, hidden_dim, bias=inv_bias),
nn.ReLU(),
# ReduceSum(),
nn.Linear(hidden_dim, self.action_dim, bias=inv_bias),
)
self.returns_condition = returns_condition
self.condition_guidance_w = condition_guidance_w
betas = cosine_beta_schedule(n_timesteps)
alphas = 1. - betas
alphas_cumprod = torch.cumprod(alphas, axis=0)
alphas_cumprod_prev = torch.cat([torch.ones(1), alphas_cumprod[:-1]])
self.n_timesteps = int(n_timesteps)
self.clip_denoised = clip_denoised
self.predict_epsilon = predict_epsilon
self.register_buffer('betas', betas)
self.register_buffer('alphas_cumprod', alphas_cumprod)
self.register_buffer('alphas_cumprod_prev', alphas_cumprod_prev)
# calculations for diffusion q(x_t | x_{t-1}) and others
self.register_buffer('sqrt_alphas_cumprod', torch.sqrt(alphas_cumprod))
self.register_buffer('sqrt_one_minus_alphas_cumprod', torch.sqrt(1. - alphas_cumprod))
self.register_buffer('log_one_minus_alphas_cumprod', torch.log(1. - alphas_cumprod))
self.register_buffer('sqrt_recip_alphas_cumprod', torch.sqrt(1. / alphas_cumprod))
self.register_buffer('sqrt_recipm1_alphas_cumprod', torch.sqrt(1. / alphas_cumprod - 1))
# calculations for posterior q(x_{t-1} | x_t, x_0)
posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
self.register_buffer('posterior_variance', posterior_variance)
self.register_buffer('posterior_log_variance_clipped',
torch.log(torch.clamp(posterior_variance, min=1e-20)))
self.register_buffer('posterior_mean_coef1',
betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))
self.register_buffer('posterior_mean_coef2',
(1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))
loss_weights = self.get_loss_weights(loss_discount)
self.loss_fn = Losses['state_l2'](loss_weights)
def get_loss_weights(self, discount):
self.action_weight = 1
dim_weights = torch.ones(self.observation_dim, dtype=torch.float32)
discounts = discount ** torch.arange(self.horizon, dtype=torch.float)
discounts = discounts / discounts.mean()
loss_weights = torch.matmul(discounts[:, None], dim_weights[None, :])
if self.predict_epsilon:
loss_weights[0, :] = 0
return loss_weights
# ------------------------------------------ sampling ------------------------------------------#
def predict_start_from_noise(self, x_t, t, noise):
if self.predict_epsilon:
return (
extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
)
else:
return noise
def q_posterior(self, x_start, x_t, t):
posterior_mean = (
extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
)
posterior_variance = extract(self.posterior_variance, t, x_t.shape)
posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
return posterior_mean, posterior_variance, posterior_log_variance_clipped
def p_mean_variance(self, x, cond, t, returns: torch.Tensor = torch.ones(1, 1)):
if self.returns_condition:
# epsilon could be epsilon or x0 itself
epsilon_cond = self.model(x, cond, t, returns, use_dropout=False)
epsilon_uncond = self.model(x, cond, t, returns, force_dropout=True)
epsilon = epsilon_uncond + self.condition_guidance_w * (epsilon_cond - epsilon_uncond)
else:
epsilon = self.model(x, cond, t)
t = t.detach().to(torch.int64)
x_recon = self.predict_start_from_noise(x, t=t, noise=epsilon)
if self.clip_denoised:
x_recon.clamp_(-5., 5.)
model_mean, posterior_variance, posterior_log_variance = self.q_posterior(
x_start=x_recon, x_t=x, t=t)
return model_mean, posterior_variance, posterior_log_variance
def p_sample(self, x, cond, t, returns: torch.Tensor = torch.ones(1, 1)):
with torch.no_grad():
b, _, _ = x.shape
model_mean, _, model_log_variance = self.p_mean_variance(x=x, cond=cond, t=t, returns=returns)
noise = 0.5 * torch.randn_like(x, device=x.device)
nonzero_mask = (1 - (t == 0).float()).reshape(b, 1, 1)
return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
def p_sample_loop(self, shape, cond, returns: torch.Tensor = torch.ones(1, 1), t: int = 0, fix_dim: Optional[int] = None, save_denoise: bool = False):
with torch.no_grad():
torch.random.manual_seed(2046)
batch_size = shape[0]
x = 0.5 * torch.randn(shape[0], shape[1], shape[2], device=cond.device)
output1, output2 = [], []
if fix_dim is None:
x = apply_conditioning(x, cond, 0)
else:
x = apply_conditioning_with_fix(x, cond, 0, t, fix_dim)
for i in range(self.n_timesteps - 1, -1, -1):
timesteps = torch.ones(batch_size,
device=cond.device) * i
x = self.p_sample(x, cond, timesteps, returns)
#output1.append(x.clone().detach().cpu().numpy().tolist())
output1.append(x.clone().detach())
if fix_dim is None:
x = apply_conditioning(x, cond, 0)
else:
x = apply_conditioning_with_fix(x, cond, 0, t, fix_dim)
#output2.append(x.clone().detach().cpu().numpy().tolist())
output2.append(x.clone().detach())
#if save_denoise:
# return x, output1, output2
return x
# @torch.no_grad()
def conditional_sample(self, cond, returns: torch.Tensor = torch.ones(1, 1), horizon: int = 48, t: int = 0, fix_dim: Optional[int] = None, save_denoise: bool = False):
with torch.no_grad():
batch_size = 1
horizon = self.horizon
shape = torch.tensor([batch_size, horizon, self.observation_dim])
return self.p_sample_loop(shape, cond, returns, t, fix_dim, save_denoise)
def forward(self, cond, returns, t: int = 0, fix_dim: Optional[int] = None, save_denoise: bool = False):
return self.conditional_sample(cond=cond, returns=returns, t=t, fix_dim=fix_dim, save_denoise=save_denoise)
# ------------------------------------------ training ------------------------------------------#
def q_sample(self, x_start, t, noise=None):
if noise is None:
noise = torch.randn_like(x_start, device=x_start.device)
self.sqrt_alphas_cumprod = self.sqrt_alphas_cumprod.to(t.device)
self.sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod.to(t.device)
sample = (
extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
)
return sample
def p_losses(self, x_start, cond, t, returns=None, masks=None):
noise = torch.randn_like(x_start, device=x_start.device)
x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
t = t.to(x_noisy.device)
x_recon = self.model(x_noisy, cond, t, returns)
if self.predict_epsilon:
loss, info = self.loss_fn(x_recon, noise, masks)
else:
loss, info = self.loss_fn(x_recon, x_start, masks)
return loss, info
def loss(self, x, cond, returns, masks, action_mask=None):
"""
x with shape: (batch_size, step_len, H)
"""
batch_size = len(x)
t = torch.randint(0, self.n_timesteps, (batch_size,), device=x.device).long()
diffuse_loss, info = self.p_losses(x[:, :, self.action_dim:], cond, t, returns, masks)
diffuse_loss_batch = torch.reshape(info['loss'].mean(dim=(1,2)), (-1, 1))
_t = torch.reshape(t, (-1, 1))
loss_batch_t = torch.concat([_t, diffuse_loss_batch], dim=-1)
inv_loss, pred_a_t, mape = self.inv_loss(x, action_mask)
loss = (1 / 2) * (diffuse_loss + inv_loss)
# diffusion t loss bin size
return loss, info, (diffuse_loss, inv_loss), pred_a_t, mape, loss_batch_t
def inv_loss(self, x, masks):
# Calculating inv loss
x_t = x[:, :-1, self.action_dim:]
a_t = x[:, :-1, :self.action_dim]
x_t_1 = x[:, 1:, self.action_dim:]
# x_t_1[:, :, 1] = 0
x_t_2 = torch.cat(
[torch.zeros(x.shape[0], 1, x.shape[-1] - self.action_dim, device=x.device), x[:, :-2, self.action_dim:]],
dim=1)
x_t_3 = torch.cat(
[torch.zeros(x.shape[0], 2, x.shape[-1] - self.action_dim, device=x.device), x[:, :-3, self.action_dim:]],
dim=1)
x_comb_t = torch.cat([x_t_2, x_t_3, x_t, x_t_1], dim=-1)
x_comb_t = x_comb_t.reshape(-1, 4 * self.observation_dim)
masks_flat = masks[:, :-1].reshape(-1)
x_comb_t = x_comb_t[masks_flat]
a_t = a_t.reshape(-1, self.action_dim)
a_t = a_t[masks_flat]
pred_a_t = self.inv_model(x_comb_t)
inv_loss = F.mse_loss(pred_a_t, a_t, reduction="mean")
mape = ((a_t - pred_a_t).abs()) / (a_t.abs() + 1e-8)
mape = mape.mean()
return inv_loss, pred_a_t, mape
@gin.configurable
class DFUSER(nn.Module):
def __init__(self, dim_obs=16, dim_actions=1, dim_return=1, gamma=1, tau=0.01,
ACTION_MAX=10, ACTION_MIN=0,
step_len=48, n_timesteps=10,
condition_guidance_w=1.2,
clip_denoised=True,
inv_bias=True
):
super().__init__()
self.n_timestamps = n_timesteps
self.num_of_states = dim_obs
self.num_of_actions = dim_actions
self.ACTION_MAX = ACTION_MAX
self.ACTION_MIN = ACTION_MIN
self.step_len = step_len
model = TemporalUnet(
horizon=step_len,
transition_dim=dim_obs,
cond_dim=dim_actions,
return_dim=dim_return,
returns_condition=True,
dim=128,
condition_dropout=0.25,
calc_energy=False
)
self.diffuser = GaussianInvDynDiffusion(
model=model,
horizon=step_len,
observation_dim=dim_obs,
action_dim=dim_actions,
clip_denoised=clip_denoised,
predict_epsilon=True,
hidden_dim=256,
n_timesteps=n_timesteps,
loss_discount=1,
returns_condition=True,
condition_guidance_w=condition_guidance_w,
inv_bias=inv_bias,
)
self.step = 0
self.num_of_episodes = 0
self.GAMMA = gamma
self.tau = tau
self.num_of_steps = 0
#def forward(self, states, actions, returns, masks, action_mask):
# x = torch.cat([actions, states], dim=-1)
# cond = torch.ones_like(states[:, 0], device=states.device)[:, None, :]
# loss, infos, (diffuse_loss, inv_loss), pred_a_t, mape, loss_batch_t = self.diffuser.loss(x, cond, returns=returns, masks=masks, action_mask=action_mask)
# return loss, (diffuse_loss, inv_loss), pred_a_t, mape, loss_batch_t
def forward(self, x, budget):
"""
x with shape (time_step, dim)
"""
return self.diffuser(cond=x, returns=budget)
def get_action_s_by_state(self, x: torch.Tensor, returns: torch.Tensor, cur_time:int):
x = torch.reshape(x, [self.step_len, self.num_of_states])
states = x[:cur_time]
conditions = states
x_0 = self.diffuser(cond=conditions, returns=returns)
states = x_0[0, :cur_time + 1]
states_next = states[None, -1]
if cur_time > 1:
states_curt1 = conditions[-2].float()[None, :]
else:
states_curt1 = torch.zeros_like(states_next, device=states_next.device)
if cur_time > 2:
states_curt2 = conditions[-3].float()[None, :]
else:
states_curt2 = torch.zeros_like(states_next, device=states_next.device)
states_comb = torch.hstack([states_curt1, states_curt2, conditions[-1].float()[None, :], states_next])
actions = self.diffuser.inv_model(states_comb)
actions = actions.detach().cpu()[0] # .cpu().data.numpy()
return actions, states_next, x_0
def save_net(self, save_path):
if not os.path.isdir(save_path):
os.makedirs(save_path)
torch.save(self.diffuser.state_dict(), f'{save_path}/diffuser.pt')
def save_model(self, save_path):
if not os.path.isdir(save_path):
os.makedirs(save_path)
model_temp = self.cpu()
jit_model = torch.jit.script(model_temp)
torch.jit.save(jit_model, f'{save_path}/diffuser.pth')
def load_net(self, load_path):
self.diffuser.load_state_dict(torch.load(load_path, map_location='cpu'))
self.use_cuda = torch.cuda.is_available()
if self.use_cuda:
self.diffuser.cuda()
def load_model(self, load_path):
        # Load the TorchScript model
jit_model = torch.jit.load(load_path, map_location='cpu')
        # Assign the loaded model to self.diffuser
self = jit_model
        # Check whether CUDA is available and move the model to the GPU
self.use_cuda = torch.cuda.is_available()
if self.use_cuda:
self.cuda()
```
The inference code is as follows:
```python
#-*-coding:utf-8-*-
import sys
import os
import time
sys.path.append("..")
from dataclasses import dataclass, field
from typing import Dict, Optional
import numpy as np
import torch
import gin
from models.DFUSER import DFUSER
class AigbInference():
def __init__(self, model_dir, warmup=True):
self.model = DFUSER(
dim_obs=3,
dim_actions=1,
step_len=40,
n_timesteps=20,
gamma = 1,
tau = 0.01,
condition_guidance_w = 1.3,
clip_denoised = True,
inv_bias = True,
dim_return = 1
)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
        print(f'Using device: {device}')
self.model.to(device)
self.model.load_net("/home/admin/workspace/aop_lab/app_data/ckp01-pt-2025-1-3/diffuser.pt")
self.device = device
print(f'list_mode_options: {torch._inductor.list_mode_options()}')
self.model = torch.compile(self.model,backend="inductor")
#self.model = torch.compile(self.model,backend="inductor",mode="reduce-overhead")
#self.model = torch.compile(self.model,backend="inductor",mode="max-autotune")
#self.model = torch.compile(self.model,backend="inductor",mode="max-autotune-no-cudagraphs")
#self.model.load_model("/home/admin/workspace/aop_lab/app_data/ckp01-pth/diffuser.pth")
if warmup == True:
for i in range(self.model.step_len):
arg1_shape = (i, 3)
arg2_shape = (1, 1)
x = torch.ones(arg1_shape)
budget = torch.ones(arg2_shape)
x = x.to(self.device)
budget = budget.to(self.device)
traj_pred = self.model(x, budget)
print(f'warmup done')
def infer(self, x, budget):
start_time = time.perf_counter()
x = x.to(self.device)
budget = budget.to(self.device)
traj_pred = self.model(x, budget)
#traj_pred = self.model(x,budget)
print(
#f"traj_pred.shape: {traj_pred.shape} \n "
#f"traj_pred: {traj_pred} \n "
)
#self.model.save_model("/home/admin/workspace/aop_lab/app_data/ckp01-pth/checkpoint-pth-2")
end_time = time.perf_counter()
elapsed_time_ms = (end_time - start_time) * 1000
        print(f"my_function execution time: {elapsed_time_ms:.3f} ms")
return traj_pred
```
Subsequently, I used Nsight to analyze the GPU utilization efficiency and concluded that some fragmented kernels were fused into Triton operators.
before

after

However, I soon discovered that when I run the model with the same input before and after optimization, the output results differ. Currently, I do not know how to resolve this issue.
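As a rough self-check (with a made-up stand-in model, since the real diffuser needs its checkpoint), comparing eager and compiled outputs on identical inputs with explicit tolerances can separate the small float differences expected from kernel fusion and reordering from real divergence. Note also that the `torch.randn`/`torch.randn_like` calls inside `p_sample_loop` may not follow the same random stream under inductor even with `manual_seed(2046)`, so RNG is worth ruling out before blaming Triton fusion precision:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical stand-in model; only the comparison procedure matters here.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.GELU(), torch.nn.Linear(64, 16)
).to(device).eval()
compiled = torch.compile(model)

x = torch.randn(2, 16, device=device)
with torch.no_grad():
    ref = model(x)
    out = compiled(x)

print("max abs diff:", (ref - out).abs().max().item())
print("within fp32 tolerance:", torch.allclose(ref, out, rtol=1e-4, atol=1e-5))
```

If the difference is large, the accuracy minifier (`TORCHDYNAMO_REPRO_AFTER="aot"` with `TORCHDYNAMO_REPRO_LEVEL=4`) may help narrow it down to a single fused kernel.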

### Versions
Collecting environment information...
PyTorch version: 2.2.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Alibaba Group Enterprise Linux Server 7.2 (Paladin) (x86_64)
GCC version: (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3 2.17)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.32
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.32
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 34-37,98-101
Off-line CPU(s) list: 0-33,38-97,102-127
Thread(s) per core: 0
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3476.128
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5806.48
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.2.2+cu121
[pip3] torchaudio==2.2.2+cu121
[pip3] torchvision==0.17.2+cu121
[pip3] triton==2.2.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.19.3 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.2.2+cu121 pypi_0 pypi
[conda] torchaudio 2.2.2+cu121 pypi_0 pypi
[conda] torchvision 0.17.2+cu121 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi
(base)
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,789,053,446 | kubernetes | Recovery after injecting memory overload fault, pod cannot be scheduled | ### What happened?
After recovery from an injected memory overload fault, pods cannot be scheduled.
### What did you expect to happen?
After recovery from the injected memory overload fault, pods can be scheduled normally.
### How can we reproduce it (as minimally and precisely as possible)?
Direct cause:
Two PVCs are bound to the same master node (master3), and the pod configuration has anti-affinity, making it impossible to schedule both pods to the same node.
Process analysis:
1. While the memory overload fault is injected, the scheduler schedules multiple replicas of the Deployment: after the first pod is scheduled to master3, the call to the local CSI interface through the volume_manager gets no response for 10 minutes, and the scheduler continues to schedule the other replicas sequentially;
2. When the scheduler handles the second replica pod, the first pod has not yet completed scheduling on master3, so the second replica pod is also scheduled to master3. After 4 minutes the local CSI responds and both volumes are successfully created; the pods are then bound to their PVCs, leaving both PVCs bound to master3;
When the scheduler schedules a pod to master3 and the call to the local CSI interface through the volume_manager gets no response for 10 minutes, the scheduling attempt fails with a timeout. When other pods are then scheduled to master3, the local CSI responds after 4 minutes and successfully binds both volumes to their PVCs, so both PVCs end up bound to master3. The authservice has an anti-affinity configuration and its pods cannot be scheduled to the same node at the same time, so they remain pending.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# Version: v1.28.1
```
</details>
### Cloud provider
<details>
na
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,needs-triage | low | Minor |
2,789,094,450 | flutter | windows BSOD(blue screen) when running `flutter run -d emulator-5554` | ### Steps to reproduce
flutter run -d emulator-5554
A BSOD occurs.
But when I run `flutter run -d emulator-5554` with the `--no-enable-impeller` flag, there's no problem. I think it's an Impeller bug.
### Expected results
BSOD when running `flutter run -d emulator-5554` without the `--no-enable-impeller` flag
### Actual results
BSOD
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| waiting for customer response,in triage | low | Critical |
2,789,115,357 | pytorch | Batching rule for aten::_thnn_fused_gru_cell | ### 🚀 The feature, motivation and pitch
I am currently using `vmap` with GRUCell and got following message:
> There is a performance drop because we have not yet implemented the batching rule for aten::_thnn_fused_gru_cell. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at ../aten/src/ATen/functorch/BatchedFallback.cpp:81.)
According to the [vmap operator support list](https://docs.google.com/spreadsheets/d/1Sp4HUjxwMifS5oDQg0yvjqk7hKOpCfKO4jWH4MTGP-k/edit#gid=0), the batching rule is indeed not implemented yet. Are there any plans to work on this in the near future, or are there any suggestions for a workaround? Would be awesome to have such a batching rule!
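In the meantime, a possible workaround sketch (not the requested batching rule): decompose the GRU cell into primitive ops that already have batching rules, reusing the weights of an existing `nn.GRUCell`:

```python
import torch
import torch.nn.functional as F
from torch.func import vmap

def gru_cell(x, h, cell):
    # Same math as nn.GRUCell; weight rows are ordered (reset | update | new).
    gi = F.linear(x, cell.weight_ih, cell.bias_ih)
    gh = F.linear(h, cell.weight_hh, cell.bias_hh)
    i_r, i_z, i_n = gi.chunk(3, dim=-1)
    h_r, h_z, h_n = gh.chunk(3, dim=-1)
    r = torch.sigmoid(i_r + h_r)
    z = torch.sigmoid(i_z + h_z)
    n = torch.tanh(i_n + r * h_n)
    return (1 - z) * n + z * h

cell = torch.nn.GRUCell(8, 16)
x = torch.randn(4, 8)
h = torch.randn(4, 16)
out = vmap(lambda xi, hi: gru_cell(xi, hi, cell))(x, h)
torch.testing.assert_close(out, cell(x, h))  # matches the fused cell, no fallback warning
```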
### Alternatives
_No response_
### Additional context
_No response_
cc @zou3519 @Chillee @samdow @kshitij12345 | triaged,module: functorch | low | Major |
2,789,119,643 | pytorch | torch.distributed hangs between Linux (X86) and Mac (M2 Pro) | ### 🐛 Describe the bug
I have pared the example code back to the simplest it can be and tried this on both machines. Both ends hang until the timeout.
Linux code:
```
import os
import torch.distributed as dist
from datetime import timedelta
def init_process(rank, world_size):
os.environ["MASTER_ADDR"] = "192.168.10.104"
os.environ["MASTER_PORT"] = "23456"
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"
os.environ["NCCL_DEBUG"] = "INFO"
os.environ["GLOO_SOCKET_IFNAME"] = "enp3s0" # Specify correct network interface
print(f"Rank {rank}: Setting up process group...")
try:
dist.init_process_group(
backend="gloo", # or "nccl" if using GPUs
init_method="tcp://192.168.10.104:23456",
rank=rank,
world_size=world_size,
timeout=timedelta(seconds=120), # Adjust timeout
)
print(f"Rank {rank}: Process group initialized")
except Exception as e:
print(f"Rank {rank}: Error during initialization - {e}")
print(f"Rank {rank}: Reached end of init_process")
if __name__ == "__main__":
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 2))
init_process(rank, world_size)
```
On the Mac:
```
import os
import torch.distributed as dist
from datetime import timedelta
os.environ["MASTER_ADDR"] = "192.168.10.104" # Linux IPv4 address
os.environ["MASTER_PORT"] = "23456"
os.environ["GLOO_SOCKET_IFNAME"] = "en0" # Specify correct network interface
print("Rank 0: Setting up process group...")
dist.init_process_group(
backend="gloo",
init_method="tcp://192.168.10.104:23456", # Replace with Linux IPv4
rank=1,
world_size=2,
timeout=timedelta(seconds=60),
)
print("Rank 0: Process group initialized")
```
I have checked network connectivity between the machines and there are no firewall issues on either side.
I have used up all my Claude.ai & ChatGPT credits investigating ways around this and finally decided to raise it as an issue.
Hopefully someone real can help :-)
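A minimal store-only handshake sketch (no collectives) may help show whether the hang is in the TCPStore rendezvous or in gloo itself, using the same host/port as the scripts above:

```python
from datetime import timedelta
from torch.distributed import TCPStore

MASTER = "192.168.10.104"   # Linux box, as above
PORT = 23456
WORLD_SIZE = 2
IS_MASTER = True            # run with True on the Linux box, False on the Mac

store = TCPStore(MASTER, PORT, WORLD_SIZE, IS_MASTER, timedelta(seconds=30))
if IS_MASTER:
    store.set("hello", "from-linux")
    print(store.get("hello-back"))
else:
    store.set("hello-back", "from-mac")
    print(store.get("hello"))
```

If even this handshake times out, the problem is in basic TCP reachability or interface selection rather than in the gloo backend.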
### Versions
Linux PyTorch version:
`python -c "import torch; print(torch.__version__)"
2.5.1+cpu
`
Mac PyTorch version:
` python -c "import torch; print(torch.__version__)"
2.5.1
`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,789,159,372 | pytorch | [torch.export] Error When Trying To Express Dynamism For Transformer Model of SD3 | ### 🐛 Describe the bug
**Brief Description:**
I am trying to export the transformer model of Stable Diffusion 3 using `torch.export.export_for_training`. The error occurs when trying to express dynamism for the feature map height and width. The reproducer and traceback are given below.
**Reproducible Code:**
```python
import torch
import numpy as np
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", text_encoder_3=None, tokenizer_3=None)
unet_kwargs = {}
unet_kwargs["hidden_states"] = torch.ones((2, 16, 64, 64))
unet_kwargs["timestep"] = torch.from_numpy(np.array([1, 2], dtype=np.float32))
unet_kwargs["encoder_hidden_states"] = torch.ones((2, 154, 4096))
unet_kwargs["pooled_projections"] = torch.ones((2, 2048))
#Feature map height and width are dynamic
fm_height = torch.export.Dim('fm_height', min=16)
fm_width = torch.export.Dim('fm_width', min=16)
#iterate through the unet kwargs and set only hidden state kwarg to dynamic
dynamic_shapes = {key: (None if key != "hidden_states" else {2: fm_height, 3: fm_width}) for key in unet_kwargs.keys()}
transformer = torch.export.export_for_training(pipe.transformer.eval(), args=(), kwargs=(unet_kwargs), dynamic_shapes=dynamic_shapes).module()
```
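An untested variant of the spec that might get past the violation: give both dims an explicit upper bound, since the generated guards in the traceback below effectively cap the latent height/width well below the default unbounded max:

```python
# Untested sketch: bounded Dims instead of an open-ended max.
fm_height = torch.export.Dim('fm_height', min=16, max=256)
fm_width = torch.export.Dim('fm_width', min=16, max=256)
dynamic_shapes = {key: (None if key != "hidden_states" else {2: fm_height, 3: fm_width}) for key in unet_kwargs.keys()}
transformer = torch.export.export_for_training(
    pipe.transformer.eval(), args=(), kwargs=unet_kwargs, dynamic_shapes=dynamic_shapes
).module()
```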
**Error Traceback:**
```
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Error while creating guard:
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Name: ''
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Source: shape_env
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Create Function: SHAPE_ENV
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Guard Types: None
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Code List: None
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Object Weakref: None
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Guarded Class Weakref: None
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] Traceback (most recent call last):
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_guards.py", line 281, in create
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] return self.create_fn(builder, self)
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 1836, in SHAPE_ENV
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] guards = output_graph.shape_env.produce_guards(
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4178, in produce_guards
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] raise ConstraintViolationError(
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (fm_height, fm_width)! For more information, run with TORCH_LOGS="+dynamic".
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_height = L['hidden_states'].size()[2] in the specified range 16 <= fm_height <= 9223372036854775806 satisfy the generated guard Ne(((-(L['hidden_states'].size()[2]//2))//2) + 96, 0).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_height = L['hidden_states'].size()[2] in the specified range 16 <= fm_height <= 9223372036854775806 satisfy the generated guard (L['hidden_states'].size()[2]//2) + ((-(L['hidden_states'].size()[2]//2))//2) + 96 <= 192.
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard Ne(((-(L['hidden_states'].size()[3]//2))//2) + 96, 0).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard (L['hidden_states'].size()[3]//2) + ((-(L['hidden_states'].size()[3]//2))//2) + 96 <= 192.
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard Ne(294912, 1536*((L['hidden_states'].size()[3]//2))).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_height = L['hidden_states'].size()[2] in the specified range 16 <= fm_height <= 9223372036854775806 satisfy the generated guard Ne(Mod((L['hidden_states'].size()[2]//2), 2*((L['hidden_states'].size()[2]//2))), 0).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard Ne(Mod((L['hidden_states'].size()[3]//2), 2*((L['hidden_states'].size()[3]//2))), 0).
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_height = L['hidden_states'].size()[2] in the specified range 16 <= fm_height <= 9223372036854775806 satisfy the generated guard 16 <= L['hidden_states'].size()[2] and L['hidden_states'].size()[2] <= 385
E0115 12:29:57.483000 144841 torch/_guards.py:283] [8/0] - Not all values of fm_width = L['hidden_states'].size()[3] in the specified range 16 <= fm_width <= 9223372036854775806 satisfy the generated guard 16 <= L['hidden_states'].size()[3] and L['hidden_states'].size()[3] <= 385
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] Created at:
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 615, in transform
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] tracer = InstructionTranslator(
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2670, in __init__
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] output=OutputGraph(
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 317, in __init__
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] self.init_ambient_guards()
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] File "/home/user/Downloads/ov_notebooks_sd3/openvino_notebooks/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 463, in init_ambient_guards
E0115 12:29:57.485000 144841 torch/_guards.py:285] [8/0] self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV))
torch/export/__init__.py:154, in export_for_training(mod, args, kwargs, dynamic_shapes, strict, preserve_module_call_signature)
148 if isinstance(mod, torch.jit.ScriptModule):
149 raise ValueError(
150 "Exporting a ScriptModule is not supported. "
151 "Maybe try converting your ScriptModule to an ExportedProgram "
152 "using `TS2EPConverter(mod, args, kwargs).convert()` instead."
153 )
--> 154 return _export_for_training(
155 mod,
156 args,
157 kwargs,
158 dynamic_shapes,
159 strict=strict,
160 preserve_module_call_signature=preserve_module_call_signature,
161 )
torch/export/_trace.py:1017, in _log_export_wrapper.<locals>.wrapper(*args, **kwargs)
1010 else:
1011 log_export_usage(
1012 event="export.error.unclassified",
1013 type=error_type,
1014 message=str(e),
1015 flags=_EXPORT_FLAGS,
1016 )
-> 1017 raise e
1018 finally:
1019 _EXPORT_FLAGS = None
torch/export/_trace.py:990, in _log_export_wrapper.<locals>.wrapper(*args, **kwargs)
988 try:
989 start = time.time()
--> 990 ep = fn(*args, **kwargs)
991 end = time.time()
992 log_export_usage(
993 event="export.time",
994 metrics=end - start,
995 flags=_EXPORT_FLAGS,
996 **get_ep_stats(ep),
997 )
torch/export/exported_program.py:114, in _disable_prexisiting_fake_mode.<locals>.wrapper(*args, **kwargs)
111 @functools.wraps(fn)
112 def wrapper(*args, **kwargs):
113 with unset_fake_temporarily():
--> 114 return fn(*args, **kwargs)
torch/export/_trace.py:1746, in _export_for_training(mod, args, kwargs, dynamic_shapes, strict, preserve_module_call_signature)
1727 (
1728 args,
1729 kwargs,
(...)
1732 dynamic_shapes,
1733 ) = _process_export_inputs(mod, args, kwargs, dynamic_shapes)
1735 export_func = (
1736 functools.partial(
1737 _strict_export_lower_to_aten_ir,
(...)
1744 )
1745 )
-> 1746 export_artifact = export_func( # type: ignore[operator]
1747 mod=mod,
1748 args=args,
1749 kwargs=kwargs,
1750 dynamic_shapes=dynamic_shapes,
1751 preserve_module_call_signature=preserve_module_call_signature,
1752 pre_dispatch=False,
1753 original_state_dict=original_state_dict,
1754 orig_in_spec=orig_in_spec,
1755 allow_complex_guards_as_runtime_asserts=False,
1756 _is_torch_jit_trace=False,
1757 )
1759 export_graph_signature = export_artifact.aten.sig
1761 forward_arg_names = _get_forward_arg_names(mod, args, kwargs)
torch/export/_trace.py:1252, in _strict_export_lower_to_aten_ir(mod, args, kwargs, dynamic_shapes, preserve_module_call_signature, pre_dispatch, original_state_dict, orig_in_spec, allow_complex_guards_as_runtime_asserts, _is_torch_jit_trace, lower_to_aten_callback)
1239 def _strict_export_lower_to_aten_ir(
1240 mod: torch.nn.Module,
1241 args: Tuple[Any, ...],
(...)
1250 lower_to_aten_callback: Callable,
1251 ) -> ExportArtifact:
-> 1252 gm_torch_level = _export_to_torch_ir(
1253 mod,
1254 args,
1255 kwargs,
1256 dynamic_shapes,
1257 preserve_module_call_signature=preserve_module_call_signature,
1258 restore_fqn=False, # don't need to restore because we will do it later
1259 allow_complex_guards_as_runtime_asserts=allow_complex_guards_as_runtime_asserts,
1260 _log_export_usage=False,
1261 )
1263 # We detect the fake_mode by looking at gm_torch_level's placeholders, this is the fake_mode created in dynamo.
1264 (
1265 fake_args,
1266 fake_kwargs,
1267 dynamo_fake_mode,
1268 ) = _extract_fake_inputs(gm_torch_level, args, kwargs)
torch/export/_trace.py:560, in _export_to_torch_ir(f, args, kwargs, dynamic_shapes, preserve_module_call_signature, disable_constraint_solver, allow_complex_guards_as_runtime_asserts, restore_fqn, _log_export_usage, same_signature)
556 module_call_specs: Dict[str, Dict[str, pytree.TreeSpec]] = {}
557 with _wrap_submodules(
558 f, preserve_module_call_signature, module_call_specs
559 ), _ignore_backend_decomps():
--> 560 gm_torch_level, _ = torch._dynamo.export(
561 f,
562 dynamic_shapes=transformed_dynamic_shapes, # type: ignore[arg-type]
563 tracing_mode="symbolic",
564 disable_constraint_solver=disable_constraint_solver,
565 # currently the following 2 flags are tied together for export purposes,
566 # but untangle for sake of dynamo export api
567 prefer_deferred_runtime_asserts_over_guards=True,
568 allow_complex_guards_as_runtime_asserts=allow_complex_guards_as_runtime_asserts,
569 _log_export_usage=_log_export_usage,
570 same_signature=same_signature,
571 )(
572 *args,
573 **kwargs,
574 )
575 except (ConstraintViolationError, ValueRangeError) as e:
576 raise UserError(UserErrorType.CONSTRAINT_VIOLATION, str(e)) # noqa: B904
torch/_dynamo/eval_frame.py:1448, in export.<locals>.inner(*args, **kwargs)
1446 dim_constraints.solve()
1447 forced_specializations = dim_constraints.forced_specializations()
-> 1448 msg = dim_constraints.prettify_results(
1449 original_signature,
1450 dynamic_shapes,
1451 constraint_violation_error,
1452 forced_specializations,
1453 )
1454 if constraint_violation_error:
1455 constraint_violation_error.args = (
1456 constraint_violation_error.args[0] + msg,
1457 )
torch/fx/experimental/symbolic_shapes.py:2248, in DimConstraints.prettify_results(self, original_signature, dynamic_shapes, constraint_violation_error, forced_specializations)
2245 for s, val in forced_specializations.items():
2246 buf += f" - solving the guards generated for {s} resulted in a specialized value of {val}.\n"
-> 2248 self._process_derived_dim_roots(results, name_to_dim)
2250 dims = []
2251 others = []
torch/fx/experimental/symbolic_shapes.py:2064, in DimConstraints._process_derived_dim_roots(self, results, name_to_dim)
2062 # create result & dim
2063 results[str(root)] = {"min": min_, "max": max_}
-> 2064 name_to_dim[str(root)] = Dim(str(root), min=min_, max=max_)
2065 # remove old root min/max bounds
2066 c.pop("min", None)
torch/export/dynamic_shapes.py:227, in Dim(name, min, max)
225 _min = 0 if min is None else min
226 _max = int_oo if max is None else max
--> 227 assert _max > _min, f"Cannot create Dim with inconsistent min={min}, max={max}"
228 assert name.isidentifier(), f"Dim name must be a valid identifier, got {name}"
229 dim = _Dim(name, (int,), {"min": _min, "max": _max})
AssertionError: Cannot create Dim with inconsistent min=-4, max=-96
```
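As a side note, the only mitigation I could think of on the user side is to give the dims an explicit upper bound. This is a hedged sketch only: the bound of 384 is my reading of the generated guards above (roughly, half the spatial size must stay <= 192), and whether this actually avoids the internal `Cannot create Dim with inconsistent min=-4, max=-96` assertion is unverified.
```
# Hedged sketch only: explicit upper bounds read off the generated guards above.
fm_height = torch.export.Dim('fm_height', min=16, max=384)
fm_width = torch.export.Dim('fm_width', min=16, max=384)
dynamic_shapes = {
    key: (None if key != "hidden_states" else {2: fm_height, 3: fm_width})
    for key in unet_kwargs.keys()
}
```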
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 560.28.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy[/](https://file+.vscode-resource.vscode-cdn.net/)swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy==1.12.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1+cpu
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,789,200,344 | godot | AnimationTree.advance() on custom thread breaks update of the skeleton. | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated GeForce 920M - Intel(R) Core(TM) i7-5500U CPU @ 2.40GHz (4 Threads)
### Issue description
I've been unhappy with the performance of skeletal animations, so I moved all animation calculation onto a **custom** thread inherited from Godot `Thread`. It turns out that this is not supported, and you'll say "of course it's not". But! You can make it work with some tricks. You need to call `advance()` on the main thread once to create the cache, and then it can be updated from the thread. It also needs unique instances of `AnimationTreeNode`; sharing the animation root is not an option here.
At first it gave me a **4x** performance boost, and it was not interfering with the main thread ( _as opposed to running animation on a **SubThread**, which slows down the main thread. On a **SubThread** it works fine, nothing breaks._ )
So while it runs fine at first, after some time the **skeleton** that is connected to the `AnimationTree` stops updating positions. But transforms are still being calculated and the animation keeps going. As you can see in the video below, I'm copying (_using a simple_ `SkeletonModifier3D`) transforms from the `AnimationTree` skeleton to **another skeleton**, and that one is still being animated.
https://github.com/user-attachments/assets/7cc009f2-8165-4f69-b0e2-5bb213618e95
Anyway, I've decided to report it as a bug, since this is possible and I'm not the only one who will try it. And I feel like it is totally achievable.
One thing: calling `advance()` on a thread will print an error to the console.
That is because of `emit_signal(SNAME("mixer_applied"));` in `AnimationMixer::_process_animation`. I have my own edited version of the engine where I just did this:
```
if(Thread::is_main_thread())
    emit_signal(SNAME("mixer_applied"));
```
The behavior of the bug is the same regardless.
### Steps to reproduce
Run the test project provided and wait for a while. Sometimes you need to wait a minute; sometimes it's 30 minutes.
### Minimal reproduction project (MRP)
[test_manual_advance.zip](https://github.com/user-attachments/files/18421243/test_manual_advance.zip) | discussion,needs testing,topic:animation | low | Critical |
2,789,208,264 | opencv | The findContours function is significantly slower in version 4.11 compared to version 4.7. | ### Describe the feature and motivation
Thank you for your work, but I tested and found that the speed of this function in version 4.11 is slower than in version 4.7.
Here are the test images and test results.
``` c++
#include <opencv2/opencv.hpp>
#include <chrono>
#include <iostream>

using namespace std;
using namespace cv;
int main()
{
    Mat img = imread("111.bmp", cv::IMREAD_GRAYSCALE);
    cv::Mat mask;
    inRange(img, 110, 120, mask);
    std::vector<std::vector<cv::Point>> v_cont;
    for (int i = 0; i < 10; i++)
    {
        v_cont.clear();
        auto start = chrono::system_clock::now();
        cv::findContours(mask, v_cont, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        auto end = chrono::system_clock::now();
        auto duration = chrono::duration_cast<chrono::microseconds>(end - start) / 1000.;
        cout << "4.11 findContours cost " << double(duration.count()) << " ms" << endl;
    }
}
```
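For anyone who wants to reproduce this from the Python bindings, a roughly equivalent benchmark sketch (assuming the same `111.bmp` test image) would be:
```python
import time
import cv2

img = cv2.imread("111.bmp", cv2.IMREAD_GRAYSCALE)
mask = cv2.inRange(img, 110, 120)

for _ in range(10):
    t0 = time.perf_counter()
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    t1 = time.perf_counter()
    print(f"findContours cost {(t1 - t0) * 1000:.3f} ms")
```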



### Additional context
_No response_ | feature | low | Major |
2,789,218,496 | flutter | Mac_mokey run_debug_test_android is 10.42% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Mac_mokey run_debug_test_android"
}
-->
The post-submit test builder `Mac_mokey run_debug_test_android` had a flaky ratio 10.42% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1937
Commit: https://github.com/flutter/flutter/commit/e23c31265d75644fa89423321a5504620f09fa1c
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1937
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1934
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1932
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1931
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1924
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1922
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1917
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1910
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1909
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1905
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1902
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1900
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_mokey%20run_debug_test_android/1897
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Mac_mokey%20run_debug_test_android
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P0,c: flake,team-tool | high | Critical |
2,789,218,911 | flutter | Mac_arm64 build_tests_2_4 is 2.08% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Mac_arm64 build_tests_2_4"
}
-->
The post-submit test builder `Mac_arm64 build_tests_2_4` had a flaky ratio 2.08% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_arm64%20build_tests_2_4/4973
Commit: https://github.com/flutter/flutter/commit/0009cc358ff7e2c06d67b239cfa1f054cff93132
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_arm64%20build_tests_2_4/4973
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_arm64%20build_tests_2_4/4955
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Mac_arm64%20build_tests_2_4
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P0,c: flake,team-tool | high | Minor |
2,789,250,049 | rust | `-Z split-dwarf-kind={single,split}` is undocumented in the unstable book | Implementation history:
- In [rustc: Stabilize `-Z run-dsymutil` as `-C split-debuginfo` #79570][pr-79570]:
- `-Z split-dwarf=single` became `-C split-debuginfo=packed`
- `-Z split-dwarf=split` became `-C split-debuginfo=unpacked`
- However, as @davidtwco noted in 08ed338f561b000ce5672b55c0545fa7f3f13591 ([cg: split dwarf for crate dependencies #89819][pr-89819]):
> In https://github.com/rust-lang/rust/pull/79570, `-Z split-dwarf-kind={none,single,split}` was replaced by `-C split-debuginfo={off,packed,unpacked}`. `-C split-debuginfo`'s packed and unpacked aren't exact parallels to single and split, respectively.
- As such, #89819 introduced `-Z split-dwarf-kind={single,split}` to fill in the gap.
This unstable compiler flag doesn't seem to have an entry in the unstable book; I only discovered it while looking at `tests/run-make/split-debuginfo`. It would be nice to document its behavior and the meaning of its values.
[pr-79570]: https://github.com/rust-lang/rust/pull/79570
[pr-89819]: https://github.com/rust-lang/rust/pull/89819 | A-debuginfo,T-compiler,A-docs,A-CLI | low | Critical |
2,789,268,640 | ui | [bug]: After first rendering, when clicking on the bar, the tooltip position is incorrect. | ### Describe the bug
When I initially click on the bar after rendering, the tooltip moves to the wrong position.
In the example, I clicked on Jun and the tooltip's position changed even though the mouse position did not change.


The error occurs only the first time you click; it operates normally from then on.
### Affected component/components
Chart
### How to reproduce
1. Bar Chart
2. click the bar
3. tooltip position changed, even though mouse position did not change.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Mac, Chrome 131.0.6778.140(arm64)
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,789,299,329 | ant-design | ✨ A note on the somewhat poor row-expansion experience in the Table component with virtual scrolling | ### What problem does this feature solve?
When row expansion is used together with virtual scrolling, clicking to expand the first row makes it expand upward, while the second row and the rows below it expand as expected.
demo🔗: https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-zrzvf7
### What does the proposed API look like?
After clicking to expand, the first row should expand downward just like the second and subsequent rows, rather than expanding upward.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug | low | Major |
2,789,302,377 | transformers | Issue with Progressive Generation Using inputs_embeds and past_key_values | ### System Info
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.17
- Python version: 3.8.20
- Huggingface_hub version: 0.26.1
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA RTX A6000
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I am currently rewriting the generate_progressively function for my custom model class. My goal is to enable the model to generate results progressively by concatenating the initial input_ids with each element of the compress_outputs sequence in turn. Specifically:
1. In the first iteration, the model generates results by concatenating input_ids with the first element of compress_outputs.
2. In the second iteration, it concatenates input_ids with the first and second elements of compress_outputs (the first two elements) to generate results.
3. This process continues until the last element of the compress_outputs sequence is included.
To improve efficiency, I want to leverage caching, as the majority of the concatenated input in each iteration has already been used to compute past_key_values. Below is the code snippet for the function I implemented. In this context, self.model refers to mistral-7b-chat-v0.2.
```
@torch.no_grad()
def generate_progressively(
    self,
    input_ids,
    attention_mask,
    compress_outputs,
    **kwargs,
):
    results = []
    compress_output_count = compress_outputs.size(1)
    batch_size = input_ids.size(0)
    inputs_embs = self.base.model.embed_tokens(input_ids)
    prompt_cache = DynamicCache()
    outputs = self.model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        use_cache=True,
        past_key_values=prompt_cache,
    )
    prompt_cache = outputs.past_key_values
    for compress_ind in range(compress_output_count):
        current_compress_outputs = compress_outputs[:, compress_ind: compress_ind+1, :].type_as(input_ids)
        outputs = self.model(
            input_ids=None,
            inputs_embeds=current_compress_outputs,
            use_cache=True,
            past_key_values=prompt_cache,
        )
        prompt_cache = outputs.past_key_values
        inputs_embs = torch.cat([inputs_embs, current_compress_outputs], dim=1)
        attention_mask = torch.cat([attention_mask, torch.ones(batch_size, 1, device=input_ids.device)], dim=1)
        generated_outputs = self.base.generate(
            inputs_embeds=inputs_embs,
            attention_mask=attention_mask,
            use_cache=True,
            past_key_values=prompt_cache,
            return_dict_in_generate=True,
            **kwargs,
        )
        results.append(generated_outputs.sequences)
    return results
```
When I execute this code, the program throws an error during execution. The error occurs at line 393 in transformers/generation/utils.py, specifically in the prepare_inputs_for_generation function.
The problematic line of code is:
```
if inputs_embeds is not None and cache_position[0] == 0:
```
The error message is: IndexError: index 0 is out of bounds for dimension 0 with size 0.
I traced the execution of the code, and here's a detailed breakdown of the issue:
The error occurs in transformers/generation/utils.py. Initially, the program enters the self._sample function and then proceeds to the self._get_initial_cache_position function.
Within this function, the following line:
```
if not is_torchdynamo_compiling():
    cache_position = cache_position[past_length:]
```
causes the correct cache_position slice to become empty, resulting in an IndexError in subsequent steps.
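To make the failure mode concrete, here is a small numeric illustration of that slice (the lengths are made up for illustration):
```
import torch

seq_len = 120                                   # length of the inputs_embeds passed to generate()
past_length = 120                               # everything is already covered by past_key_values
cache_position = torch.arange(seq_len)          # tensor([0, 1, ..., 119])
cache_position = cache_position[past_length:]   # -> tensor([], dtype=torch.int64)
cache_position[0]                               # IndexError: index 0 is out of bounds for dimension 0 with size 0
```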
Even if I manage to fix the issue with cache_position, another problem arises later in the self.prepare_inputs_for_generation function.
The relevant code is as follows:
```
if not self.config.is_encoder_decoder:
    if inputs_embeds is not None and cache_position[0] == 0:
        model_inputs[input_ids_key] = None
        model_inputs["inputs_embeds"] = inputs_embeds
    else:
        model_inputs[input_ids_key] = input_ids.clone(memory_format=torch.contiguous_format)
        model_inputs["inputs_embeds"] = None
```
In my case, I provide only inputs_embeds and past_key_values, and since cache_position[0] is not 0, the code attempts to set model_inputs[input_ids_key] using input_ids. However, since input_ids is None, this results in further issues.
Under the current implementation of the generate function in transformers, is it possible to use only inputs_embeds and past_key_values for generation? How can I modify my implementation to achieve progressive generation with caching as intended? Are there specific guidelines for correctly managing cache_position and ensuring compatibility with inputs_embeds?
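For reference, the cache-reuse path that `generate()` does support today goes through `input_ids` rather than `inputs_embeds`. A minimal sketch of that pattern is below (the model id and prompts are placeholders, not from my code above); what I am ultimately after is the same behaviour driven by `inputs_embeds`:
```
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Prefill a cache once with the shared prefix.
prefix = "Shared prefix used for every request."
prefix_inputs = tokenizer(prefix, return_tensors="pt").to(model.device)
prompt_cache = DynamicCache()
with torch.no_grad():
    prompt_cache = model(**prefix_inputs, use_cache=True, past_key_values=prompt_cache).past_key_values

# Reuse it: pass the *full* token sequence plus a copy of the prefilled cache.
full_inputs = tokenizer(prefix + " And a new question.", return_tensors="pt").to(model.device)
out = model.generate(**full_inputs, past_key_values=copy.deepcopy(prompt_cache), max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```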
### Expected behavior
My primary objective is to progressively generate outputs by leveraging caching (past_key_values) to improve efficiency. | bug,Generation | low | Critical |
2,789,387,294 | flutter | I am getting an error after running the command flutter doctor | ### Steps to reproduce
Because flutter_tools depends on vm_service_interface 1.1.0 which doesn't match any versions, version solving failed.
### Actual results
Because flutter_tools depends on vm_service_interface 1.1.0 which doesn't match any versions, version solving failed.
### Logs
<details open>
<summary>Logs</summary>
```console
<!-- Paste your logs here -->
```
</details>
### Flutter Doctor output
<details open>
<summary>Doctor output</summary>
```console
<!-- Paste your output here -->
```
</details>
| waiting for customer response,in triage | low | Critical |
2,789,397,403 | kubernetes | Prevent alpha feature gates from being enabled by default | ### What happened?
An alpha feature was accidentally introduced as on-by-default, which probably should not be allowed.
Maybe there should be some CI checks to prevent this from happening in the future?
https://github.com/kubernetes/kubernetes/blob/2d0a4f75560154454682b193b42813159b20f284/pkg/features/versioned_kube_features.go#L826
### What did you expect to happen?
Alpha features should not be on-by-default
### How can we reproduce it (as minimally and precisely as possible)?
https://github.com/kubernetes/kubernetes/blob/2d0a4f75560154454682b193b42813159b20f284/pkg/features/versioned_kube_features.go#L826
### Anything else we need to know?
/cc @enj
### Kubernetes version
<details>
v1.32
</details>
### Cloud provider
<details>
N/A
</details>
### OS version
<details>
N/A
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/windows,sig/architecture,needs-triage | low | Major |
2,789,430,653 | PowerToys | Shortcut capability to easily change main display | ### Description of the new feature / enhancement
Key combination to avoid having to use Settings>Display>[click on desired display]>Make this my main display.
### Scenario when this would be used?
Some games will only run on the main display. The main display may not be the desired display when not running the game. A quick way to switch main displays with shortcuts would be nice.
### Supporting information
Similar to the NirSoft MultiMonitorControl utility, whose executable I'm having trouble running from the command line in a batch file -- it fails with "The app will not run on this PC."
2,789,447,061 | ui | [bug]: a couple of issues on iPhone 12 mini, viewport: 375x812 | ### Describe the bug
1. carousel not filling width of container, works fine on other iPhone's and Android devices.
<img width="406" alt="Screenshot 2025-01-15 at 10 37 23" src="https://github.com/user-attachments/assets/3ac0acd6-235e-4e40-83ad-bb66ef17a740" />
2. The sheet content originating from top will create some glitch-y movement when trying to close.
### Affected component/components
Carousel, Sheet
### How to reproduce
Open website on iPhone 12 mini with these components and notice various issues.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
macOS 15.2, node v22.12.0, pnpm 9.15.3
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,789,449,987 | terminal | Dimmed UAC-prompt in Windows freezes it when logged in Windows terminal with user other than the one logged in | ### Windows Terminal version
1.21.3231.0
### Windows build number
10.0.26100.0
### Other Software
A UAC-prompt that dims Windows triggers this issue, which freezes the entire OS.
### Steps to reproduce
1. Start Windows Terminal either as Administrator (if the logged-in user account does not have admin privileges) or as another user, using an account other than the one logged in.
2. Perform any action that triggers a UAC-prompt in Windows. The UAC setting needs to be configured so it dims Windows.
### Expected Behavior
UAC-prompt should appear as normal when Windows Terminal is started with a different account than the one logged in Windows.
### Actual Behavior
Windows freezes when a UAC-prompt pops up for any reason, even one not concerning Windows Terminal. This happens when Windows Terminal is started with a different user than the one logged in to Windows. I have reproduced this with a user that has admin privileges; I am unsure whether it also happens with accounts without admin privileges. Doing anything that triggers a UAC-prompt that dims Windows freezes it completely: the screen dims, but no UAC-prompt is displayed. You can CTRL+ALT+DEL back to Windows, but all windows and the taskbar are almost non-responsive. You can control the cursor, but it does not seem to register any clicks, and nothing from the keyboard either. The CPU works extra hard (its fan speeds up to max), and there is a considerable delay when clicking on different windows; they do respond, but so heavily delayed that Windows is not usable.
The only way to resolve the issue is to turn off the computer using the power button.
Cannot find any issue within the Event viewer. This issue has been reproduced on two different laptops with multiple reinstalls of Windows.
| Issue-Bug,Needs-Author-Feedback,Needs-Triage,No-Recent-Activity | low | Major |
2,789,461,875 | rust | `PostAnalysisNormalize` can introduce new `{type_error}` | ```rust
#![feature(type_alias_impl_trait)]

type Tait = impl Copy;

fn set(x: &isize) -> isize {
    *x
}

fn d(x: Tait) {
    set(x);
}

fn other_define() -> Tait {
    ()
}

fn main() {}
```
This results in an ICE with `rustc src/main.rs -Zvalidate-mir -Zinline-mir=yes`
```
error: concrete type differs from previous defining opaque type use
--> src/main.rs:13:5
|
13 | ()
| ^^ expected `&'static isize`, got `()`
|
note: previous use here
--> src/main.rs:9:5
|
9 | set(x);
| ^^^^^^
error: internal compiler error: compiler/rustc_middle/src/mir/tcx.rs:118:21: deref projection of non-dereferenceable ty PlaceTy { ty: {type error}, variant_index: None }
thread 'rustc' panicked at compiler/rustc_middle/src/mir/tcx.rs:118:21:
Box<dyn Any>
stack backtrace:
15: 0x7f1f7416d320 - rustc_middle[6ec8af473b9179e3]::util::bug::bug_fmt
16: 0x7f1f7871f15c - <rustc_middle[6ec8af473b9179e3]::mir::tcx::PlaceTy>::projection_ty.cold
17: 0x7f1f77ccac3c - rustc_mir_transform[ddf46edc4b25c392]::validate::validate_types
18: 0x7f1f77cc03da - <rustc_mir_transform[ddf46edc4b25c392]::validate::Validator as rustc_mir_transform[ddf46edc4b25c392]::pass_manager::MirPass>::run_pass
19: 0x7f1f75584745 - rustc_mir_transform[ddf46edc4b25c392]::pass_manager::validate_body
20: 0x7f1f77204b6c - rustc_mir_transform[ddf46edc4b25c392]::pass_manager::run_passes_inner
21: 0x7f1f7733d234 - rustc_mir_transform[ddf46edc4b25c392]::optimized_mir
```
The body of `d` does not contain any type errors while building, so we actually build a body:
https://gist.github.com/lcnr/70f559aa69c4a733ca28f47b4a247113
We then normalize `Tait` to `{type error}`: https://gist.github.com/lcnr/92bde9605c555818cc90ea4e0321a258
Resulting in a deref projection of that type error after inlining: https://gist.github.com/lcnr/138f486799689a556d840f3a172b6068
We've got multiple ways forward here:
- more gracefully handle type errors in mir validation/optimizations
- replace the body with a dummy when normalizing to a type error
I believe this can be triggered with RPIT as well, it'd just be a bit more cumbersome | I-ICE,T-compiler,C-bug | low | Critical |
2,789,470,536 | pytorch | RuntimeError: upsample_nearest3d only supports output tensors with less than INT_MAX elements | ### 🐛 Describe the bug
Upscaling a tensor with `upsample_nearest3d` where the result size would exceed 2^31 causes a `RuntimeError`. Code to reproduce:
```
import torch
x = torch.ones((1, 256, 16, 720, 1280), dtype=torch.bfloat16).cuda()
out = torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')
assert (out[0] == out[-1]).all()
```
Gives the following error:
```
File "test.py", line 3, in <module>
out = torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/functional.py", line 4651, in interpolate
return torch._C._nn.upsample_nearest3d(input, output_size, scale_factors)
RuntimeError: upsample_nearest3d only supports output tensors with less than INT_MAX elements, but got [1, 256, 32, 1440, 2560]
```
Same behaviour can be observed also with `torch2.5.1` in both CUDA and HIP environments.
This is a limitation for some models; see e.g. the following [diffusers source code](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L107-L116). The same error can occur due to [L115](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/autoencoders/autoencoder_kl_hunyuan_video.py#L115), and working around it requires setting `vae.enable_tiling()`.
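Since nearest-neighbour upsampling treats every channel independently, a possible user-side workaround (memory permitting) is to chunk the input along the channel dimension so that each individual kernel call stays below the INT_MAX limit. This is only a hedged sketch, not a fix for the underlying kernel limitation:
```
import torch
import torch.nn.functional as F

def interpolate_nearest_chunked(x: torch.Tensor, scale_factor: int = 2, chunks: int = 16) -> torch.Tensor:
    # Nearest interpolation is independent per channel, so splitting along dim=1
    # and concatenating is equivalent to a single call; chunks=16 keeps each
    # per-call output (~1.9e9 elements for the repro above) below INT_MAX.
    parts = [
        F.interpolate(part, scale_factor=scale_factor, mode="nearest")
        for part in x.chunk(chunks, dim=1)
    ]
    return torch.cat(parts, dim=1)

x = torch.ones((1, 256, 16, 720, 1280), dtype=torch.bfloat16).cuda()
out = interpolate_nearest_chunked(x)  # shape [1, 256, 32, 1440, 2560]
```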
### Versions
```
PyTorch version: 2.6.0.dev20241122+rocm6.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41133-dd7f95766
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.3.1 24491 1e0fda770a2079fbd71e4b70974d74f62fd3af10)
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41133
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] optree==0.11.0
[pip3] pytorch-triton-rocm==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241122+rocm6.2
[pip3] torchaudio==2.5.0.dev20241206+rocm6.2
[pip3] torchvision==0.20.0.dev20241206+rocm6.2
[pip3] triton==3.0.0
[conda] No relevant packages
``` | triaged,module: 64-bit,module: interpolation | low | Critical |
2,789,474,279 | kubernetes | [Flaking test] Service is not reachable within 2m0s timeout on endpoint 172.31.0.12:xx over TCP protocol | ### Which jobs are flaking?
master-informing
- Conformance-EC2-arm64-master
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite.[It] [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite.[It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite.[It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
### Since when has it been flaking?
According to Triage, these tests have been flaking since the beginning of January.
[15/01/2025, 15:23:29](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ubuntu-ec2-containerd/1879383533215027200)
[15/01/2025, 05:40:06](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-ec2-arm64-conformance-latest/1879236563192254464)
[14/01/2025, 15:24:02](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ubuntu-ec2-arm64-containerd/1879021141591330816)
[13/01/2025, 12:38:25](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ec2-eks-al2023/1878617225812774912)
### Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#Conformance%20-%20EC2%20-%20arm64%20-%20master
### Reason for failure (if possible)
Kubernetes e2e suite: [It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] 2m9s
```
{ failed [FAILED] service is not reachable within 2m0s timeout on endpoint 172.31.0.12:32411 over TCP protocol
In [It] at: k8s.io/kubernetes/test/e2e/network/service.go:1463 @ 01/14/25 19:11:58.851
}
```
Kubernetes e2e suite: [It] [sig-network] Services should be able to create a functioning NodePort service [Conformance] 2m10s
```
{ failed [FAILED] service is not reachable within 2m0s timeout on endpoint 172.31.0.12:31225 over TCP protocol
In [It] at: k8s.io/kubernetes/test/e2e/network/service.go:1278 @ 01/14/25 19:53:46.5
}
```
Kubernetes e2e suite: [It] [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] 2m12s
```
{ failed [FAILED] service is not reachable within 2m0s timeout on endpoint 172.31.0.12:30190 over TCP protocol
In [It] at: k8s.io/kubernetes/test/e2e/network/service.go:4265 @ 01/14/25 20:18:35.329
}
```
Kubernetes e2e suite: [It] [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] 2m12s
```
{ failed [FAILED] service is not reachable within 2m0s timeout on endpoint 172.31.0.12:32080 over TCP protocol
In [It] at: k8s.io/kubernetes/test/e2e/network/service.go:4265 @ 01/14/25 20:31:55.907
}
```
### Anything else we need to know?
N/A
### Relevant SIG(s)
/sig network | sig/network,kind/flake,triage/accepted | low | Critical |
2,789,474,688 | next.js | Error: The Edge Function "middleware" size is 1.02 MB and your plan size limit is 1 MB. | ### Link to the code that reproduces this issue
https://github.com/Olgoetz/CraftTech
### To Reproduce
Deploy the project on Vercel.
### Current vs. Expected behavior
Building this project on Vercel results in
Error: The Edge Function "middleware" size is 1.02 MB and your plan size limit is 1 MB. Learn More: https://vercel.link/edge-function-size
altough this function is far below 1 MB according to the build logs (see screenshot)
<img width="1093" alt="Image" src="https://github.com/user-attachments/assets/482fc43d-ba7b-4927-9d3c-c3cfa2eb4be9" />
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:16:46 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T8112
Available memory (MB): 8192
Available CPU cores: 8
Binaries:
Node: 20.18.1
npm: 10.8.2
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Middleware
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
_No response_ | Middleware | low | Critical |
2,789,505,098 | next.js | next v15 with turbo causes "params should be awaited" error without params usage but with icon.svg file and slug in root path in app router | ### Link to the code that reproduces this issue
https://github.com/Cielquan/nextjs-params-async-turbo-issue
### To Reproduce
1. Clone repo
1. Install dependencies e.g. `pnpm i`
1. Run the `dev` script e.g. `pnpm dev`
1. Goto http://localhost:3000/de-DE/app and see icon in tab and no error in console
1. Stop `dev` script
1. Run `dev` script with `--turbo` e.g. `pnpm dev --turbo`
1. Goto http://localhost:3000/de-DE/app and see icon in tab and now error in console:
```
Error: Route "/[locale]/app" used `params.locale`. `params` should be awaited before using its properties. Learn more: https://nextjs.org/docs/messages/sync-dynamic-apis
at tree.children.children.metadata.icon (.next/server/chunks/ssr/c488b_next_dist_esm_build_templates_app-page_3dcdb1.js:71:376)
at Array.map (<anonymous>)
```
1. Delete `src/app/[locale]/app/icon.svg` file
1. Goto http://localhost:3000/de-DE/app and see icon in tab and no error in console
### Current vs. Expected behavior
### Current
`icon.svg` somehow causes an error where `params.locale` is used but not awaited, even though my code does not use `params` anywhere at all.
### Expected
`icon.svg` should not cause the error.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #49~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 6 17:42:15 UTC 2
Available memory (MB): 15994
Available CPU cores: 4
Binaries:
Node: 20.12.1
npm: 10.5.0
Yarn: 1.22.19
pnpm: 9.11.0
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack, Internationalization (i18n)
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | Internationalization (i18n),Turbopack | low | Critical |
2,789,517,382 | ollama | Segmentation fault | ### What is the issue?
Dear Ollama Team,
I encountered an issue while attempting to run the 'ollama service' command after installation. Here are the detailed steps I followed and the error message I received:
1. I downloaded the 'ollama-linux-amd64.tgz' file from your official source because my research network has some access limitations.
2. I unzipped the file to the path '/usr/local/bin/ollama'.
3. When I executed the command 'ollama service', I encountered a 'Segmentation fault' error.
I would appreciate it if you could look into this issue and provide any possible solutions or workarounds. Thank you for your attention to this matter.
Best regards
### OS
Linux
### GPU
Nvidia
### CPU
_No response_
### Ollama version
_No response_ | bug,needs more info | low | Critical |
2,789,519,322 | pytorch | Connection Limitation in PyTorch Distributed (Vanilla) with c10d Rendezvous Backend | ### 🐛 Describe the bug
Hello PyTorch team,
I am encountering an issue while using PyTorch Distributed Vanilla with the c10d rendezvous backend. I am currently running PyTorch version 2.5.1.
When trying to establish connections across multiple nodes, I can only manage up to 75 simultaneous connections. The plan was to test with 128, 256, and 512 nodes, but I can't exceed this 75-connection limit.
The following error occurs when attempting to establish more connections:
```shell
Traceback (most recent call last):
File "/projects/I20240002/alicia.oliveira/arm_dist_env/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
...
raise RendezvousConnectionError(
torch.distributed.elastic.rendezvous.api.RendezvousConnectionError: The connection to the C10d store has failed. See inner exception for details.
```
I am running the setup on an x86 architecture with InfiniBand. I would like to know if this is a known limitation of the c10d rendezvous backend or if there are any configurations or adjustments I can make to allow more connections.
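For what it's worth, here is a hedged sketch of how I could probe the raw `TCPStore` (which the c10d rendezvous backend builds on) outside of torchrun, to check whether the connection cap is at the store level; the host, port and timeout values are placeholders:
```python
from datetime import timedelta
import torch.distributed as dist

# Master node: open the store and wait for clients (host/port are placeholders).
store = dist.TCPStore("0.0.0.0", 29500, is_master=True, timeout=timedelta(seconds=120))

# Each worker node would then connect and write a key, e.g.:
# store = dist.TCPStore("MASTER_ADDR", 29500, is_master=False, timeout=timedelta(seconds=120))
# store.set(f"node-{node_rank}", "ok")
```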
Thank you in advance for your help!
### Versions
PyTorch version: 2.5.1
Rendezvous backend: c10d
Architecture: x86
Network: InfiniBand
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,789,543,808 | PowerToys | Find and Replace feature | ### Description of the new feature / enhancement
I would like to request the addition of a Find and Replace feature in PowerToys. This utility would allow users to search for specific text or patterns within files and optionally replace them, streamlining workflows involving repetitive text edits across multiple documents.
### Scenario when this would be used?
Developers or writers editing configuration files, codebases, or text documents.
Professionals who need to update file content in bulk (e.g., replacing outdated terms or correcting common typos).
Anyone looking to efficiently search and modify text in multiple files or within a specific directory.
Everyone does their work online or in a browser; if this feature is implemented, it will be great for everyone.
### Supporting information
A flexible Find and Replace tool can be inspired by existing solutions like Notepad++ or Visual Studio Code.
Features like regex (regular expression) support, file filtering by type, and case sensitivity toggles would greatly enhance its functionality.
Integrating this feature into PowerToys would provide users with a powerful utility that complements existing tools like FancyZones and PowerToys Run, making PowerToys an even more comprehensive productivity suite. | Needs-Triage | low | Minor |
2,789,550,403 | godot | Visual profiler "16.66 ms" graph rendering out of bounds | ### Tested versions
reproducible in 4.4.dev7; 4.3 stable (v4.3.stable.official [77dcf97d8]); probably in all of 4.x
### System information
Windows 7, 4.4dev7, Compatabillity. GPU: GeForce 540M
### Issue description
Visual bug where the "16.67 ms" measurement in the Visual Profiler goes out of the window's bounds and renders on top of the 3D preview window, as seen in the screenshot.

### Steps to reproduce
Start any scene, open the Visual Profiler, start profiling, stop profiling, turn off "fit to frame", and observe the bug.
### Minimal reproduction project (MRP)
"N/A"
not really needed | bug,topic:editor | low | Critical |
2,789,558,046 | godot | Closing documentation selects random script | ### Tested versions
v4.4.beta.custom_build [4ce466d7f]
### System information
Fedora Linux 40 (KDE Plasma) on Wayland - Wayland display driver, Single-window
### Issue description
Opening documentation by Ctrl+clicking in a script and then closing the documentation with Ctrl+W selects a random script.
https://github.com/user-attachments/assets/99d26238-076a-4099-99fb-208f9d457705
### Steps to reproduce
^
### Minimal reproduction project (MRP)
[doc.zip](https://github.com/user-attachments/files/18423536/doc.zip) | bug,topic:editor,usability | low | Minor |
2,789,569,624 | vscode | Test Explorer Test Item | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Hi Team,
I would like to know whether there is any way to access the test item that gets displayed in the VS Code Test Explorer using the VS Code API.
I want to bind some kind of event to the parent test item shown in the Test Explorer.
I can see that on right click there are some menu items assigned to it, and they perform functions like Go to Test and Hide Test.
I am looking for the same kind of approach on click.
Any help is appreciated. | info-needed,under-discussion | low | Major |
2,789,596,494 | material-ui | [material-ui][Tabs] Out of a sudden broken with esm.sh | ### Steps to reproduce
Steps:
1. Open this link to live example: https://stackblitz.com/edit/stackblitz-starters-a4bz6g3w?file=index.html
2. see that neither Tabs nor TabList is rendered at all
### Current behavior
there are even no dom elements, and also nothing rendered

### Expected behavior
I would like to see tabs in one way or another
### Context
This error happens precisely after clearing my browser cache (I still had an old cache from yesterday in my Firefox, and after deleting it I get the same error, while before that everything was fine). It really broke today, Wednesday 15 January.
I tried several different versions of MUI and the same error occurs everywhere, from 5.16.14 to 6.4.0 (EDIT: I was previously running on 6.1.10, where I still had the working cache mentioned below).
### Your environment
https://stackblitz.com/edit/stackblitz-starters-a4bz6g3w?file=index.html
using firefox and brave
**Search keywords**: tabs | external dependency,component: tabs,package: material-ui | low | Critical |