id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,773,655,811 | go | x/telemetry/config: `collect go/platform/host/darwin/major-version:{20,21,22,23,24}` | ### Summary
We propose to add `go/platform/host/darwin/major-version:{20,21,22,23,24}` to the counters collected in the telemetry config. These counters correspond to macOS major versions 11 through 15 (11 is the oldest version we support, and 15 is the latest released version).
### Proposed Config Change
https://go-review.googlesource.com/c/telemetry/+/640738 | telemetry,Telemetry-Proposal | low | Minor |
2,773,661,743 | flutter | Replace `onSurfaceDestroyed` with `onSurfaceCleanup` in packages repo | `onSurfaceDestroyed` has been deprecated for Flutter 3.28+: https://github.com/flutter/flutter/blob/e3b301f23d35cacf5e054cf0e23ac49ea7f0086d/engine/src/flutter/shell/platform/android/io/flutter/view/TextureRegistry.java#L156
Known plugin usage:
`camera_android_camerax`
`video_player_android`
| platform-android,p: camera,p: video_player,P2,team-android,triaged-android,p: waiting for stable update | low | Minor |
2,773,670,616 | ollama | [feature] start ollama automatically on startup | I've been trying to start `ollama serve` automatically on startup (docker php init), but it won't start when launched with an `&` (as a background process). Then I tried putting it in a script with a lock file in cron to see if that would start it. It starts my script, but the script then does not start `ollama serve`, which it should!
Running it in bash works like a charm, but automating this is currently a pain in the ... | feature request | low | Minor |
2,773,674,877 | tauri | [bug] Clicking on a datalist with a null-coalescing value intermittently crashes the frontend | ### Describe the bug
Thanks for all the work y'all put into tauri. This is infinitely easier and faster than any of the .NET ecosystem application development I've done.
I made a [repo](https://github.com/jmurphyct/broken-tauri-datalist) with minimal reproduction code
With the code in a solidjs project:
```
import { For } from "solid-js";
export function App() {
const osList = [{name: "Edge"}, {name: "Firefox"}, {name: "Chrome"}, {name: undefined}];
return (
<>
<label for="browser">Choose your browser from the list:</label>
<input list="browsers" name="browser" id="browser"/>
<datalist id="browsers">
<For each={osList}>
{(osOption) => <option value={osOption.name ?? "Uhh"}/>}
</For>
</datalist>
</>
)
}
```
While running in dev (`npm run tauri dev`):

Clicking on the 'browser' input field _sometimes_ crashes the frontend, closing the developer tools window and stopping rendering. Sometimes the window is all black, and sometimes it's all white.

It still runs in the console.
If it works once, it'll _probably_ keep working until I close and re-open the window. It feels like it fails about 40% of the time.
### Reproduction
I made a [repo](https://github.com/jmurphyct/broken-tauri-datalist) with minimal reproduction code.
To do it yourself:
1. Create a new Tauri project with the solidjs frontend.
2. Replace the contents of the App function with
```
const osList = [{name: "Edge"}, {name: "Firefox"}, {name: "Chrome"}, {name: undefined}];
return (
<>
<label for="browser">Choose your browser from the list:</label>
<input list="browsers" name="browser" id="browser"/>
<datalist id="browsers">
<For each={osList}>
{(osOption) => <option value={osOption.name ?? "Uhh"}/>}
</For>
</datalist>
</>
)
```
3. Run the project through `npm run tauri dev`
4. Click on the "browser" input and observe the frontend crash.
### Expected behavior
The datalist options should appear below the input field, as they sometimes do.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
✔ WebView2: 131.0.2903.112
✔ MSVC:
- Visual Studio Build Tools 2019
- Visual Studio Professional 2022
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (directory override for 'C:\Users\jmurphy\Documents\GitHub\broken-tauri-datalist')
- node: 20.10.0
- npm: 10.2.5
[-] Packages
- tauri 🦀: 2.2.0
- tauri-build 🦀: 2.0.4
- wry 🦀: 0.48.0
- tao 🦀: 0.31.1
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.2.2
[-] Plugins
- tauri-plugin-log 🦀: 2.2.0
- @tauri-apps/plugin-log : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:5173/
- framework: SolidJS
- bundler: Vite
```
### Stack trace
```text
No stack trace
```
### Additional context
Fails on nightly rust toolchain too.
I cannot get any errors if I run this in browser, so I don't think it's a solidjs issue. | type: bug,status: needs triage | low | Critical |
2,773,677,531 | storybook | [Bug]: SvelteKit mocks doesn't support the `$app/state` module | ### Describe the bug
[SvelteKit 2.12.0](https://github.com/sveltejs/kit/releases/tag/%40sveltejs%2Fkit%402.12.0) introduced a new built-in module, `$app/state`, intended for Svelte 5 usage.
- https://bsky.app/profile/svelte.dev/post/3ldhdcfc62k2k
- https://svelte.dev/docs/kit/$app-state
It functions similarly to the existing `$app/stores` module, but is based on runes instead of stores.
We have an (experimental) API that allows users to mock these SvelteKit-level built-in modules (originally contributed by @paoloricciuti), and we need to extend that to support `$app/state` as well. See:
- https://storybook.js.org/docs/get-started/frameworks/sveltekit#how-to-mock
- https://storybook.js.org/docs/get-started/frameworks/sveltekit#stores
### Additional context
The mocks live in https://github.com/storybookjs/storybook/blob/next/code/frameworks/sveltekit/src/mocks/app | bug,sveltekit | low | Critical |
2,773,678,341 | go | x/tools/gopls: check spelling of identifiers in declarations | It would be nice if gopls could reliably point out misspellings in declarations (but not in references, where it is just a nuisance), while being aware of Go conventions for capitalization, word breaking, and local project naming conventions. An analyzer seems like the easiest integration. (Semantic tokens doesn't have a "misspelled" modifier, nor the means to suggest a fix.)
One way to do this is to develop something from scratch, but I bet there are spell checkers out there that we could reuse. The tricky part will be to define a loosely coupled interface so that x/tools does not have to add a dependency on them.
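For illustration, here is a minimal sketch (not gopls code) of the word-breaking step such an analyzer would need before handing individual words to whatever spell-checking backend is chosen; `splitIdentifier` is a name invented for this example:
```go
package main

import (
	"fmt"
	"unicode"
)

// splitIdentifier breaks a Go identifier into candidate words following the
// usual capitalization conventions, so each word can be checked separately,
// e.g. "parseHTTPRequst" -> ["parse", "HTTP", "Requst"].
func splitIdentifier(ident string) []string {
	var words []string
	runes := []rune(ident)
	start := 0
	for i := 1; i < len(runes); i++ {
		prev, cur := runes[i-1], runes[i]
		lowerToUpper := unicode.IsLower(prev) && unicode.IsUpper(cur)
		// An initialism ends where an upper-case run is followed by Upper+lower ("HTTPRequst").
		initialismEnd := unicode.IsUpper(prev) && unicode.IsUpper(cur) &&
			i+1 < len(runes) && unicode.IsLower(runes[i+1])
		if lowerToUpper || initialismEnd {
			words = append(words, string(runes[start:i]))
			start = i
		}
	}
	return append(words, string(runes[start:]))
}

func main() {
	fmt.Println(splitIdentifier("parseHTTPRequst")) // [parse HTTP Requst]; "Requst" would be flagged
}
```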
| gopls,gopls/analysis | low | Minor |
2,773,705,140 | storybook | [Bug]: Svelte 5 docgen crashes with `MyComponent is not defined` if a variable has the same name as the filename | ### Describe the bug
In Svelte 5 + Vite (or SvelteKit), if you have a variable _anywhere_ inside your Svelte component with the same name as the filename, our internal Svelte docgen Vite plugin will crash with `MyComponent is not defined`.
Given the following Svelte component called `Greeting.svelte`:
```svelte
<script>
let Greeting = 'world';
</script>
Hello {Greeting}
```
It will crash with said error.
The reason for this, is that the above is compiled to something like this by Svelte:
```js
... imports
export default function Greeting_1($$anchor) {
let Greeting = 'world';
...
}
```
The default function has a `_1` appended to the name, to not cause a collision with the internal variable named `Greeting`.
See https://svelte.dev/playground/23d895f41dc9451997d8ee7ead4e7be9?version=5.16.6
However, our Svelte docgen logic adds a `__docgen` property to the default export using naming heuristics carried over from Svelte 4, which differ from Svelte 5's, producing:
```js
... imports
export default function Greeting_1($$anchor) {
let Greeting = 'world';
...
}
Greeting.__docgen = ...
// 👆 whoops, wrong variable name, should have been Greeting_1
```
We fixed a similar issue in `@storybook/addon-svelte-csf` v5: instead of trying to replicate the variable naming logic, we used AST parsing to get the name of the default export and used that as the reference. We could potentially do the same here. We could also try to update the naming logic to mimic Svelte 5's behavior, but AFAIK that is easier said than done, because that logic is now spread across multiple flows in Svelte internally.
### Additional context
This is the naming logic that only works with Svelte 4: https://github.com/storybookjs/storybook/blob/next/code/frameworks/svelte-vite/src/plugins/svelte-docgen.ts/#L41-L71
This is where we add the `__docgen` property: https://github.com/storybookjs/storybook/blob/next/code/frameworks/svelte-vite/src/plugins/svelte-docgen.ts/#L228-L230 | bug,svelte,docgen | low | Critical |
2,773,707,353 | PowerToys | Keyboard mapping issue | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I remapped my Copilot key to act as my CTRL key since that's what it replaces on my laptop. I'm basically using that inside a game to let CTRL + left/right be my strafing keys. When I press up to move forward and then hit my Copilot (CTRL-mapped) key with a left/right arrow to strafe, it stops moving forward and just strafes directly left or right instead of ahead-and-right or ahead-and-left like I'd expect. That's how the actual physical CTRL key works, so it appears to be a mapping issue within this software that essentially cancels the "UP" keypress that I'm actively holding down.
### ✔️ Expected Behavior
I expect the key I remapped to not cancel keys I'm actively holding down before pressing that key.
### ❌ Actual Behavior
When pressing a remapped key, it cancels any presses I'm actively holding down on other keys. I can just hold up to run and then hit the remapped key and it'll stop me dead in my tracks since it appears to cancel any preexisting button holds.
### Other Software
It appears to work this way in literally any application. I'm specifically using it for Final Fantasy 14 at the moment, though. I can even go to a text document in Word or TextEdit, hold a direction to move my cursor through text, and pressing that remapped key will stop the long press I'm holding. | Issue-Bug,Needs-Triage | low | Minor |
2,773,716,933 | go | net/http: Redirect hardening | The [`http.Redirect`](https://pkg.go.dev/net/http#Redirect) function writes a 3xx redirect to a `ResponseWriter`.
`Redirect` takes a URL parameter, of type string. The URL parameter has only minimal sanitization applied, and is not safe for use with attacker-controlled inputs.
One example of possibly-surprising behavior is that a redirect to `\\example.com` is a relative-path reference according to [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986#section-4.2), but will be interpreted by most browsers as a network-path reference. `/\example.com` is an absolute-path reference according to the RFC, but will also be interpreted by browsers as a network-path reference. (Thanks to Jingcheng Yang (Sichuan University), Enze Wang@IPASSLAB(@zer0yu), Jianjun Chen (Tsinghua University & Zhongguancun Laboratory) for reporting this case.)
We should document that `Redirect` does not sanitize its URL parameter. Users who wish to use `Redirect` with untrusted URLs should parse the URL with `net/url`, perform whatever validation they may wish, and then reassemble the parsed and validated URL into a string with `net/url.URL.String`.
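A minimal sketch of that parse/validate/reassemble pattern, assuming an illustrative same-site policy (the policy and the `safeRedirect` name are just for this example, not part of the proposal):
```go
package main

import (
	"log"
	"net/http"
	"net/url"
)

// safeRedirect parses the untrusted target with net/url, applies a policy,
// and re-serializes it with URL.String before calling http.Redirect. With
// this round trip, an input like `\\example.com` is %-encoded rather than
// passed through for browsers to reinterpret as a network-path reference.
func safeRedirect(w http.ResponseWriter, r *http.Request, target string) {
	u, err := url.Parse(target)
	if err != nil {
		http.Error(w, "invalid redirect target", http.StatusBadRequest)
		return
	}
	// Example policy: only allow same-site redirects (no scheme, no host).
	if u.IsAbs() || u.Host != "" {
		http.Error(w, "redirect target not allowed", http.StatusBadRequest)
		return
	}
	http.Redirect(w, r, u.String(), http.StatusFound)
}

func main() {
	http.HandleFunc("/goto", func(w http.ResponseWriter, r *http.Request) {
		safeRedirect(w, r, r.URL.Query().Get("to"))
	})
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```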
We should also consider, as a hardening measure, %-encoding backslashes at the start of `Redirect`'s URL parameter to prevent browsers from interpreting them as part of an absolute-path reference. | Security,NeedsFix,LibraryProposal | low | Major |
2,773,719,298 | PowerToys | PowerToys Run doesn't show up when the combo keys are pressed down | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
PowerToys Run module isn't starting with the combo key. Currently I set it to alt+space, and it doesn't open the interface
### ✔️ Expected Behavior
I expect the search bar to show up
### ❌ Actual Behavior
The search bar did not show up
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Minor |
2,773,747,451 | kubernetes | AppArmor fields dropped from higher-level workload spec (daemonset / replicaset / etc) if modified on skewed cluster | ### What happened?
1. Upgrade first node in 3-node cluster from 1.29 to 1.30
2. Modified a DaemonSet to migrate from `container.apparmor.security.beta.kubernetes.io` annotations to `appArmorProfile` field.
Modification was done by running `helm upgrade` on the node that had been upgraded, with a chart that checks the apiserver minor version: https://github.com/cilium/cilium/blob/v1.16.3/install/kubernetes/cilium/templates/cilium-agent/daemonset.yaml#L87-L95
3. Daemonset controller running on skewed node (still on 1.29) patched the DaemonSet status subresource, and the `appArmorProfile` field was dropped from the resource
4. DaemonSet Pods crashloop due to missing profile
### What did you expect to happen?
Fields are not dropped from resources when cluster is in a supported version skew.
### How can we reproduce it (as minimally and precisely as possible)?
Attempt to add AppArmor fields to a controller-managed resource (daemonset/deployment/replicaset/etc) in a skewed 1.29/1.30 cluster. Specifically, the kube-controller-manager must still be down-level.
### Anything else we need to know?
This was observed while upgrading an RKE2 cluster using Cilium, whose chart migrates from annotations to the new field when it detects that it is talking to a 1.30 apiserver.
<details>
<summary>Audit logs showing daemonset controller status update dropping fields</summary>
`cat audit*.log | jq -s '. | sort_by(.stageTimestamp)[] | select((.verb != "get") and (.stage == "ResponseComplete") and (.responseObject.metadata.name == "cilium")) | {verb, requestURI, stageTimestamp, userAgent, "spec.template.spec.securityContext": .responseObject.spec.template.spec.securityContext, "spec.template.metadata.annotations": .responseObject.spec.template.metadata.annotations}'`
```json
{
"verb": "create",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets?fieldManager=helm",
"stageTimestamp": "2025-01-07T19:38:24.127192Z",
"userAgent": "Helm/3.16.1",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:38:24.177231Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:38:26.353795Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:38:41.303421Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:38:52.544272Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:39:23.015950Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:39:30.984065Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:44:12.727728Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:44:35.494079Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:44:48.598381Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites": "unconfined",
"container.apparmor.security.beta.kubernetes.io/cilium-agent": "unconfined",
"container.apparmor.security.beta.kubernetes.io/clean-cilium-state": "unconfined",
"container.apparmor.security.beta.kubernetes.io/mount-cgroup": "unconfined",
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "patch",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium?fieldManager=helm",
"stageTimestamp": "2025-01-07T19:44:59.558778Z",
"userAgent": "Helm/3.16.1",
"spec.template.spec.securityContext": {
"appArmorProfile": {
"type": "Unconfined"
}
},
"spec.template.metadata.annotations": {
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:44:59.609019Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:45:00.128220Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:45:00.180536Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:45:00.193503Z",
"userAgent": "kube-controller-manager/v1.29.11+rke2r1 (linux/amd64) kubernetes/960a2f0/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:46:11.694538Z",
"userAgent": "kube-controller-manager/v1.30.7+rke2r1 (linux/amd64) kubernetes/0c76c64/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
{
"verb": "update",
"requestURI": "/apis/apps/v1/namespaces/kube-system/daemonsets/cilium/status",
"stageTimestamp": "2025-01-07T19:46:12.115811Z",
"userAgent": "kube-controller-manager/v1.30.7+rke2r1 (linux/amd64) kubernetes/0c76c64/system:serviceaccount:kube-system:daemon-set-controller",
"spec.template.spec.securityContext": {},
"spec.template.metadata.annotations": {
"prometheus.io/port": "9962",
"prometheus.io/scrape": "true"
}
}
```
</details>
### Kubernetes version
<details>
```console
root@rke2-server-1:/# kubectl version
Client Version: v1.30.7+rke2r1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.7+rke2r1
```
</details>
### Cloud provider
n/a
### OS version
Ubuntu 22.04
### Install tools
rke2
### Container runtime (CRI) and version (if applicable)
n/a
### Related plugins (CNI, CSI, ...) and versions (if applicable)
cilium | kind/support,sig/apps,sig/testing,needs-triage | low | Critical |
2,773,760,806 | go | proposal: x/mobile: bind support list of types to not export | ### Proposal Details
`gomobile bind` currently tries to export all exported types. There are cases like RPC and encoding where some types might be exported by the package, but should not be exported by gomobile (due to language feature support). In these cases, it would be clearer to be able to specify a list of types not to export, or a per-type compiler annotation, than to see random warnings in the output about unsupported types. | Proposal,mobile,LibraryProposal | low | Minor |
2,773,806,086 | flutter | [impeller] drawing fat arcs has artifacts sometimes | https://github.com/flutter/flutter/issues/158567 was filed to report artifacts when drawing oval sectors by drawing arcs with wide stroke widths. https://github.com/flutter/flutter/pull/161255 fixed that special case but it was seen that there are other inputs for which there are artifacts.
These cases are presumed to be less likely though.
## reproduction
Use the `FatStrokeArc` test introduced in https://github.com/flutter/flutter/pull/161255
## screenshots
<img width="488" alt="Screenshot 2025-01-07 at 1 44 09 PM" src="https://github.com/user-attachments/assets/4b077dde-859d-4192-a3a0-25f5a32eb70e" />
<img width="511" alt="Screenshot 2025-01-07 at 1 44 21 PM" src="https://github.com/user-attachments/assets/9df7fc0f-85f7-49c1-a6d0-3843ddb828a0" />
<img width="494" alt="Screenshot 2025-01-07 at 1 44 26 PM" src="https://github.com/user-attachments/assets/9a747451-5831-47be-9245-6d429982b2ce" />
<img width="510" alt="Screenshot 2025-01-07 at 1 44 42 PM" src="https://github.com/user-attachments/assets/94608e6d-37c2-4b1a-a490-3a929a7b7cfe" />
| P3,e: impeller,team-engine,triaged-engine | low | Minor |
2,773,843,054 | PowerToys | Snapped window stack scroll through | ### Description of the new feature / enhancement
The custom window snap layouts are a game changer and a feature I very much appreciate. But even on the largest of screens we eventually run out of space to have windows side by side. Say, for example, you have a 2x1 grid and you snap a document to the left, but you also want to constantly refer to OneNote and a webpage. It would be nice to have all three open at the same time, but you only have space for two, so you keep the document as the persistent window on the left side of the grid and snap OneNote and Chrome to the other side. You still have to minimize and maximize windows on the right side when you want to switch between OneNote and the browser. What if we could have both OneNote and the browser open on the right side, stacked on top of each other with only one of them visible at a time, and just move the mouse to that part of the grid and scroll through that stack of windows using a shortcut like Windows key + mouse wheel or Ctrl + mouse wheel to bring the next window in the stack into view? I think that would be amazing. That would increase productivity a lot.
### Scenario when this would be used?
In the description.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,773,846,027 | node | node:sqlite: sqlite module should provide wrapper for `sqlite3_create_window_function` api | ### What is the problem this feature will solve?
I want to create user-defined window functions when using my sqlite database.
`node:sqlite` provides a wrapper for `sqlite3_create_function_v2`, but sqlite expects window functions to be defined with `sqlite3_create_window_function`
### What is the feature you are proposing to solve the problem?
`node:sqlite` should provide an additional wrapper for `sqlite3_create_window_function`, or extend the behavior of the existing wrapper to use it conditionally.
### What alternatives have you considered?
defining an aggregate window function on a database opened with `node:sqlite` currently requires dropping down to C FFI, outside of what the `node:sqlite` module provides | feature request | low | Minor |
2,773,880,233 | vscode | Can we avoid creating extra text documents during batch refactors? | Related to #64485
**repo**
1. In a large workspace with no editors open
2. Do a workspace find replace for a common term
**Bug**
Observe that for each edited file, VS Code emits an `onDidOpenTextDocument` and then an `onDidChangeTextDocument`. A bit later, `onDidCloseTextDocument` is emitted when the documents are cleaned up. None of the files are even opened in text editors.
---
When doing bulk edits/refactors on unopened files, can we avoid creating text documents that are exposed to extensions? I think extensions should be using file watching for these cases instead
If changing the behavior is too risky, maybe we could have a flag on these events so extensions can easily ignore them?
cc @andrewbranch | under-discussion | low | Critical |
2,773,897,392 | go | spec: for-range iterator yield function requires bool result - consider generalizing to any boolean type | Per the [spec](https://go.dev/ref/spec#For_statements), a "for-range" statement using an iterator requires the iterator's `yield` function to return `bool` not an arbitrary (user-defined) boolean:
```
Range expression                                     1st value          2nd value

array or slice      a  [n]E, *[n]E, or []E           index    i  int    a[i]       E
string              s  string type                   index    i  int    see below  rune
map                 m  map[K]V                       key      k  K      m[k]       V
channel             c  chan E, <-chan E              element  e  E
integer value       n  integer type, or untyped int  value    i  see below
function, 0 values  f  func(func() bool)
function, 1 value   f  func(func(V) bool)            value    v  V
function, 2 values  f  func(func(K, V) bool)         key      k  K      v          V
```
We should generalize this to any boolean type, similarly to how we allow any string type (not just `string`) when we range over strings.
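For illustration, a small sketch of the kind of iterator this affects; today its yield function must return exactly `bool`, and the generalization would also permit a defined boolean type such as `type done bool`:
```go
package main

import "fmt"

// upTo is a "function, 1 value" iterator as in the table above; under the
// current spec the yield function's result type must be exactly bool.
func upTo(n int) func(func(int) bool) {
	return func(yield func(int) bool) {
		for i := 0; i < n; i++ {
			if !yield(i) {
				return
			}
		}
	}
}

// With the proposed generalization, an equivalent iterator declared as
// func(func(int) done), where `type done bool`, would be rangeable as well.

func main() {
	for v := range upTo(3) {
		fmt.Println(v) // prints 0, 1, 2
	}
}
```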
Note that the original implementation of the type-checker accepted any boolean type, but the compiler's front-end had a problem with it (#71131). The (temporary) fix for that issue was to adjust the type-checker to match the spec literally. This avoided a compiler panic.
We should change the spec to reflect the original intent, and then revert the fix for #71131. | LanguageChange,NeedsDecision,early-in-cycle,LanguageProposal | low | Major |
2,773,902,319 | go | path/filepath: EvalSymlinks ignores link type on Windows | Windows makes a distinction between symlinks to a file and to a directory. A file link pointing to a directory cannot be traversed, nor can a directory link pointing to a file.
`filepath.EvalSymlinks` doesn't pay any attention to the type of links, however, and will resolve links that Windows will not. It probably should behave consistently with the rest of the OS.
Possible testcases (currently failing):
```go
func TestWindowsEvalSymlinksDirectoryLinkToFile(t *testing.T) {
dir := tempDirCanonical(t)
if err := os.WriteFile(dir+"/target", nil, 0666); err != nil {
t.Fatal(err)
}
mustExec(t, "cmd", "/c", "mklink", "/D", dir+`\symlink`, dir+`\target`)
if _, err := filepath.EvalSymlinks(dir + `\symlink`); err == nil {
t.Errorf("EvalSymlinks(symlink) succeeded; want error (directory link to file target)")
}
if _, err := os.ReadFile(dir + `\symlink`); err == nil {
t.Errorf("ReadFile(symlink) succeeded; want error (directory link to file target)")
}
}
func TestWindowsEvalSymlinksFileLinkToDirectory(t *testing.T) {
dir := tempDirCanonical(t)
if err := os.Mkdir(dir+"/target", 0777); err != nil {
t.Fatal(err)
}
mustExec(t, "cmd", "/c", "mklink", dir+`\symlink`, dir+`\target`)
if _, err := filepath.EvalSymlinks(dir + `\symlink`); err == nil {
t.Errorf("EvalSymlinks(filelink) succeeded; want error (file link to directory target)")
}
if _, err := os.ReadDir(dir + `\symlink`); err == nil {
t.Errorf("ReadDir(symlink) succeeded; want error (file link to directory target)")
}
}
func mustExec(t *testing.T, cmd string, args ...string) {
output, err := exec.Command(cmd, args...).CombinedOutput()
if err != nil {
t.Fatalf("command failed: %q\n%v", cmd, string(output))
}
}
``` | OS-Windows,NeedsDecision,BugReport | low | Critical |
2,773,914,360 | go | os: Root ignores link type on Windows | Windows makes a distinction between symlinks to a file and to a directory. A file link pointing to a directory cannot be traversed, nor can a directory link pointing to a file.
The `os.Root` type doesn't pay attention to link types when resolving files, however, and will follow links that Windows will not. It probably should behave consistently with the rest of the OS.
(See also #71165, which is the same issue in path/filepath.EvalSymlinks.) | NeedsInvestigation,BugReport | low | Major |
2,773,934,374 | next.js | Typescript error TP1001 when creating a Web Worker in Next.js 15.1.3 | ### Link to the code that reproduces this issue
https://github.com/farzadso/workers-issue
### To Reproduce
Run `npm run dev` and you can see the issue.
### Current vs. Expected behavior
There should be no errors in the console.
This is the error I see:
```
⚠ ./app/page.tsx:9:22
error TP1001 new Worker("/workers/timer.js") is not statically analyse-able
7 |
8 | useEffect(() => {
> 9 | ticker.current = new Worker("/workers/timer.js");
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
10 | }, []);
11 | return <div>Test</div>;
12 | }
```
If I remove `--turbopack` from the dev script it works fine.
I noticed this happening after my upgrade to Next.js 15 and using Turbopack for dev builds.
The Web Worker is being created dynamically, and it works fine, so I don't see any issues with the functionality.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:02:12 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6031
Available memory (MB): 36864
Available CPU cores: 14
Binaries:
Node: 20.15.1
npm: 10.7.0
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.2.0-canary.0 // Latest available version is detected (15.2.0-canary.0).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | Turbopack | low | Critical |
2,773,948,837 | vscode | Recursive variable font is rendered with wrong space width | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96.2
- OS Version: Windows 11 24H2
Steps to Reproduce:
1. Install the VF version of Recursive font (Recursive_VF_1.085.ttf) from www.recursive.design
2. Change the editor font to "Recursive Mono Linear"
`settings.json`:
```json
{
"editor.fontFamily": "'Recursive Mono Linear'"
}
```
3. Observe that spaces are too narrow:

compared to e.g. Consolas:

The bug is not in the font itself: it renders just fine in Zed for Windows.

| bug,font-rendering,confirmation-pending | low | Critical |
2,773,970,877 | electron | Why does the copy event still fire when not focused in the window that's triggering the copy? | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
34.0.0-beta.8
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11 Version 23H2 Build 22631.4602
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
In browsers, if a window writes text to clipboard via something like `document.execCommand("copy")` while I'm focused in another window, `execCommand` returns `false`, and the window's `copy` event doesn't fire.
### Actual Behavior
In Electron, for whatever reason, `execCommand` returns `true` and the `copy` event does fire.
For the record, I've been relying on this behavior, so I'm not necessarily suggesting here to change it, but getting some insight into why we have this discrepancy might be helpful. I'm curious what other events this could affect.
Typically, you have to be focused in the document for certain operations/events to be "approved."
### Testcase Gist URL
https://gist.github.com/pushkin-/854ad63e36ed80895722b8ef64e65098
### Additional Information
1. start the gist
2. go to the Console of the main window and run `monitorEvents(document, "copy")`
3. Click the "Open Window" button
4. Stay focused in the opened-window for 5 seconds
5. notice in the main window's devtools, we get a copy event and value of `true` that's logged.
If you repeat this in Chrome, no copy event will get logged and you'll get false.
1. open Chrome to google and open window to google from there: `window.open("https://google.com", "a", "width=500")`
2. in the devtools of the main window, trigger a copy in 5 seconds: `setTimeout(() => console.log(document.execCommand("copy")), 5000)`
3. focus on the other window
4. you'll get `false` printed in devtools
5. if you repeat but stay focused in the main window, you'll get `true` along with a copy event | platform/windows,bug :beetle:,status/confirmed,has-repro-gist,34-x-y | low | Critical |
2,773,985,315 | pytorch | python-3.13t binaries are only available for Linux x86 | ### 🐛 Describe the bug
Looking at https://download.pytorch.org/whl/test/torch/ I've noticed that 3.13t binaries are only available for Linux-x86; neither linux-aarch64, nor Windows, nor Mac supports them.
### Versions
2.6/CI
cc @seemethere @osalpekar @atalman | module: binaries,oncall: releng,triaged | low | Critical |
2,773,987,817 | godot | CharacterBody3D lags behind when moving on AnimatableBody3D moved by code | ### Tested versions
Tested in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz (8 Threads)
### Issue description
When using a CharacterBody3D on an AnimatableBody3D and moving the latter via code, the character ends up lagging behind by one frame.
https://github.com/user-attachments/assets/22de841d-9966-46e0-ad9e-f4b794164556
(Note that for this I tried a variety of setups for the moving platform, which in my real project is supposed to be a moving vehicle that the player can move onto, and I also wanted to have two CharacterBody3D nodes interact with one another, with no success... Maybe another bug, or a bad setup on my part.)
### Steps to reproduce
- Set up an AnimatableBody3D with a collider (and a mesh for visual help)
- Set a script on it to move it using `move_and_collide`
- Set a CharacterBody3D using the default movement template
- Move the AnimatableBody3D and observe the character lag behind by what looks like one frame
(it works better when attaching a camera to the character)
### Minimal reproduction project (MRP)
[charabodybug.zip](https://github.com/user-attachments/files/18339999/charabodybug.zip)
| bug,topic:physics,needs testing,topic:3d | low | Critical |
2,773,999,079 | pytorch | Incorrect Results with Tensor Parallelism | ### 🐛 Describe the bug
I am trying a basic Tensor Parallel implementation on a 2 layer MLP using `ColwiseParallel` followed by a `RowwiseParallel`. I would expect the final output of the MLP to be the same in the Tensor Parallel version compared to the non-parallelized version. However, the output tensors are different.
```python
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.tensor.parallel import parallelize_module, ColwiseParallel, RowwiseParallel
from torch.distributed.tensor.placement_types import Replicate, Shard
class MLP(nn.Module):
    def __init__(
        self,
        dim: int,
        expand_ratio: int,
        mp_mesh,
        _parallelize=True
    ):
        super().__init__()
        self.mp_mesh = mp_mesh
        self.proj_in = nn.Linear(dim, dim * expand_ratio)
        self.act = nn.GELU("tanh")
        self.proj_out = nn.Linear(dim * expand_ratio, dim)

    def forward(self, x: torch.FloatTensor) -> torch.FloatTensor:
        x = self.proj_in(x)
        x = self.act(x)
        x = self.proj_out(x)
        return x


if __name__ == "__main__":
    import os
    from torch.distributed.device_mesh import init_device_mesh
    import torch.distributed.tensor as dtensor

    torch.manual_seed(0)
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f'cuda:{local_rank}')
    mesh = init_device_mesh("cuda", (8,))

    head_dim = 80
    num_heads = 24
    d_model = head_dim * num_heads
    text_seq_len = 10

    model = MLP(d_model, expand_ratio=4, mp_mesh=mesh, _parallelize=True).to(device).to(torch.bfloat16)

    dtext = dtensor.randn((text_seq_len, d_model), dtype=torch.bfloat16, device_mesh=mesh, placements=[Replicate()])
    text = dtext.full_tensor()
    text_output = model(text)

    model = parallelize_module(model, device_mesh=mesh, parallelize_plan={
        "proj_in": ColwiseParallel(use_local_output=True),
        "proj_out": RowwiseParallel(use_local_output=True)})
    parallel_text_out = model(dtext)

    if local_rank == 0:
        print("Text output: ", text_output)
        print("Parallel text output: ", parallel_text_out)

    assert text_output.size() == parallel_text_out.size()
    assert torch.allclose(text_output, parallel_text_out)  # This assertion fails
```
I run this on a single node with 8 GPUs via `torchrun --nproc_per_node=8 torch_tp_test.py`.
But the assertion fails with
```
Text output: tensor([[-0.1299, -0.1758, -0.0344, ..., 0.1128, -0.2178, -0.0466],
[-0.0226, 0.1167, 0.1768, ..., -0.0160, -0.0405, -0.2656],
[-0.1641, -0.0554, 0.2715, ..., 0.1475, 0.0967, 0.1309],
...,
[-0.0820, -0.0391, 0.2480, ..., -0.0525, -0.0962, 0.0903],
[-0.0179, -0.0850, -0.1641, ..., -0.2451, 0.0364, -0.0962],
[-0.2676, 0.0332, -0.2500, ..., -0.0410, -0.2412, 0.2930]],
device='cuda:0', dtype=torch.bfloat16, grad_fn=<AddmmBackward0>)
Parallel text output: AsyncCollectiveTensor(tensor([[-0.1309, -0.1758, -0.0334, ..., 0.1108, -0.2188, -0.0471],
[-0.0234, 0.1162, 0.1758, ..., -0.0176, -0.0381, -0.2676],
[-0.1621, -0.0549, 0.2695, ..., 0.1455, 0.0967, 0.1318],
...,
[-0.0825, -0.0366, 0.2480, ..., -0.0537, -0.0977, 0.0898],
[-0.0181, -0.0830, -0.1621, ..., -0.2451, 0.0361, -0.0977],
[-0.2676, 0.0325, -0.2490, ..., -0.0410, -0.2402, 0.2930]],
device='cuda:0', dtype=torch.bfloat16))
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/amogkamsetty/torch_tp_test.py", line 88, in <module>
[rank0]: assert torch.allclose(text_output, parallel_text_out)
[rank0]: AssertionError
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (conda-forge gcc 12.1.0-17) 12.1.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch-tb-profiler==0.3.1
[pip3] torchaudio==2.0.1+3b40834
[pip3] torchmetrics==1.4.0.post0
[pip3] torchtyping==0.1.4
[pip3] triton==3.1.0
[pip3] vllm_nccl_cu12==2.18.1.0.4.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torch-tb-profiler 0.3.1 pypi_0 pypi
[conda] torchaudio 2.0.1+3b40834 pypi_0 pypi
[conda] torchmetrics 1.4.0.post0 pypi_0 pypi
[conda] torchtyping 0.1.4 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
[conda] vllm-nccl-cu12 2.18.1.0.4.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,774,008,028 | langchain | TypeError: Object of type NAType is not serializable during state serialization in LangChain | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import pandas as pd
import numpy as np
import json
def debug_state(state):
    """Debug and identify keys with non-serializable data."""
    for key, value in state.items():
        try:
            json.dumps(value)  # Check JSON serialization
        except TypeError as e:
            print(f"Key '{key}' contains non-serializable data: {e}")

# Example Function
def final_return_node(state):
    """Simulate LangChain state serialization."""
    print("--- Simulating LangChain State Serialization ---")
    # Create a sample DataFrame with NAType
    working_data = pd.DataFrame({
        'column1': [1, pd.NA, 3],
        'column2': [4, 5, np.nan]
    })
    # Update state with DataFrame
    state["keys"] = {
        "working_data": working_data.to_dict(orient="records"),
    }
    # Debug state
    print("Debugging state for non-serializable data...")
    debug_state(state)
    # Attempt serialization
    try:
        json.dumps(state)  # Validate JSON serialization
        print("State passed JSON serialization.")
    except TypeError as e:
        print("JSON Serialization Error:", e)
        raise ValueError("State is not JSON-serializable.") from e

# Simulate State and Call Function
state = {}
try:
    final_return_node(state)
except Exception as e:
    print("Error:", str(e))
### Error Message and Stack Trace (if applicable)
--- Simulating LangChain State Serialization ---
Debugging state for non-serializable data...
Key 'working_data' contains non-serializable data: Object of type NAType is not JSON serializable
JSON Serialization Error: Object of type NAType is not JSON serializable
Traceback (most recent call last):
File "example.py", line 40, in <module>
final_return_node(state)
File "example.py", line 33, in final_return_node
json.dumps(state) # Validate JSON serialization
File "C:\Python\lib\json\__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "C:\Python\lib\json\encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Python\lib\json\encoder.py", line 258, in iterencode
return _iterencode(o, 0)
File "C:\Python\lib\json\encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type NAType is not JSON serializable
### Description
I am encountering a persistent issue in LangChain where attempting to serialize a state containing a pandas DataFrame with `pd.NA` values results in a `TypeError: Object of type NAType is not JSON serializable`. This error occurs despite implementing various sanitization techniques to replace or handle non-serializable values.
#### What I'm Doing
- I have a node function (`final_return_node`) in a LangChain graph that processes a pandas DataFrame (`working_data`) and updates the graph’s state with the processed data.
- The state contains a dictionary with keys that include the processed DataFrame converted to a dictionary using `to_dict(orient="records")`.
- I am using LangChain in conjunction with **Human in the Loop** and **Command and Interrupt** features to modify and resume the graph's flow based on user input.
- The goal is to serialize the state for further processing and pass it to subsequent nodes in the workflow.
#### What I Expect to Happen
- After updating the state, I expect the state to be serialized successfully without errors, provided that all non-serializable values like `pd.NA` and `np.nan` are sanitized and replaced with JSON-serializable alternatives (`None`, strings, etc.).
- The node should return the serialized state to the next step in the graph workflow.
#### What Is Actually Happening
- Despite replacing `pd.NA` and `np.nan` with `None`, and validating the state using both `json.dumps` and `msgpack.packb`, LangChain still raises a `TypeError: Object of type NAType is not serializable`.
- The issue persists even after thorough sanitization, suggesting that LangChain's internal serialization logic is accessing or processing the original `pd.NA` values somewhere within the state or during serialization.
#### Steps Taken to Debug
1. Implemented a debugging function to identify non-serializable keys and values in the state.
2. Replaced all instances of `pd.NA` and `np.nan` in the DataFrame using:
```python
.fillna("").replace({pd.NA: None, np.nan: None})
```
3. Serialized the state using both json.dumps and msgpack.packb for validation, confirming that the state passes these checks.
4. Logged the state and its sanitized version to verify that no pd.NA or other non-serializable values remain.
5. Despite these efforts, LangChain's internal serialization process continues to fail with the same error.
#### Hypothesis
- The issue might stem from LangChain's handling of the state internally, where it attempts to serialize a reference to the original, unsanitized DataFrame or retains some metadata associated with pandas extension types like pd.NA.
- Alternatively, LangChain’s serialization mechanism (e.g., MsgPack or custom serializers) may not correctly handle objects converted from pd.NA.
This bug is blocking my ability to process state updates and proceed through the LangChain graph workflow. It seems specific to LangChain's serialization implementation, as the sanitized state passes JSON and MsgPack validation outside of LangChain.
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.28
> langchain: 0.3.13
> langchain_community: 0.3.13
> langsmith: 0.2.6
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.4
> langgraph_sdk: 0.1.48
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.24.3
> openai: 1.58.1
> orjson: 3.10.12
> packaging: 23.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.0
> PyYAML: 6.0.2
> requests: 2.31.0
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
| 🤖:bug | low | Critical |
2,774,010,872 | pytorch | Skip empty frames recursively when top-level is empty | ### 🐛 Describe the bug
```python
import torch
def k(x):
return x
def g(x):
return k(x)
def f(x):
return g(x)
a = torch.ones(2, 2)
c = torch.compile(f, fullgraph=True)(a)
```
The above compiles 3 times (for f, g, and k) with the following log:
```
I0107 16:55:09.455000 1702873 torch/_dynamo/utils.py:1403] [0/0] ChromiumEventLogger initialized with id 50c41bbc-3619-4642-a30a-ca5562f3b129
V0107 16:55:09.456000 1702873 torch/_dynamo/convert_frame.py:941] [0/0] torchdynamo start compiling f /data/users/yidi/pytorch/test_while_loop.py:9, stack (elided 4 frames):
V0107 16:55:09.456000 1702873 torch/_dynamo/convert_frame.py:941] [0/0] File "/data/users/yidi/pytorch/test_while_loop.py", line 12, in <module>
V0107 16:55:09.456000 1702873 torch/_dynamo/convert_frame.py:941] [0/0] c = torch.compile(f, fullgraph=True)(a)
V0107 16:55:09.456000 1702873 torch/_dynamo/convert_frame.py:941] [0/0]
I0107 16:55:10.342000 1702873 torch/_dynamo/symbolic_convert.py:2744] [0/0] Step 1: torchdynamo start tracing f /data/users/yidi/pytorch/test_while_loop.py:9
I0107 16:55:10.343000 1702873 torch/fx/experimental/symbolic_shapes.py:3221] [0/0] create_env
V0107 16:55:10.347000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:10 in f (f)
V0107 16:55:10.347000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] return g(x)
V0107 16:55:10.348000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL g []
V0107 16:55:10.351000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_FAST x [UserFunctionVariable()]
V0107 16:55:10.351000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [UserFunctionVariable(), LazyVariableTracker()]
V0107 16:55:10.351000 1702873 torch/_dynamo/symbolic_convert.py:3204] [0/0] INLINING <code object g at 0x7f4599c97260, file "/data/users/yidi/pytorch/test_while_loop.py", line 6>, inlined according trace_rules.lookup inlined by default
V0107 16:55:10.352000 1702873 torch/_dynamo/variables/builder.py:2869] [0/0] wrap_to_fake L['x'] (2, 2) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None], constraint_strides=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='x', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0107 16:55:10.354000 1702873 torch/_dynamo/output_graph.py:2201] [0/0] create_graph_input L_x_ L['x'] FakeTensor(..., size=(2, 2)) at debug_level 0 before=False
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:7 in g (g) (inline depth: 1)
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] return k(x)
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL k []
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_FAST x [UserFunctionVariable()]
V0107 16:55:10.355000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [UserFunctionVariable(), TensorVariable()]
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:3204] [0/0] INLINING <code object k at 0x7f4599d3b3c0, file "/data/users/yidi/pytorch/test_while_loop.py", line 3>, inlined according trace_rules.lookup inlined by default
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:4 in k (k) (inline depth: 2)
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_source] return x
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE LOAD_FAST x []
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.356000 1702873 torch/_dynamo/symbolic_convert.py:3272] [0/0] DONE INLINING <code object k at 0x7f4599d3b3c0, file "/data/users/yidi/pytorch/test_while_loop.py", line 3>
V0107 16:55:10.357000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.357000 1702873 torch/_dynamo/symbolic_convert.py:3272] [0/0] DONE INLINING <code object g at 0x7f4599c97260, file "/data/users/yidi/pytorch/test_while_loop.py", line 6>
V0107 16:55:10.357000 1702873 torch/_dynamo/symbolic_convert.py:979] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.357000 1702873 torch/_dynamo/convert_frame.py:778] [0/0] Skipping frame because no content in function call f /data/users/yidi/pytorch/test_while_loop.py 9
V0107 16:55:10.357000 1702873 torch/_dynamo/convert_frame.py:787] [0/0] No graph captured with one_graph=True
I0107 16:55:10.358000 1702873 torch/_dynamo/pgo.py:639] [0/0] put_code_state: no cache key, skipping
I0107 16:55:10.358000 1702873 torch/_dynamo/convert_frame.py:1059] [0/0] run_gc_after_compile: running gc
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] torchdynamo start compiling g /data/users/yidi/pytorch/test_while_loop.py:6, stack (elided 4 frames):
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] File "/data/users/yidi/pytorch/test_while_loop.py", line 12, in <module>
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] c = torch.compile(f, fullgraph=True)(a)
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 576, in _fn
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0] return fn(*args, **kwargs)
V0107 16:55:10.365000 1702873 torch/_dynamo/convert_frame.py:941] [1/0]
I0107 16:55:10.365000 1702873 torch/_dynamo/symbolic_convert.py:2744] [1/0] Step 1: torchdynamo start tracing g /data/users/yidi/pytorch/test_while_loop.py:6
I0107 16:55:10.365000 1702873 torch/fx/experimental/symbolic_shapes.py:3221] [1/0] create_env
V0107 16:55:10.366000 1702873 torch/_dynamo/symbolic_convert.py:956] [1/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:7 in g (g)
V0107 16:55:10.366000 1702873 torch/_dynamo/symbolic_convert.py:956] [1/0] [__trace_source] return k(x)
V0107 16:55:10.366000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE LOAD_GLOBAL k []
V0107 16:55:10.367000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE LOAD_FAST x [UserFunctionVariable()]
V0107 16:55:10.367000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [UserFunctionVariable(), LazyVariableTracker()]
V0107 16:55:10.367000 1702873 torch/_dynamo/symbolic_convert.py:3204] [1/0] INLINING <code object k at 0x7f4599d3b3c0, file "/data/users/yidi/pytorch/test_while_loop.py", line 3>, inlined according trace_rules.lookup inlined by default
V0107 16:55:10.367000 1702873 torch/_dynamo/variables/builder.py:2869] [1/0] wrap_to_fake L['x'] (2, 2) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None], constraint_strides=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='x', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0107 16:55:10.368000 1702873 torch/_dynamo/output_graph.py:2201] [1/0] create_graph_input L_x_ L['x'] FakeTensor(..., size=(2, 2)) at debug_level 0 before=False
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:956] [1/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:4 in k (k) (inline depth: 1)
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:956] [1/0] [__trace_source] return x
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE LOAD_FAST x []
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:3272] [1/0] DONE INLINING <code object k at 0x7f4599d3b3c0, file "/data/users/yidi/pytorch/test_while_loop.py", line 3>
V0107 16:55:10.369000 1702873 torch/_dynamo/symbolic_convert.py:979] [1/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0107 16:55:10.370000 1702873 torch/_dynamo/convert_frame.py:778] [1/0] Skipping frame because no content in function call g /data/users/yidi/pytorch/test_while_loop.py 6
V0107 16:55:10.370000 1702873 torch/_dynamo/convert_frame.py:787] [1/0] No graph captured with one_graph=True
I0107 16:55:10.370000 1702873 torch/_dynamo/pgo.py:639] [1/0] put_code_state: no cache key, skipping
I0107 16:55:10.370000 1702873 torch/_dynamo/convert_frame.py:1059] [1/0] run_gc_after_compile: running gc
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] torchdynamo start compiling k /data/users/yidi/pytorch/test_while_loop.py:3, stack (elided 4 frames):
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] File "/data/users/yidi/pytorch/test_while_loop.py", line 12, in <module>
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] c = torch.compile(f, fullgraph=True)(a)
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 576, in _fn
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] return fn(*args, **kwargs)
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] File "/data/users/yidi/pytorch/test_while_loop.py", line 10, in f
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0] return g(x)
V0107 16:55:10.374000 1702873 torch/_dynamo/convert_frame.py:941] [2/0]
I0107 16:55:10.374000 1702873 torch/_dynamo/symbolic_convert.py:2744] [2/0] Step 1: torchdynamo start tracing k /data/users/yidi/pytorch/test_while_loop.py:3
I0107 16:55:10.375000 1702873 torch/fx/experimental/symbolic_shapes.py:3221] [2/0] create_env
V0107 16:55:10.375000 1702873 torch/_dynamo/symbolic_convert.py:956] [2/0] [__trace_source] TRACE starts_line /data/users/yidi/pytorch/test_while_loop.py:4 in k (k)
V0107 16:55:10.375000 1702873 torch/_dynamo/symbolic_convert.py:956] [2/0] [__trace_source] return x
V0107 16:55:10.375000 1702873 torch/_dynamo/symbolic_convert.py:979] [2/0] [__trace_bytecode] TRACE LOAD_FAST x []
V0107 16:55:10.375000 1702873 torch/_dynamo/symbolic_convert.py:979] [2/0] [__trace_bytecode] TRACE RETURN_VALUE None [LazyVariableTracker()]
V0107 16:55:10.376000 1702873 torch/_dynamo/variables/builder.py:2869] [2/0] wrap_to_fake L['x'] (2, 2) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None], constraint_strides=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='x', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0107 16:55:10.376000 1702873 torch/_dynamo/output_graph.py:2201] [2/0] create_graph_input L_x_ L['x'] FakeTensor(..., size=(2, 2)) at debug_level 0 before=False
V0107 16:55:10.377000 1702873 torch/_dynamo/convert_frame.py:778] [2/0] Skipping frame because no content in function call k /data/users/yidi/pytorch/test_while_loop.py 3
V0107 16:55:10.377000 1702873 torch/_dynamo/convert_frame.py:787] [2/0] No graph captured with one_graph=True
I0107 16:55:10.377000 1702873 torch/_dynamo/pgo.py:639] [2/0] put_code_state: no cache key, skipping
I0107 16:55:10.377000 1702873 torch/_dynamo/convert_frame.py:1059] [2/0] run_gc_after_compile: running gc
I0107 16:55:12.533000 1703243 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0107 16:55:12.533000 1703243 torch/_dynamo/eval_frame.py:398]
I0107 16:55:12.533000 1703243 torch/_dynamo/eval_frame.py:398] ]
I0107 16:55:12.538000 1703243 torch/_dynamo/utils.py:636] TorchDynamo compilation metrics:
I0107 16:55:12.538000 1703243 torch/_dynamo/utils.py:636] Function Runtimes (s)
I0107 16:55:12.538000 1703243 torch/_dynamo/utils.py:636] ---------- --------------
V0107 16:55:12.538000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats constrain_symbol_range: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats defer_runtime_assert: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats evaluate_expr: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _find: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats simplify: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.539000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats replace: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats get_axioms: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats _maybe_evaluate_static_worker: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats safe_expand: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0107 16:55:12.540000 1703243 torch/fx/experimental/symbolic_shapes.py:172] lru_cache_stats uninteresting_files: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] * f /data/users/yidi/pytorch/test_while_loop.py:9
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] * g /data/users/yidi/pytorch/test_while_loop.py:6
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] * k /data/users/yidi/pytorch/test_while_loop.py:3
I0107 16:55:13.045000 1702873 torch/_dynamo/eval_frame.py:398] ]
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] TorchDynamo compilation metrics:
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] Function Runtimes (s)
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] ---------------------- --------------
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] _compile.compile_inner 0.9094
I0107 16:55:13.050000 1702873 torch/_dynamo/utils.py:636] gc 0.0024
```
Ideally, we should be able to skip compilation of function calls to g and k.
### Versions
main
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,774,020,584 | rust | Maybe-bounds in associated type bounds don't get rejected | The following code gets erroneously accepted:
```rs
fn f<T>() where T: Trait<Ty: ?Sized> {}
// ^^^^^^ not good
trait Trait { type Ty/*: ?Sized*/; }
```
Maybe-bounds / unbounds are only meant to be put on type parameters and associated types "declared in the immediate vicinity" which isn't the case here. This is supported by the fact that `Trait<Ty:>` (sic!) doesn't elaborate to `Trait<Ty: Sized>`.
Compare this to the snippet below which gets rightfully rejected:
```rs
fn f<T>() where T: Trait, T::Ty: ?Sized {}
// ^^^^^^
//~^^ ERROR `?Trait` bounds are only permitted at the point where a type parameter is declared
trait Trait { type Ty/*: ?Sized*/; }
```
---
Obviously a fix would be a breaking change. However I doubt anyone is writing such code (unless macro generated). In any case, we should run crater on the future PR. | T-compiler,C-bug,F-associated_type_bounds | low | Critical |
2,774,032,221 | pytorch | Some operators miss dtype check when using `torch.compile` | ### 🐛 Describe the bug
As reported here (https://github.com/pytorch/pytorch/issues/144314#issuecomment-2574508557), I noticed that some operators are missing dtype checks when executed in the context of `torch.compile`. The specific symptom is as follows:
- Eager Mode: Raises `not implemented for [specific dtype]` error
- torch.compile Mode: Yields regular outputs (I guess implicit data type casting happens under `torch.compile`)
Some related issues: https://github.com/pytorch/pytorch/issues/144314, https://github.com/pytorch/pytorch/issues/144310, https://github.com/pytorch/pytorch/issues/144247.
Although this missing-dtype-check issue may not be severe, in case you are interested, I have cherry-picked a few operators where dtype checks are missing in the CPU and CUDA backends. Here's a breakdown:
| Operator Name | Dtypes Missing Check (CPU Backend) | Dtypes Missing Check (CUDA Backend) | Expected Behavior (Eager Behavior) |
| -------- | ------- | ------- | ------- |
| torch.nn.functional.{log_softmax,softmax,logsigmoid} | uint, int8, int16, int32, int64 | uint, int8, int16, int32, int64 | Raise `not implemented for xxx` error |
| torch.nn.functional.{gelu,celu,hardsigmoid,hardswish}/torch.nextafter | uint, bool, int8, int16, int32, int64 | uint, bool, int8, int16, int32, int64 | Raise `not implemented for xxx` error |
| torch.nn.functional.prelu | bool, int8, int16, int32, int64 | uint, bool, int8, int16, int32, int64 | Raise `not implemented for xxx` error |
| torch.Tensor.mm | uint, bool | N/A | Raise `not implemented for xxx` error |
| torch.trace | uint, bfloat16, half, bool | N/A | Raise `not implemented for xxx` error |
| torch.fmax | complex32, complex64 | N/A | Raise `not implemented for xxx` error |
| torch.xlogy/torch.nn.functional.mse_loss | complex64, complex32 | complex64, complex32 | Raise `not implemented for xxx` error |
Since these cases seem to share the same root cause, I am wondering if they can be fixed in a general way?
Below is detailed code that reproduces the reported case for each operator.
<details>
<summary>log_softmax/softmax</summary>
```
import torch
from torch import nn
torch._dynamo.config.recompile_limit = 100
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, input,dim):
        return torch.nn.functional.log_softmax(input,dim) # replace `log_softmax` with `softmax` to reproduce the issue for softmax
f = MyModel()
cf = torch.compile(f)
input = torch.randn((2))
dim = -1
for device in ['cpu', 'cuda']:
for dtype in [torch.uint16, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
input = input.to(dtype).to(device)
eager_pass, compile_pass = "passed", "passed"
try:
f(input, dim)
eager_pass = "passed"
except Exception as e:
print(f"Eager Error: {e}")
eager_pass = "failed"
try:
cf(input, dim)
compile_pass = "passed"
except Exception as e:
compile_pass = "failed"
if eager_pass != compile_pass:
print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>logsigmoid/gelu/celu/hardsigmoid/hardswish</summary>
```
import torch
from torch import nn
torch._dynamo.config.recompile_limit = 100
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, input):
return torch.nn.functional.logsigmoid(input) # change logsigmoid to gelu/celu/hardsigmoid/hardswish will reproduce related inconsistent behaviors
f = MyModel()
cf = torch.compile(f)
input = torch.randn((2))
for device in ['cpu', 'cuda']:
for dtype in [torch.uint16, torch.bool, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
input = input.to(dtype).to(device)
eager_pass, compile_pass = "passed", "passed"
try:
f(input)
eager_pass = "passed"
except Exception as e:
print(f"Eager Error: {e}")
eager_pass = "failed"
try:
cf(input)
compile_pass = "passed"
except Exception as e:
compile_pass = "failed"
if eager_pass != compile_pass:
print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>prelu</summary>
```
import torch
from torch import nn
import numpy as np
torch._dynamo.config.recompile_limit = 100
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, input,weight):
return torch.nn.functional.prelu(input,weight)
f = MyModel()
cf = torch.compile(f)
input = torch.tensor(np.random.randint(-10, 10, (1,1,1)))
weight = torch.tensor(np.random.randint(-10, 10, (1)))
for device in ['cpu', 'cuda']:
for dtype in [torch.uint16, torch.bool, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
input = input.to(dtype).to(device)
weight = weight.to(dtype).to(device)
eager_pass, compile_pass = "passed", "passed"
try:
f(input,weight)
eager_pass = "passed"
except Exception as e:
print(f"Eager Error: {e}")
eager_pass = "failed"
try:
cf(input,weight)
compile_pass = "passed"
except Exception as e:
compile_pass = "failed"
if eager_pass != compile_pass:
print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.nextafter</summary>
```
import torch
from torch import nn
import numpy as np
torch._dynamo.config.recompile_limit = 100
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, input, other):
return torch.nextafter(input, other)
f = MyModel()
cf = torch.compile(f)
input = torch.tensor(np.random.randint(-10, 10, ()), dtype=torch.int64)
other = torch.tensor(np.random.randint(-10, 10, ()), dtype=torch.int64)
for device in ['cpu', 'cuda']:
for dtype in [torch.uint16, torch.bool, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
input = input.to(dtype).to(device)
other = other.to(dtype).to(device)
eager_pass, compile_pass = "passed", "passed"
try:
f(input, other)
eager_pass = "passed"
except Exception as e:
print(f"Eager Error: {e}")
eager_pass = "failed"
try:
cf(input, other)
compile_pass = "passed"
except Exception as e:
compile_pass = "failed"
if eager_pass != compile_pass:
print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.Tensor.mm</summary>
```
import torch
from torch import nn
torch._dynamo.config.recompile_limit = 100
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, input, mat2):
return torch.Tensor.mm(input,mat2)
f = MyModel()
cf = torch.compile(f)
input = torch.randn(1, 1)
mat2 = torch.randn(1, 1)
for device in ['cpu', 'cuda']:
for dtype in [torch.uint16, torch.bool, torch.int8, torch.int16, torch.int32, torch.int64, torch.float16, torch.float32, torch.float64]:
input = input.to(dtype).to(device)
mat2 = mat2.to(dtype).to(device)
eager_pass, compile_pass = "passed", "passed"
try:
f(input, mat2)
eager_pass = "passed"
except Exception as e:
print(f"Eager Error: {e}")
eager_pass = "failed"
try:
cf(input, mat2)
compile_pass = "passed"
except Exception as e:
compile_pass = "failed"
if eager_pass != compile_pass:
print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.trace</summary>
```
import torch
from torch import nn
torch._dynamo.config.recompile_limit = 100
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, input):
return torch.trace(input)
f = MyModel()
cf = torch.compile(f)
input = torch.randn(0,1)
for device in ['cpu', 'cuda']:
for dtype in [torch.uint16, torch.bool, torch.bfloat16, torch.half]:
input = input.to(dtype).to(device)
eager_pass, compile_pass = "passed", "passed"
try:
f(input)
eager_pass = "passed"
except Exception as e:
print(f"Eager Error: {e}")
eager_pass = "failed"
try:
cf(input)
compile_pass = "passed"
except Exception as e:
compile_pass = "failed"
if eager_pass != compile_pass:
print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.fmax</summary>
```
import torch
from torch import nn
import numpy as np
torch._dynamo.config.recompile_limit = 100
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, input,other):
return torch.fmax(input,other)
f = MyModel()
cf = torch.compile(f)
input = torch.tensor(np.random.randn(1,1,1), dtype=torch.complex128)
other = torch.tensor(np.random.randn(0), dtype=torch.double)
for device in ['cpu', 'cuda']:
for dtype in [torch.uint16, torch.bool, torch.bfloat16, torch.half, torch.complex64, torch.complex128]:
input = input.to(dtype).to(device)
try:
f(input, other)
eager_pass = "passed"
except Exception as e:
print(f"Eager Error: {e}")
eager_pass = "failed"
try:
cf(input, other)
compile_pass = "passed"
except Exception as e:
compile_pass = "failed"
if eager_pass != compile_pass:
print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
<details>
<summary>torch.xlogy/mse_loss</summary>
```
import torch
from torch import nn
import numpy as np
torch._dynamo.config.recompile_limit = 100
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
def forward(self, input,other):
return torch.xlogy(input,other) # change torch.xlogy to torch.nn.functional.mse_loss can reproduce mse_loss's inconsistent behavior
f = MyModel()
cf = torch.compile(f)
input = torch.tensor(np.random.randn(1))
other = torch.tensor(np.random.randn(1,1))
for device in ['cpu', 'cuda']:
for dtype in [torch.uint16, torch.bool, torch.bfloat16, torch.half, torch.complex64, torch.complex128]:
input = input.to(dtype).to(device)
other = other.to(dtype).to(device)
try:
f(input, other)
eager_pass = "passed"
except Exception as e:
print(f"Eager Error: {e}")
eager_pass = "failed"
try:
cf(input, other)
compile_pass = "passed"
except Exception as e:
compile_pass = "failed"
if eager_pass != compile_pass:
print(f"Inconsistent behavior on: {dtype}, {device}\n Eager: {eager_pass}\n Compile: {compile_pass}")
```
</details>
To the best of my knowledge, I have tracked the other related issues here.
cc @malfet @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng
```[tasklist]
### Tasks
- [ ] https://github.com/pytorch/pytorch/issues/144314
- [ ] https://github.com/pytorch/pytorch/issues/144310
- [ ] https://github.com/pytorch/pytorch/issues/144247
- [ ] https://github.com/pytorch/pytorch/issues/143779
- [ ] https://github.com/pytorch/pytorch/issues/143801
- [ ] https://github.com/pytorch/pytorch/issues/143752
- [ ] https://github.com/pytorch/pytorch/issues/143729
```
| module: error checking,triaged,module: structured kernels,oncall: pt2,module: inductor | low | Critical |
2,774,041,146 | godot | `.blend` "Save to File" breaks when assigned uid. | ### Tested versions
- Reproducible in: Godot v4.4.dev7.official
### System information
Windows 11 - Godot v4.4.dev7.official - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 4070
### Issue description
When using the `.blend` import settings and saving a mesh to a separate path using "Save to File" -> "Path", Godot may set this path to a `uid` path when overwriting an existing mesh. This results in [this](https://github.com/godotengine/godot/blob/d2ada64a03d2abdb97cafe8f10623db8a2ce1d4c/editor/import/3d/resource_importer_scene.cpp#L2877) assert failing.
Presumably this fails because `DirAccess::exists(save_path.get_base_dir())` does not work when `save_path` is a `uid://123...`?
Specifically, this message is printed in Output (from MRP linked below):
```
ERROR: editor/import/3d/resource_importer_scene.cpp:2879 - Condition "!save_path.is_empty() && !DirAccess::exists(save_path.get_base_dir())" is true. Returning: ERR_FILE_BAD_PATH
ERROR: Error importing 'res://Test.blend'.
```
From that point forward, double-clicking to open the import settings for the `.blend` file will result in:
```
ERROR: Failed loading resource: res://Test.blend. Make sure resources have been imported by opening the project in the editor at least once.
ERROR: editor/editor_node.cpp:1291 - Condition "!res.is_valid()" is true. Returning: ERR_CANT_OPEN
```
### Steps to reproduce
* Add a `.blend` file to a project.
* Open the import settings and save a mesh using the "Save to File" feature.
* Open the import settings a second time, set the "Save to File" path to the same one you used before, and confirm the overwrite. This should now ensure the `YOURFILE.blend.import` file is using a `save_to_file/path` that is a uid.
### Minimal reproduction project (MRP)
[CannotReimportBlend.zip](https://github.com/user-attachments/files/18340423/CannotReimportBlend.zip)
| bug,topic:editor,regression | low | Critical |
2,774,069,582 | flutter | 9-Patch Images Renders Incorrectly with ColorFilter on v3.27 Android | ### Steps to reproduce
When using 9-patch images with a ColorFilter, a thin line appears at the slice positions. This occurs on Flutter v3.27 with both Impeller and non-Impeller rendering.

1. Download the 9-patch demo image bg_chat_bubble_right.png.
2. Run the demo.
### Expected results
9-patch images with a ColorFilter should render correctly, without issues.
### Actual results
The 9-patch image with the ColorFilter renders with a thin visible line at the slice positions.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(widget.title),
),
body: ListView.builder(
itemCount: 10,
itemBuilder: (context, index) {
// Use only the "right" chat bubble for simplicity
return const ChatBubble();
},
),
);
}
}
class ChatBubble extends StatelessWidget {
const ChatBubble({super.key});
@override
Widget build(BuildContext context) {
return Container(
margin: const EdgeInsets.all(20),
// Simplified decoration with only the 9-patch image
decoration: BoxDecoration(
image: DecorationImage(
image: AssetImage('assets/images/bg_chat_bubble_right.png'),
scale: 3,
centerSlice: const Rect.fromLTWH(14, 12, 28, 8),
colorFilter:
const ColorFilter.mode(Color(0xFF545CEA), BlendMode.srcATop),
fit: BoxFit.fill,
),
borderRadius: const BorderRadius.all(Radius.circular(6)),
),
child: CupertinoContextMenu.builder(
actions: [
CupertinoContextMenuAction(
onPressed: () {
Navigator.pop(context);
},
trailingIcon: CupertinoIcons.doc_on_clipboard_fill,
child: const Text('Copy'),
),
],
builder: (BuildContext context, Animation<double> animation) {
return const Padding(
padding: EdgeInsets.all(20),
child: Text(
'This is a chat bubble',
style: TextStyle(color: Colors.black, fontSize: 16),
),
);
},
),
);
}
}
```
</details>

### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale en-US)
• Flutter version 3.27.1 on channel stable at /Users/ray/Development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (3 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/ray/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.96.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• A063 (mobile) • P112AC001544 • android-arm64 • Android 14 (API 34)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.206
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| waiting for customer response,in triage | low | Major |
2,774,086,041 | rust | Inconsistent/wrong choice of trait impls under `miri` with transmuted vtables | Here is a test case
```rust
#![allow(coherence_leak_check)]
fn main() {
let x: &dyn Trait<Marker1> = &();
let y: &dyn Trait<Marker2> = unsafe { std::mem::transmute(x) };
y.report();
}
type Marker1 = fn(&()) -> (&(), &'static ());
type Marker2 = fn(&()) -> (&'static (), &());
trait Trait<M: 'static> {
fn report(&self);
}
impl<M: 'static> Trait<M> for () {
fn report(&self) {
who_am_i::<M>();
}
}
fn who_am_i<M: 'static>() {
let marker1 = std::any::TypeId::of::<Marker1>();
let marker2 = std::any::TypeId::of::<Marker2>();
let m = std::any::TypeId::of::<M>();
let m_is = if m == marker1 {
"Marker1"
} else if m == marker2 {
"Marker2"
} else {
unreachable!()
};
println!("M == {m_is}");
}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=f3ea7b28ea1c770d5f84068c2f699e09))
When run normally, this prints
```rust
M == Marker1
```
When run with miri, this prints
```rust
M == Marker2
```
### Expected behavior: Miri either reports UB, or behaves the same way as the actual codegen does.
---
It gets even more interesting if we add a trait bound to the `impl`:
```rust
impl<M: 'static> Trait<M> for ()
where
M: Bound,
{
fn report(&self) {
who_am_i::<M>();
println!("---");
M::who_am_i();
}
}
trait Bound: 'static + Sized {
fn who_am_i() {
who_am_i::<Self>();
}
}
impl Bound for Marker1 {}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=032bb768232c99aa7316d3cf5c742495))
When run normally, this prints
```rust
M == Marker1
---
M == Marker1
```
When run with miri, this prints
```rust
M == Marker2
---
M == Marker1
```
*Oh wonderful, the type apparently just changes in the middle of it!*
---
If we add a second implementation, Miri reconsiders its choices
```rust
impl Bound for Marker1 {}
impl Bound for Marker2 {}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=8f46f10cff12456009d7287c428c54f7))
*(Behavior when run normally: unchanged.)*
When run with miri, this prints
```rust
M == Marker2
---
M == Marker2
```
*(still wrong though / [actual execution without `miri` is reporting `Marker1`, `Marker1`])*
---
This behavior must come from some sort of rough heuristic that wasn't supposed to ever matter… because… if the second `impl` *exists* but comes with an impossible trait bound, then miri still seems to try to make use of this `Marker2`-impl nonetheless:
```rust
impl Bound for Marker1 {}
impl Bound for Marker2 where Self: Unimplemented {}
trait Unimplemented {}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=75708feacbe51b65b17ffa08e0cc4415))
When run normally, this still prints
```rust
M == Marker1
---
M == Marker1
```
When run with miri, you get ICE:
```plain
error: internal compiler error: compiler/rustc_middle/src/ty/instance.rs:585:21: failed to resolve instance for <() as Trait<for<'a> fn(&'a ()) -> (&(), &'a ())>>::report
--> src/main.rs:13:5
|
13 | fn report(&self);
| ^^^^^^^^^^^^^^^^^
thread 'rustc' panicked at compiler/rustc_middle/src/ty/instance.rs:585:21:
Box<dyn Any>
stack backtrace:
0: 0x7d96ae6d260a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hcfde92856b8e7b4d
1: 0x7d96aee135e6 - core::fmt::write::h8aa28d7c2e766574
2: 0x7d96afcc0491 - std::io::Write::write_fmt::h0473f60143e76874
3: 0x7d96ae6d2462 - std::sys::backtrace::BacktraceLock::print::he0a43b48023f5fb3
4: 0x7d96ae6d4a07 - std::panicking::default_hook::{{closure}}::h16c37508eb1e165d
5: 0x7d96ae6d47f0 - std::panicking::default_hook::h188c5d4452b2e2a8
6: 0x7d96ad8518a8 - std[fac44eaeb111bcc8]::panicking::update_hook::<alloc[66489a0f9c76ca63]::boxed::Box<rustc_driver_impl[1ee6f045412d773c]::install_ice_hook::{closure#1}>>::{closure#0}
7: 0x7d96ae6d5253 - std::panicking::rust_panic_with_hook::h1cf1663d92a293a0
8: 0x7d96ad889dd1 - std[fac44eaeb111bcc8]::panicking::begin_panic::<rustc_errors[8e1a8b7a3353af80]::ExplicitBug>::{closure#0}
9: 0x7d96ad87efb6 - std[fac44eaeb111bcc8]::sys::backtrace::__rust_end_short_backtrace::<std[fac44eaeb111bcc8]::panicking::begin_panic<rustc_errors[8e1a8b7a3353af80]::ExplicitBug>::{closure#0}, !>
10: 0x7d96ad87ef9d - std[fac44eaeb111bcc8]::panicking::begin_panic::<rustc_errors[8e1a8b7a3353af80]::ExplicitBug>
11: 0x7d96ad893d31 - <rustc_errors[8e1a8b7a3353af80]::diagnostic::BugAbort as rustc_errors[8e1a8b7a3353af80]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7d96adde29ac - <rustc_errors[8e1a8b7a3353af80]::DiagCtxtHandle>::span_bug::<rustc_span[7d28ac27f72ec6b1]::span_encoding::Span, alloc[66489a0f9c76ca63]::string::String>
13: 0x7d96ade67927 - rustc_middle[5606c862b127c2dc]::util::bug::opt_span_bug_fmt::<rustc_span[7d28ac27f72ec6b1]::span_encoding::Span>::{closure#0}
14: 0x7d96ade4c9ca - rustc_middle[5606c862b127c2dc]::ty::context::tls::with_opt::<rustc_middle[5606c862b127c2dc]::util::bug::opt_span_bug_fmt<rustc_span[7d28ac27f72ec6b1]::span_encoding::Span>::{closure#0}, !>::{closure#0}
15: 0x7d96ade4c85b - rustc_middle[5606c862b127c2dc]::ty::context::tls::with_context_opt::<rustc_middle[5606c862b127c2dc]::ty::context::tls::with_opt<rustc_middle[5606c862b127c2dc]::util::bug::opt_span_bug_fmt<rustc_span[7d28ac27f72ec6b1]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
16: 0x7d96ac8b3e97 - rustc_middle[5606c862b127c2dc]::util::bug::span_bug_fmt::<rustc_span[7d28ac27f72ec6b1]::span_encoding::Span>
17: 0x7d96af4ae73c - <rustc_middle[5606c862b127c2dc]::ty::instance::Instance>::expect_resolve
18: 0x7d96af9d0695 - <rustc_middle[5606c862b127c2dc]::ty::instance::Instance>::expect_resolve_for_vtable
19: 0x7d96af6b28f7 - rustc_trait_selection[d64518d2976bfae4]::traits::vtable::vtable_entries::{closure#0}
20: 0x7d96af41d9f0 - rustc_trait_selection[d64518d2976bfae4]::traits::vtable::vtable_entries
21: 0x7d96af41d72a - rustc_query_impl[25a774c2c57585ee]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[25a774c2c57585ee]::query_impl::vtable_entries::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5606c862b127c2dc]::query::erase::Erased<[u8; 16usize]>>
22: 0x7d96af41d6f9 - <rustc_query_impl[25a774c2c57585ee]::query_impl::vtable_entries::dynamic_query::{closure#2} as core[249175d58a5edd5c]::ops::function::FnOnce<(rustc_middle[5606c862b127c2dc]::ty::context::TyCtxt, rustc_type_ir[e0f584499d9d9d64]::binder::Binder<rustc_middle[5606c862b127c2dc]::ty::context::TyCtxt, rustc_type_ir[e0f584499d9d9d64]::predicate::TraitRef<rustc_middle[5606c862b127c2dc]::ty::context::TyCtxt>>)>>::call_once
23: 0x7d96afb91f59 - rustc_query_system[880048adabd2048e]::query::plumbing::try_execute_query::<rustc_query_impl[25a774c2c57585ee]::DynamicConfig<rustc_query_system[880048adabd2048e]::query::caches::DefaultCache<rustc_type_ir[e0f584499d9d9d64]::binder::Binder<rustc_middle[5606c862b127c2dc]::ty::context::TyCtxt, rustc_type_ir[e0f584499d9d9d64]::predicate::TraitRef<rustc_middle[5606c862b127c2dc]::ty::context::TyCtxt>>, rustc_middle[5606c862b127c2dc]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[25a774c2c57585ee]::plumbing::QueryCtxt, false>
24: 0x7d96afb91cb8 - rustc_query_impl[25a774c2c57585ee]::query_impl::vtable_entries::get_query_non_incr::__rust_end_short_backtrace
25: 0x574a5c488044 - <rustc_const_eval[e2da1d737b1da01f]::interpret::eval_context::InterpCx<miri[7fefe668a30715d]::machine::MiriMachine>>::vtable_entries
26: 0x574a5c4ab00b - <rustc_const_eval[e2da1d737b1da01f]::interpret::eval_context::InterpCx<miri[7fefe668a30715d]::machine::MiriMachine>>::init_fn_call
27: 0x574a5c5245fa - miri[7fefe668a30715d]::eval::eval_entry::{closure#0}
28: 0x574a5c52072b - miri[7fefe668a30715d]::eval::eval_entry
29: 0x574a5c3d4216 - <miri[38dcf146ac8aeb09]::MiriCompilerCalls as rustc_driver_impl[1ee6f045412d773c]::Callbacks>::after_analysis
30: 0x7d96afdf13ab - rustc_interface[a5b8f6a3ca67129a]::passes::create_and_enter_global_ctxt::<core[249175d58a5edd5c]::option::Option<rustc_interface[a5b8f6a3ca67129a]::queries::Linker>, rustc_driver_impl[1ee6f045412d773c]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
31: 0x7d96afd116d6 - rustc_interface[a5b8f6a3ca67129a]::interface::run_compiler::<(), rustc_driver_impl[1ee6f045412d773c]::run_compiler::{closure#0}>::{closure#1}
32: 0x7d96afc14c07 - std[fac44eaeb111bcc8]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[a5b8f6a3ca67129a]::util::run_in_thread_with_globals<rustc_interface[a5b8f6a3ca67129a]::util::run_in_thread_pool_with_globals<rustc_interface[a5b8f6a3ca67129a]::interface::run_compiler<(), rustc_driver_impl[1ee6f045412d773c]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
33: 0x7d96afc150a4 - <<std[fac44eaeb111bcc8]::thread::Builder>::spawn_unchecked_<rustc_interface[a5b8f6a3ca67129a]::util::run_in_thread_with_globals<rustc_interface[a5b8f6a3ca67129a]::util::run_in_thread_pool_with_globals<rustc_interface[a5b8f6a3ca67129a]::interface::run_compiler<(), rustc_driver_impl[1ee6f045412d773c]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[249175d58a5edd5c]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
34: 0x7d96afc16681 - std::sys::pal::unix::thread::Thread::new::thread_start::h1f4f8d3a2ffc672f
35: 0x7d96aa08aa94 - <unknown>
36: 0x7d96aa117a34 - clone
37: 0x0 - <unknown>
```
To run into this ICE, calling a method of `Bound` isn't actually necessary. Once the `y.report()` call it reached, it already goes ICE:
```rust
#![allow(coherence_leak_check)]
fn main() {
let x: &dyn Trait<Marker1> = &();
let y: &dyn Trait<Marker2> = unsafe { std::mem::transmute(x) };
y.report();
}
type Marker1 = fn(&()) -> (&(), &'static ());
type Marker2 = fn(&()) -> (&'static (), &());
trait Trait<M> {
fn report(&self) {}
}
impl<M: Bound> Trait<M> for () {}
trait Bound {}
impl Bound for Marker1 {}
impl Bound for Marker2 where Self: Unimplemented {}
trait Unimplemented {}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=f1f89308af6ac9062053658d8945c8e8))
@rustbot label A-miri, A-trait-objects, I-ICE, T-compiler | I-ICE,T-compiler,C-bug,A-miri,A-trait-objects | low | Critical |
2,774,086,253 | ant-design | TreeSelect maxCount UI effect does not take effect | ### Reproduction link
[https://ant.design/components/tree-select-cn#tree-select-demo-maxcount](https://ant.design/components/tree-select-cn#tree-select-demo-maxcount)
### Steps to reproduce
打开官网demo,勾选parent1,再尝试勾选parent2
### What is expected?
勾选完parent1后,勾选parent2不生效,且parent2应该被禁用
### What is actually happening?
parent2未被禁用
| Environment | Info |
| --- | --- |
| antd | 5.23.0 |
| React | 19 |
| System | MacOS |
| Browser | Chrome |
---
The behavior is correct when the following packages are pinned to these versions:
"rc-tree": "~5.11.0",
"rc-tree-select": "~5.25.0",

<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,774,087,022 | pytorch | [XPU] quantile related tests failed with Assertion failed: helper.isSupportedLayout() && "Unexpected srcLayout in ReduceOpConversion" | ### 🐛 Describe the bug
When running the UT on Windows/Linux:
```Python
pytest -k test_comprehensive_nanquantile_xpu_float32 -v test_torchinductor_opinfo.py
pytest -k test_comprehensive_quantile_xpu_float32 -v test_torchinductor_opinfo.py
```
The test failed with the following:
```Python
Assertion failed: helper.isSupportedLayout() && "Unexpected srcLayout in ReduceOpConversion"
```
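A minimal direct reproduction sketch (inferred from the failing opinfo test names; the exact shapes and arguments used by the test harness may differ):

```python
import torch

def fn(x, q):
    return torch.nanquantile(x, q, dim=-1)

x = torch.randn(8, 16, device="xpu", dtype=torch.float32)
q = torch.tensor([0.25, 0.5, 0.75], device="xpu")

compiled = torch.compile(fn)
out = compiled(x, q)  # presumably hits the ReduceOpConversion assertion in the XPU Triton backend
```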
### Versions
PyTorch: d0f5df83a50d9bb630764c92ac63fcb2640b1f94
Triton (for intel xpu): c23ff25775780cc4bb1ca530fd3ae33b0cf3b56e
Platform: Ubuntu 24.10 / Windows 11
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,774,096,111 | node | call exe by node reutrn 3221225477 | ### Version
18.20.1
### Platform
```text
A C++ program runs correctly when launched from cmd, but when we call it via Node's spawn it exits with code 3221225477. Running the program through node child_process.spawn works in some cases, but here it breaks and returns 3221225477.
I used WinDbg, which shows an access violation. Is there any difference between how Node and cmd launch the process, e.g. in memory allocation? Aren't they both independent address spaces? Why can cmd execute it successfully?
```
### Subsystem
win10
### What steps will reproduce the bug?
```js
const { spawn } = require('child_process');

const cppProcess = spawn('./myProgram.exe');

cppProcess.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

cppProcess.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

cppProcess.on('close', (code) => {
  if (code !== 0) {
    console.error(`The C++ program exited with an error, exit code ${code}`);
  } else {
    console.log('The C++ program exited normally');
  }
});

cppProcess.on('error', (err) => {
  console.error(`Error while starting the C++ program: ${err.message}`);
});
```
### How often does it reproduce? Is there a required condition?
It always reproduces on the affected machine, but different computers behave differently; on some of them the program may run correctly.
### What is the expected behavior? Why is that the expected behavior?
The program should run correctly, just as it does when launched from cmd.
### What do you see instead?
The process returns exit code 3221225477 (0xC0000005, access violation). WinDbg shows the faulting instruction:
mov qword ptr [rdi],rdx ds:00000000`00000000=?????????????????
### Additional information
_No response_ | windows | low | Critical |
2,774,124,098 | kubernetes | Pod GC should sort by finish Timestamp | ### What would you like to be added?
In the Controller Manager, terminated-pod GC should sort pods by their finish time rather than by their creation time.
### Why is this needed?
When the number of terminated pods reaches the GC threshold (`--terminated-pod-gc-threshold` in the Controller Manager, default 12500), the Controller Manager triggers pod GC and deletes terminated pods until the pod count falls below the threshold.
But when KCM starts the deletion, pods are sorted by creation time, which means the pods created first are deleted first. In some cases, for example a cluster that runs both AI training jobs and Spark jobs, the Spark jobs run and finish very quickly, while some AI pods keep running for a long time (see the sketch below).
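A minimal sketch of the proposed change (plain Python pseudocode with hypothetical field names; the actual kube-controller-manager implementation is in Go and works on Pod objects):

```python
# Hypothetical terminated-pod records; only the two timestamps matter here.
terminated_pods = [
    {"name": "ai-train-0", "created": 1,  "finished": 100},  # long-running pod, just finished
    {"name": "spark-123",  "created": 50, "finished": 60},
    {"name": "spark-456",  "created": 70, "finished": 80},
]

# Current behavior: oldest *created* pods are deleted first, so the
# long-running AI pod is collected immediately after it finishes.
by_creation = sorted(terminated_pods, key=lambda p: p["created"])

# Proposed behavior: oldest *finished* pods are deleted first, giving
# other controllers time to observe recently finished pods.
by_finish = sorted(terminated_pods, key=lambda p: p["finished"])
```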
If the cluster's total number of terminated pods reaches the GC threshold just as a long-running pod finishes successfully, that pod may be garbage collected immediately, because its creation time is the oldest, before other workflow controllers have had a chance to process it. | kind/feature,needs-sig,needs-triage | low | Major |
2,774,127,858 | flutter | When using FlutterEngineGroup on Android, the state of the page stack cannot be restored when a process is terminated due to a permission change. | ### Steps to reproduce
One solution for restoring the page stack is to transition between pages using `Navigator.restorablePush`.
I confirmed this issue while developing an application that uses `MultipleFlutter` with `FlutterEngineGroup`.
The above solution did not seem to work on Flutter screens launched with a `FlutterEngine` created by `FlutterEngineGroup`.
In more detail, the issue occurs when the `Intent` is created using `CachedEngineIntentBuilder` and the `Activity` is then launched.
When launching without a cached `FlutterEngine`, the page stack state is restored by `Navigator.restorablePush` as expected.
Below are the steps and sample code:
1. Use the `FlutterEngine` created using `FlutterEngineGroup` to create an `Intent` using `CachedEngineIntentBuilder` and launch FlutterActivity.
2. Within the launched Flutter screen, `Navigator.restorablePush` is used to perform screen transitions.
3. Change a permission from the app info screen (change it from Allow to Deny), which causes the system to terminate the app process.
4. Return to the application.
### Expected results
Expects page stack restoration via `Navigator.restorablePush`.
### Actual results
The page stack is not restored and it appears as if the stack is discarded.
### Code sample
<details close><summary>Code sample Flutter</summary>
```dart
void main() {
runApp(
const MyApp(
entryPoint: EntryPoint.main,
),
);
}
@pragma('vm:entry-point')
void secondMain() {
runApp(
const MyApp(
entryPoint: EntryPoint.second,
),
);
}
enum EntryPoint {
main,
second,
}
class MyApp extends StatelessWidget {
const MyApp({super.key, required this.entryPoint});
final EntryPoint entryPoint;
@override
Widget build(BuildContext context) {
return MaterialApp(
restorationScopeId: entryPoint.name,
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: Provider.value(
value: entryPoint,
child: MyHomePage(
entryPoint: entryPoint,
),
),
);
}
}
class MyHomePage extends StatelessWidget {
const MyHomePage({super.key, required this.entryPoint});
final EntryPoint entryPoint;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('${entryPoint}Main'),
leading: entryPoint == EntryPoint.second
? BackButton(
onPressed: () => SystemNavigator.pop(animated: true),
)
: null,
),
body: Center(
child: ElevatedButton(
child: const Text('NextPage'),
onPressed: () {
Navigator.restorablePush(
context,
_buildRoute,
);
// Navigator.of(context).push(MaterialPageRoute<void>(
// builder: (BuildContext context) => const NextPage(),
// ));
},
),
),
);
}
static Route _buildRoute(BuildContext context, Object? params) {
return MaterialPageRoute<void>(
builder: (BuildContext context) => const NextPage(),
);
}
}
class NextPage extends StatelessWidget {
const NextPage({super.key});
static const MethodChannel methodChannel = MethodChannel('channel');
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('NextPage'),
),
body: Center(
child: ElevatedButton(
onPressed: () {
methodChannel.invokeMethod('startNextActivity');
},
child: const Text('NextActivity'),
),
),
);
}
}
```
</details>
<details close><summary>Code sample Android</summary>
```Kotlin
/**
* Application Class.
*/
class SampleApplication : Application() {
enum class EntryPoint(val mainMethod: String) {
MAIN("main"), SECOND("secondMain");
}
lateinit var engines: FlutterEngineGroup
override fun onCreate() {
super.onCreate()
engines = FlutterEngineGroup(this)
}
}
/**
* MainActivity.
*
* This Activity is the main Activity launched by the intent-filter.
*/
class MainActivity : FlutterActivity() {
override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
super.configureFlutterEngine(flutterEngine)
MethodChannel(flutterEngine.dartExecutor.binaryMessenger, "channel").setMethodCallHandler { call, result ->
if (call.method == "startNextActivity") {
startActivity(Intent(this, NextActivity::class.java))
result.success(true)
} else {
result.notImplemented()
}
}
}
}
class NextFlutterActivity : FlutterActivity() {
companion object {
private const val ENTRY_POINT: String = "EntryPoint"
fun intentWithCachedEngine(application: SampleApplication, entryPoint: SampleApplication.EntryPoint): Intent {
if (!FlutterEngineCache.getInstance().contains(entryPoint.name)) {
val engine: FlutterEngine = application.engines.createAndRunEngine(
application,
DartExecutor.DartEntrypoint(
FlutterInjector.instance().flutterLoader().findAppBundlePath(),
entryPoint.mainMethod
)
)
FlutterEngineCache.getInstance().put(entryPoint.name, engine)
}
return CachedEngineIntentBuilder(NextFlutterActivity::class.java, entryPoint.name).build(application).putExtra(ENTRY_POINT, entryPoint)
}
}
override fun onCreate(savedInstanceState: Bundle?) {
// The engine cache is destroyed on app startup after the activity is destroyed.
// If you try to get the cached engine in super.onCreate, the engine won't work and the app will crash.
// Due to the structure of FlutterEngineCache, it is stored statically, so it is inevitable that static variables will be destroyed when the process is killed.
// Therefore, as a workaround, check the cache before super.onCreate and generate it to keep it around just in case.
// Related issue: https://github.com/flutter/flutter/issues/106192
// Workaround on Stack Overflow: https://stackoverflow.com/a/64010515
val entryPoint = intent.getSerializableExtra(ENTRY_POINT) as? SampleApplication.EntryPoint ?: SampleApplication.EntryPoint.MAIN
if (!FlutterEngineCache.getInstance().contains(entryPoint.name)) {
val engine: FlutterEngine = (application as SampleApplication).engines.createAndRunEngine(
application,
DartExecutor.DartEntrypoint(
FlutterInjector.instance().flutterLoader().findAppBundlePath(),
entryPoint.mainMethod
)
)
FlutterEngineCache.getInstance().put(entryPoint.name, engine)
}
super.onCreate(savedInstanceState)
}
}
```
</details>
<details close><summary>Code sample Android Manifest</summary>
```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<application
android:name=".SampleApplication"
android:icon="@mipmap/ic_launcher"
android:label="multiple_flutter_android_activity">
<activity
android:name=".MainActivity"
android:exported="true"
android:hardwareAccelerated="true"
android:taskAffinity=""
android:theme="@style/LaunchTheme"
android:windowSoftInputMode="adjustResize">
<meta-data
android:name="io.flutter.embedding.android.NormalTheme"
android:resource="@style/NormalTheme" />
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<activity
android:name=".NextFlutterActivity"
android:theme="@android:style/Theme.Material.NoActionBar" />
<meta-data
android:name="flutterEmbedding"
android:value="2" />
</application>
<queries>
<intent>
<action android:name="android.intent.action.PROCESS_TEXT" />
<data android:mimeType="text/plain" />
</intent>
</queries>
</manifest>
```
</details>
### Screenshots or Video
<details close>
<summary>When using FlutterEngineGroup</summary>
https://github.com/user-attachments/assets/7ec6af78-ea94-4fd4-b85f-e0f00028f8b4
</details>
<details close>
<summary>When not using FlutterEngineGroup</summary>
https://github.com/user-attachments/assets/cc07f444-6eba-4045-a4df-17a2836b9099
</details>
### Logs
<details close><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details close><summary>Doctor output</summary>
```console
% fvm flutter doctor -v
[✓] Flutter (Channel stable, 3.24.5, on macOS 14.5 23F79 darwin-arm64, locale ja-JP)
• Flutter version 3.24.5 on channel stable at /Users/uu137317/fvm/versions/3.24.5
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (8 weeks ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/uu137317/Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.77.1)
• VS Code at /Users/uu137317/Downloads/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (4 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 • android-arm64 • Android 14 (API 34) (emulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.5 23F79 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5 23F79 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.206
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| platform-android,a: quality,has reproducible steps,P2,team-android,triaged-android,found in release: 3.27,found in release: 3.28 | low | Critical |
2,774,135,901 | ui | [bug]: Dropdown component in a Sidebar appears behind the sidebar on the first click | ### Describe the bug
When a `DropdownMenu` component is part of a `Sidebar`, on the very first click after a page refresh the popup ends up showing the selector items behind the sidebar. On any subsequent click it appears above the sidebar as expected. I've attempted to play with z-index with no success.
This also seems to happen with the `Select` component.
<details><summary>Code</summary>
<p>
```
'use client'
import * as React from 'react'
import { Calendar, ChevronDown } from 'lucide-react'
import { Button } from "@/components/ui/button"
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu"
import {
Sidebar,
SidebarContent,
SidebarHeader,
SidebarMenu,
SidebarMenuItem,
SidebarMenuButton,
SidebarProvider,
SidebarInset,
} from "@/components/ui/sidebar"
const months = [
'A Really really really really long selector item', 'Another really long selector item', 'and a third one just for good measure'
]
export default function Page() {
const [selectedMonth, setSelectedMonth] = React.useState(months[0])
return (
<SidebarProvider>
<div className="flex h-screen">
<Sidebar>
<SidebarHeader>
<SidebarMenu>
<SidebarMenuItem>
<DropdownMenu>
<DropdownMenuTrigger asChild>
<SidebarMenuButton>
{selectedMonth}
<ChevronDown className="ml-auto h-4 w-4" />
</SidebarMenuButton>
</DropdownMenuTrigger>
<DropdownMenuContent>
{months.map((month) => (
<DropdownMenuItem
key={month}
onSelect={() => setSelectedMonth(month)}
>
{month}
</DropdownMenuItem>
))}
</DropdownMenuContent>
</DropdownMenu>
</SidebarMenuItem>
</SidebarMenu>
</SidebarHeader>
<SidebarContent>
<SidebarMenu>
<SidebarMenuItem>
<SidebarMenuButton>
<Calendar className="mr-2 h-4 w-4" />
<span>Calendar</span>
</SidebarMenuButton>
</SidebarMenuItem>
</SidebarMenu>
</SidebarContent>
</Sidebar>
<SidebarInset>
<main className="flex-1 p-6">
<h1 className="text-3xl font-bold">{selectedMonth}</h1>
</main>
</SidebarInset>
</div>
</SidebarProvider>
)
}
```
</p>
</details>
### Affected component/components
Dropdown, Select, Sidebar
### How to reproduce
1. Go to the v0 share
2. Click the selector in the upper left
3. Observe that the selector box is behind the sidebar
4. Click the selector twice
5. Observe that the selector box is now above the sidebar
6. Refresh the Page
7. Repeat step 2
### Codesandbox/StackBlitz link
https://v0.dev/chat/7UBVEfJvCsa?b=b_GrcHeC7iBWf
### Logs
_No response_
### System Info
```bash
MacOS, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,774,153,251 | flutter | Firebase Flutter tutorial missing "import" statement instruction | [This page](https://firebase.google.com/codelabs/firebase-get-to-know-flutter#8) in the Firebase Flutter Tutorial is missing an instruction to include the statement "import 'yes_no_selection.dart';" in the file lib/home_page.dart.
| team-codelabs,p: firebase,P2,triaged-codelabs | low | Minor |
2,774,160,687 | yt-dlp | [RFE] Supported Site Request - Means TV aka means.tv | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Worldwide I think? Definitely in the USA
### Example URLs
- Collection: https://means.tv/programs/mmn
- Single video: https://means.tv/programs/mmn?cid=4003569&permalink=mmn-daily_122024
- Single video: https://means.tv/programs/mmn-daily_122024
### Provide a description that is worded well enough to be understood
This site is for anti-capitalist content. The example URLs all contain free downloads (the collection contains a mix of free and subscription content). I checked the supported sites list and did not find it. I tried running with both the collection and the first single video, and the generic extractor was unable to process either request.
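In case it helps whoever picks this up, a bare-bones extractor skeleton is sketched below. It is purely illustrative: the class name, URL regex, and metadata handling are untested guesses, and the real stream/format and subscription handling still needs to be written.
```python
# Hypothetical skeleton only -- untested, and stream/format extraction is not implemented.
from yt_dlp.extractor.common import InfoExtractor


class MeansTvIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?means\.tv/programs/(?P<id>[^/?#]+)'

    def _real_extract(self, url):
        video_id = self._match_id(url)
        webpage = self._download_webpage(url, video_id)
        # Real metadata and format discovery for means.tv would go here.
        return {
            'id': video_id,
            'title': self._og_search_title(webpage, default=video_id),
        }
```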
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--no-call-home', '--add-metadata', '--cookies', 'cookie-jar-file', '--embed-metadata', '--embed-thumbnail', '-v', 'https://means.tv/programs/mmn?cid=4003569&permalink=mmn-daily_122024']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [65cf46cdd] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: 0b6b7742c
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-128-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg N-113348-g0a5813fc68-20240119 (setts), ffprobe N-113348-g0a5813fc68-20240119
[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2024.02.02, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.1.0, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[generic] Extracting URL: https://means.tv/programs/mmn?cid=4003569&permalink=mmn-daily_122024
[generic] mmn?cid=4003569&permalink=mmn-daily_122024: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] mmn?cid=4003569&permalink=mmn-daily_122024: Extracting information
[debug] Looking for embeds
[debug] Identified a JSON LD
[generic] Extracting URL: https://means.tv/programs/mmn-daily_122024#__youtubedl_smuggle=%7B%22force_videoid%22%3A+%22mmn%3Fcid%3D4003569%26permalink%3Dmmn-daily_122024%22%2C+%22to_generic%22%3A+true%2C+%22referer%22%3A+%22https%3A%2F%2Fmeans.tv%2Fprograms%2Fmmn%3Fcid%3D4003569%26permalink%3Dmmn-daily_122024%22%7D
[generic] mmn?cid=4003569&permalink=mmn-daily_122024: Downloading webpage
[generic] mmn?cid=4003569&permalink=mmn-daily_122024: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://means.tv/programs/mmn-daily_122024
Traceback (most recent call last):
File "/home/h/dev/yt-dlp/yt_dlp/YoutubeDL.py", line 1634, in wrapper
return func(self, *args, **kwargs)
File "/home/h/dev/yt-dlp/yt_dlp/YoutubeDL.py", line 1769, in __extract_info
ie_result = ie.extract(url)
File "/home/h/dev/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/h/dev/yt-dlp/yt_dlp/extractor/generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://means.tv/programs/mmn-daily_122024
```
| site-request,triage | low | Critical |
2,774,186,090 | flutter | [Impeller] re-enable runtime mipmap generation on Adreno GPU. | Based on a large number of issue reports and my own testing, generating mipmaps on Adreno GPUs can occasionally corrupt the image - though this depends on the exact dimensions and number of mips. This can happen on the 6XX, 7XX, and 8XX series.
* https://github.com/flutter/flutter/issues/160441
* https://github.com/flutter/flutter/issues/159876
* https://github.com/flutter/flutter/issues/160587
The strategy we use is _essentially_ the same as https://docs.vulkan.org/samples/latest/samples/api/texture_mipmap_generation/README.html (and even then, I replaced our code with a copy-paste of that one and still reproduced the same problems). I am also fairly certain that it is not a synchronization problem, as I've tested with an "everything" barrier between blits and can still reproduce the problems.
My current best guess is a bug in the driver. We have a few options to work around it:
1. Generate mips on the CPU. This will be slow, and additionally require readback in places that we previously didn't need it. Last choice.
2. Test and determine if there are magic numbers that work consistently. Surely the driver can't be _that_ broken, right? Maybe with a square power-of-two texture we can guarantee no corruption. Then we could blit regions onto a correctly sized texture.
3. Use a render pass chain to do the downsizing. Less bad than 1.
4. Use a compute pass to do the downsizing. If this is how well blit passes work on adreno I don't even want to touch compute lmao.
I'm going to try a bit of 2. and then probably do 3. But not until higher priority issues are solved.
Potentially related issues:
* https://issuetracker.unity3d.com/issues/vulkan-qualcomm-855-mipmaps-of-the-render-texture-are-wrote-to-incorrect-areas-of-a-texture
* https://issuetracker.unity3d.com/issues/vulkan-adreno-630-gpu-renders-black-artifacts-on-the-terrain-when-draw-instanced-is-enabled | P3,e: impeller,team-engine,triaged-engine | low | Critical |
2,774,249,522 | langchain | An error occurred while attempting to delete the message . | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.messages import RemoveMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from langgraph.prebuilt import tools_condition
model = ChatOpenAI(model="ep-20241223171230-8tv46",
api_key='x',
base_url='https://ark.cn-beijing.volces.com/api/v3',
streaming=True, temperature=0,
)
retrieve_model = ChatOpenAI(model="ep-20241223171230-8tv46",
api_key='x',
base_url='https://ark.cn-beijing.volces.com/api/v3',
streaming=True, temperature=0.1,
)
file_path1 = '../data/080901.pdf'
file_path2 = '../data/非学历证书可免考课程列表.pdf'
@tool(response_format="content_and_artifact")
def computer_science_major_plan(query: str):
"""查询知识库,该知识库包含计算机科学与技术(专升本)专业考试计划,包含开设课程、课程学分级及毕业条件。"""
return '', ''
@tool(response_format="content_and_artifact")
def course_exemption_info(query: str):
"""查询知识库,该知识库包含福建省高等教育自学考试课程免考实施细则。"""
return '', ''
def citation_rag_agent(state):
"""查询知识库,该知识库包含福建省高等教育自学考试课程免考实施细则。"""
system_msg = (
"You are an expert Q&A system!"
"Please provide an answer based solely on the provided sources. "
"When referencing information from a source, "
"cite the appropriate source(s) using their corresponding numbers. "
"Every answer should include at least one source citation. "
"You should use format '[source number]' to cite the source!"
"Only cite a source when you are explicitly referencing it. "
"If none of the sources are helpful, you should indicate that. "
"For example:\n"
"Source 1:\n"
"The sky is red in the evening and blue in the morning.\n"
"Source 2:\n"
"Water is wet when the sky is red.\n"
"User query: When is water wet?\n"
"Your answer: Water will be wet when the sky is red [2], "
"which occurs in the evening [1].\n"
"Now it's your turn. Below are several numbered sources of information:"
"\n------\n"
"{context}"
"\n------\n"
)
prompt_template = ChatPromptTemplate.from_messages(
[
(
"system",
system_msg,
),
MessagesPlaceholder(variable_name="messages"),
]
)
messages = state["messages"]
conversation_messages = [
message
for message in state["messages"]
if message.type in ("human", "system")
or (message.type == "ai" and not message.tool_calls)
]
# question = messages[-2].tool_calls[0]['args']['query']
docs = messages[-1].content
# Chain
rag_chain = prompt_template | retrieve_model
# Run
response = rag_chain.invoke({"context": docs, "messages":conversation_messages})
return {"messages": [response]}
tools = [computer_science_major_plan,course_exemption_info]
from typing import Annotated, Sequence
from typing_extensions import TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages
class AgentState(TypedDict):
# The add_messages function defines how an update should be processed
# Default is to replace. add_messages says "append"
messages: Annotated[Sequence[BaseMessage], add_messages]
question: str
### Nodes
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
SYSTEM_PROMPT =(
"you are helpful assistant! your name is '小安'.\n"
"You should answer user questions base on tool response, rather than relying on your previous knowledge.\n"
"When the user's statement is unclear or you feel confused, you can ask the user for confirmation.\n"
"# User Information\n"
"1. User's personal information can help you to have a personalized response.\n"
"Below is user information:\n"
"```\n"
"{user_info}\n"
"```\n"
)
USER_INFO = {
"name": 'Arvin',
"age": '18',
"user hobby": ['basketball', 'listening music'],
}
prompt_template = ChatPromptTemplate.from_messages(
[
(
"system",
SYSTEM_PROMPT,
),
MessagesPlaceholder(variable_name="messages"),
]
)
prompt_template_with_user = prompt_template.partial(user_info=str(USER_INFO))
def agent(state):
"""
Invokes the agent model to generate a response based on the current state. Given
the question, it will decide to retrieve using the retriever tool, or simply end.
Args:
state (messages): The current state
Returns:
dict: The updated state with the agent response appended to messages
"""
print("---CALL AGENT---")
model_with_tool = model.bind_tools(tools)
prompt = prompt_template_with_user.invoke(state)
response = model_with_tool.invoke(prompt)
# We return a list, because this will get added to the existing list
return {"messages": [response]}
from langgraph.graph import END, StateGraph, START
from langgraph.prebuilt import ToolNode
# Define a new graph
workflow = StateGraph(AgentState)
# Define the nodes we will cycle between
workflow.add_node("agent", agent) # agent
workflow.add_node("citation", citation_rag_agent)
tool_node = ToolNode(tools)
workflow.add_node('tools', tool_node)
workflow.add_edge(START, "agent")
# Decide whether to retrieve
workflow.add_conditional_edges(
"agent",
# Assess agent decision
tools_condition,
{
# Translate the condition outputs to nodes in our graph
"tools": "tools",
END: END,
},
)
workflow.add_edge('tools', "citation")
workflow.add_edge("citation", END)
# Compile
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
graph = workflow.compile(checkpointer=memory)
CONFIG = {"configurable": {"thread_id": "abc123"}}
inputs = {
"messages": [
("user", 'hi'),
]
}
response = graph.invoke(inputs, config=CONFIG)
print(response["messages"][-1].content)
messages = graph.get_state(CONFIG).values["messages"]
graph.update_state(CONFIG,{"messages": [RemoveMessage(id=m.id) for m in messages]})
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/chenjq/miniconda3/envs/RAG/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/chenjq/miniconda3/envs/RAG/lib/python3.10/site-packages/langgraph/pregel/manager.py", line 37, in ChannelsManager
yield (
File "/home/chenjq/miniconda3/envs/RAG/lib/python3.10/site-packages/langgraph/pregel/__init__.py", line 1079, in update_state
run.invoke(
File "/home/chenjq/miniconda3/envs/RAG/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3024, in invoke
input = context.run(step.invoke, input, config)
File "/home/chenjq/miniconda3/envs/RAG/lib/python3.10/site-packages/langgraph/utils/runnable.py", line 184, in invoke
ret = context.run(self.func, input, **kwargs)
File "/home/chenjq/miniconda3/envs/RAG/lib/python3.10/site-packages/langgraph/graph/graph.py", line 95, in _route
result = self.path.invoke(value, config)
File "/home/chenjq/miniconda3/envs/RAG/lib/python3.10/site-packages/langgraph/utils/runnable.py", line 176, in invoke
ret = context.run(self.func, input, **kwargs)
File "/home/chenjq/miniconda3/envs/RAG/lib/python3.10/site-packages/langgraph/prebuilt/tool_node.py", line 636, in tools_condition
raise ValueError(f"No messages found in input state to tool_edge: {state}")
ValueError: No messages found in input state to tool_edge: {'messages': []}
```
### Description
I try to delete messages and this error occurs.
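For reference, one untested workaround sketch would be to attribute the deletion to the `citation` node via `update_state`'s `as_node` argument, since that node's only outgoing edge goes to END and should not re-run `tools_condition` against the now-empty message list. This reuses `graph`, `CONFIG`, and `RemoveMessage` from the example code above and is an assumption, not a verified fix.
```python
# Untested sketch: attribute the update to "citation" so the agent's conditional
# edge (tools_condition) is not evaluated against an empty message list.
messages = graph.get_state(CONFIG).values["messages"]
graph.update_state(
    CONFIG,
    {"messages": [RemoveMessage(id=m.id) for m in messages]},
    as_node="citation",
)
```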
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Mon Oct 19 16:18:59 UTC 2020
> Python Version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.13
> langsmith: 0.2.4
> langchain_app: Installed. No version info available.
> langchain_chroma: 0.1.4
> langchain_huggingface: 0.1.2
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.4
> langgraph_sdk: 0.1.48
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.10
> async-timeout: 4.0.3
> chromadb: 0.5.23
> dataclasses-json: 0.6.7
> fastapi: 0.115.6
> httpx: 0.28.1
> httpx-sse: 0.4.0
> huggingface-hub: 0.26.5
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.58.1
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.3
> pydantic-settings: 2.7.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.3.1
> SQLAlchemy: 2.0.36
> tenacity: 8.5.0
> tiktoken: 0.8.0
> tokenizers: 0.20.3
> transformers: 4.46.3
> typing-extensions: 4.12.2
| 🤖:bug | low | Critical |
2,774,250,951 | tauri | [bug] `AppHandle::restart` may exit process without waiting for `RunEvent::Exit` event emit to plugins / app. | ### Describe the bug
`AppHandle::restart` may exit the process without waiting for the `RunEvent::Exit` event to be emitted to plugins / the app.
This is upstream / super issue for:
- #11392
- tauri-apps/plugins-workspace#1692
- tauri-apps/plugins-workspace#2256
### Reproduction
Add a log on `RunEvent::Exit`, make it do something a little slow (like a file system operation, or just a sleep), and add a button to call `restart()`.
Minimal app based on `npx tauri init` (with the lib crate removed).
main.rs
```rust
// Prevents additional console window on Windows in release, DO NOT REMOVE!!
#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]
fn main() {
tauri::Builder::default()
.invoke_handler(tauri::generate_handler![restart])
.build(tauri::generate_context!())
.expect("error while running tauri application")
.run(|_, event| match event {
tauri::RunEvent::ExitRequested { .. } => {
println!("Exit requested...");
std::thread::sleep(std::time::Duration::from_millis(1));
}
tauri::RunEvent::Exit => {
println!("Exiting...");
}
_ => {}
});
}
#[tauri::command]
async fn restart(app: tauri::AppHandle) -> () {
app.restart();
}
```
index.html
```html
<!doctype html>
<script>
var restart = async () => {
try {
await __TAURI_INTERNALS__.invoke("restart")
} catch (e) {
console.error(e);
}
}
</script>
<button onclick="restart()">restart</button>
```
tauri.conf.json
```json
{
"$schema": "https://schema.tauri.app/config/2.0.0-rc",
"productName": "Tauri App",
"version": "0.1.0",
"identifier": "com.tauri.dev",
"build": {
"frontendDist": "."
},
"app": {
"windows": [
{
"title": "Tauri",
"width": 800,
"height": 600,
"resizable": true,
"fullscreen": false
}
]
}
}
```
### Expected behavior
`tauri::RunEvent::Exit` should always be called
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.0.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.6.0
- pnpm: 8.15.4
- npm: 10.8.2
- deno: deno 1.45.5
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- tauri-cli 🦀: 2.0.0-rc.3
- @tauri-apps/api : 2.1.1 (outdated, latest: 2.2.0)
- @tauri-apps/cli : 2.1.0 (outdated, latest: 2.2.2)
[-] Plugins
- tauri-plugin-dialog 🦀: 2.2.0
- @tauri-apps/plugin-dialog : not installed!
- tauri-plugin-single-instance 🦀: 2.2.0
- @tauri-apps/plugin-single-instance : not installed!
- tauri-plugin-updater 🦀: 2.3.0
- @tauri-apps/plugin-updater : not installed!
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: out
- devUrl: http://localhost:3030/
- framework: React (Next.js)
- bundler: Webpack
```
### Stack trace
_No response_
### Additional context
I'm looking to fix this issue by waiting for the exit event in restart. | type: bug,status: needs triage | low | Critical |
2,774,304,311 | PowerToys | Add a shortcut for taskbar show/hide settings | ### Description of the new feature / enhancement
There's a setting in windows Settings > Personalization > Taskbar that says "Automatically hide the taskbar in desktop mode". Natively, there's no shortcut assigned to it, and there isn't a way to manually assign one. It'd be great to have it on a shortcut, so that the user can turn the setting on/off on command, without having to go to the settings menu every time.
### Scenario when this would be used?
I personally like having as much screen real estate as possible, but sometimes, I do need the taskbar to not be on auto hide.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,774,323,131 | go | cmd/compile/internal/arm64: unrecognized failures | ```
#!watchflakes
default <- pkg == "cmd/compile/internal/arm64" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8726379937861302561)):
FAIL cmd/compile/internal/arm64 [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,OS-NetBSD,NeedsInvestigation,compiler/runtime | low | Critical |
2,774,333,126 | neovim | `:lua>` command to dump Lua values to a buffer | ### Problem
When writing or configuring plugins, it's often useful to see the output of functions or variables in a buffer where you can navigate with `/` and `%` and other vim motions. Since lua is the core config language of neovim, this would be a great built-in feature.
You can achieve something like this with `:new | put =luaeval('vim.inspect(value_to_inspect)')` but it's awkward, not a proper scratch buffer, and it doesn't autocomplete (since the target lua command is inside a string).
Proposed solution: Similar to `:lua=` to print a lua expression, add `:lua>` to dump an expression into a scratch buffer.
Essentially, bind it to this function:
```lua
-- Dump one or more Lua values to a scratch buffer for inspection.
function View(...)
-- Use a unique filename to avoid opening an existing buffer.
vim.cmd.vnew("lua output ".. os.time())
vim.bo.buftype = "nofile"
vim.bo.bufhidden = "delete"
vim.bo.swapfile = false
vim.cmd.setfiletype("lua")
local start_line = 0
local bufnr = vim.fn.bufnr()
for i=1,select('#', ...) do
local val = select(i, ...)
local lines = vim.split(vim.inspect(val), "\n")
if i == 1 then
lines[1] = "output = ".. lines[1] -- make buffer closer to valid lua
else
lines[1] = ", ".. lines[1]
end
vim.api.nvim_buf_set_lines(bufnr, start_line, -1, false, lines)
start_line = -1
end
end
```
I picked `>` to be like redirecting it to a buffer. I'm not sure if there's another symbol that's both easy to implement in the parser and has an association with buffers. Maybe `lua^` or `lua!` could work too.
### Expected behavior
With this command, if you want to inspect the active lsp config, you can dump it into a buffer to examine:
```vim
:lua> vim.lsp.get_active_clients()
```
Or even browse the entire vim api:
```vim
:lua> vim
```
Both of these commands would create a new scratch buffer (that's destroyed on close and with no backing file) and populate it with the pretty-printed output of those lua values. Another complete example using multiple values:
```vim
:lua> vim.cmd, vim.bo
```
Splits to create a scratch buffer with this content:
```lua
output = {
autocmd = <function 1>,
file = <function 2>,
new = <function 3>,
setfiletype = <function 4>,
vnew = <function 5>,
<metatable> = {
__call = <function 6>,
__index = <function 7>
}
}
, {
<metatable> = {
__index = <function 1>,
__newindex = <function 2>
}
}
```
| enhancement,lua | low | Major |
2,774,340,629 | vscode | View/Editor actions padding inconsistent | Related to https://github.com/microsoft/vscode/issues/236223
The change in https://github.com/microsoft/vscode/commit/276e24792198ecb7e83533d575568b66da722064 reduced `padding` of actions in toolbars to `2px`. But the change does not seem to impact the editor part, resulting in different paddings for components that originally were designed to be the same:
`2px` padding in views:

`3px` padding in editors:

| bug,ux,papercut :drop_of_blood: | low | Minor |
2,774,350,308 | ant-design | RangePicker popup arrow position is incorrect when the picker is at the right side of its container | ### Reproduction link
[](https://codesandbox.io/p/sandbox/fan-wei-xuan-ze-qi-antd-5-23-0-forked-4rscry)
### Steps to reproduce
Open the picker and look at the arrow.
### What is expected?
The arrow should point at the input field.
### What is actually happening?
The position is incorrect.
| Environment | Info |
| --- | --- |
| antd | 5.23.0 |
| React | latest |
| System | mac |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,774,355,094 | flutter | App crashes every time an image is captured with the camera on Android devices | ```
Issue the next TakePictureRequest.
D/Camera2CapturePipeline(22351): createPipeline: captureMode = 1, flashMode = 2, flashType = 0, pipeline tasks = []
D/Camera2CameraImpl(22351): {Camera@ddb2381[id=1]} Issue capture request
D/CaptureSession(22351): Issuing capture request.
D/Camera2CaptureRequestBuilder(22351): createCaptureRequest
D/TakePictureManager(22351): Issue the next TakePictureRequest.
D/TakePictureManager(22351): No new request.
W/com.coollive(22351): 0xebadde09 skipped times: 0
F/libc (22351): Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x424 in tid 22371 (FinalizerDaemon), pid 22351 (com.coollive)
*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
Build fingerprint: 'samsung/m10ltedd/m10lte:10/QP1A.190711.020/M105FDDS5CWA1:user/release-keys'
Revision: '4'
ABI: 'arm'
Timestamp: 2025-01-08 11:15:45+0530
pid: 22351, tid: 22371, name: FinalizerDaemon >>> com.coollive <<<
uid: 11027
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x424
Cause: null pointer dereference
r0 00000424 r1 00000000 r2 00000000 r3 5fc1c646
r4 00000424 r5 00000000 r6 00000001 r7 13238398
r8 00000000 r9 e58b0000 r10 bdc94390 r11 bdc94314
ip e676f46c sp bdc94260 lr e673ae9f pc e6f0e5d8
[log] null
backtrace:
#00 pc 000a85d8 /apex/com.android.runtime/lib/bionic/libc.so (pthread_mutex_lock+4) (BuildId: 9629be880fb61625a90b250575ed6bc7)
#01 pc 0006de9b /system/lib/libgui.so (android::ConsumerBase::abandon()+10) (BuildId: fd11e17ccf75c671e3e47265cd527310)
#02 pc 0011cb43 /system/lib/libandroid_runtime.so (android::SurfaceTexture_release(_JNIEnv*, _jobject*)+50) (BuildId: d26b0e515deee332e415c25ba41e4566)
#03 pc 002bb0eb /system/framework/arm/boot-framework.oat (art_jni_trampoline+74) (BuildId: 3710d11cb3d127ba91592303d2e49054011a7f54)
#04 pc 000d7bc5 /apex/com.android.runtime/lib/libart.so (art_quick_invoke_stub_internal+68) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#05 pc 00434b77 /apex/com.android.runtime/lib/libart.so (art_quick_invoke_stub+250) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#06 pc 000dffa3 /apex/com.android.runtime/lib/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+166) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#07 pc 00210907 /apex/com.android.runtime/lib/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+274) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#08 pc 0020ca7f /apex/com.android.runtime/lib/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+802) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#09 pc 0042bc2f /apex/com.android.runtime/lib/libart.so (MterpInvokeDirect+358) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#10 pc 000d2914 /apex/com.android.runtime/lib/libart.so (mterp_op_invoke_direct+20) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#11 pc 00404fbc /system/framework/framework.jar (android.graphics.SurfaceTexture.release)
#12 pc 00429f85 /apex/com.android.runtime/lib/libart.so (MterpInvokeVirtual+1184) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#13 pc 000d2814 /apex/com.android.runtime/lib/libart.so (mterp_op_invoke_virtual+20) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#14 pc 003a2cba [anon:dalvik-classes22.dex extracted in memory from /data/app/com.coollive-K4DrxDgAZ-CjKpnST1v-CQ==/base.apk!classes22.dex] (io.flutter.embedding.engine.renderer.SurfaceTextureWrapper.release+14)
#15 pc 00429f85 /apex/com.android.runtime/lib/libart.so (MterpInvokeVirtual+1184) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#16 pc 000d2814 /apex/com.android.runtime/lib/libart.so (mterp_op_invoke_virtual+20) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#17 pc 003a1dfa [anon:dalvik-classes22.dex extracted in memory from /data/app/com.coollive-K4DrxDgAZ-CjKpnST1v-CQ==/base.apk!classes22.dex] (io.flutter.embedding.engine.renderer.FlutterRenderer$SurfaceTextureRegistryEntry.release+78)
#18 pc 0042b4a5 /apex/com.android.runtime/lib/libart.so (MterpInvokeInterface+1472) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#19 pc 000d2a14 /apex/com.android.runtime/lib/libart.so (mterp_op_invoke_interface+20) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#20 pc 003a2ab8 [anon:dalvik-classes22.dex extracted in memory from /data/app/com.coollive-K4DrxDgAZ-CjKpnST1v-CQ==/base.apk!classes22.dex] (io.flutter.embedding.engine.renderer.SurfaceTextureSurfaceProducer.release+4)
#21 pc 00429f85 /apex/com.android.runtime/lib/libart.so (MterpInvokeVirtual+1184) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#22 pc 000d2814 /apex/com.android.runtime/lib/libart.so (mterp_op_invoke_virtual+20) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#23 pc 003a2a58 [anon:dalvik-classes22.dex extracted in memory from /data/app/com.coollive-K4DrxDgAZ-CjKpnST1v-CQ==/base.apk!classes22.dex] (io.flutter.embedding.engine.renderer.SurfaceTextureSurfaceProducer.finalize+16)
#24 pc 00429f85 /apex/com.android.runtime/lib/libart.so (MterpInvokeVirtual+1184) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#25 pc 000d2814 /apex/com.android.runtime/lib/libart.so (mterp_op_invoke_virtual+20) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#26 pc 001b3ac2 /apex/com.android.runtime/javalib/core-libart.jar (java.lang.Daemons$FinalizerDaemon.doFinalize+22)
#27 pc 0042beab /apex/com.android.runtime/lib/libart.so (MterpInvokeDirect+994) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#28 pc 000d2914 /apex/com.android.runtime/lib/libart.so (mterp_op_invoke_direct+20) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#29 pc 001b3bb4 /apex/com.android.runtime/javalib/core-libart.jar (java.lang.Daemons$FinalizerDaemon.runInternal+164)
#30 pc 00429f85 /apex/com.android.runtime/lib/libart.so (MterpInvokeVirtual+1184) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#31 pc 000d2814 /apex/com.android.runtime/lib/libart.so (mterp_op_invoke_virtual+20) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#32 pc 001b38b6 /apex/com.android.runtime/javalib/core-libart.jar (java.lang.Daemons$Daemon.run+50)
#33 pc 0042b4a5 /apex/com.android.runtime/lib/libart.so (MterpInvokeInterface+1472) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#34 pc 000d2a14 /apex/com.android.runtime/lib/libart.so (mterp_op_invoke_interface+20) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#35 pc 000ea9e4 /apex/com.android.runtime/javalib/core-oj.jar (java.lang.Thread.run+8)
#36 pc 001ecd6b /apex/com.android.runtime/lib/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEbb.llvm.6272689385175480488+194) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#37 pc 001f13dd /apex/com.android.runtime/lib/libart.so (art::interpreter::EnterInterpreterFromEntryPoint(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*)+120) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#38 pc 0041e6cd /apex/com.android.runtime/lib/libart.so (artQuickToInterpreterBridge+832) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#39 pc 000dc5a1 /apex/com.android.runtime/lib/libart.so (art_quick_to_interpreter_bridge+32) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#40 pc 000d7bc5 /apex/com.android.runtime/lib/libart.so (art_quick_invoke_stub_internal+68) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#41 pc 00434b77 /apex/com.android.runtime/lib/libart.so (art_quick_invoke_stub+250) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#42 pc 000dffa3 /apex/com.android.runtime/lib/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+166) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#43 pc 0037535b /apex/com.android.runtime/lib/libart.so (art::(anonymous namespace)::InvokeWithArgArray(art::ScopedObjectAccessAlreadyRunnable const&, art::ArtMethod*, art::(anonymous namespace)::ArgArray*, art::JValue*, char const*)+54) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#44 pc 00376065 /apex/com.android.runtime/lib/libart.so (art::InvokeVirtualOrInterfaceWithJValues(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jmethodID*, jvalue const*)+300) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#45 pc 003a79c3 /apex/com.android.runtime/lib/libart.so (art::Thread::CreateCallback(void*)+974) (BuildId: bc4a81ac1e1d54390f5daa1b78199d89)
#46 pc 000a7d17 /apex/com.android.runtime/lib/bionic/libc.so (__pthread_start(void*)+20) (BuildId: 9629be880fb61625a90b250575ed6bc7)
#47 pc 00061127 /apex/com.android.runtime/lib/bionic/libc.so (__start_thread+30) (BuildId: 9629be880fb61625a90b250575ed6bc7)
``` | waiting for customer response,in triage | low | Critical |
2,774,361,882 | pytorch | Unable to compile models using tensorrt backend: CUDNN_STATUS_BAD_PARAM_STREAM_MISMATCH | ### 🐛 Describe the bug
When I use torch.compile with the TensorRT backend, I get the following error.
Apparently tracing for the conv2d operation is getting too many values (my guess)?
```bash
convolution = torch.ops.aten.convolution.default(slice_1, arg3_1, None, [2, 2], [3, 3], [1, 1], False, [0, 0], 1); slice_1 = arg3_1 = None
```
Convolution operation receives only 7 arguments, but while tracing this has received 9.
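For context, the invocation is essentially the standard pattern below. This is a simplified sketch with a placeholder model rather than my actual library code, it does not reproduce the crash on its own, and it assumes the `"tensorrt"` backend name registered when `torch_tensorrt` is imported.
```python
# Simplified invocation sketch (placeholder model; does not reproduce the crash by itself).
import torch
import torch_tensorrt  # noqa: F401  # importing registers the "tensorrt" dynamo backend

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=7, stride=2, padding=3),
).cuda().eval()

compiled = torch.compile(model, backend="tensorrt")
out = compiled(torch.randn(1, 3, 224, 224, device="cuda"))  # the failure shows up around calls like this
```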
Following is the trace log.
The error only pops up when I'm testing my library with pytest. I am not sure how to write reproducible code here.
```
--------------------------------------------------------------------------------------------------------------------------- Captured log call ---------------------------------------------------------------------------------------------------------------------------
WARNING torch_tensorrt.dynamo._compiler:_compiler.py:354 Node linear_default of op type call_function does not have metadata. This could sometimes lead to undefined behavior.
WARNING torch_tensorrt.dynamo._compiler:_compiler.py:363 Some nodes do not have metadata (shape and dtype information). This could lead to problems sometimes if the graph has PyTorch and TensorRT segments.
WARNING torch_tensorrt.dynamo.backend.backends:backends.py:123 TRT conversion failed on the subgraph. See trace above. Returning GraphModule forward instead.
Traceback (most recent call last):
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/backend/backends.py", line 114, in _pretraced_backend
trt_compiled = compile_module(
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/_compiler.py", line 464, in compile_module
trt_module = convert_module(
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 142, in convert_module
interpreter_result = interpret_module_to_result(
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 105, in interpret_module_to_result
output_dtypes = infer_module_output_dtypes(
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 49, in infer_module_output_dtypes
module_outputs = module(*torch_inputs, **torch_kwarg_inputs)
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/fx/graph_module.py", line 784, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/fx/graph_module.py", line 361, in __call__
raise e
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/fx/graph_module.py", line 348, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "<eval_with_key>.8", line 9, in forward
convolution = torch.ops.aten.convolution.default(slice_1, arg3_1, None, [2, 2], [3, 3], [1, 1], False, [0, 0], 1); slice_1 = arg3_1 = None
File "/home/mzcar/miniconda3/lib/python3.10/site-packages/torch/_ops.py", line 717, in __call__
return self._op(*args, **kwargs)
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM_STREAM_MISMATCH
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.120
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 5881.0000
CPU min MHz: 400.0000
BogoMIPS: 8983.44
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu11==11.11.3.6
[pip3] nvidia-cuda-cupti-cu11==11.8.87
[pip3] nvidia-cuda-nvrtc-cu11==11.8.89
[pip3] nvidia-cuda-runtime-cu11==11.8.89
[pip3] nvidia-cudnn-cu11==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-curand-cu11==10.3.0.86
[pip3] nvidia-cusolver-cu11==11.4.1.48
[pip3] nvidia-cusparse-cu11==11.7.5.86
[pip3] nvidia-nccl-cu11==2.21.5
[pip3] nvidia-nvtx-cu11==11.8.86
[pip3] onnx==1.17.0
[pip3] onnx_tensorrt==10.5.0
[pip3] onnxruntime-gpu==1.19.2
[pip3] torch==2.5.0+cu118
[pip3] torch_tensorrt==2.5.0+cu118
[pip3] torchvision==0.20.0+cu118
[pip3] triton==3.1.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
[conda] nvidia-cudnn-cu11 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
[conda] nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
[conda] nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
[conda] nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
[conda] nvidia-nccl-cu11 2.21.5 pypi_0 pypi
[conda] nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
[conda] torch 2.5.0+cu118 pypi_0 pypi
[conda] torch-tensorrt 2.5.0+cu118 pypi_0 pypi
[conda] torchvision 0.20.0+cu118 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | triaged,oncall: pt2,module: inductor | low | Critical |
2,774,373,473 | langchain | DOC: Missing Information About the model Field in HuggingFaceEndpoint | ### URL
https://python.langchain.com/api_reference/huggingface/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.model
### Checklist
- [X] I added a very descriptive title to this issue.
- [x] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In the `langchain_huggingface` package, the `HuggingFaceEndpoint` class has a required field named `model`, which is used to specify the model name, such as `microsoft/Phi-3-mini-4k-instruct`. However, the corresponding documentation does not include any details about this property.


This omission creates confusion, particularly because the `repo_id` field also accepts repository ID strings (e.g., `microsoft/Phi-3-mini-4k-instruct`). The key differences between these fields are as follows:
- The `model` field is required.
- The `repo_id` field is marked as optional.
- The `endpoint_url` field requires the model's URL. (Not a part of current issue)
The lack of clear documentation for the `model` field, combined with its similarity to `repo_id`, makes it difficult for developers to understand which field to use.
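To make the ambiguity concrete, a minimal sketch of the two call styles is shown below. It is illustrative only: the import path is taken from the linked API reference, and actually running either line requires a valid Hugging Face token.
```python
from langchain_huggingface import HuggingFaceEndpoint

# Uses the required (but undocumented) `model` field.
llm_via_model = HuggingFaceEndpoint(model="microsoft/Phi-3-mini-4k-instruct")

# Uses the optional `repo_id` field, which accepts the same repository ID string.
llm_via_repo_id = HuggingFaceEndpoint(repo_id="microsoft/Phi-3-mini-4k-instruct")
```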
### Idea or request for content:
Document the differences between the `repo_id` and `model` fields, and describe the purpose of each property. | 🤖:docs | low | Minor |
2,774,388,491 | deno | Proposal: deno-lint-ignore-line to ignore the linter warning for the same line | # Reason
Adding a new `// deno-lint-ignore` comment line before an offending line of code is useful to draw attention to it. But when the ignored issue is minor, ideally attention should not be drawn: the comment can simply be added after the statement, without creating a new line for it.
## Before
```ts
import {
getFirestore,
// deno-lint-ignore no-unused-vars -- Firestore is used as a return type but https://github.com/denoland/deno/issues/27583#issue-2774370928
Firestore,
} from 'firebase-admin/firestore';
```
## After
```ts
import { getFirestore, Firestore } from 'firebase-admin/firestore'; // deno-lint-ignore-line no-unused-vars -- Firestore is used as a return type but https://github.com/denoland/deno/issues/27583#issue-2774370928
```
# Prior art
[ESLint offers](https://eslint.org/docs/latest/use/configure/rules#using-configuration-comments-1) `// eslint-disable-line` in addition to `// eslint-disable-next-line`. | suggestion,lint | low | Minor |
2,774,404,478 | pytorch | torch.compile post_accumulate_grad_hook ordering is wrong for tiebreakers | ### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
import functools
model = nn.Sequential(
nn.Linear(10, 10, bias=False), # i=0
nn.Linear(10, 10, bias=False), # i=1
nn.Linear(10, 10, bias=False), # i=2
)
hook_ordering = []
def hook(param, i):
global hook_ordering
hook_ordering.append(i)
for i, param in enumerate(model.parameters()):
param.register_post_accumulate_grad_hook(functools.partial(hook, i=i))
x = torch.randn(10, 10)
out = model(x)
out.sum().backward()
print(f"eager hook ordering: {hook_ordering}")
# eager hook ordering: [2, 1, 0]
model.zero_grad()
hook_ordering = []
out = torch.compile(model, backend="eager")(x)
out.sum().backward()
print(f"compiled backend=eager hook ordering: {hook_ordering}")
# compiled backend=eager hook ordering: [2, 1, 0]
model.zero_grad()
hook_ordering = []
out = torch.compile(model, backend="aot_eager")(x)
out.sum().backward()
print(f"compiled backend=aot_eager hook ordering: {hook_ordering}")
# compiled backend=aot_eager hook ordering: [0, 1, 2]
```
We found this while working on Functional Autograd + Compiled Autograd. This is a consequence of implementing CompiledFunction as an autograd.Function. `CompiledFunction.backward` gradient return order must match the input order to `CompiledFunction.forward` i.e. [0, 1, 2].
While autograd does schedule AccumulateGrad nodes (and their post hook) ASAP, it can't peek into the autograd node, so there is a tiebreaker scenario when the autograd node returns multiple grads. The current autograd engine implementation just follows the output order.
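As a small illustration of that constraint with a generic `autograd.Function` (not the actual CompiledFunction): the gradients returned from `backward` are positional, matching the forward inputs, so when a single node makes several AccumulateGrad nodes ready at once, the output position is all the engine has for ordering them.
```python
import torch

class ThreeInputs(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a, b, c):
        return a + b + c

    @staticmethod
    def backward(ctx, grad_out):
        # Returned positionally in forward-input order (a, b, c) -- the [0, 1, 2]
        # order the engine falls back to when breaking ties.
        return grad_out, grad_out, grad_out

a, b, c = (torch.randn(4, requires_grad=True) for _ in range(3))
ThreeInputs.apply(a, b, c).sum().backward()
```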
One possible solution is to have the partitioner tell the autograd engine the desired ordering of outputs.
### Versions
main
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,774,465,671 | flutter | Flutter app crashes due to CustomPainter on iOS but works fine on Android using Impeller | ### Steps to reproduce
flutter run
### Expected results
The app should run fine on both platforms.
### Actual results
When I open a screen that uses a CustomPainter, the app crashes on iOS but works fine on Android. There are also no logs if I run the application from Android Studio, but an error shows in Xcode.
### Code sample
<details open><summary>Code sample</summary>
```dart
class CurvedChartPainter extends CustomPainter {
final List<Map<String, Map<String, dynamic>>> xValues;
final List<Map<String, double>> yValues;
final Color? color;
final String currency;
final double strokeWidth;
final List<Color> gradientColors;
final List<double> gradientStops;
final List<Color> pathGradientColors;
final List<double> pathGradientStops;
final List<Offset> chartPoints = [];
final TextStyle labelTextStyle;
final Offset? selectedRecord;
final Function(List<Offset> points) getPoints;
final Function(Offset) onTap;
// Constructor
CurvedChartPainter({
required this.xValues,
required this.yValues,
required this.strokeWidth,
required this.pathGradientColors,
required this.pathGradientStops,
required this.currency,
this.selectedRecord,
this.color,
required this.gradientColors,
required this.gradientStops,
this.labelTextStyle = const TextStyle(color: descriptionTextColor, fontSize: 12),
required this.getPoints,
required this.onTap,
});
// The paint method is called when the custom painter needs to paint
@override
void paint(Canvas canvas, Size size) {
// Set up the paint for the chart line
var paint = Paint();
paint.color = color ?? const Color(0xFFF63E02);
paint.style = PaintingStyle.stroke;
paint.strokeWidth = strokeWidth;
// Set up the paint for the chart fill
var fillPaint = Paint();
final dottedLinePaint = Paint()
..color = grey
..strokeWidth = 1.5;
fillPaint.style = PaintingStyle.fill;
final selectedDottedLinePaint = Paint()
..color = primaryColor
..strokeWidth = 2.5;
fillPaint.style = PaintingStyle.fill;
//
// // point painter
Paint paintCircle = Paint()..color = white;
Paint paintBorder = Paint()
..color = primaryColor
..strokeWidth = 1.5
..style = PaintingStyle.stroke;
// Create paths for the chart line and fill
var path = Path();
var fillPath = Path();
var arrowPath = Path();
var toolTipPainter = Paint();
toolTipPainter.color = white;
toolTipPainter.style = PaintingStyle.fill;
toolTipPainter.strokeWidth = strokeWidth;
var toolTipBorderPainter = Paint();
toolTipBorderPainter.color = primaryColor;
// toolTipBorderPainter.blendMode = ui.BlendMode.srcIn;
toolTipBorderPainter.style = PaintingStyle.stroke;
toolTipPainter.strokeWidth = strokeWidth;
chartPoints.clear();
// Check if there are enough values to draw the chart
yValues.sort((a, b) => a.values.first.compareTo(b.values.first));
// xValues.first.values.toList().sort((a, b) => a.values.first.compareTo(b.values.first));
// draw y axis labels
const double yItemHeight = 12;
const double yItemHeightSpace = 5;
double left = 0;
double top = 0;
double maxyLabelTextWidth = 0;
for (int i = 0; i < yValues.length ; i++) {
double y = (size.height -8) * i / (yValues.length - 1) - 10;
double labelValue = yValues.last.values.elementAt(0) *
(yValues.length - i - 1) /
(yValues.length - 1);
var textPainter = TextPainter(
text: TextSpan(
text: labelValue.toStringAsFixed(0), style: labelTextStyle),
textDirection: TextDirection.ltr,
);
textPainter.layout();
textPainter.paint(
canvas, Offset(5, y - ((textPainter.height / 2) -8 )));
if(maxyLabelTextWidth < textPainter.width){
maxyLabelTextWidth = textPainter.width;
}
}
maxyLabelTextWidth += 5;
// final int itemCount = ((size.height) / (yItemHeight + yItemHeightSpace)).floor();
// final List<double> values = createIntervals(yValues.first.values.first, yValues.last.values.first, itemCount).reversed.toList();
// for(int i = 0; i <values.length; i++ ) {
//
// var yTextPainter = TextPainter(
// text: TextSpan(text: values[i].toInt().toString(), style: kOutFitLight.copyWith(color: descriptionTextColor,fontSize: 10)),
// textDirection: TextDirection.ltr,
//
// );
// yTextPainter.layout();
// yTextPainter.paint(canvas, ui.Offset(left, top));
// // canvas.drawRect(ui.Rect.fromLTWH(left, top, yTextPainter.width, yItemHeight), ui.Paint()
// // ..color = Colors.red);
// top += yItemHeight + yItemHeightSpace;
// logDebug(yTextPainter.width.toString());
// if(yTextPainter.width > maxyLabelTextWidth){
// maxyLabelTextWidth = yTextPainter.width;
// }
//
// }
logDebug(maxyLabelTextWidth.toString());
if (xValues.length > 1 && yValues.isNotEmpty) {
// Calculate some initial values
final maxValue = yValues.last.values.reduce((value, element) => max(value, element));
final firstValueHeight = size.height * (xValues.first.values.first['international_rate'] / maxValue);
final firstY = size.height - strokeWidth - ((size.height - strokeWidth) * (xValues[0].values.elementAt(0)['international_rate'] / maxValue));
// Initialize the paths with the first point
path.moveTo(maxyLabelTextWidth + 5, size.height - firstValueHeight);
fillPath.moveTo(maxyLabelTextWidth + 5, size.height);
fillPath.lineTo(maxyLabelTextWidth + 5, size.height - firstValueHeight);
chartPoints.add(Offset(maxyLabelTextWidth + 5, firstY.isNaN ? 0 : firstY));
// Calculate the distance between each x value
final itemXDistance = (size.width -left - maxyLabelTextWidth - 10 ) / (xValues.length - 1);
// Loop through the x values and draw the chart line and fill
for (var i = 1; i < xValues.length; i++) {
final x = itemXDistance * i + (left + maxyLabelTextWidth + 10);
final valueHeight = size.height - strokeWidth - ((size.height - strokeWidth) * (xValues[i].values.elementAt(0)['international_rate'] / maxValue));
final previousValueHeight =
size.height - strokeWidth - ((size.height - strokeWidth) * (xValues[i - 1].values.elementAt(0)['international_rate'] / maxValue));
// Draw a quadratic bezier curve between each point
path.quadraticBezierTo(
x - (itemXDistance / 2) - (itemXDistance / 8),
previousValueHeight,
x - (itemXDistance / 2),
valueHeight + ((previousValueHeight - valueHeight) / 2),
);
path.quadraticBezierTo(
x - (itemXDistance / 2) + (itemXDistance / 8),
valueHeight,
x,
valueHeight,
);
// Add the current point to the chart points
chartPoints.add(Offset(x, valueHeight.isNaN ? 0 : valueHeight));
// Draw the fill path using the same quadratic bezier curves
fillPath.quadraticBezierTo(
x - (itemXDistance / 2) - (itemXDistance / 8),
previousValueHeight,
x - (itemXDistance / 2),
valueHeight + ((previousValueHeight - valueHeight) / 2),
);
fillPath.quadraticBezierTo(
x - (itemXDistance / 2) + (itemXDistance / 8),
valueHeight,
x,
valueHeight,
);
}
// if (myWeight.isFocusing) {
//draw dotted lines
for (int i = 0; i < chartPoints.length; i++) {
double startY = -20;
const dashHeight = 7, dashSpace = 5;
if (i == chartPoints.length - 1) chartPoints[i] = chartPoints[i].translate(-10, 0);
double opacityIncrement = 0.0;
while (startY < size.height - 15) {
if (startY >= 0 - 20 && startY <= size.height - size.height * 0.95) {
// Apply gradient effect at the top and bottom
// selectedDottedLinePaint.shader = LinearGradient(
// colors: [
// primaryColor.withOpacity(0), // fade into transparency
// primaryColor.withOpacity(min(1, 0.2 + opacityIncrement) ),
// ],
// stops: const [0.0,1],
// begin: Alignment.topRight,
// end: Alignment.bottomLeft,
// ).createShader(Rect.fromLTRB(chartPoints[i].dx, startY, chartPoints[i].dx, startY + dashHeight));
opacityIncrement +=0.2;
} else {
selectedDottedLinePaint.shader = null;
selectedDottedLinePaint.color = primaryColor;
}
canvas.drawLine(Offset(chartPoints[i].dx, startY), Offset(chartPoints[i].dx, startY + dashHeight), selectedRecord != null && chartPoints[i].dx.toInt() > selectedRecord!.dx - 8 && chartPoints[i].dx.toInt() < selectedRecord!.dx + 8? selectedDottedLinePaint : dottedLinePaint) ;
startY += dashHeight + dashSpace;
}
}
// canvas.drawLine(ui.Offset(50, size.height + 50), const ui.Offset(50, 52), ui.Paint()..color = Colors.red);
// Close the fill path
fillPath.lineTo(size.width, size.height);
fillPath.close();
}
for (int i = 0; i < xValues.length; i++) {
final width = i== 0 ? size.width -20 : size.width;
double x = width * i / (xValues.length - 1);
var textPainter = TextPainter(
text:
TextSpan(text: xValues[i].keys.elementAt(0), style: labelTextStyle),
textDirection: TextDirection.ltr,
);
textPainter.layout();
textPainter.paint(
canvas,ui.Offset(chartPoints[i].dx - (maxyLabelTextWidth - (i == xValues.length -1 ? 0:2)),size.height.toDouble()));
}
// Create a gradient for the fill
LinearGradient gradient = LinearGradient(
colors: gradientColors,
stops: gradientStops,
begin: Alignment.topCenter,
end: Alignment.bottomCenter,
);
LinearGradient pathGradient = LinearGradient(
colors: pathGradientColors,
stops: pathGradientStops,
begin: Alignment.centerLeft,
end: Alignment.centerRight,
);
Rect rect = Rect.fromLTWH(0, 0, size.width, size.height);
fillPaint.shader = gradient.createShader(rect);
paint.shader = pathGradient.createShader(rect);
// Draw the fill path with the gradient
canvas.drawPath(fillPath, fillPaint);
// Draw the chart line
canvas.drawPath(path, paint);
path.close();
fillPath.close();
// chartPoints[chartPoints.length -1] = chartPoints[chartPoints.length -1].translate(-10, 0);
// pass current points
getPoints(chartPoints);
// // Draw X axis labels
// for (int i = 0; i < xValues.length; i++) {
// double x = (size.width + left + maxyLabelTextWidth + 5 ) * i / (xValues.length - 1);
// var textPainter = TextPainter(
// text: TextSpan(text: xValues[i].keys.elementAt(0), style: kOutFitLight.copyWith(color: descriptionTextColor,fontSize: 10)),
// textDirection: TextDirection.ltr,
//
// );
// textPainter.layout();
// textPainter.paint(canvas, Offset( i == xValues.length -1 ? size.width - textPainter.width -8:x - textPainter.width / 2, size.height - 5));
// } // // Draw X axis labels
// for (int i = 0; i < xValues.length; i++) {
// double x = (size.width + left + maxyLabelTextWidth + 5 ) * i / (xValues.length - 1);
// var textPainter = TextPainter(
// text: TextSpan(text: xValues[i].keys.elementAt(0), style: kOutFitLight.copyWith(color: descriptionTextColor,fontSize: 10)),
// textDirection: TextDirection.ltr,
//
// );
// textPainter.layout();
// textPainter.paint(canvas, Offset( i == xValues.length -1 ? size.width - textPainter.width -8:x - textPainter.width / 2, size.height - 5));
// }
// Draw X axis labels
// for (int i = 0; i < xValues.length; i++) {
// double x = size.width * i / (xValues.length - 1);
// var textPainter = TextPainter(
// text:
// TextSpan(text: xValues[i].keys.elementAt(0), style: labelTextStyle),
// textDirection: TextDirection.ltr,
// );
// textPainter.layout();
// textPainter.paint(
// canvas, Offset(x - textPainter.width / 2, size.height + 2));
// }
//
// for (int i = 0; i < yValues.length; i++) {
// double y = size.height * i / (yValues.length - 1);
// double labelValue = yValues.last.values.elementAt(0) *
// (yValues.length - i - 1) /
// (yValues.length - 1);
// var textPainter = TextPainter(
// text: TextSpan(
// text: labelValue.toStringAsFixed(0), style: labelTextStyle),
// textDirection: TextDirection.ltr,
// );
// textPainter.layout();
// textPainter.paint(
// canvas, Offset(-textPainter.width - 2, y - textPainter.height / 2));
// }
if (selectedRecord != null) {
logDebug(chartPoints.toString());
for (int i = 0; i < chartPoints.length; i++) {
logDebug(chartPoints.toString());
if (chartPoints[i].dx.toInt() > selectedRecord!.dx - 8 && chartPoints[i].dx.toInt() < selectedRecord!.dx + 8) {
logDebug('found record $selectedRecord');
var path = Path();
path.addOval(Rect.fromCircle(center: chartPoints[i], radius: 8));
canvas.drawShadow(path, kDefaultIconDarkColor, 5, true);
canvas.drawPath(path, paintCircle);
canvas.drawCircle(chartPoints[i], 8, paintBorder);
/// slab pain
///
///
TextStyle slabStyle = kOutFitSemiBold.copyWith(
color: descriptionTextColor,
fontSize: 12,
height: 1,
);
TextSpan slabSpan = TextSpan(
text: 'Up to ${xValues[i].keys.first}',
style: slabStyle.copyWith(
height: 1,
),
);
final slabPainter = TextPainter(
text: slabSpan,
textDirection: TextDirection.ltr,
);
slabPainter.layout(
minWidth: 0,
maxWidth: size.width,
);
/// paint spending
TextStyle spendingAmountStyle = kOutFitSemiBold.copyWith(
color: primaryColor,
fontSize: 14,
height: 1,
);
TextSpan spendingAmountSpan = TextSpan(
text: '${xValues[i].values.first['international_rate']} per KG',
style: spendingAmountStyle.copyWith(
height: 1,
),
);
final spendingPainter = TextPainter(
text: spendingAmountSpan,
textDirection: TextDirection.ltr,
);
spendingPainter.layout(
minWidth: 0,
maxWidth: size.width,
);
//
// comparisonPainter.layout(
// minWidth: 0,
// maxWidth: size.width,
// );
//
//
// final comparisonXCenter = xValues[i].values.first['Diff_in_percentage'] == 'NaN' || xValues[i].values.first['Diff_in_percentage'] == 'Infinity' ?chartPoints[i].dx + 12 + spendingPainter.width > size.width ? chartPoints[i].dx - spendingPainter.width - 12 : chartPoints[i].dx + 12:chartPoints[i].dx + 12 + comparisonPainter.width > size.width || chartPoints[i].dx + 12 + spendingPainter.width > size.width
// ? chartPoints[i].dx - 12 - comparisonPainter.width
// : chartPoints[i].dx + 12;
//
// final comparisonYCenter =
// chartPoints[i].dx + 8 + spendingPainter.width > size.width
// ? min(chartPoints[i].dy, chartPoints[i - 1].dy) - 30
// : min(chartPoints[i].dy, chartPoints[i + 1].dy) - 30;
//
final spendingYCenter = chartPoints[i].dy - 30;
final spendingXCenter = chartPoints[i].dx;
final backgroundRect = ui.Rect.fromLTWH(
i ==0 ? chartPoints[i].dx + 8 :chartPoints[i].dx - (max(slabPainter.width, spendingPainter.width) / 2),
chartPoints[i].dy - slabPainter.height - spendingPainter.height - 25,
max(spendingPainter.width, slabPainter.width) + 10,
spendingPainter.height + slabPainter.height + 10
// center: ui.Offset(spendingXCenter, spendingYCenter),
// width: max(spendingPainter.width, spendingPainter.width) + 10,
// height: spendingPainter.height + spendingPainter.height
);
final backgroundRectForSpaceIssues = ui.Rect.fromLTWH(
size.width - backgroundRect.width - 8,
chartPoints[i].dy - slabPainter.height - spendingPainter.height - 25,
max(spendingPainter.width, slabPainter.width) + 10,
spendingPainter.height + slabPainter.height + 10
// center: ui.Offset(spendingXCenter, spendingYCenter),
// width: max(spendingPainter.width, spendingPainter.width) + 10,
// height: spendingPainter.height + spendingPainter.height
);
// canvas.drawRect(backgroundRect, toolTipPainter);
final borderRect = ui.Rect.fromLTWH(
chartPoints[i].dx - (max(slabPainter.width, spendingPainter.width) / 2),
chartPoints[i].dy - slabPainter.height - spendingPainter.height - 22,
max(spendingPainter.width, slabPainter.width) + 10 ,
spendingPainter.height + slabPainter.height + 10
// center: ui.Offset(spendingXCenter, spendingYCenter),
// width: max(spendingPainter.width, spendingPainter.width) + 10,
// height: spendingPainter.height + spendingPainter.height
);
logDebug('${borderRect.left} ${borderRect.right} ${size.width}');
canvas.drawRRect(ui.RRect.fromRectAndRadius(backgroundRect.right > size.width + 8 ? backgroundRectForSpaceIssues: backgroundRect,const ui.Radius.circular(0)), toolTipPainter);
spendingPainter.paint(canvas, Offset(backgroundRect.right > size.width + 8 ? backgroundRectForSpaceIssues.left + 5 : backgroundRect.left + 5, spendingYCenter));
slabPainter.paint(canvas, Offset(backgroundRect.right > size.width + 8 ? backgroundRectForSpaceIssues.left + 5 :backgroundRect.left + 5, spendingYCenter - 20));
//
// if(xValues[i].values.first['Diff_in_percentage'] != 'NaN' && xValues[i].values.first['Diff_in_percentage'] != 'Infinity' )comparisonPainter.paint(canvas, Offset(comparisonXCenter, comparisonYCenter));
/// paint months
// TextStyle textStyle = kSFMedium.copyWith(
// color: blue4.withOpacity(0.8),
// );
// TextSpan textSpan = TextSpan(
// text: xValues[i].keys.first,
// style: textStyle.copyWith(
// height: 1,
// ),
// );
// final textPainter = TextPainter(
// text: textSpan,
// textDirection: TextDirection.ltr,
// );
// textPainter.layout(
// minWidth: 0,
// maxWidth: size.width,
// );
// final xCenter = chartPoints[i].dx + 8 + textPainter.width > size.width ? chartPoints[i].dx - 8 - textPainter.width : chartPoints[i].dx + 8;
// final yCenter = size.height +2;
// chartPoints[i].dy > size.height * 0.99
// ? size.height - 30
// : chartPoints[i].dy > size.height * 0.9
// ? size.height - 4
// : size.height - 20;
// final offset = Offset(xCenter, yCenter);
// textPainter.paint(canvas, Offset(xCenter, yCenter));
}
}
}
}
// Determine whether the chart should repaint
@override
bool shouldRepaint(CustomPainter oldDelegate) => oldDelegate != this;
}
```
</details>
Here is my chart painter code; it works fine when running with `--no-enable-impeller` on both Android and iOS.
### Screenshots or Video

### Logs
io.flutter.1.raster (9): EXC_BAD_ACCESS (code=1, address=0xfffffffffffffff0)
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 14.4.1 23E224 darwin-arm64, locale en-IN)
• Flutter version 3.27.1 on channel stable at /Users/justcode-m1/Desktop/development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (3 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/justcode-m1/Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15E204a
• CocoaPods version 1.13.0
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] IntelliJ IDEA Community Edition (version 2024.2.3)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
```
</details>
| waiting for customer response,in triage | low | Critical |
2,774,482,269 | flutter | [pigeon] Support EventChannelApi in multiple interface files | ### Use case
In my project, I organize interface definitions across multiple files. I want to define EventChannelApi in multiple files using the `@EventChannelApi()` annotation.
However, this results in the generation of duplicate classes (e.g., `PigeonEventChannelWrapper` and `PigeonEventSink` in Swift), which causes build errors.
<details><summary>Code Example</summary>
## Input files
a.dart
```dart
import 'package:pigeon/pigeon.dart';
@ConfigurePigeon(PigeonOptions(
dartOut: 'lib/src/a.g.dart',
dartOptions: DartOptions(),
kotlinOut: 'android/app/src/main/kotlin/dev/flutter/pigeon_sample/A.g.kt',
kotlinOptions: KotlinOptions(),
swiftOut: 'ios/Runner/A.g.swift',
swiftOptions: SwiftOptions(),
))
@EventChannelApi()
abstract class EventA {
int streamA();
}
```
b.dart
```dart
import 'package:pigeon/pigeon.dart';
@ConfigurePigeon(PigeonOptions(
dartOut: 'lib/src/b.g.dart',
dartOptions: DartOptions(),
kotlinOut: 'android/app/src/main/kotlin/dev/flutter/pigeon_sample/B.g.kt',
kotlinOptions: KotlinOptions(includeErrorClass: false),
swiftOut: 'ios/Runner/B.g.swift',
swiftOptions: SwiftOptions(includeErrorClass: false),
))
@EventChannelApi()
abstract class EventB {
int streamB();
}
```
## Output files
A.g.swift
```swift
// Autogenerated from Pigeon (v22.7.2), do not edit directly.
// See also: https://pub.dev/packages/pigeon
import Foundation
#if os(iOS)
import Flutter
#elseif os(macOS)
import FlutterMacOS
#else
#error("Unsupported platform.")
#endif
/// Error class for passing custom error details to Dart side.
final class PigeonError: Error {
let code: String
let message: String?
let details: Any?
init(code: String, message: String?, details: Any?) {
self.code = code
self.message = message
self.details = details
}
var localizedDescription: String {
return
"PigeonError(code: \(code), message: \(message ?? "<nil>"), details: \(details ?? "<nil>")"
}
}
private func isNullish(_ value: Any?) -> Bool {
return value is NSNull || value == nil
}
private func nilOrValue<T>(_ value: Any?) -> T? {
if value is NSNull { return nil }
return value as! T?
}
private class APigeonCodecReader: FlutterStandardReader {
}
private class APigeonCodecWriter: FlutterStandardWriter {
}
private class APigeonCodecReaderWriter: FlutterStandardReaderWriter {
override func reader(with data: Data) -> FlutterStandardReader {
return APigeonCodecReader(data: data)
}
override func writer(with data: NSMutableData) -> FlutterStandardWriter {
return APigeonCodecWriter(data: data)
}
}
class APigeonCodec: FlutterStandardMessageCodec, @unchecked Sendable {
static let shared = APigeonCodec(readerWriter: APigeonCodecReaderWriter())
}
var aPigeonMethodCodec = FlutterStandardMethodCodec(readerWriter: APigeonCodecReaderWriter());
private class PigeonStreamHandler<ReturnType>: NSObject, FlutterStreamHandler {
private let wrapper: PigeonEventChannelWrapper<ReturnType>
private var pigeonSink: PigeonEventSink<ReturnType>? = nil
init(wrapper: PigeonEventChannelWrapper<ReturnType>) {
self.wrapper = wrapper
}
func onListen(withArguments arguments: Any?, eventSink events: @escaping FlutterEventSink)
-> FlutterError?
{
pigeonSink = PigeonEventSink<ReturnType>(events)
wrapper.onListen(withArguments: arguments, sink: pigeonSink!)
return nil
}
func onCancel(withArguments arguments: Any?) -> FlutterError? {
pigeonSink = nil
wrapper.onCancel(withArguments: arguments)
return nil
}
}
class PigeonEventChannelWrapper<ReturnType> {
func onListen(withArguments arguments: Any?, sink: PigeonEventSink<ReturnType>) {}
func onCancel(withArguments arguments: Any?) {}
}
class PigeonEventSink<ReturnType> {
private let sink: FlutterEventSink
init(_ sink: @escaping FlutterEventSink) {
self.sink = sink
}
func success(_ value: ReturnType) {
sink(value)
}
func error(code: String, message: String?, details: Any?) {
sink(FlutterError(code: code, message: message, details: details))
}
func endOfStream() {
sink(FlutterEndOfEventStream)
}
}
class StreamAStreamHandler: PigeonEventChannelWrapper<Int64> {
static func register(with messenger: FlutterBinaryMessenger,
instanceName: String = "",
streamHandler: StreamAStreamHandler) {
var channelName = "dev.flutter.pigeon.pigeon_sample.EventA.streamA"
if !instanceName.isEmpty {
channelName += ".\(instanceName)"
}
let internalStreamHandler = PigeonStreamHandler<Int64>(wrapper: streamHandler)
let channel = FlutterEventChannel(name: channelName, binaryMessenger: messenger, codec: aPigeonMethodCodec)
channel.setStreamHandler(internalStreamHandler)
}
}
```
B.g.swift
```swift
// Autogenerated from Pigeon (v22.7.2), do not edit directly.
// See also: https://pub.dev/packages/pigeon
import Foundation
#if os(iOS)
import Flutter
#elseif os(macOS)
import FlutterMacOS
#else
#error("Unsupported platform.")
#endif
private func isNullish(_ value: Any?) -> Bool {
return value is NSNull || value == nil
}
private func nilOrValue<T>(_ value: Any?) -> T? {
if value is NSNull { return nil }
return value as! T?
}
private class BPigeonCodecReader: FlutterStandardReader {
}
private class BPigeonCodecWriter: FlutterStandardWriter {
}
private class BPigeonCodecReaderWriter: FlutterStandardReaderWriter {
override func reader(with data: Data) -> FlutterStandardReader {
return BPigeonCodecReader(data: data)
}
override func writer(with data: NSMutableData) -> FlutterStandardWriter {
return BPigeonCodecWriter(data: data)
}
}
class BPigeonCodec: FlutterStandardMessageCodec, @unchecked Sendable {
static let shared = BPigeonCodec(readerWriter: BPigeonCodecReaderWriter())
}
var bPigeonMethodCodec = FlutterStandardMethodCodec(readerWriter: BPigeonCodecReaderWriter());
private class PigeonStreamHandler<ReturnType>: NSObject, FlutterStreamHandler {
private let wrapper: PigeonEventChannelWrapper<ReturnType>
private var pigeonSink: PigeonEventSink<ReturnType>? = nil
init(wrapper: PigeonEventChannelWrapper<ReturnType>) {
self.wrapper = wrapper
}
func onListen(withArguments arguments: Any?, eventSink events: @escaping FlutterEventSink)
-> FlutterError?
{
pigeonSink = PigeonEventSink<ReturnType>(events)
wrapper.onListen(withArguments: arguments, sink: pigeonSink!)
return nil
}
func onCancel(withArguments arguments: Any?) -> FlutterError? {
pigeonSink = nil
wrapper.onCancel(withArguments: arguments)
return nil
}
}
class PigeonEventChannelWrapper<ReturnType> {
func onListen(withArguments arguments: Any?, sink: PigeonEventSink<ReturnType>) {}
func onCancel(withArguments arguments: Any?) {}
}
class PigeonEventSink<ReturnType> {
private let sink: FlutterEventSink
init(_ sink: @escaping FlutterEventSink) {
self.sink = sink
}
func success(_ value: ReturnType) {
sink(value)
}
func error(code: String, message: String?, details: Any?) {
sink(FlutterError(code: code, message: message, details: details))
}
func endOfStream() {
sink(FlutterEndOfEventStream)
}
}
class StreamBStreamHandler: PigeonEventChannelWrapper<Int64> {
static func register(with messenger: FlutterBinaryMessenger,
instanceName: String = "",
streamHandler: StreamBStreamHandler) {
var channelName = "dev.flutter.pigeon.pigeon_sample.EventB.streamB"
if !instanceName.isEmpty {
channelName += ".\(instanceName)"
}
let internalStreamHandler = PigeonStreamHandler<Int64>(wrapper: streamHandler)
let channel = FlutterEventChannel(name: channelName, binaryMessenger: messenger, codec: bPigeonMethodCodec)
channel.setStreamHandler(internalStreamHandler)
}
}
```
</details>
### Proposal
I propose introducing a new configuration option in `SwiftOptions` and `KotlinOptions` to control the generation of shared utility classes (e.g., `PigeonEventChannelWrapper` and `PigeonEventSink`) similar to the `includeErrorClass` option. | package,c: proposal,team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Critical |
2,774,583,370 | flutter | [Android] Severe glitch when re-rendering some widgets | ### Steps to reproduce
Take specific device:
Wall mounted 10.1 inch android POE tablet touch screen display : RK3568
Rockchip3568 Quad-core cortex A55 processor
and run flutter counter app
//
I gave the user a version with Impeller disabled, but I have not received any feedback from him yet (it has been 9 hours). I will update this comment once I know whether the issue is solved or still persists even with Impeller disabled.
### Code sample
A simple `ListView`, using `setState` or `BLoC` to trigger re-rendering, as sketched below.
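For illustration, a minimal sketch of the kind of widget tree described; the names and list contents are assumptions for illustration, not the reporter's actual code:
```dart
import 'package:flutter/material.dart';

void main() => runApp(MaterialApp(home: const GlitchRepro()));

class GlitchRepro extends StatefulWidget {
  const GlitchRepro({super.key});

  @override
  State<GlitchRepro> createState() => _GlitchReproState();
}

class _GlitchReproState extends State<GlitchRepro> {
  int _counter = 0;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      // every setState rebuilds the list, which is when the glitch reportedly appears
      body: ListView.builder(
        itemCount: 50,
        itemBuilder: (context, index) =>
            ListTile(title: Text('Item $index / $_counter')),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: () => setState(() => _counter++),
        child: const Icon(Icons.refresh),
      ),
    );
  }
}
```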
### Performance profiling on master channel
- [ ] The issue still persists on the master channel
### Timeline Traces
I don't have access to physical device, user is in another country.
### Video demonstration
<details open>
<summary>Video demonstration</summary>
[ Without Animated Opacity ] and [Wrapped Circular loading indicator into RepaintBoundary]
https://github.com/user-attachments/assets/61d2eda9-4c4b-42db-8295-67ea332433ec
[With Animated Opacity] and [No RepaintBoundary]
https://github.com/user-attachments/assets/d079457f-40bb-44c3-af00-f0b1e0a17911
</details>
### What target platforms are you seeing this bug on?
Android
### OS/Browser name and version | Device information
Wall mounted 10.1 inch android POE tablet touch screen display : RK3568
Rockchip3568 Quad-core cortex A55 processor
### Does the problem occur on emulator/simulator as well as on physical devices?
Unknown
### Is the problem only reproducible with Impeller?
N/A
### Logs
N/A
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
❯ flutter doctor -v
[√] Flutter (Channel stable, 3.24.4, on Microsoft Windows [Version 10.0.22631.4602], locale en-US)
• Flutter version 3.24.4 on channel stable at C:\src\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 603104015d (3 months ago), 2024-10-24 08:01:25 -0700
• Engine revision db49896cf2
• Dart version 3.5.4
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\PC\AppData\Local\Android\sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio2\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.8.4)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.8.34408.163
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio2
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] VS Code (version 1.96.2)
• VS Code at C:\Users\PC\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (4 available)
• SM X710 (mobile) • R54WA00JKDE • android-arm64 • Android 14 (API 34)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4602]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.109
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.112
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| e: device-specific,platform-android,engine,c: rendering,P3,e: impeller,team-engine,triaged-engine | low | Critical |
2,774,635,442 | flutter | CupertinoButton is missing minWidth and minHeight. | ### Use case
Most of the time, our buttons may need to be rectangular rather than perfectly square. Currently, CupertinoButton only has a minSize property. For users, it is unclear whether this refers to width or height. Moreover, it is impossible to set a minimum size with unequal width and height. The only option is to adjust the size of its child to achieve the desired dimensions.
Specifically:
https://github.com/flutter/flutter/blob/b5df29072de2778dc843a3a8ad3ffa8b2bdc885c/packages/flutter/lib/src/cupertino/button.dart#L418-L428
I think this place should use minWidth and minHeight.
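For context, a minimal sketch of the workaround described above (letting a sized child dictate a rectangular footprint); the 120x44 target is an arbitrary example, not a value from the report:
```dart
import 'package:flutter/cupertino.dart';

void main() {
  runApp(CupertinoApp(
    home: CupertinoPageScaffold(
      child: Center(
        child: CupertinoButton(
          padding: EdgeInsets.zero,
          // minSize can only express a square minimum, so the rectangular
          // 120x44 footprint has to come from the child instead
          onPressed: () {},
          child: const SizedBox(
            width: 120,
            height: 44,
            child: Center(child: Text('Confirm')),
          ),
        ),
      ),
    ),
  ));
}
```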
### Proposal
Add minWidth and minHeight to replace minSize. | framework,waiting for PR to land (fixed),f: cupertino,c: proposal,P2,team-design,triaged-design | low | Minor |
2,774,650,406 | PowerToys | Open an app with active focus | ### Description of the new feature / enhancement
I would really appreciate the option to choose an `Active Focus` visibility setting, in addition to `Normal` and `Hidden`, in **Keyboard Manager > Remap a shortcut**.
### Scenario when this would be used?
You could create a shortcut that opens a terminal app on top of all other apps (similar to how this works on Linux).
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,774,682,985 | flutter | iOS BackdropFilter Performance Issues with Impeller Engine | ### Steps to reproduce
When using BackdropFilter with ImageFilter.blur in my iOS app with Impeller enabled, I noticed significant performance issues and lag, especially with multiple blur effects. However, when I disabled Impeller in Info.plist, the performance dramatically improved and matched Android's performance.
1. Create a Flutter app using BackdropFilter with ImageFilter.blur
2. Test on iOS device with Impeller enabled (default)
3. Notice lag and performance issues
4. Disable Impeller by adding `FLTEnableImpeller: false` in Info.plist (see the snippet after these steps)
5. Test again and notice significantly improved performance
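For reference, these are the documented Flutter opt-out switches referred to in step 4, quoted as the standard keys rather than project-specific values:
```xml
<!-- ios/Runner/Info.plist, inside the top-level <dict> -->
<key>FLTEnableImpeller</key>
<false/>

<!-- android/app/src/main/AndroidManifest.xml, inside <application>
     (only needed if opting out on Android as well) -->
<meta-data
    android:name="io.flutter.embedding.android.EnableImpeller"
    android:value="false" />
```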
Device Information:
- iOS Version: 15.8.3
- Device Model: IPhone 7 Plus
- Flutter Version: Channel stable, 3.24.5
Expected Behavior:
BackdropFilter should perform smoothly with Impeller enabled, similar to performance when Impeller is disabled.
Additional Notes:
- Performance issues are more noticeable when multiple BackdropFilters are used
- Same code works perfectly on Android devices
- Disabling Impeller resolves the performance issues completely
Would it be possible to optimize the BackdropFilter implementation for Impeller on iOS?
### Code sample
<details open><summary>Code sample</summary>
BackdropFilter(
  filter: ImageFilter.blur(sigmaX: 5, sigmaY: 5),
  child: Container(
    decoration: BoxDecoration(
      gradient: LinearGradient(
        colors: [
          Colors.white.withOpacity(0.2),
          Colors.white.withOpacity(0.1),
        ],
      ),
    ),
  ),
)
</details>
### Performance profiling on master channel
- [x] The issue still persists on the master channel
### Timeline Traces
<details open><summary>Timeline Traces JSON</summary>
```json
[Paste the Timeline Traces here]
```
</details>
### Video demonstration
<details open>
<summary>Video demonstration</summary>
[Upload media here]
</details>
### What target platforms are you seeing this bug on?
iOS
### OS/Browser name and version | Device information
macOS 15.1.1 24B91 darwin-arm64
Device: IPhone 7 Plus
### Does the problem occur on emulator/simulator as well as on physical devices?
Unknown
### Is the problem only reproducible with Impeller?
N/A
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.5, on macOS 15.1.1 24B91 darwin-arm64, locale tr-TR)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[✓] VS Code (version 1.96.2)
[✓] Connected device (3 available)
[✓] Network resources
• No issues found!
</details>
| waiting for customer response,in triage | low | Critical |
2,774,688,196 | deno | Long running tasks in parallel | The ability to run tasks in all workspace projects recursively is very useful, specifically for build tasks, but it would be nice to be able to run long running tasks recursively in parallel similar to how [Turborepo](https://turbo.build/repo/docs) works. | bug,task runner | low | Major |
2,774,722,477 | rust | improve spans for `CallArgument` constraints | that's a quirk of `CallArgument` constraints. they get their span from the `Location` of the call terminator; since they don't store the span of the particular argument, it gets lost in diagnostics. that said, it does look like we have the spans of each argument in `TypeChecker::check_call_inputs`, so they could store that.
_Originally posted by @dianne in https://github.com/rust-lang/rust/pull/133858#discussion_r1906130482_
| C-enhancement,A-diagnostics,A-borrow-checker,T-compiler,D-imprecise-spans | low | Minor |
2,774,724,066 | fastapi | Duplicated OperationID when adding route with multiple methods | ### Discussed in https://github.com/fastapi/fastapi/discussions/8449
<div type='discussions-op-text'>
<sup>Originally posted by **bruchar1** March 30, 2022</sup>
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
router.add_api_route(
"/clear",
clear,
methods=["POST", "DELETE"]
)
```
### Description
Seems to be caused by #4650.
The new `generate_unique_id()` function uses `list(route.methods)[0].lower()` as suffix for the `operation_id`. Therefore, in my example, both post and delete endpoints get `_post` suffix for operation_id, causing it to no longer be unique.
It then issues a "UserWarning: Duplicate Operation ID"
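One workaround, sketched here with the names from the example above (not an official fix), is to register the route once per method so each registration derives its suffix from its own single method:
```python
router.add_api_route("/clear", clear, methods=["POST"])
router.add_api_route("/clear", clear, methods=["DELETE"])
```
Each registration becomes its own `APIRoute`, so the generated `operation_id`s no longer collide.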
### Operating System
Windows
### Operating System Details
_No response_
### FastAPI Version
0.75.0
### Python Version
3.10.2
### Additional Context
_No response_</div> | question,question-migrate | low | Minor |
2,774,739,040 | flutter | flutter version 3.27.1 running report | ### Steps to reproduce
flutter run -v
### Expected results
success
### Actual results
```console
PS F:\project_mix\fluter_v2\navers> flutter run -v
[ +458 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ +2 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ +4 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +108 ms] executing: E:\android_sdk\platform-tools\adb.exe devices -l
[ +344 ms] List of devices attached
emulator-5554 device product:MI 9 model:MI_9 device:star2qltechn transport_id:1
[ +13 ms] E:\android_sdk\platform-tools\adb.exe -s emulator-5554 shell getprop
[ +261 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ +16 ms] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +296 ms] Skipping pub get: version match.
[ +219 ms] Generating F:\project_mix\fluter_v2\navers\android\app\src\main\java\io\flutter\plugins\GeneratedPluginRegistrant.java
[ +105 ms] ro.hardware = qcom
[ +159 ms] No packages with native assets. Skipping native assets compilation.
[ +5 ms] Initializing file store
[ +17 ms] Skipping target: gen_localizations
[ +7 ms] gen_dart_plugin_registrant: Starting due to {InvalidatedReasonKind.inputChanged: The following inputs have updated contents: F:\project_mix\fluter_v2\navers\.dart_tool\package_config_subset}
[ +297 ms] gen_dart_plugin_registrant: Complete
[ +1 ms] Skipping target: _composite
[ +2 ms] complete
[ +8 ms] Launching lib\main.dart on MI 9 in debug mode...
[ +5 ms] F:\flutter_folder\flutter\bin\cache\dart-sdk\bin\dartaotruntime.exe F:\flutter_folder\flutter\bin\cache\dart-sdk\bin\snapshots\frontend_server_aot.dart.snapshot --sdk-root F:\flutter_folder\flutter\bin\cache\artifacts\engine\common\flutter_patched_sdk/
--incremental --target=flutter --experimental-emit-debug-metadata --output-dill C:\Users\ADMINI~1\AppData\Local\Temp\flutter_tools.2528eb50\flutter_tool.b0386b9e\app.dill --packages F:\project_mix\fluter_v2\navers\.dart_tool\package_config.json
-Ddart.vm.profile=false -Ddart.vm.product=false --enable-asserts --track-widget-creation --filesystem-scheme org-dartlang-root --initialize-from-dill build\cache.dill.track.dill --verbosity=error --enable-experiment=alternative-invalidation-strategy
[ +35 ms] executing: E:\android_sdk\platform-tools\adb.exe -s emulator-5554 shell -x logcat -v time -t 1
[ +191 ms] <- compile package:navers/main.dart
[ +189 ms] --------- beginning of main
01-07 11:12:18.503 I/Finsky ( 3039): [166] pls.h(7): Already at the latest configurations for experiment package com.google.android.finsky.regular.
[ +111 ms] executing: E:\android_sdk\platform-tools\adb.exe version
[ +191 ms] Android Debug Bridge version 1.0.41
Version 35.0.1-11580240
Installed as E:\android_sdk\platform-tools\adb.exe
Running on Windows 10.0.19042
[ +2 ms] executing: E:\android_sdk\platform-tools\adb.exe start-server
[ +180 ms] Building APK
[ +16 ms] executing: F:\android_studio_el\jbr\bin\java -version
[ +264 ms] Exit code 0 from: F:\android_studio_el\jbr\bin\java -version
[ +1 ms] Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=utf-8
Picked up _JAVA_OPTIONS: -Dfile.encoding=utf-8
openjdk version "11.0.15" 2022-04-19
OpenJDK Runtime Environment (build 11.0.15+0-b2043.56-9505619)
OpenJDK 64-Bit Server VM (build 11.0.15+0-b2043.56-9505619, mixed mode)
[ +3 ms] executing: F:\JetBrainsToolbox\Android Studio\jbr\bin\java -version
[ +334 ms] Exit code 0 from: F:\JetBrainsToolbox\Android Studio\jbr\bin\java -version
[ +1 ms] Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=utf-8
Picked up _JAVA_OPTIONS: -Dfile.encoding=utf-8
openjdk version "17.0.11" 2024-04-16
OpenJDK Runtime Environment (build 17.0.11+0--11852314)
OpenJDK 64-Bit Server VM (build 17.0.11+0--11852314, mixed mode)
[ +7 ms] executing: F:\JetBrainsToolbox\Android Studio\jbr\bin\java -version
[ +352 ms] Exit code 0 from: F:\JetBrainsToolbox\Android Studio\jbr\bin\java -version
[ ] Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=utf-8
Picked up _JAVA_OPTIONS: -Dfile.encoding=utf-8
openjdk version "17.0.11" 2024-04-16
OpenJDK Runtime Environment (build 17.0.11+0--11852314)
OpenJDK 64-Bit Server VM (build 17.0.11+0--11852314, mixed mode)
[ +18 ms] executing: F:\JetBrainsToolbox\Android Studio\jbr\bin\java --version
[ +257 ms] Exit code 0 from: F:\JetBrainsToolbox\Android Studio\jbr\bin\java --version
[ ] openjdk 17.0.11 2024-04-16
OpenJDK Runtime Environment (build 17.0.11+0--11852314)
OpenJDK 64-Bit Server VM (build 17.0.11+0--11852314, mixed mode)
[ ] Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=utf-8
Picked up _JAVA_OPTIONS: -Dfile.encoding=utf-8
[ +18 ms] CMake project not found, skipping support Android 15 16k page size migration.
[ +26 ms] Using gradle from F:\project_mix\fluter_v2\navers\android\gradlew.bat.
[ +2 ms] Running Gradle task 'assembleDebug'...
[ +6 ms] executing: [F:\project_mix\fluter_v2\navers\android/] F:\project_mix\fluter_v2\navers\android\gradlew.bat --full-stacktrace --info -Pverbose=true -Ptarget-platform=android-x64 -Ptarget=F:\project_mix\fluter_v2\navers\lib\main.dart
-Pbase-application-name=android.app.Application -Pdart-obfuscation=false -Ptrack-widget-creation=true -Ptree-shake-icons=false -Pfilesystem-scheme=org-dartlang-root assembleDebug
[ +173 ms] Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=utf-8
[ +3 ms] Picked up _JAVA_OPTIONS: -Dfile.encoding=utf-8
[ +212 ms] Error: Could not find or load main class Dfile.encoding=utf-8
[ +1 ms] Caused by: java.lang.ClassNotFoundException: Dfile.encoding=utf-8
[ +24 ms] Running Gradle task 'assembleDebug'... (completed in 405ms)
[+8399 ms] Error: Gradle task assembleDebug failed with exit code 1
[ +1 ms] "flutter run" took 12,980ms.
[ +26 ms]
#0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3)
#1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:860:9)
<asynchronous suspension>
#2 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1450:27)
<asynchronous suspension>
#3 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#4 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
<asynchronous suspension>
#5 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:421:9)
<asynchronous suspension>
#6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
<asynchronous suspension>
#8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:131:9)
<asynchronous suspension>
#9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#10 main (package:flutter_tools/executable.dart:94:3)
<asynchronous suspension>
[ +89 ms] ensureAnalyticsSent: 86ms
[ ] Running 2 shutdown hooks
[ +5 ms] Shutdown hooks complete
[ +7 ms] exiting with code 1
```
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
```console
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.19042.1415], locale zh-CN)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Build Tools 2019 16.11.39)
[√] Android Studio (version 2022.1)
[√] Android Studio (version 2024.1)
[√] Connected device (4 available)
``` | waiting for customer response,in triage | low | Critical |
2,774,743,712 | flutter | Linux_pixel_7pro integration_ui_keyboard_resize is 2.17% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Linux_pixel_7pro integration_ui_keyboard_resize"
}
-->
The post-submit test builder `Linux_pixel_7pro integration_ui_keyboard_resize` had a flaky ratio 2.17% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_pixel_7pro%20integration_ui_keyboard_resize/4489
Commit: https://github.com/flutter/flutter/commit/62c6859e593ba7c7b075ee850c3690eb44401afa
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_pixel_7pro%20integration_ui_keyboard_resize/4489
https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_pixel_7pro%20integration_ui_keyboard_resize/4470
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Linux_pixel_7pro%20integration_ui_keyboard_resize
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P0,c: flake,team-tool | high | Major |
2,774,744,864 | flutter | [Proposal] AppBar `leadingWidth` should be available in `AppBarTheme` | ### Use case
In `AppBarTheme` I can provide the `toolbarHeight`.
But since app bar leading width uses the constant `kToolbarHeight` by default it is no longer a square because toolbar height is changed in the theme.
On Design Device (toolbar height `56`):
<img width="402" alt="Screenshot 2025-01-08 at 14 21 40" src="https://github.com/user-attachments/assets/a197ffd0-fe52-411e-a383-6ffe305f3143" />
On iPad 13inch (toolbar height scaled to `90` based on custom logic):
<img width="632" alt="Screenshot 2025-01-08 at 14 21 30" src="https://github.com/user-attachments/assets/ab13fdda-9fc9-4e51-b1be-ce7ce2fcf40b" />
### Proposal
1. Either `leadingWidth` should use toolbar height from theme internally
2. Or I should be able to provide `leadingWidth` in `AppBarTheme` along with `toolbarHeight` | c: new feature,framework,f: material design,waiting for PR to land (fixed),c: proposal,P3,team-design,triaged-design | low | Minor |
2,774,750,324 | kubernetes | Report event or record error/info log when drop podUpdates message | ### What would you like to be added?
In `func (p *podWorkers) UpdatePod(options UpdatePodOptions)` (pod_worker.go):

    select {
    case podUpdates <- struct{}{}:
    default:
    }
The `podUpdates` channel has a buffer of 1, so if the previous update signal has not been consumed yet, the new one is dropped without any warning.
The user cannot get any idea that this happened; an event or an error/info log should be added for debugging.
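For illustration, a small standalone sketch of the pattern in question; this is not kubelet code, it just reproduces the buffered-channel coalescing and shows where a log line could go:
```go
package main

import "log"

func main() {
    // buffer of 1, mirroring the kubelet pod worker's podUpdates channel
    podUpdates := make(chan struct{}, 1)

    notify := func() {
        select {
        case podUpdates <- struct{}{}:
            log.Println("update signal delivered")
        default:
            // today the kubelet is silent on this branch; the proposal is to
            // record an event or log line here instead
            log.Println("update signal coalesced with a pending one")
        }
    }

    notify() // delivered
    notify() // coalesced: the buffer already holds a pending signal
}
```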
### Why is this needed?
The pod update message is dropped without any warning.
It is better to add some debug log. | sig/node,kind/feature,needs-triage | low | Critical |
2,774,788,159 | pytorch | [ONNX] MelSpectrogram results in "Pads has incorrect number of values" | ### 🐛 Describe the bug
``` python
import numpy as np
import onnx
import onnxruntime as ort
import torch
import torch.nn as nn
import torchaudio


class DataCov(nn.Module):
    def __init__(self):
        super(DataCov, self).__init__()
        self.transform = nn.Sequential(
            torchaudio.transforms.MelSpectrogram(sample_rate=48000, n_fft=1536, hop_length=768, f_min=20, f_max=20000)
        )

    def forward(self, x1):
        return self.transform(x1)


def export_datacov_onnx(path):
    model = DataCov()
    model.eval()
    src_wav = torch.randn((1, 1, 48000 * 12), requires_grad=True)
    input_names = ["wav_data"]
    output_names = ["ans"]
    args = (src_wav,)
    # export with the dynamo-based exporter
    torch.onnx.export(
        model,
        args,
        path,
        export_params=True,
        opset_version=19,
        do_constant_folding=True,
        verbose=False,
        input_names=input_names,
        output_names=output_names,
        dynamo=True,
        report=True
    )
    onnx_model = onnx.load(path)
    onnx.checker.check_model(onnx_model)


def test_data_cov_onnx(onnx_path):
    sess_options = ort.SessionOptions()
    sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    providers = [
        'CUDAExecutionProvider',
        'DmlExecutionProvider',
        'CPUExecutionProvider'
    ]
    session = ort.InferenceSession(onnx_path, sess_options, providers=providers)
    src_wav = torch.randn((1, 1, 48000 * 12))
    ort_inputs = {session.get_inputs()[0].name: src_wav.numpy()}
    ort_outs = session.run(None, ort_inputs)
    ort_outs = ort_outs[0]
    ort_outs = torch.from_numpy(ort_outs)
    model = DataCov()
    model.eval()
    deal_1 = model(src_wav)
    # compare eager PyTorch output with the ONNX Runtime output
    print(f'Torch Output Shape: {deal_1.shape}, ONNX Output Shape: {ort_outs.shape}')
    print(f'Torch Output Min/Max: {torch.min(deal_1)}, {torch.max(deal_1)}')
    print(f'ONNX Output Min/Max: {torch.min(ort_outs)}, {torch.max(ort_outs)}')
    print(f'Torch Output Mean/Std: {torch.mean(deal_1)}, {torch.std(deal_1)}')
    print(f'ONNX Output Mean/Std: {torch.mean(ort_outs)}, {torch.std(ort_outs)}')
    np.testing.assert_allclose(deal_1.detach().numpy(), ort_outs.detach().numpy(), rtol=1e-02, atol=1e-04)


if __name__ == '__main__':
    export_datacov_onnx("DataCov.onnx")
    test_data_cov_onnx("DataCov.onnx")
```
Error output:
``` shell
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (_inlfunc_aten_reflection_pad1d_n11) Op (Pad) [ShapeInferenceError] Pads has incorrect number of values. Expected 2 * 3 values. Got 4 values.
```
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250107+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.8 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:31:09) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz
CPU family: 6
Model: 167
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 53%
CPU max MHz: 4900.0000
CPU min MHz: 800.0000
BogoMIPS: 4992.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20250108
[pip3] onnxsim==0.4.36
[pip3] onnxslim==0.1.46
[pip3] torch==2.7.0.dev20250107+cpu
[pip3] torchaudio==2.6.0.dev20250107+cpu
[pip3] torchvision==0.22.0.dev20250107+cpu
[pip3] triton==3.1.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] torch 2.7.0.dev20250107+cpu pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250107+cpu pypi_0 pypi
[conda] torchvision 0.22.0.dev20250107+cpu pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi | module: onnx,triaged | low | Critical |
2,774,808,705 | svelte | Untrack documentation is difficult to understand | ### Describe the problem
The documentation for `untrack` showcases usage of a function `save` which does not exist, so in my opinion it's hard to understand what the example is actually demonstrating.
https://svelte.dev/docs/svelte/svelte#untrack
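For context, this is roughly what a self-contained version of that example could look like, with the undefined `save` replaced by a concrete call (a sketch, not the actual docs text):
```svelte
<script>
  import { untrack } from 'svelte';

  let { a, b } = $props();

  $effect(() => {
    // the effect re-runs when `a` changes, but reading `b` inside
    // untrack() does not register it as a dependency
    console.log(a, untrack(() => b));
  });
</script>
```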
### Describe the proposed solution
Write a bit more thorough explanation of what it does
### Importance
would make my life easier | documentation | low | Minor |
2,774,819,088 | rust | Performance regression on toy problem, but not for opt-level=1. | <!--
Thank you for filing a regression report! 🐛 A regression is something that changed between versions of Rust but was not supposed to.
Please provide a short summary of the regression, along with any information you feel is relevant to replicate it.
-->
I tried this code:
```rust
fn hanoi_int_inner(n : usize, start: usize, end : usize, aux : usize) -> usize {
    if n == 0 {
        0
    } else {
        let mut out = hanoi_int_inner(n-1, start, aux, end);
        out += 1;
        out += hanoi_int_inner(n-1, aux, end, start);
        out
    }
}

fn hanoi_int(n: usize) -> usize {
    hanoi_int_inner(n,0,1,2)
}
```
I expected code compiled with `rustc -C opt-level=3` to run at least as fast as code compiled with `rustc -C opt-level=1`. I have not seen any resources online giving a recommendation for `opt-level=1` or any downside to more optimization other than binary size.
However, only with `opt-level=1` does rustc figure out that `start`, `end`, and `aux` are useless and combine the two recursive calls. Combining the two recursive calls results in linear time complexity vs exponential so it is immediately noticeable even without any tight timing (> minutes vs milliseconds).
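In other words, the collapsed form the optimizer can reach looks roughly like this (a sketch of the transformation, not actual compiler output):
```rust
// Once the unused peg arguments are dropped, both recursive calls are identical,
// so f(n) = 2 * f(n - 1) + 1 can be computed with a single call per level:
// linear in n, instead of the 2^n - 1 increments the exponential version performs.
fn hanoi_collapsed(n: usize) -> usize {
    if n == 0 { 0 } else { 2 * hanoi_collapsed(n - 1) + 1 }
}
```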
Previous versions correctly compile this toy code into non-exponential machine code. It seems to have broken around 1.70 and is currently broken on both stable and nightly.
I am totally willing to investigate this, but I would need some guidance because I have never looked deeply into LLVM's or Rust's optimization passes.
### Version it worked on
<!--
Provide the most recent version this worked on, for example:
It most recently worked on: Rust 1.47
-->
It most recently worked on `rustc --version --verbose`:
```
rustc 1.69.0 (84c898d65 2023-04-16)
binary: rustc
commit-hash: 84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc
commit-date: 2023-04-16
host: x86_64-unknown-linux-gnu
release: 1.69.0
LLVM version: 15.0.7
```
### Version with regression
`rustc --version --verbose`:
```
rustc 1.70.0 (90c541806 2023-05-31)
binary: rustc
commit-hash: 90c541806f23a127002de5b4038be731ba1458ca
commit-date: 2023-05-31
host: x86_64-unknown-linux-gnu
release: 1.70.0
LLVM version: 16.0.2
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
rustc 1.86.0-nightly (ad211ced8 2025-01-07)
binary: rustc
commit-hash: ad211ced81509462cdfe4c29ed10f97279a0acae
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
```
<!--
If you know when this regression occurred, please add a line like below, replacing `{channel}` with one of stable, beta, or nightly.
@rustbot modify labels: +regression-from-stable-to-{channel} -regression-untriaged
-->
| A-LLVM,I-slow,P-low,T-compiler,regression-untriaged,C-optimization | low | Critical |
2,774,848,402 | flutter | [impeller] Performance and rendering problems on PowerVR device | ### Steps to reproduce
When running a Flutter app with Impeller on a Galaxy Tab 7 Lite (Android 14), the performance is really bad. Rendering stutters, crashes, rendering glitches. No problem at all without Impeller.
The device is a Samsung SM-T220, the GPU is PowerVR Rogue GE8320
It is probably an issue affecting all PowerVR devices; see this older bug:
https://github.com/flutter/flutter/issues/143573
### Expected results
fast, clean rendering with Impeller
### Actual results
crash, bad rendering, hiccups
### Code sample
<details open><summary>Code sample</summary>
Just use the default demo app with Impeller
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.1.1 24B2091 darwin-arm64, locale en-US)
• Flutter version 3.27.1 on channel stable at /opt/homebrew/Caskroom/flutter/3.24.5/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (3 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/wouter/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_SDK_ROOT = /Users/wouter/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.96.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (5 available)
• SM T220 (mobile) • 192.168.1.45:37687 • android-arm64 • Android 14 (API 34)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B2091 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B2091 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| waiting for customer response,in triage | low | Critical |
2,774,851,486 | flutter | [impeller] rendering, performance problems with impeller on PowerVR devices | ### Steps to reproduce
When running a Flutter app with Impeller on a Galaxy Tab A7 Lite (Android 14), performance is very poor: rendering stutters, crashes, and shows glitches. There is no problem at all without Impeller.
The device is a Samsung SM-T220; the GPU is a PowerVR Rogue GE8320.
This probably affects all PowerVR devices; see this older bug:
https://github.com/flutter/flutter/issues/143573
### Expected results
fast, clean rendering with Impeller
### Actual results
crash, bad rendering, hiccups
### Code sample
<details open><summary>Code sample</summary>
Just use the default demo app with Impeller
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.1.1 24B2091 darwin-arm64, locale en-US)
• Flutter version 3.27.1 on channel stable at /opt/homebrew/Caskroom/flutter/3.24.5/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (3 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/wouter/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_SDK_ROOT = /Users/wouter/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.96.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (5 available)
• SM T220 (mobile) • 192.168.1.45:37687 • android-arm64 • Android 14 (API 34)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B2091 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B2091 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| waiting for customer response,in triage | low | Critical |
2,774,881,248 | rust | Tracking Issue for nonnull_provenance | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(nonnull_provenance)]`
This is a tracking issue for some provenance functions on NonNull that were missed in the initial strict provenance stabilization.
<!--
Include a short description of the feature.
-->
### Public API
```rust
// core::ptr
impl<T> NonNull<T> {
pub const fn without_provenance(addr: NonZero<usize>) -> Self;
pub fn from_exposed_provenance(addr: NonZero<usize>) -> Self;
}
impl<T: ?Sized> NonNull<T> {
pub fn expose_provenance(self) -> NonZero<usize>;
}
```
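For illustration, a rough usage sketch based only on the signatures above; it assumes a nightly toolchain with the feature gate enabled, and the details may change before stabilization.
```rust
#![feature(nonnull_provenance)]

use core::num::NonZero;
use core::ptr::NonNull;

fn main() {
    // A non-null pointer with a fixed address but no provenance,
    // e.g. for a sentinel or dangling value.
    let addr = NonZero::new(0x1000usize).unwrap();
    let sentinel: NonNull<u8> = NonNull::without_provenance(addr);
    assert_eq!(sentinel.addr(), addr);

    // Round-trip a real pointer through an exposed address.
    let x = 5u8;
    let p = NonNull::from(&x);
    let exposed: NonZero<usize> = p.expose_provenance();
    let q: NonNull<u8> = NonNull::from_exposed_provenance(exposed);
    assert_eq!(unsafe { q.read() }, 5);
}
```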
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [x] ACP: https://github.com/rust-lang/libs-team/issues/518
- [ ] Implementation: https://github.com/rust-lang/rust/pull/135242
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,774,885,426 | ant-design | Space.Compact compact layout issue | ### Reproduction link
[https://codepen.io/sjzcxc/pen/KwPQwNp?editors=0010](https://codepen.io/sjzcxc/pen/KwPQwNp?editors=0010)
### Steps to reproduce
1. Use a Space.Compact compact layout where the first part is a Select and the second part is a TextArea; this causes the corner-radius styles and heights to be misaligned.
2. From initial investigation, the height becomes misaligned as soon as the `allowClear` property is added to the TextArea.
### What is expected?
The corner-radius styles and heights should align.
### What is actually happening?
They are actually misaligned.
| Environment | Info |
| --- | --- |
| antd | 5.23.0 |
| React | 18 |
| System | Mac |
| Browser | Chrome (latest) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,help wanted | low | Major |
2,774,890,073 | rust | `x86_64-unknown-linux-musl` with `-Ctarget-feature=-crt-static` links to glibc | tested with rustc stable 1.83.0, on Arch Linux (kernel 6.6.69 LTS)
```
rustup install stable-x86_64-unknown-linux-gnu
rustup target add x86_64-unknown-linux-musl
cargo new musl-hm
cd musl-hm
cargo rustc --target x86_64-unknown-linux-musl -- -Ctarget-feature=-crt-static
ldd target/x86_64-unknown-linux-musl/debug/musl-hm
linux-vdso.so.1 (0x00007ffc176a7000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x0000755e74d12000)
libc.so.6 => /usr/lib/libc.so.6 (0x0000755e74b21000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x0000755e74dc4000)
```
The same thing happens with `cargo build` and `RUSTFLAGS` instead of `cargo rustc`.
This happens irrespective of whether the musl system package is installed or not.
If you set `target.x86_64-unknown-linux-musl.linker = "musl-gcc"` then linking fails with `-Ctarget-feature=-crt-static` (complains about missing `libgcc_s`), and without `-Ctarget-feature`, the resulting binary crashes instantly (that's a duplicate of https://github.com/rust-lang/rust/issues/95926), but is dynamically linked to the musl library and loader.
I think rustc should error out on musl targets if attempting to disable `crt-static`, since anything else produces wrong or broken binaries. If the idea is that `-crt-static` works on targets where musl is the system libc (e.g. Alpine), then perhaps there should be something that detects whether the system libc is musl, and errors (or ignores `crt-static`) if not. | A-linkage,O-linux,T-compiler,O-musl,C-bug,A-target-feature | medium | Critical |
2,775,021,398 | flutter | Issue rendering CustomPainter using Impeller | ### Steps to reproduce
I have updated to version 3.27, in which Impeller is enabled by default.
The issue is relatively easy to reproduce: simply create a CustomPainter widget with a lot of content inside (in my case, it contains approximately 200 Text widgets, a local image, a widget with a barcode, and another with a QR code using the `barcode_widget` and `qr_flutter` libraries). Then, navigate to this widget using a "fade" transition. Upon entering and leaving the widget, a series of noticeable flickers occur, which did not happen in version 3.24.
### Expected results
The expected outcome is a clean and smooth rendering without flickering.
### Actual results
What is currently observed is a series of flickers that occur only the first few times. If you navigate back to the same widget, the flickering still happens but is significantly reduced.
### Code sample
<details open><summary>Code sample</summary>
```dart
Route fadeTransitionRoute(Widget destination) {
return PageRouteBuilder(
pageBuilder: (_, __, ___) => destination,
transitionsBuilder: (_, animation, __, child) {
return FadeTransition(
opacity: animation,
child: child,
);
},
);
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
```console
[√] Flutter (Channel stable, 3.27.0, on Microsoft Windows [Versi¢n 10.0.19045.5011], locale es-ES)
• Flutter version 3.27.0 on channel stable at C:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 8495dee1fd (4 weeks ago), 2024-12-10 14:23:39 -0800
• Engine revision 83bacfc525
• Dart version 3.6.0
• DevTools version 2.40.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\JMMARTINEZ\AppData\Local\Android\sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.7+0-b2043.56-10550314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Professional 2022 17.8.6)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Professional
• Visual Studio Professional 2022 version 17.8.34525.116
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2023.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.7+0-b2043.56-10550314)
[√] VS Code (version 1.96.2)
• VS Code at C:\Users\JMMARTINEZ\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (4 available)
• CPH2483 (mobile) • IZZXSCAQ6XZTIJ55 • android-arm64 • Android 14 (API 34)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Versi¢n 10.0.19045.5011]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.112
[√] Network resources
• All expected network resources are available.
• No issues found!
``` | waiting for customer response,in triage | low | Minor |
2,775,025,153 | react-native | flexWrap wrapping unexpectedly with alignItems: 'flex-end' | ### Description
This bug/issue seems to be **screen-size dependent**. In the provided example, with the text used, the incorrect wrapping is visible on the Pixel 7 screen. The same issue also occurs on iOS, but there I did not test which text + device combination causes the premature overflow. When the text is changed (shortened or lengthened), the overflow disappears until a correct overflow occurs. Maybe this has something to do with the way the Views are nested, but I still don't understand how this is expected behaviour.
**Pixel 7**: the text wraps even though it should not yet, and the backgroundColor is also only displayed on the top Text.
<img width="300" alt="Bildschirmfoto 2025-01-08 um 11 27 57" src="https://github.com/user-attachments/assets/0fd92712-1f5f-4020-b6e0-5d5eb9b994a8" />
**Pixel 4**: the same text does not wrap:
<img width="308" alt="Bildschirmfoto 2025-01-08 um 11 26 01" src="https://github.com/user-attachments/assets/1d86d353-9094-4934-ad27-9be134062c6c" />
### Steps to reproduce
1. Locally run the code provided in the expo snack using a pixel 7 simulator or test it directly in expo snack
### React Native Version
0.76.1
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6
CPU: (16) x64 Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Memory: 10.96 GB / 64.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 23.3.0
path: /usr/local/bin/node
Yarn: Not Found
npm:
version: 10.9.0
path: /usr/local/bin/npm
Watchman:
version: 2024.12.02.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 3.3.5
path: /usr/local/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0
wanted: 15.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: ^0.76.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
No crash/failure occurs
```
### Reproducer
https://snack.expo.dev/@alex_p8/gnarly-orange-ramen
### Screenshots and Videos
<img width="300" alt="Bildschirmfoto 2025-01-08 um 11 27 57" src="https://github.com/user-attachments/assets/7270a417-c206-48ee-8a0e-48e8783a9513" />
| Issue: Author Provided Repro,Newer Patch Available | low | Critical |
2,775,025,722 | vscode | Deleting a folder ending in a dot, actually deletes a different folder |
Type: <b>Bug</b>
I accidentally created a folder ending with a dot, say `src/app/folder.`, when the directory `src/app/folder` (without a trailing dot) also existed. The folder ending with a dot was an accident, so I tried to delete it. However, when deleting the folder `src/app/folder.` in the GUI, VSCode actually deletes the different folder `src/app/folder`!
While other software (including Microsoft Windows Explorer) also has trouble deleting folders ending with dots, I think it would be good if VSCode took this into account.
Even though a folder name ending in a dot may be an illegal path, such folders can still be created and exist, so I think VSCode should check for this situation to prevent deleting a different folder from the one selected. In my case it caused me to lose some uncommitted progress.
In this case, how the folder was created does not matter for the issue VSCode is having, but you can read more about that on [the Issue I created for Angular-CLI](https://github.com/angular/angular-cli/issues/29275).
I hope this helps!
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-1145G7 @ 2.60GHz (8 x 2611)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.84GB (1.12GB free)|
|Process Argv|--crash-reporter-id 002b88b2-788f-4c4e-83e7-25653b04045e|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (35)</summary>
Extension|Author (truncated)|Version
---|---|---
ng-template|Ang|19.0.3
vscode-tailwindcss|bra|0.12.17
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
codespaces|Git|1.17.3
copilot|Git|1.255.0
copilot-chat|Git|0.23.2
rainbow-csv|mec|3.13.0
git-graph|mhu|1.30.0
folderformatter|mic|0.0.1
azure-pipelines|ms-|1.249.0
azure-dev|ms-|0.8.4
vscode-azureappservice|ms-|0.25.4
vscode-azurecontainerapps|ms-|0.7.1
vscode-azurefunctions|ms-|1.16.1
vscode-azureresourcegroups|ms-|0.10.1
vscode-azurestaticwebapps|ms-|0.12.2
vscode-azurestorage|ms-|0.16.1
vscode-azurevirtualmachines|ms-|0.6.6
vscode-cosmosdb|ms-|0.24.1
csharp|ms-|2.55.29
vscode-dotnet-runtime|ms-|2.2.3
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
azure-account|ms-|0.12.0
powershell|ms-|2024.4.0
vscode-node-azure-pack|ms-|1.2.0
vscode-serial-monitor|ms-|0.13.1
vscode-speech|ms-|0.12.1
vsliveshare|ms-|1.0.5948
vscode-yaml|red|1.15.0
sonarlint-vscode|Son|4.14.1
gitblame|wad|11.1.1
markdown-all-in-one|yzh|3.6.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaac:30438847
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,file-explorer,windows,confirmation-pending | low | Critical |
2,775,039,383 | rust | Item-bounds can be used to non-productively prove themselves | This issue has been discovered by @steffahn in https://github.com/rust-lang/rust/issues/135011#issuecomment-2574201519
> ```rust
> // We only check that GAT where-clauses of the *trait* while normalizing;
> // normalizing `<T as Trait<U>>::Proof` to `U` trivially succeeds.
> trait Trait<R>: Sized {
> type Proof: Trait<R, Proof = Self>;
> }
> impl<L, R> Trait<R> for L {
> // We prove that the impl item is compatible with the trait in the
> // env of the trait, which is pretty much empty.
> //
> // `L: Trait<R>` is trivial
> // `R: Trait<R, Proof = <L::Proof as Trait<R>>::Proof>` normalizes to
> // `R: Trait<R, Proof = <R as Trait<R>>::Proof>` normalizes to
> // `R: Trait<R, Proof = R>` is trivial
> //
> // Proving the item-bound holds assumes the *impl where-bounds*.
> // For this we normalize the where-bound `R: Trait<R, Proof = <L::Proof as Trait<R>>::Proof>`
> // by using the item-bound of `L::Proof`: `R: Trait<R, Proof = L>` 💀¹. Proving the
> // item-bound of `<L as Trait<R>>::Proof` is now trivial.
> type Proof
> = R
> where
> L: Trait<R>,
> R: Trait<R, Proof = <L::Proof as Trait<R>>::Proof>;
> }
> fn transmute<L: Trait<R>, R>(r: L) -> <L::Proof as Trait<R>>::Proof { r }
> fn main() {
> let s: String = transmute::<_, String>(vec![65_u8, 66, 67]);
> println!("{}", s); // ABC
> }
> ```
> What's happening at ¹ is that proving that the item-bounds of an associated type is able
> to assume the item-bounds of exactly that associated type. This is non-productive cyclic reasoning.
>
> You've found a new way to exploit https://github.com/rust-lang/trait-system-refactor-initiative/issues/62, answering the question posed in https://github.com/rust-lang/trait-system-refactor-initiative/issues/116 😊
_Originally posted by @lcnr in [#135011](https://github.com/rust-lang/rust/issues/135011#issuecomment-2574766479)_ | P-medium,A-associated-items,I-unsound,S-blocked,T-types | low | Minor |
2,775,065,445 | storybook | [Documentation]: Debugging in VSCode | ### Describe the problem
Need to explain how to set `launch.json` to make vscode breakpoints work with storybook.
### Additional context
https://github.com/storybookjs/storybook/issues/1754
https://github.com/storybookjs/storybook/discussions/26153 | documentation | low | Critical |
2,775,081,947 | excalidraw | radix popup overflowing viewport issues | - arrowhead picker

- color and font pickers should have max height so it's scrollable when they don't fit

| UX/UI | low | Minor |
2,775,087,274 | vscode | Flag invalid engine version in package.json schema of extensions | Right now extensions can put
```
"engines": {
"vscode": "*"
}
```
This is invalid per our docs https://code.visualstudio.com/api/references/extension-manifest
Can we flag this as a lint warning / error via JSON schema? | feature-request,extensions | low | Critical |
2,775,089,782 | PowerToys | Guides on screen | ### Description of the new feature / enhancement
One of the best and most useful features for developers and designers could be adding screen guides to help them build layouts without third-party tools.
Some useful actions:
- create it vertically and horizontally,
- lock it
- change the colours
- move it manually and with the keyboard
### Scenario when this would be used?
Build layouts and measure directly in the browser
Help manage spacing between content
Align content visually
### Supporting information
https://xscopeapp.com/guide#guides | Needs-Triage | low | Minor |
2,775,139,605 | ant-design | Email validator allows hyphens to be at the first and the last position in domain part | ### Reproduction link
[https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-w2ntkh](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-w2ntkh)
### Steps to reproduce
1. Create a `Form` component with `Form.Item` having rules type email in it
2. Enter email with hyphens at the first and/or the last position of a domain part
3. Try to submit the form
### What is expected?
Validation isn't passed. `onFinish` is not triggered. `onFinishFailed` is triggered instead
### What is actually happening?
Validation is passed successfully. Form `onFinish` is triggered.
| Environment | Info |
| --- | --- |
| antd | 5.20.2 |
| React | 18.3.1 |
| System | MacOS |
| Browser | **Arc Version** 1.74.0 (57065); **Chromium Engine Version** 131.0.6778.205 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Critical |
2,775,160,592 | Python | Error regarding imports in replit | ### What would you like to share?
My code on Replit is throwing an error regarding the `import itertools` statement. The error it shows is that the import is unsorted or unformatted. What should I do?
### Additional information
_No response_ | awaiting triage | medium | Critical |
2,775,233,749 | godot | Classes that inherit `PackedScene` do not store UIDs when serialized and do not have their script when reloaded | ### Tested versions
v4.4.dev7.official [46c8f8c5c]
master [d2ada64a03d2abdb97cafe8f10623db8a2ce1d4c]
### System information
Godot v4.4.dev7 - Fedora Linux 41 (KDE Plasma) on Wayland - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - integrated Intel(R) UHD Graphics 620 (KBL GT2) - Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz (8 threads)
### Issue description
Scripts can inherit `PackedScene`, which I think is great for customizability!
However, due to how `PackedScene`s are saved, this means there is no script attached to the file when reloaded. Unlike other resources, which have a `[resource]` section where their `script` property is set, this does not have one, understandably. For some reason, it still references its script as an `ext_resource` even though it is never referenced in the file, and there are no UIDs referenced anywhere.
I'm not sure what the best way to fix this would be, but I think a warning when a script inherits `PackedScene` that the script will be lost when reloaded is required here, and a workaround would be to add the script after loading the resource.
### Steps to reproduce
Have a script that inherits `PackedScene` and add a `class_name`. Have a node that will initialize that script to a variable with the type of that script and pack itself, then save the resource. Then load the resource and set it to that same variable. It will fail because the resource is not of the type of that script.
You can also check the text file to see that the script is referenced as an `ext_resource`, yet never used, and that UIDs are non-existent for both the `PackedScene` header itself and the `ext_resource`s. It will be fixed when re-saved in the Godot Editor.
### Minimal reproduction project (MRP)
[packedsceneproblem2.zip](https://github.com/user-attachments/files/18346554/packedsceneproblem2.zip)
| bug,topic:core,confirmed | low | Minor |
2,775,258,777 | PowerToys | Version 0.87.1 disabled PC audio | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
I had version 0.80.0 installed on my work PC, with WIN11 (KB5048685). I did the automatic update in the PowerToys UI; the update worked, but I didn't restart my PC immediately. The new version was 0.87.1.
Some time after the update, I noticed that none of the audio outputs were working: not the PC speakers, headset, or Bluetooth headphones.
I uninstalled PowerToys and restarted my PC a few times, but that didn't solve the problem.
After a few attempts, I decided to reinstall version 0.80.0 and, after restarting, the audio started working again.
Since it's my work PC, I don't have full administrator access to update or disable audio drivers (Realtek Audio), so I couldn't try other solutions.
The problem is solved for me now, but I think it was good to report this bug, since it was somehow caused by the update.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,775,310,461 | PowerToys | [Workspaces] Workspaces editor window opens outside the visible area if you have multiple monitor configurations | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Have two monitor settings:
- Set up the monitor to be to the left of the laptop monitor
- Open Workspaces' editor window
- Resize and move the window to the monitor in the left
- Close it
- Set up the monitor to be to the right of the laptop monitor
- Try opening the Workspaces' editor window
- It will be hidden
Some extra details: these two setups are completely separate, so I turn the PC on in either one setup or the other depending on where I'm working. My home setup has the external monitor (an ultra-wide) to the left, while at the office I have a normal monitor to the right. All three monitors are 1080p at 100% zoom, but they have varying pixel densities (physically bigger or smaller monitors, yet equivalent to 1080p in pixels).
### ✔️ Expected Behavior
If the top left edge of the window is in a negative position, position it at 0:0 (I don't really know how this works. I'm assuming it works like this)
### ❌ Actual Behavior
The window gets hidden:

### Other Software
_No response_ | Issue-Bug,Resolution-Fix Committed,Product-Workspaces | low | Minor |
2,775,344,782 | go | runtime/pprof: mechanism to show runtime frames in heap profiles | runtime/pprof [hides "runtime" frames](https://cs.opensource.google/go/go/+/master:src/runtime/pprof/protomem.go;l=39;drc=b50ccef67a5cd4a2919131cfeb6f3a21d6742385) in heap profiles, so allocations in maps appear at the map assignment location in the calling code, but not in `runtime.mapassign` itself.
Personally I find this behavior frustrating because it makes it much more difficult to look at the overall impact of maps in general on the heap, since there is no common frame to look at. I think it would be nice to have a mechanism to keep runtime frames in heap profiles. Perhaps something like `GODEBUG=heapprofileruntimeframes=1`.
I broke the runtime frame hiding in 1.24 (#71174) and discovered the bug because @bboreham mentioned that they liked the change! | NeedsDecision,compiler/runtime,BugReport | low | Critical |
2,775,362,652 | node | Confusing error message when using `import` inside CJS file | ### Version
23.6.0
### Platform
```text
n/a
```
### Subsystem
esm
### What steps will reproduce the bug?
- Create a `file.cjs` file containing `import "path";`
- Run `node file.cjs`
### How often does it reproduce? Is there a required condition?
/
### What is the expected behavior? Why is that the expected behavior?
It should tell me that to use `import` inside this file I need to use `.mjs`, not `.cjs`.
### What do you see instead?
```
Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension.
```
It also offers the solution of setting "type": "module" in the package.json, but that is wrong since my file is .cjs.
### Additional information
_No response_ | esm | low | Critical |
2,775,373,837 | langchain | DOC: <Issue related to /v0.2/docs/tutorials/rag/> | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/v0.2/docs/tutorials/rag/
The installation guide seems to have a typo for conda
In conda, the packages are called langchain-community and langchain-chroma, i.e. with a "-" and not with a "_". With "-", I could install them. (As of January 8th, 2025)

### Idea or request for content:
_No response_ | 🤖:docs | low | Minor |
2,775,415,858 | react | [React 19] Error when use create-react-app | ## Summary
When you try to run the command 'npx create-react-app .' and proceed to install the dependencies with npm it returns an error code ERESOLVE Unable to resolve dependency tree as it generates the package.json with React version 19 but @testing-library/[email protected] is compatible exclusively with React 18.
## Screenshots

| React 19 | medium | Critical |
2,775,416,559 | godot | Crash when duplicating file with Ctrl+Drag and drop | ### Tested versions
4.4 dev7, dev6, didn't test earlier
### System information
W10
### Issue description
```
CrashHandlerException: Program crashed
Engine version: Godot Engine v4.4.dev.custom_build (d2ada64a03d2abdb97cafe8f10623db8a2ce1d4c)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[0] std::_Atomic_storage<unsigned __int64,8>::load (C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.39.33519\include\atomic:1121)
[1] std::_Atomic_storage<unsigned __int64,8>::load (C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.39.33519\include\atomic:1121)
[2] SafeNumeric<unsigned __int64>::conditional_increment (C:\godot_source\core\templates\safe_refcount.h:139)
[3] CowData<char32_t>::_ref (C:\godot_source\core\templates\cowdata.h:502)
[4] String::String (C:\godot_source\core\string\ustring.h:602)
[5] EditorFileSystemDirectory::get_file (C:\godot_source\editor\editor_file_system.cpp:94)
[6] EditorFileSystemDirectory::get_file_path (C:\godot_source\editor\editor_file_system.cpp:126)
[7] EditorFileSystem::_update_scan_actions (C:\godot_source\editor\editor_file_system.cpp:847)
[8] EditorFileSystem::_notification (C:\godot_source\editor\editor_file_system.cpp:1714)
[9] EditorFileSystem::_notificationv (C:\godot_source\editor\editor_file_system.h:145)
[10] Object::notification (C:\godot_source\core\object\object.cpp:883)
[11] SceneTree::_process_group (C:\godot_source\scene\main\scene_tree.cpp:1063)
[12] SceneTree::_process (C:\godot_source\scene\main\scene_tree.cpp:1140)
[13] SceneTree::process (C:\godot_source\scene\main\scene_tree.cpp:580)
[14] Main::iteration (C:\godot_source\main\main.cpp:4493)
[15] ProgressDialog::_update_ui (C:\godot_source\editor\progress_dialog.cpp:135)
[16] ProgressDialog::task_step (C:\godot_source\editor\progress_dialog.cpp:222)
[17] EditorNode::progress_task_step (C:\godot_source\editor\editor_node.cpp:4965)
[18] EditorProgress::step (C:\godot_source\editor\editor_node.cpp:180)
[19] EditorFileSystem::reimport_files (C:\godot_source\editor\editor_file_system.cpp:3118)
[20] EditorFileSystem::_update_scan_actions (C:\godot_source\editor\editor_file_system.cpp:963)
[21] EditorFileSystem::_refresh_filesystem (C:\godot_source\editor\editor_file_system.cpp:3044)
[22] call_with_variant_args_helper<EditorFileSystem> (C:\godot_source\core\variant\binder_common.h:320)
[23] call_with_variant_args<EditorFileSystem> (C:\godot_source\core\variant\binder_common.h:430)
[24] CallableCustomMethodPointer<EditorFileSystem,void>::call (C:\godot_source\core\object\callable_method_pointer.h:109)
[25] Callable::callp (C:\godot_source\core\variant\callable.cpp:58)
[26] Object::emit_signalp (C:\godot_source\core\object\object.cpp:1206)
[27] Object::emit_signal<> (C:\godot_source\core\object\object.h:926)
[28] SceneTree::process (C:\godot_source\scene\main\scene_tree.cpp:574)
[29] Main::iteration (C:\godot_source\main\main.cpp:4493)
[30] OS_Windows::run (C:\godot_source\platform\windows\os_windows.cpp:2062)
[31] widechar_main (C:\godot_source\platform\windows\godot_windows.cpp:181)
[32] _main (C:\godot_source\platform\windows\godot_windows.cpp:206)
[33] main (C:\godot_source\platform\windows\godot_windows.cpp:220)
[34] WinMain (C:\godot_source\platform\windows\godot_windows.cpp:234)
[35] __scrt_common_main_seh (D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288)
[36] <couldn't map PC to fn name>
-- END OF BACKTRACE --
```
Alternatively this error will appear:
```
ERROR: Condition "idx == -1" is true. Continuing.
at: EditorFileSystem::_update_scan_actions (C:\godot_source\editor/editor_file_system.cpp:899)
ERROR: Attempted to call reimport_files() recursively, this is not allowed.
at: (C:\godot_source\editor/editor_file_system.cpp:3055)
```
### Steps to reproduce
1. Have any PNG file in your project
2. Drag and drop it to another directory while holding Ctrl
3. Either crash or error
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,crash,regression | low | Critical |
2,775,478,492 | rust | Generic trait bound hides concrete type associated type | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
trait Tr<'x> {
type Out;
fn build() -> Self::Out;
}
struct Foo;
impl<'x> Tr<'x> for Foo {
type Out = i32;
fn build() -> Self::Out {
42
}
}
fn test<'x>() -> i32
where
Foo: Tr<'x>, // commenting out this line fixes the error
{
Foo::build()
}
```
I expect this to compile, but it gives the following error:
```
error[E0308]: mismatched types
--> src/lib.rs:19:5
|
15 | fn test<'x>() -> i32
| --- expected `i32` because of return type
...
19 | Foo::build()
| ^^^^^^^^^^^^ expected `i32`, found associated type
|
= note: expected type `i32`
found associated type `<Foo as Tr<'_>>::Out`
= help: consider constraining the associated type `<Foo as Tr<'_>>::Out` to `i32`
= note: for more information, visit https://doc.rust-lang.org/book/ch19-03-advanced-traits.html
```
The code also works if the trait is made non-generic.
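For what it's worth, a possible workaround sketch (untested against this exact nightly, reusing the `Foo`/`Tr` definitions above): following the compiler's own help message and constraining the associated type in the where-clause lets the projection normalize inside the function.
```rust
fn test_workaround<'x>() -> i32
where
    // Spell out the associated type so `<Foo as Tr<'x>>::Out` normalizes to `i32`.
    Foo: Tr<'x, Out = i32>,
{
    <Foo as Tr<'x>>::build()
}
```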
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (ad211ced8 2025-01-07)
binary: rustc
commit-hash: ad211ced81509462cdfe4c29ed10f97279a0acae
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
```
same result on stable | A-trait-system,C-bug,T-types,needs-triage | low | Critical |
2,775,569,420 | langchain | DOC: <Issue related to --upgrade flag in the lang chain documentation / > | ### URL
https://python.langchain.com/docs/integrations/memory/google_firestore/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
URL : https://python.langchain.com/docs/integrations/memory/google_firestore/
The installation guide seems to have a small typo mistake :
The installation guide for this particular package 'langchain-google-firestore' should have --upgrade but in documentation it is only upto -upgrade

The error that a developer gets whenever he/she follows the documented installation guide :

### Idea or request for content:
_No response_ | 🤖:docs | low | Critical |
2,775,587,275 | vscode | Accessibility: Screen readers cut off lines in latest insider build |
Type: <b>Bug</b>
In this latest Insiders build, screen readers (tested with both JAWS and NVDA) started cutting off lines of code. The amount read is a bit longer than what is visible on screen, but it is still nearly impossible to work, because previously the whole line was read no matter what.
Please fix or revert this.
VS Code version: Code - Insiders 1.97.0-insider (9b0b13d9bfe21c3dfd227bfaa8ed5693e309a2e0, 2025-01-08T05:06:32.681Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i7-1270P (16 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.24GB (6.47GB free)|
|Process Argv|--disable-extensions . --crash-reporter-id 798ba14e-190a-4e22-8cdc-4ff2f816f42d|
|Screen Reader|yes|
|VM|0%|
</details>Extensions disabled<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
vsaa593cf:30376535
py29gd2263:31024238
vscaat:30438846
c4g48928:30535728
a9j8j154:30646983
962ge761:30841072
pythonnoceb:30776497
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
dvdeprecation:31040973
dwnewjupyter:31046869
newcmakeconfigv2:31071590
nativerepl1:31134653
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
parsetypescript:31116713
e80f6927:31120813
iacca1:31150324
notype1:31143044
dwcopilot:31158714
h409b430:31177054
2j25a237:31183119
c3hdf307:31184662
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,accessibility | medium | Critical |
2,775,612,843 | vscode | "outline.icons" stops working for jupyter notebook |
Type: <b>Bug</b>
Setting "outline.icons=false" no longer hides icons (e.g. "M↓" for a Markdown cell) in the outline of .ipynb files. It still works in JSON and Markdown files.

VS Code version: Code 1.96.2 (Universal) (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Darwin arm64 24.2.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M3 Pro (12 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|8, 8, 8|
|Memory (System)|36.00GB (4.44GB free)|
|Process Argv|--disable-extensions --crash-reporter-id 74c3d86a-e745-43aa-b70f-ed871a8590b7|
|Screen Reader|no|
|VM|0%|
</details>Extensions disabled<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
14424-chatv3:31212864
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,notebook-toc-outline | low | Critical |
2,775,617,643 | rust | rust-lld: error: undefined symbol: __gxx_personality_v0 | We encountered a bug, where the experimental rust-lld currently activated under nightly linux / x86-64, can't links certain executables in release mode with the "intel_tex_2" crate. Switching to the old linker solves the problem.
I created a small repository which includes a minimal example.
https://github.com/hasenbanck/rust_lld_bug
It fails to link when compiled via:
```bash
cargo run --release
```
Normally it should compile fine, but instead we get the following error:
```text
error: linking with `cc` failed: exit status: 1
|
= note: LC_ALL="C" PATH=".....*A LOT OF PATHS*...."
= note: some arguments are omitted. use `--verbose` to show all linker arguments
= note: rust-lld: error: undefined symbol: __gxx_personality_v0
>>> referenced by ispc_texcomp_astc.cpp
>>> ispc_texcomp_astc.o:(DW.ref.__gxx_personality_v0) in archive /mnt/c/Development/rust_lld_bug/target/release/deps/libintel_tex_2-2caaaa92d5e41391.rlib
collect2: error: ld returned 1 exit status
error: could not compile `rust_lld_bug` (bin "rust_lld_bug") due to 1 previous error
```
We used the nightly toolchain 2025-01-07, but older nightly versions are also affected.
We reproduces this error under WSL Ubuntu 24.04 and Fedora 41 running on bare metal.
The old linker works fine:
```bash
RUSTFLAGS="-Z linker-features=-lld" cargo run --release
``` | A-linkage,T-compiler,C-bug,requires-nightly,regression-untriaged,S-has-mcve,A-linkers | low | Critical |
2,775,638,943 | go | x/tools/gopls: "negative WaitGroup counter" panic in diagnoseChangedViews | ```
#!stacks
"runtime.gopanic" && "sync.(*WaitGroup).Add:+19" && "diagnoseChangedViews.func1:+16`"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
```go
func (wg *WaitGroup) Add(delta int) {
...
state := wg.state.Add(uint64(delta) << 32)
v := int32(state >> 32)
w := uint32(state)
...
if v < 0 {
panic("sync: negative WaitGroup counter") // <---------- here
}
```
This stack `OKWeGA` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2025-01-03.json):
- `crash/crash`
- [`runtime.gopanic:+69`](https://cs.opensource.google/go/go/+/go1.23.3:src/runtime/panic.go;l=804)
- [`sync.(*WaitGroup).Add:+19`](https://cs.opensource.google/go/go/+/go1.23.3:src/sync/waitgroup.go;l=64)
- [`sync.(*WaitGroup).Done:=89`](https://cs.opensource.google/go/go/+/go1.23.3:src/sync/waitgroup.go;l=89)
- [`golang.org/x/tools/gopls/internal/server.(*server).diagnoseChangedViews.func1:+16`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/server/diagnostics.go;l=169)
- [`golang.org/x/tools/gopls/internal/server.(*server).diagnoseChangedViews.gowrap1:+16`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.1:gopls/internal/server/diagnostics.go;l=169)
- `runtime.goexit:+0`
```
golang.org/x/tools/[email protected] go1.23.3 linux/amd64 vscode (1)
```
| NeedsInvestigation,gopls,Tools,compiler/runtime,gopls/telemetry-wins,BugReport | low | Critical |
2,775,651,061 | next.js | [SourceMap] `prepareStackTrace` patch minifies server stack trace | ### Link to the code that reproduces this issue
https://github.com/troyt-42/nextjs-source-map
### To Reproduce
1. Checkout `https://github.com/troyt-42/nextjs-source-map`
2. Install dependencies
3. Generate a production build: `yarn build`
4. Start the server with source map enabled: `NODE_OPTIONS=--enable-source-maps yarn start`
5. Access `localhost:3000`
### Current vs. Expected behavior
Current:
The manual log's stack trace is minified even we have the `page.js.map` file generated
```bash
Where am I: Error:
at i (.../nextjs-source-map/.next/server/app/page.js:1:31591)
at ek (.../nextjs-source-map/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:84:13368)
at e (.../nextjs-source-map/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:84:17266)
at e$ (.../nextjs-source-map/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:84:17728)
at Array.toJSON (.../nextjs-source-map/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:84:14874)
at stringify (<anonymous>)
at eU (.../nextjs-source-map/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:84:26231)
at eB (.../nextjs-source-map/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:84:26461)
at eq (.../nextjs-source-map/node_modules/next/dist/compiled/next-server/app-page.runtime.prod.js:84:27015)
at AsyncLocalStorage.run (node:async_hooks:346:14)
```
Expected:
After commenting out this line: https://github.com/vercel/next.js/blob/8ffa6c74b1f3fe357ce25bb455a565c6327dbd1e/packages/next/src/server/patch-error-inspect.ts#L373, I can see the source-mapped stack trace
```bash
Where am I: Error
at i (webpack://nextjs-source-map/src/app/page.tsx:4:30)
at renderFunctionComponent (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.production.js:1006:15)
at renderElement (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.production.js:1082:12)
at renderModelDestructive (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.production.js:1121:1)
at Array.toJSON (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.production.js:1156:40)
at stringify (<anonymous>)
at emitChunk (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.production.js:1734:43)
at retryTask (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.production.js:1755:11)
at eq (webpack://next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.production.js:1803:7)
at AsyncLocalStorage.run (node:async_hooks:346:14)
```
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Wed Jul 31 20:48:04 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6030
Available memory (MB): 36864
Available CPU cores: 12
Binaries:
Node: 20.12.2
npm: 10.8.1
Yarn: 1.22.22
pnpm: 9.15.3
Relevant Packages:
next: 15.1.3
eslint-config-next: 15.1.3
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next start (local)
### Additional context
After reviewing [patch-error-inspect.ts](https://github.com/vercel/next.js/blob/canary/packages/next/src/server/patch-error-inspect.ts), it seems intentional that prepareStackTrace prevents error.stack from being source-mapped. If this is the intended behavior, there may be an issue in the downstream code.
This behavior is causing maintenance challenges for Faire after upgrading to Next.js version 15.1.3. Having source-mapped stack traces is critical for efficient debugging, and the absence of this feature significantly hinders our ability to troubleshoot issues. | Runtime,linear: next | low | Critical |
2,775,652,771 | vscode | User settings UI does not update when configuration changes | Does this issue occur when all extensions are disabled?: Yes
Version: 1.96.2 (Universal)
Commit: fabdb6a30b49f79a7aba0f2ad9df9b399473380f
Date: 2024-12-19T10:22:47.216Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.2.0
Steps to Reproduce:
1. Implement a configuration change handler that updates another configuration value dynamically (so that a configuration value changes while the user has the Settings UI open)
2. In the Settings UI, update a configuration value that triggers an update to another setting while the Settings window is open and focused.
Note that the UI will become stale. If the UI loses focus it will update but it will not update as long as the UI is focused.
| bug,settings-editor,confirmation-pending | low | Minor |