id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,741,646,795 | ant-design | Mocky.io/V2 is not found | ### Reproduction link
[https://4x.ant.design/components/upload/#header](https://4x.ant.design/components/upload/#header)
### Steps to reproduce
Recreate the Mocky API and make sure it is set to never expire.
### What is expected?
When I upload a file, it should be uploaded successfully.
### What is actually happening?
The API responds with a 400 Not Found error for any file upload.
| Environment | Info |
| --- | --- |
| antd | 4.24.16 |
| React | 17.0.2 |
| System | 23H2 |
| Browser | Chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | help wanted | low | Minor |
2,741,651,670 | vscode | source control |
Type: <b>Bug</b>
> Problem troubleshooting: the problem has been determined to be related to Visual Studio Code itself.
When switching files, the Source Control and Timeline views take a long time to load.

VS Code version: Code 1.96.0 (138f619c86f1199955d53b4166bef66ef252935c, 2024-12-11T02:29:09.626Z)
OS version: Windows_NT x64 10.0.22000
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (8 x 2803)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.78GB (8.12GB free)|
|Process Argv|--log vscode.git=trace --crash-reporter-id 53646982-3320-40cd-b43f-f9bb39c194e6|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (1)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-language-pack-zh-hans|MS-|1.96.2024121109
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
vscaac:30438847
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | git | low | Critical |
2,741,657,663 | deno | lint(no-undef): says `document` is undefined | Version: Deno 2.1.4
The lint rule `no-undef` https://docs.deno.com/lint/rules/no-undef/ says that it disallows undeclared variables, but I wouldn't expect this to apply to globally available variables like `document`.
### deno.json
```json
{
"compilerOptions": {
"lib": ["dom", "dom.iterable", "dom.asynciterable", "deno.ns"]
},
"lint": {
"rules": {
"include": ["no-undef"]
}
}
}
```
### main.ts
```ts
console.log(document);
```
### terminal
```
error[no-undef]: document is not defined
--> /Users/soul/Projects/lint/main.ts:1:13
|
1 | console.log(document);
| ^^^^^^^^
docs: https://lint.deno.land/rules/no-undef
Found 1 problem
Checked 1 file
```
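A possible workaround while the rule behaves this way (a hedged sketch, not official guidance) is to suppress the rule for the affected line, or to reach the global through `globalThis` so no bare identifier is involved:
```ts
// main.ts
// Option 1: suppress the lint rule just for this line.
// deno-lint-ignore no-undef
console.log(document);

// Option 2 (assumption: no-undef does not flag property access on globalThis):
console.log((globalThis as { document?: unknown }).document);
```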
| bug,lint | low | Critical |
2,741,673,320 | vscode | Error: EPIPE: broken pipe, write |
Type: <b>Bug</b>
I am using VS Code as the text editor for Vivado.
When opening a **.v** file from Vivado in VS Code, the error panel below shows up. Clicking 'Yes' or cancelling it switches to VS Code and the file opens successfully.
<img width="419" alt="Image" src="https://github.com/user-attachments/assets/db0999ba-5caf-43c1-8a28-5ff9d9a76949" />
Reinstalling on another disk didn't help.
How can I solve this issue, or ignore the error panel so that it switches directly to VS Code?
VS Code version: Code 1.96.0 (138f619c86f1199955d53b4166bef66ef252935c, 2024-12-11T02:29:09.626Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i9-13900HX (32 x 2419)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.70GB (5.22GB free)|
|Process Argv|E:/download/diglogic-2023/pkg/lab1/multipler/testbench.v -l3 --crash-reporter-id 866be85d-39d2-4bb0-9303-c9db6dd66fb6|
|Screen Reader|no|
|VM|67%|
</details><details><summary>Extensions (14)</summary>
Extension|Author (truncated)|Version
---|---|---
verilog-formatter|Isa|1.0.0
remote-wsl|ms-|0.88.5
cpptools|ms-|1.22.11
veriloghdl|msh|1.15.5
java|red|1.37.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
vim|vsc|1.29.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaat:30438848
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,upstream,workbench-cli | low | Critical |
2,741,722,148 | react | Bug: Scheduler default time interval problem | Hello React Team,
While reviewing the `packages/scheduler/src/SchedulerFeatureFlags.js` file, I noticed that the default frame yield time (`frameYieldMs`) is set to **5 milliseconds**:
```javascript
export const frameYieldMs = 5;
```
I have a few questions regarding this configuration and would appreciate your clarification:
1. Why is 5ms chosen as the default value?
Why not set it to 16.7ms (which corresponds to the 60fps frame rate of most screens) or 50ms (the threshold for long tasks defined by browsers)?
2. Impact of frequent task interruptions
Does setting it to 5ms cause tasks to be interrupted too frequently? Considering the cumulative cost of calling performance.now() on every check, could this lead to performance overhead?
3. Is this a bug or an intended feature?
Is this setting intentional to optimize specific performance scenarios, or could it potentially lead to unintended task interruption issues?
Additional Information
React Version: 17.0.2
Browser Environment: Chrome Version 112.0.5615.49
Steps to Reproduce:
When performing extensive layout computations, I observed that tasks are being frequently interrupted, which results in decreased rendering performance.
Expected Behavior:
The Scheduler should interrupt tasks near the frame time (approximately 16.7ms) to balance rendering performance and responsiveness effectively.
Actual Behavior:
With frameYieldMs set to 5ms, tasks are frequently interrupted, leading to increased scheduling overhead and rendering performance that falls below expectations.
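For context, a simplified sketch of the time-slicing check I understand the scheduler to perform (not the actual implementation, just the general shape of a `shouldYieldToHost`-style check with a 5ms interval):
```ts
// Simplified model of a time-sliced work loop; names are illustrative only.
const frameYieldMs = 5;
let sliceStart = 0;

function shouldYieldToHost(): boolean {
  // Yield once the current slice has run longer than frameYieldMs.
  return performance.now() - sliceStart >= frameYieldMs;
}

function workLoop(tasks: Array<() => void>): void {
  sliceStart = performance.now();
  while (tasks.length > 0 && !shouldYieldToHost()) {
    tasks.shift()!();
  }
  // Remaining tasks would be rescheduled (e.g. via a MessageChannel callback),
  // so a smaller frameYieldMs means more frequent interruptions and reschedules.
}
```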
Thank You
Thank you for your hard work and dedication to maintaining React! I look forward to your explanation and any insights you can provide regarding this configuration. | Status: Unconfirmed | medium | Critical |
2,741,768,331 | vscode | Overtype Mode: backspace should insert spaces in its stead | Idea comes from https://github.com/microsoft/vscode/pull/233188
Make regular backspace add spaces in its stead while deleting code in overtype mode. You might also think of this as "destructive" left-arrow. | feature-request,editor-input | low | Minor |
2,741,782,453 | angular | Suggestion: Integrate defer and Resource APIs for Seamless Asynchronous State Handling | ### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
With the introduction of the `defer` and the `Resource` API in Angular 19, developers now have powerful tools to handle asynchronous data more declaratively in their applications. Both APIs support states like `loading`, `resolved`, and `error`, which are critical for managing async workflows in the UI.
However, these two features currently work independently. It seems like a natural fit for `defer` and `Resource` to work together, leveraging their respective strengths to simplify and enhance the developer experience for handling asynchronous data.
### Proposed solution
Introduce an integration between the `defer` and the `Resource` API to enable seamless usage in templates. Possible features could include:
1. Allowing `defer` to accept a `Resource` as its input, automatically binding to its state transitions (`loading`, `resolved`, `error`).
2. Providing built-in optimizations for rendering `Resource` states declaratively with `defer`. For example, a `Resource`-aware `defer` could simplify boilerplate code for displaying loading spinners, error messages, or resolved data.
3. Ensuring consistent state management and declarative workflows across the framework when combining these APIs.
### Alternatives considered
- **Manual Handling:** Developers can currently use `defer` and `Resource` separately, manually binding their states to templates (a rough sketch of this follows the list). While functional, this requires additional boilerplate and lacks the declarative synergy that integration could provide.
- **Custom Wrappers:** Developers can write custom helpers to bridge `defer` and `Resource`, but this approach increases complexity and reduces framework-wide consistency.
- **Leave as Is:** The features can remain independent, but integration would better align with Angular’s goal of reducing friction in building modern web applications.
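For illustration, a rough sketch of the manual-handling alternative using the `resource()` API (the exact API surface and the `/api/user` endpoint are assumptions; this is not a proposal for the integrated syntax):
```ts
import { Component, resource } from '@angular/core';

interface User {
  name: string;
}

@Component({
  selector: 'app-user',
  template: `
    @if (user.isLoading()) {
      <p>Loading...</p>
    } @else if (user.error()) {
      <p>Something went wrong.</p>
    } @else {
      <p>{{ user.value()?.name }}</p>
    }
  `,
})
export class UserComponent {
  // Manual handling: every consumer re-implements this state switch, which an
  // integrated `defer` + `Resource` API could express declaratively.
  user = resource({
    loader: async () => (await fetch('/api/user')).json() as Promise<User>,
  });
}
```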
This proposed integration would streamline how developers handle asynchronous data, making Angular more intuitive and powerful for real-world use cases. | area: core,core: reactivity,cross-cutting: signals,core: defer | low | Critical |
2,741,806,938 | create-react-app | Are polyfills always injected in bundle? | Polyfills are mentioned in several issues as injected in the bundle (https://github.com/facebook/create-react-app/issues/3397#issuecomment-341754187, https://github.com/facebook/create-react-app/issues/2696#issuecomment-312184589).
However, I have used [webpack-license-plugin](https://www.npmjs.com/package/webpack-license-plugin) to extract the licenses of an Ionic React app built with create-react-app, and polyfills are not included.
**Is it possible that polyfills packages are actually not included in the bundle?**
**Are they included in every app whose bundle is created with CRA or only in some of them?**
These are my package.json file and the output of the webpack-license-plugin:
[package.json](https://github.com/user-attachments/files/18126579/package.json)
[thirdPartyNotice.json](https://github.com/user-attachments/files/18126637/thirdPartyNotice.json)
The alternative is that polyfills are actually injected but not detected by the plugin; I have asked about this possibility [here](https://github.com/codepunkt/webpack-license-plugin/issues/1103). | needs triage | low | Minor |
2,741,823,008 | kubernetes | HPA scales up despite utilization being under target | ### What happened?
We use Argo Rollouts to perform canary deployments of our services. During a canary deployment, new pods are brought up (the canary pods) which are included in the [Status](https://github.com/kubernetes/kubernetes/blob/5ba2b78eae18645744b51d94d279582bdcccec23/pkg/apis/autoscaling/types.go#L51) of the Rollout's scale subresource. When HPA is configured to scale on a metric with a high utilization ratio (generally memory), this results in HPA scaling out, despite the fact that the utilization is under the target.
This seems to be a result of the behaviour of the [replica calculator](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/podautoscaler/replica_calculator.go#L65) where:
- the recommended replicas is a simple function of utilizationRatio and the total number of pods returned by the selector
- missing pods with a utilization ratio of below 1 are assumed to be because of a "scale down" and are treated as if they are [consuming 100% of resources](https://github.com/kubernetes/kubernetes/blob/5ba2b78eae18645744b51d94d279582bdcccec23/pkg/controller/podautoscaler/replica_calculator.go#L108-L113) (whereas in this case we are scaling out during a rollout)
In addition, the utilization ratio is never re-checked before HPA increases the desired replicas, leading to [incorrect/misleading log messages](https://github.com/kubernetes/kubernetes/blob/5ba2b78eae18645744b51d94d279582bdcccec23/pkg/controller/podautoscaler/horizontal.go#L857-L859) like "New size: X; reason: memory resource utilization (percentage of request) above target".
Further details can be found in issues logged for Argo Rollouts [here](https://github.com/argoproj/argo-rollouts/issues/2857) and [here](https://github.com/argoproj/argo-rollouts/issues/3849).
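To make the effect concrete, here is a small, hypothetical model of the calculation described above (TypeScript pseudocode; the real logic lives in `replica_calculator.go` and is more involved):
```ts
// Hypothetical model: selector-matched canary pods have no metrics yet ("missing").
function desiredReplicas(
  podUsages: number[],       // usage of pods that do have metrics
  missingPods: number,       // selector-matched pods without metrics (e.g. canary)
  requestPerPod: number,
  targetUtilization: number, // e.g. 0.5 for a 50% target
): number {
  const measured = podUsages.length;
  const sum = podUsages.reduce((a, b) => a + b, 0);
  const usageRatio = sum / (measured * requestPerPod * targetUtilization);

  if (missingPods === 0) {
    return Math.ceil(usageRatio * measured);
  }

  // usageRatio < 1 is treated as a potential scale-down, so missing pods are
  // assumed to consume 100% of their request -- which inflates the new ratio.
  const assumedMissingUsage = usageRatio < 1 ? requestPerPod : 0;
  const total = sum + missingPods * assumedMissingUsage;
  const newRatio = total / ((measured + missingPods) * requestPerPod * targetUtilization);
  return Math.ceil(newRatio * (measured + missingPods));
}

// 3 measured pods at 42% of request, 3 metric-less canary pods, 50% target:
// the recomputed ratio exceeds 1, so the HPA scales up even though 42% < 50%.
```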
### What did you expect to happen?
HPA should not scale up when utilization is below target (I.e. when `utilizationRatio < 1`)
### How can we reproduce it (as minimally and precisely as possible)?
The issue can be reproduced by simulating the behaviour of Argo Rollouts and creating a 2nd `ReplicaSet` with selectors that match an existing `Deployment`. For example:
Create base `Deployment` and `HorizontalPodAutoscaler` resources:
``` yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: test
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
resources:
requests:
memory: "10Mi"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: nginx
namespace: test
spec:
maxReplicas: 50
minReplicas: 3
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 50
```
Wait to ensure that metrics are available and HPA is stable at 3 pods.
``` bash
$ kubectl get pods,hpa
NAME READY STATUS RESTARTS AGE
pod/nginx-7fb4f6d65f-jgdzm 1/1 Running 0 30s
pod/nginx-7fb4f6d65f-mtnxv 1/1 Running 0 30s
pod/nginx-7fb4f6d65f-tc6c8 1/1 Running 0 30s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/nginx Deployment/nginx memory: 42%/50% 3 50 3 30s
```
Create a 2nd replica set:
``` yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: nginx-canary
namespace: test
spec:
replicas: 3
selector:
matchLabels:
app: nginx
role: canary
template:
metadata:
labels:
app: nginx
role: canary
spec:
containers:
- image: nginx
name: nginx
resources:
requests:
memory: "10Mi"
```
Observe that HPA will scale the existing `Deployment`:
``` bash
$ kubectl get pods,hpa
NAME READY STATUS RESTARTS AGE
pod/nginx-7fb4f6d65f-2hh7f 1/1 Running 0 2m36s
pod/nginx-7fb4f6d65f-6lfr5 1/1 Running 0 2m6s
pod/nginx-7fb4f6d65f-9wgfn 1/1 Running 0 6s
pod/nginx-7fb4f6d65f-gpzp9 1/1 Running 0 2m6s
pod/nginx-7fb4f6d65f-hdsff 1/1 Running 0 3m6s
pod/nginx-7fb4f6d65f-j24kp 1/1 Running 0 3m6s
pod/nginx-7fb4f6d65f-jgdzm 1/1 Running 0 4m36s
pod/nginx-7fb4f6d65f-kxbvz 1/1 Running 0 66s
pod/nginx-7fb4f6d65f-l4rzr 1/1 Running 0 2m36s
pod/nginx-7fb4f6d65f-mtnxv 1/1 Running 0 4m36s
pod/nginx-7fb4f6d65f-qvpqx 1/1 Running 0 3m6s
pod/nginx-7fb4f6d65f-rq2r8 1/1 Running 0 96s
pod/nginx-7fb4f6d65f-s52hj 1/1 Running 0 66s
pod/nginx-7fb4f6d65f-tc6c8 1/1 Running 0 4m36s
pod/nginx-7fb4f6d65f-zwpfc 1/1 Running 0 36s
pod/nginx-canary-2prb6 1/1 Running 0 3m41s
pod/nginx-canary-9l5rc 1/1 Running 0 3m41s
pod/nginx-canary-g79tk 1/1 Running 0 3m41s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/nginx Deployment/nginx memory: 42%/50% 3 50 14 4m36s
```
Check events on HPA to see details on why scaling occurred:
```
$ kubectl describe hpa nginx
Name: nginx
Namespace: test
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 16 Dec 2024 10:20:04 +0100
Reference: Deployment/nginx
Metrics: ( current / target )
resource memory on pods (as a percentage of request): 42% (4503324444m) / 50%
Min replicas: 3
Max replicas: 50
Deployment pods: 8 current / 8 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 4m35s (x3 over 4m46s) horizontal-pod-autoscaler unable to get metric memory: no metrics returned from resource metrics API
Normal SuccessfulRescale 3m31s horizontal-pod-autoscaler New size: 6; reason: memory resource utilization (percentage of request) above target
Normal SuccessfulRescale 3m1s horizontal-pod-autoscaler New size: 8; reason: memory resource utilization (percentage of request) above target
Normal SuccessfulRescale 2m31s horizontal-pod-autoscaler New size: 10; reason: memory resource utilization (percentage of request) above target
Normal SuccessfulRescale 2m1s horizontal-pod-autoscaler New size: 11; reason: memory resource utilization (percentage of request) above target
Normal SuccessfulRescale 91s horizontal-pod-autoscaler New size: 13; reason: memory resource utilization (percentage of request) above target
Normal SuccessfulRescale 61s horizontal-pod-autoscaler New size: 14; reason: memory resource utilization (percentage of request) above target
Normal SuccessfulRescale 31s horizontal-pod-autoscaler New size: 15; reason: memory resource utilization (percentage of request) above target
Normal SuccessfulRescale 1s horizontal-pod-autoscaler New size: 16; reason: memory resource utilization (percentage of request) above target
```
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.3
Kustomize Version: v5.4.2
Server Version: v1.30.5-gke.1443001
```
</details>
### Cloud provider
<details>
GKE
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/autoscaling,needs-triage | low | Critical |
2,741,898,263 | kubernetes | fix: noderesources plugin flaw | ### What happened?
The score function of `resourceAllocationScorer` should not iterate over `args.ScoringStrategy.Resources`. For example, suppose the strategy parameters are set to [gpu:2, cpu:1, mem:1] and a pod only requests cpu and mem. Because the scorer traverses the configured parameters instead of the pod's actual requests, gpu becomes the key factor in the score, even though the pod never requested a GPU and only its CPU and memory requests should matter. With the MostRequestedPriority policy, this means a CPU-intensive pod can be scheduled onto a GPU machine, which causes GPU fragmentation.
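To illustrate the skew, a small hypothetical model of MostAllocated-style scoring (TypeScript pseudocode, not the actual plugin code):
```ts
// Hypothetical model: the score iterates over the *configured* resources, so a
// resource the pod never requested (gpu) still dominates through its weight.
type Resources = Record<string, number>;

function mostAllocatedScore(
  weights: Resources,       // e.g. { gpu: 2, cpu: 1, mem: 1 } from the strategy
  podRequests: Resources,   // e.g. { cpu: 2, mem: 4 } -- no gpu requested
  nodeRequested: Resources, // what is already requested on the node
  nodeAllocatable: Resources,
): number {
  let weighted = 0;
  let weightSum = 0;
  for (const [res, weight] of Object.entries(weights)) {
    const requested = (nodeRequested[res] ?? 0) + (podRequests[res] ?? 0);
    const ratio = requested / (nodeAllocatable[res] ?? 1);
    weighted += ratio * weight; // gpu contributes with weight 2 even for a gpu-less pod
    weightSum += weight;
  }
  return weighted / weightSum;
}

// A node whose GPUs are already heavily requested scores higher for a CPU-only
// pod, so the CPU-only pod lands on the GPU machine and fragments it.
```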
### What did you expect to happen?
Scoring should consider only the resources that the pod has actually requested, instead of scoring all the resources configured in the configuration file, which may be counterproductive in real scenarios.
### How can we reproduce it (as minimally and precisely as possible)?
Set the MostRequestedPriority policy parameters to [gpu:2, cpu:1, mem:1] and create a pod that only requests cpu and mem. The pod is scheduled onto a machine with a high concentration of GPUs. As a result, when large-model GPU tasks later need to be scheduled, that machine has enough GPU but not enough CPU.
### Anything else we need to know?
_No response_
### Kubernetes version
latest version
### Cloud provider
only k8s
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,needs-sig,needs-triage | low | Major |
2,741,903,923 | ui | [bug]: Select | ### Describe the bug
Since the Select component requires a value, I am using it to select a country code. I loop through all the countries using map, and for each country, I have a <div> with the key={name + '' + code}. Then, I have the <Select /> with the value={code}.
However, I am encountering this issue:
Encountered two children with the same key, '+61'. Keys should be unique so that components maintain their identity across updates. Non-unique keys may cause children to be duplicated and/or omitted — the behavior is unsupported and could change in a future version.
I believe the issue is caused by the fact that behind the scenes, shadcn is mapping the list and using key={value}, which results in this conflict.
```tsx
{COUNTRY_CODE.map(({ flag, code, name, abbreviation }) => (
<div key={`${code}-${name}`}>
<SelectItem value={code} key={`${code}-${name}`}>
<div className="flex space-x-[12px]">
<div className="relative h-[20px] w-[20px]">
<Image
src={flag}
alt={abbreviation}
fill
className="object-cover"
/>
</div>
<div className="flex space-x-[4px]">
<span className="text-black-mid text-[12px]">
({code})
</span>
</div>
</div>
</SelectItem>
</div>
))}
```
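One possible workaround (just a sketch, assuming the duplicate key really does come from the repeated `+61` value) is to make the `value` itself unique and strip the suffix back off when reading it:
```tsx
{COUNTRY_CODE.map(({ flag, code, name, abbreviation }) => (
  // Hypothetical workaround: encode a unique value per country so any key
  // derived internally from `value` is also unique.
  <SelectItem value={`${code}|${abbreviation}`} key={`${code}-${name}`}>
    {/* render the flag and ({code}) exactly as before */}
  </SelectItem>
))}
```
The consumer would then split the suffix off again, e.g. `onValueChange={(v) => setCode(v.split('|')[0])}` on the `Select` (assuming `setCode` is your own state setter).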
### Affected component/components
Select
### How to reproduce
```tsx
{COUNTRY_CODE.map(({ flag, code, name, abbreviation }) => (
<div key={`${code}-${name}`}>
<SelectItem value={code} key={`${code}-${name}`}>
<div className="flex space-x-[12px]">
<div className="relative h-[20px] w-[20px]">
<Image
src={flag}
alt={abbreviation}
fill
className="object-cover"
/>
</div>
<div className="flex space-x-[4px]">
<span className="text-black-mid text-[12px]">
({code})
</span>
</div>
</div>
</SelectItem>
</div>
))}
```
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
Encountered two children with the same key, `+61`. Keys should be unique so that components maintain their identity across updates. Non-unique keys may cause children to be duplicated and/or omitted — the behavior is unsupported and could change in a future version.
```
### System Info
```bash
Encountered two children with the same key, `+61`. Keys should be unique so that components maintain their identity across updates. Non-unique keys may cause children to be duplicated and/or omitted — the behavior is unsupported and could change in a future version.
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,741,919,087 | three.js | TSL texture reading / passed as argument in Fn does not work | ### Description
Hello there!
Sampling a texture via an inline material node looks straightforward:
```
const uTex = texture( tex );
material.colorNode = uTex.sample(oneMinus( uv() ));
```
Using Fn and passing a texture as an argument is challenging and does not work (`tdisp.sample is not a function`):
```
const sampleTex = Fn(([tdisp]) => {
  return tdisp.sample(oneMinus(uv()))
}).setLayout({
  name: 'sampleTex',
  type: 'float',
  inputs: [
    {
      name: "sampleTex",
      //type: "texture"
      type: "sampler2D"
    }
  ]
})
material.colorNode = Fn(() => {
  return sampleTex(uTex)
})()
```
I've been looking at the examples folder, and there is no example of passing a texture into a Fn.
The transpiler gives wrong results:
<img width="1660" alt="Screenshot 2024-12-16 at 11 02 45" src="https://github.com/user-attachments/assets/07fadb18-9e34-428b-9b81-1cb810eb2409" />
More generally, it seems unclear how to use textures as uniforms / in nodes.
To test, you can change the CASE variable from 0 to 1.
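A possible workaround (untested sketch) is to capture the texture node in a plain closure instead of declaring it as a typed Fn argument, so no sampler layout entry is needed:
```ts
// Untested sketch: the texture node is captured by the closure rather than
// passed through Fn's typed layout.
const makeSampleTex = (tdisp: any) =>
  Fn(() => tdisp.sample(oneMinus(uv())));

material.colorNode = makeSampleTex(uTex)();
```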
### Live example
[https://jsfiddle.net/0w3mhbfa/107/](https://jsfiddle.net/0w3mhbfa/107/)
### Screenshots
_No response_
### Version
last
### Device
Desktop
### Browser
Chrome
### OS
MacOS | Enhancement,TSL | low | Major |
2,741,951,738 | PowerToys | KeyMapping Works only on one monitor | ### Microsoft PowerToys version
0.86
### Installation method
GitHub
### Running as admin
None
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I want to remap the volume mute key to nothing.
Remapping a key (volume mute) to Disabled works as long as focus is on that monitor, but as soon as I click on the 2nd monitor the key functions again as normal (it is not disabled).
--Monitor 1--
Change the volume mute key to Disabled.
Test the key.
It is disabled and won't mute.
Now move to the 2nd monitor.
--Monitor 2--
Put the window focus on the 2nd monitor (any random click anywhere).
Test the key.
It mutes the volume.
If I bring the program itself (PowerToys) to the 2nd monitor, it disables the key.
Test the key.
It is disabled and doesn't mute.
Now move to the 1st monitor.
--Monitor 1--
Put the window focus on the 1st monitor (any random click anywhere).
Test the key.
It mutes the volume.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Product-Keyboard Shortcut Manager,Needs-Triage,Needs-Team-Response | low | Minor |
2,741,979,772 | flutter | Nested generic function inside factory constructor crashes flutter run (web) compiler | ### Steps to reproduce
1. Update to flutter 3.27.0
2. `flutter run -d chrome`
### Expected results
Chrome opens with the flutter application running
### Actual results
The Dart compiler crashes. Note that this does not happen for non-web runs (e.g. Windows). It also does not happen when compiling a JS bundle of the same project.
### Code sample
<details open><summary>Code sample</summary>
Due to the confidential nature of the project and the lack of any indication of which file triggers the bug, we cannot provide a code sample to reproduce this. If we could somehow get even more verbose logs pointing to which function is being compiled when the compiler crashes, we may be able to extract a code sample.
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib\main.dart on Chrome in debug mode...
Unhandled exception:
Exception: RtiTypeEnvironments should not receive extended type parameters.
#0 RtiTypeEnvironment.extend (package:dev_compiler/src/kernel/type_environment.dart:137)
#1 ProgramCompiler._emitFunction (package:dev_compiler/src/kernel/compiler.dart:3590)
#2 ProgramCompiler.visitFunctionDeclaration (package:dev_compiler/src/kernel/compiler.dart:4882)
#3 FunctionDeclaration.accept (package:kernel/src/ast/statements.dart:1747)
#4 ProgramCompiler._visitStatement (package:dev_compiler/src/kernel/compiler.dart:3971)
#5 MappedIterable.elementAt (dart:_internal/iterable.dart:395)
#6 ListIterator.moveNext (dart:_internal/iterable.dart:364)
#7 new _GrowableList._ofEfficientLengthIterable (dart:core-patch/growable_array.dart:189)
#8 new _GrowableList.of (dart:core-patch/growable_array.dart:150)
#9 new List.of (dart:core-patch/array_patch.dart:39)
#10 SetBase.toList (dart:collection/set.dart:119)
#11 ProgramCompiler.visitBlock (package:dev_compiler/src/kernel/compiler.dart:4160)
#12 Block.accept (package:kernel/src/ast/statements.dart:103)
#13 ProgramCompiler._visitStatement (package:dev_compiler/src/kernel/compiler.dart:3971)
#14 ProgramCompiler._emitFunctionScopedBody (package:dev_compiler/src/kernel/compiler.dart:4004)
#15 ProgramCompiler._emitFactoryConstructor.<anonymous closure> (package:dev_compiler/src/kernel/compiler.dart:2460)
#16 ProgramCompiler._withLetScope (package:dev_compiler/src/kernel/compiler.dart:2749)
#17 ProgramCompiler._withCurrentFunction (package:dev_compiler/src/kernel/compiler.dart:3749)
#18 ProgramCompiler._emitFactoryConstructor (package:dev_compiler/src/kernel/compiler.dart:2458)
#19 ProgramCompiler._emitClassMethods (package:dev_compiler/src/kernel/compiler.dart:2230)
#20 ProgramCompiler._emitClassDeclaration (package:dev_compiler/src/kernel/compiler.dart:1041)
#21 ProgramCompiler._emitClass (package:dev_compiler/src/kernel/compiler.dart:973)
#22 List.forEach (dart:core-patch/growable_array.dart:417)
#23 ProgramCompiler._emitLibrary (package:dev_compiler/src/kernel/compiler.dart:912)
#24 List.forEach (dart:core-patch/growable_array.dart:417)
#25 ProgramCompiler.emitModule (package:dev_compiler/src/kernel/compiler.dart:620)
#26 IncrementalJavaScriptBundler.compile (package:frontend_server/src/javascript_bundle.dart:231)
#27 FrontendCompiler.writeJavaScriptBundle (package:frontend_server/frontend_server.dart:874)
<asynchronous suspension>
#28 FrontendCompiler.compile (package:frontend_server/frontend_server.dart:690)
<asynchronous suspension>
#29 listenAndCompile.<anonymous closure> (package:frontend_server/frontend_server.dart:1392)
<asynchronous suspension>
the Dart compiler exited unexpectedly.
Waiting for connection from debug service on Chrome... 14.7s
Failed to compile application.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.27.0, on Microsoft Windows [Version 10.0.22631.4602], locale en-GB)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[!] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
X cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
X Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/windows-android-setup for more details.
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.12.3)
[!] Android Studio (not installed)
[√] IntelliJ IDEA Ultimate Edition (version 2024.3)
[√] Connected device (3 available)
[√] Network resources
! Doctor found issues in 2 categories.
```
</details>
| c: regression,tool,dependency: dart,platform-web,has reproducible steps,P1,team-tool,triaged-tool,dependency:dart-triaged,found in release: 3.27,found in release: 3.28 | medium | Critical |
2,742,085,888 | react | Error in compiler polyfill, using React 18 | I have React 18 installed and I can see the compiler is running against my source code; however, at runtime this code
```javascript
// Re-export React.c if present, otherwise fallback to the userspace polyfill for versions of React
// < 19.
export const c =
// @ts-expect-error
typeof React.__COMPILER_RUNTIME?.c === 'function'
? // @ts-expect-error
React.__COMPILER_RUNTIME.c
: function c(size: number) {
return React.useMemo<Array<unknown>>(() => {
const $ = new Array(size);
for (let ii = 0; ii < size; ii++) {
$[ii] = $empty;
}
// This symbol is added to tell the react devtools that this array is from
// useMemoCache.
// @ts-ignore
$[$empty] = true;
return $;
}, []);
};
```
When it reaches `React.useMemo`, I get the error:
```javascript
Uncaught TypeError: Cannot read properties of null (reading 'useMemo')
at Object.useMemo (react.development.js:1650:21)
at c2 (index.ts:31:22)
at AppRoutes (Routes.tsx:121:26)
at App.tsx:14:36
```
The offending code is where the dispatcher is null:
```javascript
function useMemo(create, deps) {
var dispatcher = resolveDispatcher();
return dispatcher.useMemo(create, deps);
}
```
I am using vite with the babel-plugin-react-compiler plugin targeting React 18
```javascript
react({
babel: {
plugins: [["babel-plugin-react-compiler", ReactCompilerConfig]],
},
})
``` | React 19 | medium | Critical |
2,742,090,497 | vscode | [css] `:after' suggested inside comment | Repro:
Settings:
```js
"editor.quickSuggestions": {
"comments": "off"
},
```
Go to a comment in a CSS file and add `:` to the end of a line; 🐛 quick suggestions trigger.
@aeschli could reproduce

| bug,css-less-scss | low | Minor |
2,742,158,529 | vscode | Expose the current editor overtype state | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
When dealing with webviews using the standalone version of Monaco Editor, we'd like to align our environment with the current VS Code state. That means if the user is currently using `overtype` mode, that should be activated by default in our webviews.
We'd like to have access to:
- `inputMode: 'insert' | 'overtype'`
- `onDidChangeInputMode(...)`
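Purely for illustration, this is the kind of usage we have in mind (the API below does not exist today; the `inputMode`/`onDidChangeInputMode` names are the hypothetical ones from the list above):
```ts
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  const panel = vscode.window.createWebviewPanel('demo', 'Demo', vscode.ViewColumn.One, {
    enableScripts: true,
  });

  // Forward the editor's input mode to the embedded Monaco instance.
  const syncInputMode = (mode: 'insert' | 'overtype') =>
    panel.webview.postMessage({ type: 'inputMode', mode });

  // Hypothetical API surface -- not part of the current vscode API:
  // syncInputMode(vscode.window.inputMode);
  // vscode.window.onDidChangeInputMode(syncInputMode, null, context.subscriptions);
  syncInputMode('insert'); // placeholder until such an API exists
}
```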
| bug,editor-input | low | Major |
2,742,177,261 | godot | Exporting in headless mode freezes if project files have uniform timestamps | ### Tested versions
- Reproducible in: 4.3.stable
- Not reproducible in: 4.2.2.stable
- Other versions not tested.
### System information
Tested with Void Linux (current), Debian 11 (Bullseye), both x86_64
### Issue description
Similar to #95287.
Where I have a fully-imported project, with `.godot/` present, *but* with its files timestamped identically to the rest of the project's files, running an export with `--headless --export-release` will hang immediately after the version header is output.
Touching the files within `.godot/`, or re-running the import, fixes the issue. I ran into this problem because I'd just duplicated a project with `cp -r`; likewise, `cp -a` fixes the issue.
After Godot finishes starting up, `strace` repeats the following, so presumably Godot is stuck in some kind of polling deadlock:
```
clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=6406000}, 0x7ffe06508a80) = 0
clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=1000}, 0x7ffe06508c90) = 0
poll([{fd=4, events=POLLIN}], 1, 0) = 0 (Timeout)
poll([{fd=5, events=POLLIN}], 1, 0) = 0 (Timeout)
```
### Steps to reproduce
- Create a new project with any export configured
- Ensure imports are done, `.godot/` exists
- Either recursively touch all files, or `cp -r` the project
- Try running the export with `godot --headless --export-release ...` (or `-debug`)
### Minimal reproduction project (MRP)
[src.zip](https://github.com/user-attachments/files/18149641/src.zip)
| bug,regression,topic:export | low | Critical |
2,742,197,762 | flutter | Crashes in Google Play pre-launch report after upgrading to 3.27.0 | ### Steps to reproduce
I have upgraded my app to use Flutter 3.27.0 and now I see the following crash in the Stability tab of the Google Play pre-launch report (every time I upload the bundle).
### Expected results
No crashes in Google Play Pre-launch reports.
### Actual results
The app crashes in Google Play Pre-launch reports.
### Code sample
Not sure what code I can provide, but it seems to be related to Firebase.
### Screenshots or Video
<img width="1161" alt="image" src="https://github.com/user-attachments/assets/af74eb17-2dbc-4e84-96c3-a8457fd8922e" />
### Logs
<details open><summary>Logs</summary>
```console
"main" tid=1 Native
#00 pc 0x00000000000b26f7 /apex/com.android.runtime/lib64/bionic/libc.so (read+7)
#01 pc 0x0000000000011798 /vendor/lib64/libOpenglSystemCommon.so (QemuPipeStream::commitBufferAndReadFully(unsigned long, void*, unsigned long)+232)
#02 pc 0x000000000015a16a /vendor/lib64/libvulkan_enc.so (goldfish_vk::VulkanStreamGuest::read(void*, unsigned long)+74)
#03 pc 0x000000000019f939 /vendor/lib64/libvulkan_enc.so (goldfish_vk::VkEncoder::vkCreateImageView(VkDevice_T*, VkImageViewCreateInfo const*, VkAllocationCallbacks const*, VkImageView_T**, unsigned int)+553)
#04 pc 0x00000000001715ca /vendor/lib64/libvulkan_enc.so (goldfish_vk::ResourceTracker::on_vkCreateImageView(void*, VkResult, VkDevice_T*, VkImageViewCreateInfo const*, VkAllocationCallbacks const*, VkImageView_T**)+234)
#05 pc 0x000000000024c8a7 /vendor/lib64/libvulkan_enc.so (goldfish_vk::entry_vkCreateImageView(VkDevice_T*, VkImageViewCreateInfo const*, VkAllocationCallbacks const*, VkImageView_T**)+103)
#06 pc 0x00000000008832a4 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#07 pc 0x000000000088281d /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#08 pc 0x0000000000535bf8 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#09 pc 0x00000000008989a6 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#10 pc 0x000000000084efb1 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#11 pc 0x00000000008be123 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#12 pc 0x00000000004929b7 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#13 pc 0x000000000049ad06 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#14 pc 0x000000000049b1dd /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#15 pc 0x0000000000490b78 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#16 pc 0x00000000008c3e5f /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#17 pc 0x00000000008c2f36 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#18 pc 0x000000000049d1b7 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#19 pc 0x00000000003a03ab /apex/com.android.art/lib64/libart.so (art_quick_generic_jni_trampoline+219)
#20 pc 0x000000000038c945 /apex/com.android.art/lib64/libart.so (nterp_helper+3837)
#21 pc 0x00000000002f3d38 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (io.flutter.embedding.engine.FlutterJNI.performNativeAttach+8617984)
#22 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#23 pc 0x00000000002f3e66 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (io.flutter.embedding.engine.FlutterJNI.attachToNative+30)
#24 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#25 pc 0x00000000002f3886 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (io.flutter.embedding.engine.a.f+18)
#26 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#27 pc 0x00000000002f3762 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (io.flutter.embedding.engine.a.<init>+446)
#28 pc 0x000000000038d096 /apex/com.android.art/lib64/libart.so (nterp_helper+5710)
#29 pc 0x00000000002f324e /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (io.flutter.embedding.engine.b.b+22)
#30 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#31 pc 0x00000000002f31c6 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (io.flutter.embedding.engine.b.a+110)
#32 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#33 pc 0x00000000001d8b70 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (U5.e.I+408)
#34 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#35 pc 0x00000000001d8222 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (U5.e.q+14)
#36 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#37 pc 0x00000000001da796 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (U5.i.onAttach+22)
#38 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#39 pc 0x000000000027b6a2 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.p.performAttach+98)
#40 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#41 pc 0x00000000002762c4 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.O.c+372)
#42 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#43 pc 0x0000000000276cae /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.O.m+274)
#44 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#45 pc 0x0000000000274654 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.I.e0+696)
#46 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#47 pc 0x0000000000274fda /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.I.l1+162)
#48 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#49 pc 0x00000000002726d6 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.I.b0+42)
#50 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#51 pc 0x0000000000273dc4 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.I.T+80)
#52 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#53 pc 0x0000000000273816 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.I.y+22)
#54 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#55 pc 0x000000000026f76c /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.y.c+12)
#56 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#57 pc 0x000000000026eb10 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/base.apk (androidx.fragment.app.u.onStart+40)
#58 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#59 pc 0x000000000023f6b8 /system/framework/framework.jar (android.app.Instrumentation.callActivityOnStart+20480)
#60 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#61 pc 0x00000000001d0d56 /system/framework/framework.jar (android.app.Activity.performStart+54)
#62 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#63 pc 0x00000000001c7b6c /system/framework/framework.jar (android.app.ActivityThread.handleStartActivity+44)
#64 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#65 pc 0x00000000002cb8e8 /system/framework/framework.jar (android.app.servertransaction.TransactionExecutor.performLifecycleSequence+248)
#66 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#67 pc 0x00000000002cb5c0 /system/framework/framework.jar (android.app.servertransaction.TransactionExecutor.cycleToPath+20)
#68 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#69 pc 0x00000000002cb7ba /system/framework/framework.jar (android.app.servertransaction.TransactionExecutor.executeLifecycleState+50)
#70 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#71 pc 0x00000000002cb670 /system/framework/framework.jar (android.app.servertransaction.TransactionExecutor.execute+152)
#72 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#73 pc 0x00000000001be522 /system/framework/framework.jar (android.app.ActivityThread$H.handleMessage+254)
#74 pc 0x000000000207b647 /memfd:jit-zygote-cache (android.os.Handler.dispatchMessage+183)
#75 pc 0x000000000038c945 /apex/com.android.art/lib64/libart.so (nterp_helper+3837)
#76 pc 0x00000000004595f6 /system/framework/framework.jar (android.os.Looper.loopOnce+334)
#77 pc 0x0000000002097d06 /memfd:jit-zygote-cache (android.os.Looper.loop+550)
#78 pc 0x000000000038baed /apex/com.android.art/lib64/libart.so (nterp_helper+165)
#79 pc 0x00000000001c8a1e /system/framework/framework.jar (android.app.ActivityThread.main+202)
#80 pc 0x00000000003953f6 /apex/com.android.art/lib64/libart.so (art_quick_invoke_static_stub+806)
#81 pc 0x000000000041da89 /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+233)
#82 pc 0x00000000008194c2 /apex/com.android.art/lib64/libart.so (_jobject* art::InvokeMethod<(art::PointerSize)8>(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jobject*, _jobject*, unsigned long)+1442)
#83 pc 0x0000000000772698 /apex/com.android.art/lib64/libart.so (art::Method_invoke(_JNIEnv*, _jobject*, _jobject*, _jobjectArray*)+56)
at io.flutter.embedding.engine.FlutterJNI.nativeAttach (FlutterJNI.java)
at io.flutter.embedding.engine.FlutterJNI.performNativeAttach (FlutterJNI.java:432)
at io.flutter.embedding.engine.FlutterJNI.attachToNative (FlutterJNI.java:424)
at io.flutter.embedding.engine.FlutterEngine.attachToJni (FlutterEngine.java:399)
at io.flutter.embedding.engine.FlutterEngine.<init> (FlutterEngine.java:369)
at io.flutter.embedding.engine.FlutterEngineGroup.createEngine (FlutterEngineGroup.java:206)
at io.flutter.embedding.engine.FlutterEngineGroup.createAndRunEngine (FlutterEngineGroup.java:158)
at io.flutter.embedding.android.FlutterActivityAndFragmentDelegate.setUpFlutterEngine (FlutterActivityAndFragmentDelegate.java:332)
at io.flutter.embedding.android.FlutterActivityAndFragmentDelegate.onAttach (FlutterActivityAndFragmentDelegate.java:194)
at io.flutter.embedding.android.FlutterFragment.onAttach (FlutterFragment.java:1056)
at androidx.fragment.app.Fragment.performAttach (Fragment.java:3071)
at androidx.fragment.app.FragmentStateManager.attach (FragmentStateManager.java:502)
at androidx.fragment.app.FragmentStateManager.moveToExpectedState (FragmentStateManager.java:271)
at androidx.fragment.app.FragmentManager.executeOpsTogether (FragmentManager.java:2103)
at androidx.fragment.app.FragmentManager.removeRedundantOperationsAndExecute (FragmentManager.java:1998)
at androidx.fragment.app.FragmentManager.execPendingActions (FragmentManager.java:1941)
at androidx.fragment.app.FragmentManager.dispatchStateChange (FragmentManager.java:3206)
at androidx.fragment.app.FragmentManager.dispatchActivityCreated (FragmentManager.java:3116)
at androidx.fragment.app.FragmentController.dispatchActivityCreated (FragmentController.java:263)
at androidx.fragment.app.FragmentActivity.onStart (FragmentActivity.java:350)
at android.app.Instrumentation.callActivityOnStart (Instrumentation.java:1467)
at android.app.Activity.performStart (Activity.java:8099)
at android.app.ActivityThread.handleStartActivity (ActivityThread.java:3732)
at android.app.servertransaction.TransactionExecutor.performLifecycleSequence (TransactionExecutor.java:221)
at android.app.servertransaction.TransactionExecutor.cycleToPath (TransactionExecutor.java:201)
at android.app.servertransaction.TransactionExecutor.executeLifecycleState (TransactionExecutor.java:173)
at android.app.servertransaction.TransactionExecutor.execute (TransactionExecutor.java:97)
at android.app.ActivityThread$H.handleMessage (ActivityThread.java:2253)
at android.os.Handler.dispatchMessage (Handler.java:106)
at android.os.Looper.loopOnce (Looper.java:201)
at android.os.Looper.loop (Looper.java:288)
at android.app.ActivityThread.main (ActivityThread.java:7870)
at java.lang.reflect.Method.invoke (Native method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:548)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:1003)
"Signal Catcher" tid=2 Runnable
#00 pc 0x000000000073ccef /apex/com.android.art/lib64/libart.so (art::DumpNativeStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, int, BacktraceMap*, char const*, art::ArtMethod*, void*, bool)+127)
#01 pc 0x0000000000882530 /apex/com.android.art/lib64/libart.so (art::Thread::DumpStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool, BacktraceMap*, bool) const+368)
#02 pc 0x00000000008a32ba /apex/com.android.art/lib64/libart.so (art::DumpCheckpoint::Run(art::Thread*)+1082)
#03 pc 0x000000000089c16c /apex/com.android.art/lib64/libart.so (art::ThreadList::RunCheckpoint(art::Closure*, art::Closure*)+220)
#04 pc 0x000000000089b3db /apex/com.android.art/lib64/libart.so (art::ThreadList::Dump(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool)+1723)
#05 pc 0x000000000089abef /apex/com.android.art/lib64/libart.so (art::ThreadList::DumpForSigQuit(std::__1::basic_ostream<char, std::__1::char_traits<char> >&)+1423)
#06 pc 0x0000000000835fa8 /apex/com.android.art/lib64/libart.so (art::Runtime::DumpForSigQuit(std::__1::basic_ostream<char, std::__1::char_traits<char> >&)+216)
#07 pc 0x000000000084c004 /apex/com.android.art/lib64/libart.so (art::SignalCatcher::HandleSigQuit()+1924)
#08 pc 0x000000000084ad75 /apex/com.android.art/lib64/libart.so (art::SignalCatcher::Run(void*)+341)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"ADB-JDWP Connection Control Thread" tid=5 Waiting
#00 pc 0x00000000000b3a5a /apex/com.android.runtime/lib64/bionic/libc.so (__ppoll+10)
#01 pc 0x000000000006a21a /apex/com.android.runtime/lib64/bionic/libc.so (poll+74)
#02 pc 0x000000000000a711 /apex/com.android.art/lib64/libadbconnection.so (adbconnection::AdbConnectionState::RunPollLoop(art::Thread*)+849)
#03 pc 0x0000000000008c01 /apex/com.android.art/lib64/libadbconnection.so (adbconnection::CallbackFunction(void*)+1425)
#04 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#05 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"perfetto_hprof_listener" tid=6 Native
#00 pc 0x00000000000b26f5 /apex/com.android.runtime/lib64/bionic/libc.so (read+5)
#01 pc 0x0000000000029790 /apex/com.android.art/lib64/libperfetto_hprof.so (void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ArtPlugin_Initialize::$_33> >(void*)+288)
#02 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#03 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Jit thread pool worker thread 0" tid=7 Native
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x0000000000425d1e /apex/com.android.art/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+110)
#02 pc 0x00000000008a4dd7 /apex/com.android.art/lib64/libart.so (art::ThreadPool::GetTask(art::Thread*)+103)
#03 pc 0x00000000008a40f1 /apex/com.android.art/lib64/libart.so (art::ThreadPoolWorker::Run()+113)
#04 pc 0x00000000008a3be8 /apex/com.android.art/lib64/libart.so (art::ThreadPoolWorker::Callback(void*)+264)
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"HeapTaskDaemon" tid=8 Waiting
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x0000000000425d1e /apex/com.android.art/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+110)
#02 pc 0x00000000005719c1 /apex/com.android.art/lib64/libart.so (art::gc::TaskProcessor::GetTask(art::Thread*)+529)
#03 pc 0x0000000000572242 /apex/com.android.art/lib64/libart.so (art::gc::TaskProcessor::RunAllTasks(art::Thread*)+66)
at dalvik.system.VMRuntime.runHeapTasks (Native method)
at java.lang.Daemons$HeapTaskDaemon.runInternal (Daemons.java:531)
at java.lang.Daemons$Daemon.run (Daemons.java:139)
at java.lang.Thread.run (Thread.java:920)
"ReferenceQueueDaemon" tid=9 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.Object.wait (Object.java:568)
at java.lang.Daemons$ReferenceQueueDaemon.runInternal (Daemons.java:217)
at java.lang.Daemons$Daemon.run (Daemons.java:139)
at java.lang.Thread.run (Thread.java:920)
"FinalizerDaemon" tid=10 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.ref.ReferenceQueue.remove (ReferenceQueue.java:190)
at java.lang.ref.ReferenceQueue.remove (ReferenceQueue.java:211)
at java.lang.Daemons$FinalizerDaemon.runInternal (Daemons.java:273)
at java.lang.Daemons$Daemon.run (Daemons.java:139)
at java.lang.Thread.run (Thread.java:920)
"FinalizerWatchdogDaemon" tid=11 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.Object.wait (Object.java:568)
at java.lang.Daemons$FinalizerWatchdogDaemon.sleepUntilNeeded (Daemons.java:341)
at java.lang.Daemons$FinalizerWatchdogDaemon.runInternal (Daemons.java:321)
at java.lang.Daemons$Daemon.run (Daemons.java:139)
at java.lang.Thread.run (Thread.java:920)
"Binder:9596_1" tid=12 Native
#00 pc 0x00000000000b2997 /apex/com.android.runtime/lib64/bionic/libc.so (__ioctl+7)
#01 pc 0x0000000000067ad8 /apex/com.android.runtime/lib64/bionic/libc.so (ioctl+216)
#02 pc 0x0000000000058a9f /system/lib64/libbinder.so (android::IPCThreadState::talkWithDriver(bool)+319)
#03 pc 0x0000000000058d80 /system/lib64/libbinder.so (android::IPCThreadState::getAndExecuteCommand()+16)
#04 pc 0x000000000005982f /system/lib64/libbinder.so (android::IPCThreadState::joinThreadPool(bool)+63)
#05 pc 0x0000000000085857 /system/lib64/libbinder.so (android::PoolThread::threadLoop()+23)
#06 pc 0x0000000000013b69 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+313)
#07 pc 0x00000000000df2ac /system/lib64/libandroid_runtime.so (android::AndroidRuntime::javaThreadShell(void*)+140)
#08 pc 0x00000000000133d9 /system/lib64/libutils.so (thread_data_t::trampoline(thread_data_t const*)+425)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Binder:9596_2" tid=13 Native
#00 pc 0x00000000000b2997 /apex/com.android.runtime/lib64/bionic/libc.so (__ioctl+7)
#01 pc 0x0000000000067ad8 /apex/com.android.runtime/lib64/bionic/libc.so (ioctl+216)
#02 pc 0x0000000000058a9f /system/lib64/libbinder.so (android::IPCThreadState::talkWithDriver(bool)+319)
#03 pc 0x0000000000058d80 /system/lib64/libbinder.so (android::IPCThreadState::getAndExecuteCommand()+16)
#04 pc 0x000000000005982f /system/lib64/libbinder.so (android::IPCThreadState::joinThreadPool(bool)+63)
#05 pc 0x0000000000085857 /system/lib64/libbinder.so (android::PoolThread::threadLoop()+23)
#06 pc 0x0000000000013b69 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+313)
#07 pc 0x00000000000df2ac /system/lib64/libandroid_runtime.so (android::AndroidRuntime::javaThreadShell(void*)+140)
#08 pc 0x00000000000133d9 /system/lib64/libutils.so (thread_data_t::trampoline(thread_data_t const*)+425)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Profile Saver" tid=14 Native
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x0000000000425d1e /apex/com.android.art/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+110)
#02 pc 0x00000000005e787e /apex/com.android.art/lib64/libart.so (art::ProfileSaver::Run()+526)
#03 pc 0x00000000005edd7b /apex/com.android.art/lib64/libart.so (art::ProfileSaver::RunProfileSaverThread(void*)+171)
#04 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#05 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Firebase-Messaging-Init" tid=15 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take (ScheduledThreadPoolExecutor.java:1120)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take (ScheduledThreadPoolExecutor.java:849)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.android.gms.common.util.concurrent.zza.run (com.google.android.gms:play-services-basement@@18.3.0:2)
at java.lang.Thread.run (Thread.java:920)
"Firebase-Messaging-Topics-Io" tid=16 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take (ScheduledThreadPoolExecutor.java:1120)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take (ScheduledThreadPoolExecutor.java:849)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.android.gms.common.util.concurrent.zza.run (com.google.android.gms:play-services-basement@@18.3.0:2)
at java.lang.Thread.run (Thread.java:920)
"Firebase Background Thread #0" tid=17 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.firebase.concurrent.CustomThreadFactory.lambda$newThread$0 (CustomThreadFactory.java:47)
at java.lang.Thread.run (Thread.java:920)
"FirebaseInstanceId" tid=18 Timed Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.parkNanos (LockSupport.java:230)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos (AbstractQueuedSynchronizer.java:1063)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos (AbstractQueuedSynchronizer.java:1358)
at java.util.concurrent.CountDownLatch.await (CountDownLatch.java:278)
at com.google.android.gms.tasks.zzad.zzb (com.google.android.gms:play-services-tasks@@18.1.0:1)
at com.google.android.gms.tasks.Tasks.await (com.google.android.gms:play-services-tasks@@18.1.0:18)
at com.google.firebase.iid.FirebaseInstanceId.awaitTask (com.google.firebase:firebase-iid@@21.1.0:1)
at com.google.firebase.iid.FirebaseInstanceId.getToken (com.google.firebase:firebase-iid@@21.1.0:9)
at com.google.firebase.iid.FirebaseInstanceId.blockingGetMasterToken (com.google.firebase:firebase-iid@@21.1.0:1)
at com.google.firebase.iid.SyncTask.maybeRefreshToken (com.google.firebase:firebase-iid@@21.1.0:3)
at com.google.firebase.iid.SyncTask.run (com.google.firebase:firebase-iid@@21.1.0:10)
at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:462)
at java.util.concurrent.FutureTask.run (FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run (ScheduledThreadPoolExecutor.java:301)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.android.gms.common.util.concurrent.zza.run (com.google.android.gms:play-services-basement@@18.3.0:2)
at java.lang.Thread.run (Thread.java:920)
"queued-work-looper" tid=19 Native
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x000000000001809a /system/lib64/libutils.so (android::Looper::pollInner(int)+250)
#02 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#03 pc 0x0000000000169fa3 /system/lib64/libandroid_runtime.so (android::android_os_MessageQueue_nativePollOnce(_JNIEnv*, _jobject*, long, int)+35)
#04 pc 0x00000000003a03ab /apex/com.android.art/lib64/libart.so (art_quick_generic_jni_trampoline+219)
#05 pc 0x000000000206e3f3 /memfd:jit-zygote-cache (android.os.MessageQueue.next+291)
#06 pc 0x000000000038c945 /apex/com.android.art/lib64/libart.so (nterp_helper+3837)
#07 pc 0x00000000004594b4 /system/framework/framework.jar (android.os.Looper.loopOnce+12)
#08 pc 0x0000000002097d06 /memfd:jit-zygote-cache (android.os.Looper.loop+550)
#09 pc 0x000000000038baed /apex/com.android.art/lib64/libart.so (nterp_helper+165)
#10 pc 0x000000000042f2d0 /system/framework/framework.jar (android.os.HandlerThread.run+56)
#11 pc 0x0000000000395094 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+756)
#12 pc 0x000000000041da7a /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+218)
#13 pc 0x000000000081aabe /apex/com.android.art/lib64/libart.so (art::JValue art::InvokeVirtualOrInterfaceWithJValues<art::ArtMethod*>(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, art::ArtMethod*, jvalue const*)+478)
#14 pc 0x000000000087a08f /apex/com.android.art/lib64/libart.so (art::Thread::CreateCallback(void*)+1343)
#15 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#16 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
at android.os.MessageQueue.nativePollOnce (Native method)
at android.os.MessageQueue.next (MessageQueue.java:335)
at android.os.Looper.loopOnce (Looper.java:161)
at android.os.Looper.loop (Looper.java:288)
at android.os.HandlerThread.run (HandlerThread.java:67)
"firebase-iid-executor" tid=20 Timed Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.parkNanos (LockSupport.java:230)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos (AbstractQueuedSynchronizer.java:2109)
at java.util.concurrent.LinkedBlockingQueue.poll (LinkedBlockingQueue.java:467)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1091)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.android.gms.common.util.concurrent.zza.run (com.google.android.gms:play-services-basement@@18.3.0:2)
at java.lang.Thread.run (Thread.java:920)
"Firebase Background Thread #1" tid=22 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.firebase.concurrent.CustomThreadFactory.lambda$newThread$0 (CustomThreadFactory.java:47)
at java.lang.Thread.run (Thread.java:920)
"Firebase Background Thread #2" tid=23 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.firebase.concurrent.CustomThreadFactory.lambda$newThread$0 (CustomThreadFactory.java:47)
at java.lang.Thread.run (Thread.java:920)
"MessengerIpcClient" tid=25 Timed Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.parkNanos (LockSupport.java:230)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos (AbstractQueuedSynchronizer.java:2109)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take (ScheduledThreadPoolExecutor.java:1132)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take (ScheduledThreadPoolExecutor.java:849)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.android.gms.common.util.concurrent.zza.run (com.google.android.gms:play-services-basement@@18.3.0:2)
at java.lang.Thread.run (Thread.java:920)
"Firebase Blocking Thread #2" tid=27 Timed Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.parkNanos (LockSupport.java:230)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill (SynchronousQueue.java:461)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer (SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll (SynchronousQueue.java:937)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1091)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.firebase.concurrent.CustomThreadFactory.lambda$newThread$0 (CustomThreadFactory.java:47)
at java.lang.Thread.run (Thread.java:920)
"Firebase Background Thread #3" tid=28 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.firebase.concurrent.CustomThreadFactory.lambda$newThread$0 (CustomThreadFactory.java:47)
at java.lang.Thread.run (Thread.java:920)
"pool-15-thread-1" tid=29 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at java.lang.Thread.run (Thread.java:920)
"GmsDynamite" tid=33 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.Object.wait (Object.java:568)
at com.google.android.gms.dynamite.zza.run (com.google.android.gms:play-services-basement@@18.3.0:2)
"DefaultDispatcher-worker-1" tid=35 Timed Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.parkNanos (LockSupport.java:353)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.park (CoroutineScheduler.kt:838)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.tryPark (CoroutineScheduler.kt:783)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker (CoroutineScheduler.kt:731)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run (CoroutineScheduler.kt:684)
"DefaultDispatcher-worker-2" tid=36 Timed Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.parkNanos (LockSupport.java:353)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.park (CoroutineScheduler.kt:838)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.tryPark (CoroutineScheduler.kt:783)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker (CoroutineScheduler.kt:731)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run (CoroutineScheduler.kt:684)
"RenderThread" tid=37 Native
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x000000000001809a /system/lib64/libutils.so (android::Looper::pollInner(int)+250)
#02 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#03 pc 0x000000000052f9d5 /system/lib64/libhwui.so (android::uirenderer::ThreadBase::waitForWork()+133)
#04 pc 0x000000000052f837 /system/lib64/libhwui.so (android::uirenderer::renderthread::RenderThread::threadLoop()+87)
#05 pc 0x0000000000013b69 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+313)
#06 pc 0x00000000000133d9 /system/lib64/libutils.so (thread_data_t::trampoline(thread_data_t const*)+425)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"OkHttp ConnectionPool" tid=38 Timed Waiting
at java.lang.Object.wait (Native method)
at com.android.okhttp.ConnectionPool$1.run (ConnectionPool.java:106)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at java.lang.Thread.run (Thread.java:920)
"Okio Watchdog" tid=41 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.Object.wait (Object.java:568)
at com.android.okhttp.okio.AsyncTimeout.awaitTimeout (AsyncTimeout.java:313)
at com.android.okhttp.okio.AsyncTimeout.access$000 (AsyncTimeout.java:42)
at com.android.okhttp.okio.AsyncTimeout$Watchdog.run (AsyncTimeout.java:288)
"Binder:9596_3" tid=43 Native
#00 pc 0x00000000000b2997 /apex/com.android.runtime/lib64/bionic/libc.so (__ioctl+7)
#01 pc 0x0000000000067ad8 /apex/com.android.runtime/lib64/bionic/libc.so (ioctl+216)
#02 pc 0x0000000000058a9f /system/lib64/libbinder.so (android::IPCThreadState::talkWithDriver(bool)+319)
#03 pc 0x0000000000058d80 /system/lib64/libbinder.so (android::IPCThreadState::getAndExecuteCommand()+16)
#04 pc 0x000000000005982f /system/lib64/libbinder.so (android::IPCThreadState::joinThreadPool(bool)+63)
#05 pc 0x0000000000085857 /system/lib64/libbinder.so (android::PoolThread::threadLoop()+23)
#06 pc 0x0000000000013b69 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+313)
#07 pc 0x00000000000df2ac /system/lib64/libandroid_runtime.so (android::AndroidRuntime::javaThreadShell(void*)+140)
#08 pc 0x00000000000133d9 /system/lib64/libutils.so (thread_data_t::trampoline(thread_data_t const*)+425)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"ticus.receipti" tid=9602 Unknown
#00 pc 0x00000000000b3137 /apex/com.android.runtime/lib64/bionic/libc.so (nanosleep+7)
#01 pc 0x000000000006cfcb /apex/com.android.runtime/lib64/bionic/libc.so (sleep+43)
#02 pc 0x00000000005d4059 /apex/com.android.art/lib64/libart.so (art::jit::RunPollingThread(void*)+41)
#03 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#04 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"ticus.receiptix" tid=9664 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x000000000000d0d0 /vendor/lib64/libandroidemu.so (android::base::guest::MessageChannelBase::beforeRead()+48)
#04 pc 0x000000000000e800 /vendor/lib64/libandroidemu.so (android::base::guest::WorkPoolThread::threadFunc()+144)
#05 pc 0x000000000000e769 /vendor/lib64/libandroidemu.so (std::__1::__function::__func<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'(), std::__1::allocator<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'()>, long ()>::operator()()+9)
#06 pc 0x000000000000d5df /vendor/lib64/libandroidemu.so (android::base::guest::Thread::thread_main(void*)+95)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"ticus.receiptix" tid=9665 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x000000000000d0d0 /vendor/lib64/libandroidemu.so (android::base::guest::MessageChannelBase::beforeRead()+48)
#04 pc 0x000000000000e800 /vendor/lib64/libandroidemu.so (android::base::guest::WorkPoolThread::threadFunc()+144)
#05 pc 0x000000000000e769 /vendor/lib64/libandroidemu.so (std::__1::__function::__func<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'(), std::__1::allocator<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'()>, long ()>::operator()()+9)
#06 pc 0x000000000000d5df /vendor/lib64/libandroidemu.so (android::base::guest::Thread::thread_main(void*)+95)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"ticus.receiptix" tid=9666 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x000000000000d0d0 /vendor/lib64/libandroidemu.so (android::base::guest::MessageChannelBase::beforeRead()+48)
#04 pc 0x000000000000e800 /vendor/lib64/libandroidemu.so (android::base::guest::WorkPoolThread::threadFunc()+144)
#05 pc 0x000000000000e769 /vendor/lib64/libandroidemu.so (std::__1::__function::__func<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'(), std::__1::allocator<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'()>, long ()>::operator()()+9)
#06 pc 0x000000000000d5df /vendor/lib64/libandroidemu.so (android::base::guest::Thread::thread_main(void*)+95)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"ticus.receiptix" tid=9667 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x000000000000d0d0 /vendor/lib64/libandroidemu.so (android::base::guest::MessageChannelBase::beforeRead()+48)
#04 pc 0x000000000000e800 /vendor/lib64/libandroidemu.so (android::base::guest::WorkPoolThread::threadFunc()+144)
#05 pc 0x000000000000e769 /vendor/lib64/libandroidemu.so (std::__1::__function::__func<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'(), std::__1::allocator<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'()>, long ()>::operator()()+9)
#06 pc 0x000000000000d5df /vendor/lib64/libandroidemu.so (android::base::guest::Thread::thread_main(void*)+95)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"1.ui" tid=9672 Unknown
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x000000000001809a /system/lib64/libutils.so (android::Looper::pollInner(int)+250)
#02 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#03 pc 0x000000000001938f /system/lib64/libandroid.so (ALooper_pollOnce+95)
#04 pc 0x00000000004c20ff /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#05 pc 0x00000000004bfc38 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#06 pc 0x00000000004bfa46 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"1.raster" tid=9673 Unknown
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x000000000001809a /system/lib64/libutils.so (android::Looper::pollInner(int)+250)
#02 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#03 pc 0x000000000001938f /system/lib64/libandroid.so (ALooper_pollOnce+95)
#04 pc 0x00000000004c20ff /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#05 pc 0x00000000004bfc38 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#06 pc 0x00000000004bfa46 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"1.io" tid=9674 Unknown
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x000000000001809a /system/lib64/libutils.so (android::Looper::pollInner(int)+250)
#02 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#03 pc 0x000000000001938f /system/lib64/libandroid.so (ALooper_pollOnce+95)
#04 pc 0x00000000004c20ff /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#05 pc 0x00000000004bfc38 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#06 pc 0x00000000004bfa46 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"io.worker.1" tid=9675 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x00000000004a6953 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#04 pc 0x00000000004baac5 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"io.worker.2" tid=9676 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x00000000004a6953 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#04 pc 0x00000000004baac5 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"dart:io EventHa" tid=9677 Unknown
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x0000000000906793 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#02 pc 0x000000000093aece /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#03 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#04 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"io.worker.1" tid=9678 Unknown
#00 pc 0x00000000000b26f7 /apex/com.android.runtime/lib64/bionic/libc.so (read+7)
#01 pc 0x0000000000011798 /vendor/lib64/libOpenglSystemCommon.so (QemuPipeStream::commitBufferAndReadFully(unsigned long, void*, unsigned long)+232)
#02 pc 0x000000000015a16a /vendor/lib64/libvulkan_enc.so (goldfish_vk::VulkanStreamGuest::read(void*, unsigned long)+74)
#03 pc 0x00000000001a3939 /vendor/lib64/libvulkan_enc.so (goldfish_vk::VkEncoder::vkCreateRenderPass(VkDevice_T*, VkRenderPassCreateInfo const*, VkAllocationCallbacks const*, VkRenderPass_T**, unsigned int)+553)
#04 pc 0x000000000024d72b /vendor/lib64/libvulkan_enc.so (goldfish_vk::entry_vkCreateRenderPass(VkDevice_T*, VkRenderPassCreateInfo const*, VkAllocationCallbacks const*, VkRenderPass_T**)+91)
#05 pc 0x00000000008a17f0 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#06 pc 0x000000000089ec37 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#07 pc 0x000000000089dbfd /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#08 pc 0x00000000004bace4 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"IplrVkFenceWait" tid=9679 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x00000000004a6953 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#04 pc 0x000000000089ae0e /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"IplrVkResMgr" tid=9680 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x00000000004a6953 /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#04 pc 0x00000000008a335d /data/app/~~qbUZ2m7GhWgpZxsEqF05CA==/com.receipticus.receiptix-KOspURRf7HVWaV7dSJawiQ==/split_config.x86_64.apk!libflutter.so
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.0, on macOS 15.1.1 24B91 darwin-arm64, locale en-GB)
• Flutter version 3.27.0 on channel stable at /Users/snaky/Dev/flutter
• Upstream repository ssh://[email protected]/flutter/flutter.git
• Framework revision 8495dee1fd (6 days ago), 2024-12-10 14:23:39 -0800
• Engine revision 83bacfc525
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0-rc4)
• Android SDK at /Users/snaky/Dev/android-sdk
• Platform android-35, build-tools 35.0.0-rc4
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.95.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,platform-android,waiting for customer response,engine,a: production,P3,team-engine,triaged-engine | low | Critical |
2,742,260,275 | deno | `deno test --watch`: `Deno.chdir()` affects parsing of CLI args on reset | Version: Deno 2.1.4
```ts
// test.ts
Deno.test("foo", () => {
Deno.chdir("..");
});
```
`deno test -A --watch test.ts`
Then save the file again. On the watcher restart it fails to find `test.ts` (assuming there is no `../test.ts`), since the relative CLI path is now resolved against the changed working directory. | bug,cli,--watch | low | Minor |
2,742,260,977 | PowerToys | [File Locksmith] Files containing the capitalised Russian k fail to load and crash the program | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
File Locksmith
### Steps to reproduce
OS: Win10 Pro 22H2 (19045.5131)
To reproduce:
1. Put a capitalised Russian k ("К") in the name of any file or folder and try to use File Locksmith on it.
2. File Locksmith should now crash when trying to view the file/folder.
The normal uncapitalised Russian k ("к") does not have this issue. In fact, every other Russian letter, capitalised or not, works fine ("абвгдеёжзийклмнопрстуфхцчшщъыьэюя" and "АБВГДЕЁЖЗИЙ ЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯ"). It ONLY seems to crash when a capitalised Russian k is present anywhere in the name of the file or folder.
Also tested in a VM, same issue, so it's not limited to my machine's configuration.
### ✔️ Expected Behavior
It shows the File Locksmith window and lets you use the program normally.
### ❌ Actual Behavior
File locksmith crashes without notice (no error boxes, no blank UI elements; nothing pops up) and produces the following (translated) log in event viewer:
> Faulting application name: PowerToys.FileLocksmithUI.exe, version: 0.86.0.0, time stamp: 0x66e80000
> Faulting module name: KERNELBASE.dll, version: 10.0.19041.5131, time stamp: 0x011921da
> Exception code: 0xc000027b
> Fault offset: 0x0000000000133942
> Faulting process ID: 0xd44
> Faulting application start time: 0x01db4fb030758d52
> Faulting application path: C:\Users\####\AppData\Local\PowerToys\WinUI3Apps\PowerToys.FileLocksmithUI.exe
> Faulting module path: C:\Windows\System32\KERNELBASE.dll
> Full name of the faulting package:
> Application ID associated with the faulting package:
### Other Software
_No response_ | Issue-Bug,Status-Reproducible,Product-File Locksmith | low | Critical |
2,742,265,715 | storybook | [Bug]: Controls don't work/show up in composed storybook | ### Describe the bug
In a composed storybook, while controls show up properly in the stories of the parent storybook, they don't show up in stories of the child storybooks
Parent:

Child:

### Reproduction link
https://stackblitz.com/edit/github-nk6zbbmz?file=.storybook-composed%2Fmain.js
### Reproduction steps
1. Go to the link above
2. Navigate to the Primary Button story in the main storybook => Controls are there
3. Navigate to the Primary Button story in the composed storybook => Controls are missing
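For reference, composition in the repro is wired through a `refs` entry in the composed Storybook's main config. A minimal sketch of that setup (the stories glob, title and URL here are illustrative — see the StackBlitz link above for the actual `main.js`):

```ts
// .storybook-composed/main.ts — minimal composition config (illustrative values)
import type { StorybookConfig } from '@storybook/web-components-vite';

const config: StorybookConfig = {
  stories: ['../src/**/*.stories.@(js|ts)'],
  framework: { name: '@storybook/web-components-vite', options: {} },
  // Compose the child Storybook into this one:
  refs: {
    child: {
      title: 'Child Storybook',
      url: 'http://localhost:6006', // where the child Storybook is served
    },
  },
};

export default config;
```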
### System
```bash
Storybook Environment Info:
System:
OS: Linux 5.0 undefined
CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Shell: 1.0 - /bin/jsh
Binaries:
Node: 18.20.3 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.2.3 - /usr/local/bin/npm <----- active
pnpm: 8.15.6 - /usr/local/bin/pnpm
npmPackages:
@storybook/addon-essentials: ^8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/addon-themes: 8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/blocks: ^8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/test: ^8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/web-components: ^8.5.0-alpha.21 => 8.5.0-alpha.21
@storybook/web-components-vite: ^8.5.0-alpha.21 => 8.5.0-alpha.21
storybook: ^8.5.0-alpha.21 => 8.5.0-alpha.21
```
### Additional context
This might be related to https://github.com/storybookjs/storybook/issues/30058 | bug,needs triage | low | Critical |
2,742,275,485 | rust | Warn when more specific lint overridden by a lint group | ### Code
```Rust
#![deny(dead_code)]
#![allow(unused)]
fn f() {}
fn main() {}
```
### Current output
```Shell
No output
```
### Desired output
```Shell
warning: `#![deny(dead_code)]` at line `1` does not have any effect as it is overridden by `#![allow(unused)]` at line `2`
help: put the more specific lint after or remove it.
```
### Rationale and extra context
_No response_
### Other cases
```Rust
```
### Rust Version
```Shell
latest nightly and stable.
```
### Anything else?
@rustbot label +A-lint | A-lints,A-diagnostics,T-lang,T-compiler,C-discussion | low | Minor |
2,742,319,820 | rust | ICE: Broken MIR while testing type compatibility | I found a compiler crash that looks similar to #130921 and #131886.
### Code
[playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=9346aece676bf91fdea1fcdc927496e2)
```Rust
pub trait BaseTrait {
type BaseType;
}
// Works if trait bound `BaseTrait` becomes `BaseTrait<BaseType = ()>`
pub trait IntermediateTrait2<'a>: BaseTrait {}
// Works if trait bound `BaseTrait<BaseType = ()>` becomes `BaseTrait`
pub trait IntermediateTrait1: BaseTrait<BaseType = ()> {}
pub trait FinalTrait: IntermediateTrait1 + for<'a> IntermediateTrait2<'a> {}
struct IntermediateTraitsImpl;
impl BaseTrait for IntermediateTraitsImpl {
type BaseType = ();
}
impl IntermediateTrait1 for IntermediateTraitsImpl {}
impl<'a> IntermediateTrait2<'a> for IntermediateTraitsImpl {}
// Works if `Box<dyn FinalTrait<BaseType = ()>>` becomes `Box<dyn FinalTrait>`
pub struct Foo {
final_trait_impl: Box<dyn FinalTrait<BaseType = ()>>,
}
impl Foo {
pub fn backends(&self) -> &dyn FinalTrait<BaseType = ()> {
self.final_trait_impl.as_ref()
}
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
The bug also triggers in beta and nightly in a similar manner.
### Error output
```
thread 'rustc' panicked at compiler/rustc_mir_transform/src/validate.rs:95:25:
broken MIR in Item(DefId(0:20 ~ compiler_bug[9e9d]::{impl#3}::backends)) (after phase change to runtime-optimized) at bb0[0]:
encountered `Assign((_2, &((*_1).0: std::boxed::Box<dyn FinalTrait>)))` with incompatible types:
left-hand side has type: &Box<dyn FinalTrait>
right-hand side has type: &Box<dyn FinalTrait>
```
<details><summary><strong>Program output</strong></summary>
<p>
```
thread 'rustc' panicked at compiler/rustc_mir_transform/src/validate.rs:95:25:
broken MIR in Item(DefId(0:20 ~ compiler_bug[9e9d]::{impl#3}::backends)) (after phase change to runtime-optimized) at bb0[0]:
encountered `Assign((_2, &((*_1).0: std::boxed::Box<dyn FinalTrait>)))` with incompatible types:
left-hand side has type: &Box<dyn FinalTrait>
right-hand side has type: &Box<dyn FinalTrait>
stack backtrace:
0: 0x75d55504012a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h5b6bd5631a6d1f6b
1: 0x75d5558218f8 - core::fmt::write::h7550c97b06c86515
2: 0x75d556a58b91 - std::io::Write::write_fmt::h7b09c64fe0be9c84
3: 0x75d55503ff82 - std::sys::backtrace::BacktraceLock::print::h2395ccd2c84ba3aa
4: 0x75d555042456 - std::panicking::default_hook::{{closure}}::he19d4c7230e07961
5: 0x75d5550422a0 - std::panicking::default_hook::hf614597d3c67bbdb
6: 0x75d554104556 - std[c6eb78587944e35c]::panicking::update_hook::<alloc[148a978a4a62f5d]::boxed::Box<rustc_driver_impl[4c2d2ad79fb810ac]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x75d555042b68 - std::panicking::rust_panic_with_hook::h8942133a8b252070
8: 0x75d55504293a - std::panicking::begin_panic_handler::{{closure}}::hb5f5963570096b29
9: 0x75d5550405d9 - std::sys::backtrace::__rust_end_short_backtrace::h6208cedc1922feda
10: 0x75d5550425fc - rust_begin_unwind
11: 0x75d552abb160 - core::panicking::panic_fmt::h0c3082644d1bf418
12: 0x75d552e223e3 - <rustc_mir_transform[b36c87ceb4bb9a8e]::validate::CfgChecker>::fail::<alloc[148a978a4a62f5d]::string::String>
13: 0x75d552e21752 - <rustc_mir_transform[b36c87ceb4bb9a8e]::validate::Validator as rustc_mir_transform[b36c87ceb4bb9a8e]::pass_manager::MirPass>::run_pass
14: 0x75d55580b674 - rustc_mir_transform[b36c87ceb4bb9a8e]::pass_manager::run_passes_inner
15: 0x75d555d0a9fa - rustc_mir_transform[b36c87ceb4bb9a8e]::optimized_mir
16: 0x75d555d08369 - rustc_query_impl[db795c774d495014]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[db795c774d495014]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[a886f61dbc61428a]::query::erase::Erased<[u8; 8usize]>>
17: 0x75d555c5449a - rustc_query_system[b2bb6e43dd6b7fda]::query::plumbing::try_execute_query::<rustc_query_impl[db795c774d495014]::DynamicConfig<rustc_query_system[b2bb6e43dd6b7fda]::query::caches::DefIdCache<rustc_middle[a886f61dbc61428a]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[db795c774d495014]::plumbing::QueryCtxt, true>
18: 0x75d555c52aa3 - rustc_query_impl[db795c774d495014]::query_impl::optimized_mir::get_query_incr::__rust_end_short_backtrace
19: 0x75d5521210b5 - <rustc_middle[a886f61dbc61428a]::ty::context::TyCtxt>::instance_mir
20: 0x75d5530f8d3a - rustc_monomorphize[4c5fa529dcdcd4f2]::collector::collect_items_rec::{closure#0}
21: 0x75d5562be2b2 - rustc_monomorphize[4c5fa529dcdcd4f2]::collector::collect_items_rec
22: 0x75d5562c4b3a - rustc_monomorphize[4c5fa529dcdcd4f2]::partitioning::collect_and_partition_mono_items
23: 0x75d5565f5b16 - rustc_query_impl[db795c774d495014]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[db795c774d495014]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2}::{closure#0}, rustc_middle[a886f61dbc61428a]::query::erase::Erased<[u8; 24usize]>>
24: 0x75d5565f5adb - <rustc_query_impl[db795c774d495014]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2} as core[c06ff78fa456ca03]::ops::function::FnOnce<(rustc_middle[a886f61dbc61428a]::ty::context::TyCtxt, ())>>::call_once
25: 0x75d556802ea1 - rustc_query_system[b2bb6e43dd6b7fda]::query::plumbing::try_execute_query::<rustc_query_impl[db795c774d495014]::DynamicConfig<rustc_query_system[b2bb6e43dd6b7fda]::query::caches::SingleCache<rustc_middle[a886f61dbc61428a]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[db795c774d495014]::plumbing::QueryCtxt, true>
26: 0x75d55680275a - rustc_query_impl[db795c774d495014]::query_impl::collect_and_partition_mono_items::get_query_incr::__rust_end_short_backtrace
27: 0x75d5562d90cb - rustc_codegen_ssa[76f2c5b87770fd75]::back::symbol_export::exported_symbols_provider_local
28: 0x75d555a4b839 - rustc_query_impl[db795c774d495014]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[db795c774d495014]::query_impl::exported_symbols::dynamic_query::{closure#2}::{closure#0}, rustc_middle[a886f61dbc61428a]::query::erase::Erased<[u8; 16usize]>>
29: 0x75d555a4b80f - <rustc_query_impl[db795c774d495014]::query_impl::exported_symbols::dynamic_query::{closure#2} as core[c06ff78fa456ca03]::ops::function::FnOnce<(rustc_middle[a886f61dbc61428a]::ty::context::TyCtxt, rustc_span[3e5cf3424d44936d]::def_id::CrateNum)>>::call_once
30: 0x75d55652995b - rustc_query_system[b2bb6e43dd6b7fda]::query::plumbing::try_execute_query::<rustc_query_impl[db795c774d495014]::DynamicConfig<rustc_query_system[b2bb6e43dd6b7fda]::query::caches::VecCache<rustc_span[3e5cf3424d44936d]::def_id::CrateNum, rustc_middle[a886f61dbc61428a]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[db795c774d495014]::plumbing::QueryCtxt, true>
31: 0x75d556528f96 - rustc_query_impl[db795c774d495014]::query_impl::exported_symbols::get_query_incr::__rust_end_short_backtrace
32: 0x75d5568da277 - <rustc_metadata[a330ceb16976a880]::rmeta::encoder::EncodeContext>::encode_crate_root
33: 0x75d5568cc59a - rustc_metadata[a330ceb16976a880]::rmeta::encoder::encode_metadata
34: 0x75d5568bdb33 - rustc_metadata[a330ceb16976a880]::fs::encode_and_write_metadata
35: 0x75d5568bc9d6 - <rustc_interface[88a02114bbdb2383]::queries::Linker>::codegen_and_build_linker
36: 0x75d5565621d4 - rustc_interface[88a02114bbdb2383]::interface::run_compiler::<core[c06ff78fa456ca03]::result::Result<(), rustc_span[3e5cf3424d44936d]::ErrorGuaranteed>, rustc_driver_impl[4c2d2ad79fb810ac]::run_compiler::{closure#0}>::{closure#1}
37: 0x75d5565533d9 - std[c6eb78587944e35c]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[88a02114bbdb2383]::util::run_in_thread_with_globals<rustc_interface[88a02114bbdb2383]::interface::run_compiler<core[c06ff78fa456ca03]::result::Result<(), rustc_span[3e5cf3424d44936d]::ErrorGuaranteed>, rustc_driver_impl[4c2d2ad79fb810ac]::run_compiler::{closure#0}>::{closure#1}, core[c06ff78fa456ca03]::result::Result<(), rustc_span[3e5cf3424d44936d]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[c06ff78fa456ca03]::result::Result<(), rustc_span[3e5cf3424d44936d]::ErrorGuaranteed>>
38: 0x75d556622fac - <<std[c6eb78587944e35c]::thread::Builder>::spawn_unchecked_<rustc_interface[88a02114bbdb2383]::util::run_in_thread_with_globals<rustc_interface[88a02114bbdb2383]::interface::run_compiler<core[c06ff78fa456ca03]::result::Result<(), rustc_span[3e5cf3424d44936d]::ErrorGuaranteed>, rustc_driver_impl[4c2d2ad79fb810ac]::run_compiler::{closure#0}>::{closure#1}, core[c06ff78fa456ca03]::result::Result<(), rustc_span[3e5cf3424d44936d]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[c06ff78fa456ca03]::result::Result<(), rustc_span[3e5cf3424d44936d]::ErrorGuaranteed>>::{closure#1} as core[c06ff78fa456ca03]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
39: 0x75d556623a6b - std::sys::pal::unix::thread::Thread::new::thread_start::hcc78f3943333fa94
40: 0x75d550aa339d - <unknown>
41: 0x75d550b2849c - <unknown>
42: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.83.0 (90b35a623 2024-11-26) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [optimized_mir] optimizing MIR for `<impl at src/lib.rs:26:1: 26:9>::backends`
#1 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
```
</p>
</details>
| I-ICE,T-compiler,C-bug | low | Critical |
2,742,331,236 | vscode | Actions no longer same padded from part title vs. view title bar | Refs: https://github.com/microsoft/vscode/commit/276e24792198ecb7e83533d575568b66da722064
Changing the `padding` to `2px` just for the part titles makes the actions misaligned with other toolbars, such as the custom title. In this case, notice how a secondary sidebar below the custom title aligns the actions differently:

I think we need to be careful about spreading `margin` or `padding` changes into views only, so that we can ensure other locations still look the same.
Why not have the `2px` change on the toolbar itself? | bug,under-discussion,layout,papercut :drop_of_blood: | low | Minor |
2,742,363,838 | flutter | [Proposal] TextOverflow.ellipsis should be baselined for Japanese Text | ### Steps to reproduce
1. Create a Text widget:
- Add a Text widget containing a Japanese string that exceeds the width of its container.
- Japanese text: このテキストは日本語で非常に長い内容を含んでおり、スクリーン幅を超えると切り捨てられます。
2. Set the overflow property:
- Set the overflow property of the Text widget to TextOverflow.ellipsis.
3. Limit the widget size:
- Wrap the Text widget in a SizedBox to restrict its width.
```dart
const SizedBox(
height: 30,
width: 140,
child: Text(
'このテキストは日本語で非常に長い内容を含んでおり、スクリーン幅を超えると切り捨てられます。',
overflow: TextOverflow.ellipsis,
textDirection: TextDirection.ltr,
),
);
```
4. Observe the behavior:
- For Japanese text, the ellipsis (...) appears center-aligned within the truncated area as shown in the screenshot below.
Expected results | Actual results
:-------------------------:|:-------------------------:
 | The ellipsis (...) is supposed to be at the vertical bottom instead of the vertical center (tested in Jetpack Compose, where it looks as expected).
Expected : このテキストは日本...
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/widgets.dart';
void main() =>
runApp(
const Center(
child:
SizedBox(
height: 30,
width: 140,
child: Text('このテキストは日本語で非常に長い内容を含んでおり、スクリーン幅を超えると切り捨てられます。',
key: Key('title'),
overflow: TextOverflow.ellipsis,
textDirection: TextDirection.ltr,
),
),
),
);
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel master, 3.27.0-1.0.pre.181, on macOS 15.1.1 24B91 darwin-arm64, locale en-NP)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.1)
[✓] IntelliJ IDEA Community Edition (version 2023.1)
[✓] VS Code (version 1.96.0)
[✓] Connected device (4 available)
[✓] Network resources
```
</details>
| engine,a: internationalization,dependency: skia,a: typography,c: proposal,P3,team-engine,triaged-engine | low | Major |
2,742,399,495 | tauri | How do I change the system menu name and keep the original event | tauri 2.1.1 wants to implement multi-language switching of the system menu, but keep the original event of the system menu. Please give the implementation scheme.
The solution I am using now is to re-implement the menu items myself and add custom events to each of them to implement the original system menu events.
There is no way to just change the name of the system menu and also keep the original event.
Thank you! | type: documentation | low | Minor |
2,742,416,644 | electron | Read-only shared buffer (`ArrayBuffer`) shared from main process to renderer process | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
I'm interested in repeatedly passing large amounts of data between the main and renderer processes. Currently I send binary blobs from the main process to the renderer process but this adds significant overhead. I'm interested in sharing this data more directly somehow.
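For concreteness, this is roughly what the current approach looks like — a simplified sketch where the channel name and payload type are illustrative, not my actual code:

```ts
// main.ts (main process): each update is sent over IPC and copied into the renderer.
import { BrowserWindow } from 'electron';

export function pushFrame(win: BrowserWindow, frame: Uint8Array): void {
  // The payload is serialized with the structured clone algorithm on every call,
  // so the renderer always receives a fresh copy rather than shared memory.
  win.webContents.send('frame-update', frame);
}

// preload.ts (renderer side): receive each copy.
import { ipcRenderer } from 'electron';

ipcRenderer.on('frame-update', (_event, frame: Uint8Array) => {
  console.log('received', frame.byteLength, 'bytes');
});
```

Every `send` goes through structured-clone serialization, so large or frequent payloads get copied end-to-end, which is the overhead I'd like to avoid.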
### Proposed Solution
`Microsoft Edge WebView2` has introduced an API for sharing buffers in a similar way by sharing an `ArrayBuffer` and using messaging to let the renderer process know when the buffer is readable.
The documentation for the `WebView2` API is located at https://github.com/MicrosoftEdge/WebView2Feedback/blob/main/specs/SharedBuffer.md
This API seems like it would be compatible with Electron.
There is some more discussion about the API (including how it could work with atomics) in https://github.com/MicrosoftEdge/WebView2Feedback/issues/89
### Alternatives Considered
- Sending data repeatedly which is too slow for my needs
- Rendering part of the HTML in the main process, and sending the output to the renderer process instead - this is complicated because of compositing (e.g., https://github.com/electron/electron/issues/10547)
- Using `WebView2` instead - this would be a big move for my project, which is why I'm hoping we could add the API to Electron instead | enhancement :sparkles: | low | Major |
2,742,426,577 | pytorch | EXCEPTION : /python3.11/distutils/core.py | ### 🐛 Describe the bug
I'm facing the same issue with Python 3.10 and 3.11 as well, with the latest torch versions:
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
```
from torchvision import datasets
```
```
File "/my_workspace/src/dataloader.py", line 7, in <module>
from torchvision import datasets
File "/lib/python3.11/site-packages/torchvision/__init__.py", line 10, in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/torchvision/models/__init__.py", line 2, in <module>
from .convnext import *
File "/lib/python3.11/site-packages/torchvision/models/convnext.py", line 8, in <module>
from ..ops.misc import Conv2dNormActivation, Permute
File "/lib/python3.11/site-packages/torchvision/ops/__init__.py", line 23, in <module>
from .poolers import MultiScaleRoIAlign
File "/lib/python3.11/site-packages/torchvision/ops/poolers.py", line 10, in <module>
from .roi_align import roi_align
File "/lib/python3.11/site-packages/torchvision/ops/roi_align.py", line 7, in <module>
from torch._dynamo.utils import is_compile_supported
File "/lib/python3.11/site-packages/torch/_dynamo/__init__.py", line 3, in <module>
from . import convert_frame, eval_frame, resume_execution
File "/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 31, in <module>
from torch._dynamo.utils import CompileTimeInstructionCounter
File "/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1320, in <module>
if has_triton_package():
^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/torch/utils/_triton.py", line 9, in has_triton_package
from triton.compiler.compiler import triton_key
File "/lib/python3.11/site-packages/triton/__init__.py", line 8, in <module>
from .runtime import (
File "/lib/python3.11/site-packages/triton/runtime/__init__.py", line 1, in <module>
from .autotuner import (Autotuner, Config, Heuristics, autotune, heuristics)
File "/lib/python3.11/site-packages/triton/runtime/autotuner.py", line 9, in <module>
from ..testing import do_bench, do_bench_cudagraph
File "/lib/python3.11/site-packages/triton/testing.py", line 7, in <module>
from . import language as tl
File "/lib/python3.11/site-packages/triton/language/__init__.py", line 4, in <module>
from . import math
File "/lib/python3.11/site-packages/triton/language/math.py", line 1, in <module>
from . import core
File "/lib/python3.11/site-packages/triton/language/core.py", line 10, in <module>
from ..runtime.jit import jit
File "/lib/python3.11/site-packages/triton/runtime/jit.py", line 12, in <module>
from ..runtime.driver import driver
File "/lib/python3.11/site-packages/triton/runtime/driver.py", line 1, in <module>
from ..backends import backends
File "/lib/python3.11/site-packages/triton/backends/__init__.py", line 50, in <module>
backends = _discover_backends()
^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/triton/backends/__init__.py", line 44, in _discover_backends
driver = _load_module(name, os.path.join(root, name, 'driver.py'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/triton/backends/__init__.py", line 12, in _load_module
spec.loader.exec_module(module)
File "/lib/python3.11/site-packages/triton/backends/amd/driver.py", line 7, in <module>
from triton.runtime.build import _build
File "/lib/python3.11/site-packages/triton/runtime/build.py", line 8, in <module>
import setuptools
File "/lib/python3.11/site-packages/setuptools/__init__.py", line 8, in <module>
import _distutils_hack.override # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/_distutils_hack/override.py", line 1, in <module>
__import__('_distutils_hack').do_override()
File "/lib/python3.11/site-packages/_distutils_hack/__init__.py", line 77, in do_override
ensure_local_distutils()
File "/lib/python3.11/site-packages/_distutils_hack/__init__.py", line 64, in ensure_local_distutils
assert '_distutils' in core.__file__, core.__file__
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: /python3.11/distutils/core.py
```
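The failing assertion comes from setuptools' `_distutils_hack`, which expects `import distutils` to resolve to setuptools' vendored copy; the message shows it resolved to the stdlib copy at `/python3.11/distutils/core.py` instead. A quick way to see which copy wins, plus the setuptools environment knob that controls it (untested for this particular environment, so treat it as a diagnostic sketch rather than a fix):
```bash
# Print which distutils is picked up after setuptools is imported
python -c "import setuptools, distutils; print(distutils.__file__)"

# Ask setuptools to enforce its local (vendored) distutils before importing torch/torchvision
export SETUPTOOLS_USE_DISTUTILS=local
```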
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-192-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 45 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 8
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
Stepping: 7
CPU MHz: 2992.968
BogoMIPS: 5985.93
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 256 KiB
L1i cache: 256 KiB
L2 cache: 8 MiB
L3 cache: 286 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Vulnerable, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu | needs reproduction,triaged,module: third_party,oncall: pt2 | low | Critical |
2,742,434,889 | TypeScript | The parsed identifier is incomplete due to updateSourceFile. | ### 🔎 Search Terms
updateSourceFile
### 🕗 Version & Regression Information
5.6.0
### ⏯ Playground Link
_No response_
### 💻 Code
```ts
// Your code here
```
### 🙁 Actual behavior
```ts
class Test {
    testMethod() {
        let newLocal = "kk"
        let a = 123
        let b = 12
    }
}
```
After the value of variable `b` is changed, the `identifiers` set of the `SourceFile` returned by `updateSourceFile` contains only `Test`, `testMethod`, and `b`; the identifiers `a` and `newLocal` are lost.
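A sketch of how this can be driven through the public compiler API, reconstructed from the description above rather than taken from the report (`identifiers` is an internal property of `SourceFile`, hence the `any` cast):
```ts
import * as ts from "typescript";

const oldText = [
    "class Test {",
    "    testMethod() {",
    '        let newLocal = "kk"',
    "        let a = 123",
    "        let b = 12",
    "    }",
    "}",
].join("\n");

const oldFile = ts.createSourceFile("test.ts", oldText, ts.ScriptTarget.Latest, /*setParentNodes*/ true);

// Change only the value of `b` (12 -> 34); the edited span keeps the same length.
const editStart = oldText.lastIndexOf("12");
const newText = oldText.slice(0, editStart) + "34" + oldText.slice(editStart + 2);
const changeRange = ts.createTextChangeRange(ts.createTextSpan(editStart, 2), 2);

const newFile = ts.updateSourceFile(oldFile, newText, changeRange);

// Per the report, this prints only "Test", "testMethod" and "b"; "a" and "newLocal" are missing.
console.log([...(newFile as any).identifiers.keys()]);
```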
### 🙂 Expected behavior
The `identifiers` set of the `SourceFile` should contain `Test`, `testMethod`, `b`, `a`, and `newLocal`.
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Minor |
2,742,467,494 | tensorflow | [XLA] `tf.keras.layers.LSTM` behaves differently on GPU | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
nightly
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When executing the LSTM with **XLA** (`jit_compile=True`), it fails.
However, when executing it without XLA, it passes.
The failure above occurs on GPU.
If I use the CPU as the backend, it passes the check both with and without XLA.
### Standalone code to reproduce the issue
```python
import os
import tensorflow
import tensorflow as tf
tf.random.set_seed(42)
class RecurrentModel(tf.keras.Model):
def __init__(self):
super(RecurrentModel, self).__init__()
self.lstm = tf.keras.layers.LSTM(units=64, return_sequences=True)
@tf.function(jit_compile=True)
def call(self, x):
return self.lstm(x)
model = RecurrentModel()
input_shape = (10, 20, 1)
x = tf.random.normal(shape=input_shape)
inputs = [x]
output = model(*inputs)
print(output)
```
### Relevant log output
```shell
InvalidArgumentError Traceback (most recent call last)
<ipython-input-4-0938fdccd1fa> in <cell line: 24>()
22 inputs = [x]
23
---> 24 output = model(*inputs)
25 print(output)
1 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
51 try:
52 ctx.ensure_initialized()
---> 53 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
54 inputs, attrs, num_outputs)
55 except core._NotOkStatusException as e:
InvalidArgumentError: Exception encountered when calling RecurrentModel.call().
Detected unsupported operations when trying to compile graph __inference_call_877[_XlaMustCompile=true,config_proto=6001324581131673121,executor_type=11160318154034397263] on XLA_GPU_JIT: CudnnRNNV3 (No registered 'CudnnRNNV3' OpKernel for XLA_GPU_JIT devices compatible with node {{node lstm_3_1/CudnnRNNV3}}){{node lstm_3_1/CudnnRNNV3}}
The op is created at:
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
File "/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py", line 37, in <module>
File "/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py", line 992, in launch_instance
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py", line 619, in start
File "/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py", line 195, in start
File "/usr/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
File "/usr/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 685, in <lambda>
File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 738, in _run_callback
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 825, in inner
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 361, in process_one
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 261, in dispatch_shell
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 539, in execute_request
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
File "/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py", line 302, in do_execute
File "/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py", line 539, in run_cell
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 2975, in run_cell
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3030, in _run_cell
File "/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py", line 78, in _pseudo_sync_runner
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3257, in run_cell_async
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3473, in run_ast_nodes
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code
File "<ipython-input-4-0938fdccd1fa>", line 24, in <cell line: 24>
File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 117, in error_handler
File "/usr/local/lib/python3.10/dist-packages/keras/src/layers/layer.py", line 826, in __call__
File "/usr/local/lib/python3.10/dist-packages/keras/src/layers/layer.py", line 1376, in _maybe_build
File "/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/core.py", line 212, in compute_output_spec
File "<ipython-input-1-0938fdccd1fa>", line 13, in call
File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 117, in error_handler
File "/usr/local/lib/python3.10/dist-packages/keras/src/layers/layer.py", line 901, in __call__
File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 117, in error_handler
File "/usr/local/lib/python3.10/dist-packages/keras/src/ops/operation.py", line 46, in __call__
File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 156, in error_handler
File "/usr/local/lib/python3.10/dist-packages/keras/src/layers/rnn/lstm.py", line 570, in call
File "/usr/local/lib/python3.10/dist-packages/keras/src/layers/rnn/rnn.py", line 406, in call
File "/usr/local/lib/python3.10/dist-packages/keras/src/layers/rnn/lstm.py", line 537, in inner_loop
File "/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/rnn.py", line 841, in lstm
File "/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/rnn.py", line 933, in _cudnn_lstm
tf2xla conversion failed while converting __inference_call_877[_XlaMustCompile=true,config_proto=6001324581131673121,executor_type=11160318154034397263]. Run with TF_DUMP_GRAPH_PREFIX=/path/to/dump/dir and --vmodule=xla_compiler=2 to obtain a dump of the compiled functions. [Op:__inference_call_877]
Arguments received by RecurrentModel.call():
• x=tf.Tensor(shape=(10, 20, 1), dtype=float32)
```
| stat:awaiting tensorflower,type:bug,comp:gpu,comp:xla,TF 2.18 | medium | Critical |
2,742,495,815 | vscode | Should View use `canToggleVisibility` | The following views set `canToggleVisibility` to false:
- Comments
- Chat
- Edits
- Popular Extensions View
- Explorer
- Search
- Testing
- UserDataSyncViews: Conflicts
I believe we should keep it only for `Explorer` and `UserDataSyncViews: Conflicts`?
@sandy081 why are you using it for the `Popular` extensions view?
// cc @bpasero | polish,workbench-views | low | Minor |
2,742,497,166 | flutter | VoiceOver reads the items disordered in a scrollable view | ### Steps to reproduce
1. Run the provided example in macOS
2. Activate VoiceOver
3. Navigate the items with VoiceOver (control + option + right arrow)
4. Notice how the order of the items is not respected
### Expected results
When using VoiceOver in a scrollable view I expect all items to be read in order.
### Actual results
When using VoiceOver in a scrollable view the items are not read in order, some of them even being unreachable.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
void main() {
runApp(const App());
}
class App extends StatelessWidget {
const App({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: SingleChildScrollView(
child: Column(
crossAxisAlignment: CrossAxisAlignment.stretch,
children: List.generate(20, (index) => _Item(index: index)),
),
),
),
);
}
}
class _Item extends StatelessWidget {
const _Item({required this.index});
final int index;
@override
Widget build(BuildContext context) {
debugDumpSemanticsTree();
return ColoredBox(
color: Colors.primaries[index % Colors.primaries.length],
child: SizedBox.square(
dimension: 200,
child: Center(child: Text('Item $index')),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/e82bce35-2716-4353-8d7b-229e23dd3f98
</details>
### Logs
<details open><summary>Logs</summary>
```console
flutter: SemanticsNode#0
flutter: │ Rect.fromLTRB(0.0, 0.0, 800.0, 600.0)
flutter: │
flutter: └─SemanticsNode#1
flutter: │ Rect.fromLTRB(0.0, 0.0, 800.0, 600.0)
flutter: │ textDirection: ltr
flutter: │
flutter: └─SemanticsNode#2
flutter: │ Rect.fromLTRB(0.0, 0.0, 800.0, 600.0)
flutter: │ sortKey: OrdinalSortKey#46a7b(order: 0.0)
flutter: │
flutter: └─SemanticsNode#3
flutter: │ Rect.fromLTRB(0.0, 0.0, 800.0, 600.0)
flutter: │ flags: scopesRoute
flutter: │
flutter: └─SemanticsNode#4
flutter: │ Rect.fromLTRB(0.0, 0.0, 800.0, 600.0)
flutter: │ actions: scrollUp
flutter: │ flags: hasImplicitScrolling
flutter: │ scrollExtentMin: 0.0
flutter: │ scrollPosition: 0.0
flutter: │ scrollExtentMax: 3400.0
flutter: │
flutter: ├─SemanticsNode#5
flutter: │ Rect.fromLTRB(378.4, 90.0, 421.6, 110.0)
flutter: │ label: "Item 0"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#6
flutter: │ Rect.fromLTRB(379.6, 290.0, 420.4, 310.0)
flutter: │ label: "Item 1"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#7
flutter: │ Rect.fromLTRB(378.6, 490.0, 421.4, 510.0)
flutter: │ label: "Item 2"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#8
flutter: │ Rect.fromLTRB(378.4, 690.0, 421.6, 710.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 3"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#9
flutter: │ Rect.fromLTRB(378.3, 890.0, 421.7, 910.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 4"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#10
flutter: │ Rect.fromLTRB(378.5, 1090.0, 421.5, 1110.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 5"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#11
flutter: │ Rect.fromLTRB(378.4, 1290.0, 421.6, 1310.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 6"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#12
flutter: │ Rect.fromLTRB(378.8, 1490.0, 421.2, 1510.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 7"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#13
flutter: │ Rect.fromLTRB(378.4, 1690.0, 421.6, 1710.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 8"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#14
flutter: │ Rect.fromLTRB(378.4, 1890.0, 421.6, 1910.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 9"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#15
flutter: │ Rect.fromLTRB(375.0, 2090.0, 425.0, 2110.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 10"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#16
flutter: │ Rect.fromLTRB(376.2, 2290.0, 423.8, 2310.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 11"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#17
flutter: │ Rect.fromLTRB(375.2, 2490.0, 424.8, 2510.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 12"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#18
flutter: │ Rect.fromLTRB(375.1, 2690.0, 424.9, 2710.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 13"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#19
flutter: │ Rect.fromLTRB(374.9, 2890.0, 425.1, 2910.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 14"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#20
flutter: │ Rect.fromLTRB(375.1, 3090.0, 424.9, 3110.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 15"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#21
flutter: │ Rect.fromLTRB(375.0, 3290.0, 425.0, 3310.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 16"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#22
flutter: │ Rect.fromLTRB(375.5, 3490.0, 424.5, 3510.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 17"
flutter: │ textDirection: ltr
flutter: │
flutter: ├─SemanticsNode#23
flutter: │ Rect.fromLTRB(375.0, 3690.0, 425.0, 3710.0)
flutter: │ flags: isHidden
flutter: │ HIDDEN
flutter: │ label: "Item 18"
flutter: │ textDirection: ltr
flutter: │
flutter: └─SemanticsNode#24
flutter: Rect.fromLTRB(375.0, 3890.0, 425.0, 3910.0)
flutter: flags: isHidden
flutter: HIDDEN
flutter: label: "Item 19"
flutter: textDirection: ltr
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.22.3, on macOS 14.7.1 23H222 darwin-arm64, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[!] Xcode - develop for iOS and macOS (Xcode 16.2)
! iOS 18.2 Simulator not installed; this may be necessary for iOS and macOS development.
To download and install the platform, open Xcode, select Xcode > Settings > Platforms,
and click the GET button for the required platform.
For more information, please visit:
https://developer.apple.com/documentation/xcode/installing-additional-simulator-runtimes
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.96.0)
[✓] Connected device (4 available)
[✓] Network resources
! Doctor found issues in 1 category.
```
</details>
| framework,a: accessibility,platform-mac,f: scrolling,has reproducible steps,P2,team-macos,triaged-macos,found in release: 3.27,found in release: 3.28 | low | Critical |
2,742,528,686 | ui | [bug]: Dropdown Fails to Reopen After Closing Dialog | ### Describe the bug
When a dropdown is opened and a dialog is subsequently opened, the dropdown becomes unresponsive after the dialog is closed. Attempting to interact with the dropdown again does not reopen it. This behavior seems to indicate a conflict between the dropdown and dialog components.
https://github.com/user-attachments/assets/b66edf9e-5e01-47ac-aa7b-6bd1aeba85c7
### Affected component/components
Dropdown menu, Dialog
### How to reproduce
1. Open a dropdown component.
2. While the dropdown is open, open a dialog component.
3. Close the dialog.
4. Attempt to reopen the dropdown.
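A minimal sketch of the structure implied by these steps (assumed markup, not taken from the report; the import paths follow the standard shadcn/ui setup):
```tsx
"use client";

import { Button } from "@/components/ui/button";
import { Dialog, DialogContent, DialogTrigger } from "@/components/ui/dialog";
import {
  DropdownMenu,
  DropdownMenuContent,
  DropdownMenuItem,
  DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu";

export function DropdownWithDialog() {
  return (
    <Dialog>
      <DropdownMenu>
        <DropdownMenuTrigger asChild>
          <Button variant="outline">Open menu</Button>
        </DropdownMenuTrigger>
        <DropdownMenuContent>
          {/* The dialog is launched from inside the open dropdown (step 2) */}
          <DialogTrigger asChild>
            <DropdownMenuItem>Open dialog</DropdownMenuItem>
          </DialogTrigger>
        </DropdownMenuContent>
      </DropdownMenu>
      <DialogContent>
        {/* After closing this dialog (step 3), the dropdown no longer reopens (step 4) */}
        Dialog body
      </DialogContent>
    </Dialog>
  );
}
```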
### Codesandbox/StackBlitz link
https://ui.shadcn.com/docs/components/dropdown-menu
### Logs
_No response_
### System Info
```bash
OS: Ubuntu, Browser: Google Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,742,536,710 | pytorch | RFC: Dynamically Quantized 4 bit matmul API and usage | # 4-Bit Dynamically Quantized Matrix Multiplication in PyTorch
This RFC introduces two new operations to enable efficient 4-bit weight quantization and matrix multiplication in PyTorch. These operations provide a mechanism for low-precision arithmetic to be used for both training and inference, improving performance and reducing memory usage. The two new operations are:
- `torch.ops.aten._dyn_quant_pack_4bit_weight` - Packs the quantized weights, scales, and bias for a Linear layer into a compact format using 4-bit symmetric quantization.
- `torch.ops.aten._dyn_quant_matmul_4bit` - Performs matrix multiplication using quantized weights, optimized for 4-bit precision.
## 1. `torch.ops.aten._dyn_quant_pack_4bit_weight`
This operation is used to pack the quantized weights, scales, and optional bias for a Linear layer. The function expects 4-bit quantized weights and returns a packed representation.
### **Parameters:**
- **`weight`** (`Tensor`): The original weights of the Linear layer.
- **`scales_and_zeros`** (`Tensor`): A tensor containing the quantization scales for each group. The tensor has the shape `[num_groups]`.
- **`bias`** (`Tensor`, optional): The bias tensor for the Linear layer. This parameter is optional.
- **`groupsize`** (`int`): The number of channels per group. The value must be a multiple of 32 or equal to `in_features`.
- **`in_features`** (`int`): The number of input features (the size of the input tensor).
- **`out_features`** (`int`): The number of output features (the size of the output tensor).
### **Returns:**
A tensor representing the packed 4-bit weights and scales, which can be passed to the matrix multiplication operation.
---
## 2. `torch.ops.aten._dyn_quant_matmul_4bit`
This operation performs matrix multiplication using the quantized weights in 4-bit precision, optimized for efficient execution.
### **Parameters:**
- **`input`** (`Tensor`): The input tensor for the matrix multiplication, typically with shape `[batch_size, in_features]`.
- **`packed_weights`** (`Tensor`): The packed 4-bit weights, returned by `torch.ops.aten._dyn_quant_pack_4bit_weight`.
- **`groupsize`** (`int`): The number of channels per group. The value must be a multiple of 32 or equal to `in_features`.
- **`in_features`** (`int`): The number of input features (same as the Linear layer's `in_features`).
- **`out_features`** (`int`): The number of output features (same as the Linear layer's `out_features`).
### **Returns:**
A tensor representing the result of the matrix multiplication, with shape `[batch_size, out_features]`.
---
## API Usage Example
A full example of how to use these operations for quantization, execution, and benchmarking is provided in a comment below this RFC.
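In the meantime, here is a rough illustration of the call sequence only — the tensor shapes, dtypes, and the already-quantized inputs below are assumptions for the sketch, not part of the API contract described above:
```python
import torch

in_features, out_features, groupsize = 128, 256, 32
batch_size = 8

# Assumed inputs: a 4-bit (uint8-packed) quantized weight, per-group scales, and a bias.
# In practice these come from a symmetric, group-wise quantization flow.
weight_q4 = torch.randint(0, 256, (out_features, in_features // 2), dtype=torch.uint8)
scales = torch.rand(out_features * (in_features // groupsize), dtype=torch.float32)
bias = torch.randn(out_features, dtype=torch.float32)

packed = torch.ops.aten._dyn_quant_pack_4bit_weight(
    weight_q4, scales, bias, groupsize, in_features, out_features
)

x = torch.randn(batch_size, in_features)  # [batch_size, in_features]
y = torch.ops.aten._dyn_quant_matmul_4bit(
    x, packed, groupsize, in_features, out_features
)
print(y.shape)  # expected: [batch_size, out_features]
```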
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | oncall: quantization | low | Major |
2,742,587,229 | next.js | Unexpected token Delim('$') at [project]/app/globals.css:757:29 | ### Link to the code that reproduces this issue
https://github.com/J4v4Scr1pt/ThnJK/tree/upgradeToNextjs15andReact19
### To Reproduce
Start the project with `--turbopack` enabled and navigate to any page. As soon as it compiles, it will fail with the error below.
### Current vs. Expected behavior
Current:
Compilation fails as soon as Turbopack tries to compile a page you navigate to.
Expected:
It should work as it does with webpack.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 32470
Available CPU cores: 12
Binaries:
Node: 22.12.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.15.0
Relevant Packages:
next: 15.1.0 // Latest available version is detected (15.1.0).
eslint-config-next: 15.1.0
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local), Other (Deployed)
### Additional context
I have some custom Tailwind configuration that used to work with Next.js and webpack. With Next.js 15 and Turbopack being stable, I wanted to use it, but then I get this error:

```
⨯ ./app/globals.css:758:30
Parsing css source code failed
756 | --background: ({ opacityVariable, opacityValue }) => {
757 | if (!isNaN(+opacityValue)) {
> 758 | return `hsl(var(${nextuiColorVariable}) / ${opacityValue})`;
| ^
759 | }
760 | if (opacityVariable) {
761 | return `hsl(var(${nextuiColorVariable}) / var(${nextuiOpacityVariable}, var(${opacityVariable})))`;
Unexpected token Delim('$') at [project]/app/globals.css:757:29
```
I can only fix the issue by removing the method or using webpack, which is not what I want :). | Turbopack | low | Critical |
2,742,597,433 | vscode | Scroll glitch in resolve conflicts view | Testing #5648
https://github.com/microsoft/vscode/assets/964386/1a172e05-8cda-4a9c-bbe8-d972d0cf2872
| bug,merge-editor | low | Minor |
2,742,610,367 | vscode | Ability to Automatically Expand Replies in commentThread |
Hello,
I’m developing a VSCode extension that uses the commentThread feature. According to the [official documentation](https://code.visualstudio.com/api/references/vscode-api#CommentThread), it is possible to add comments and replies to a thread. However, I’ve noticed that replies are collapsed by default in the UI, and there doesn’t seem to be a way to control this behavior programmatically.
Feature Request:
I would like to request a feature that allows developers to control whether replies in a commentThread are expanded or collapsed by default. For example, an additional property like isRepliesExpanded: boolean in the CommentThread API would be extremely helpful.

| feature-request,comments | low | Minor |
2,742,620,509 | flutter | CarouselView.weighted scrolls fast with higher weights | ### Steps to reproduce
1. Create a CarouselView.weighted with flexWeight [7, 1].
2. Examine the scroll speed; it feels pretty natural.
3. Change the flexWeight to [1, 7].
4. The scroll speed is now way too fast.
### Expected results
Scroll speed should always be natural and not depend on the flexWeight.
### Actual results
The scroll speed is way too fast if the flexWeight starts with a low number.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:go_router/go_router.dart';
class KoalaPage extends ConsumerWidget {
const KoalaPage({super.key});
/// Number of koala images shown
static const imageCount = 10;
@override
Widget build(BuildContext context, WidgetRef ref) {
return Scaffold(
appBar: AppBar(title: const Text('koala')),
body: SingleChildScrollView(
child: Column(
spacing: 16,
children: [
const Text('FlexWeight [7, 1]'),
ConstrainedBox(
constraints: BoxConstraints(
maxHeight: MediaQuery.sizeOf(context).height / 3,
),
child: CarouselView.weighted(
flexWeights: const [7, 1],
itemSnapping: true,
onTap: (index) => context.go('/koala/detail/koala_$index'),
children: [
for (var i = 0; i < imageCount; i++)
Container(
color: Colors.primaries[i % Colors.primaries.length],
),
],
),
),
const Text('FlexWeight [1, 7]'),
ConstrainedBox(
constraints: BoxConstraints(
maxHeight: MediaQuery.sizeOf(context).height / 3,
),
child: CarouselView.weighted(
flexWeights: const [1, 7],
itemSnapping: true,
onTap: (index) => context.go('/koala/detail/koala_$index'),
children: [
for (var i = 0; i < imageCount; i++)
Container(
color: Colors.primaries[i % Colors.primaries.length],
),
],
),
),
],
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/42308baf-ccd1-4536-a771-47b957d68146
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor -v
[✓] Flutter (Channel stable, 3.27.0, on macOS 15.2 24C101 darwin-arm64, locale en-CH)
• Flutter version 3.27.0 on channel stable at /Users/hoschi/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 8495dee1fd (6 days ago), 2024-12-10 14:23:39 -0800
• Engine revision 83bacfc525
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/hoschi/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /Users/hoschi/Library/Android/sdk
• Java binary at: /Users/hoschi/.sdkman/candidates/java/17.0.7-tem/bin/java
• Java version OpenJDK Runtime Environment Temurin-17.0.7+7 (build 17.0.7+7)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
[✓] Connected device (5 available)
• Pixel 7 (mobile) • 29051FDH2003HK • android-arm64 • Android 15 (API 35)
• iPhone 16 (mobile) • 045606BF-A743-4B8B-85D6-F13FB09F7DF4 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-1 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.140
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,f: material design,f: scrolling,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28 | low | Major |
2,742,648,409 | kubernetes | `AuthorizedTTL`, `UnauthorizedTTL` in `apiserver.config.k8s.io/v1{alpha1,beta1}.AuthorizationConfiguration` cannot be set to `0` | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/blob/16da2955d0ffeb7fcdfd7148ef2fb6c1ce1a9ef5/staging/src/k8s.io/apiserver/pkg/apis/apiserver/v1/types.go#L95-L104 suggests that the `AuthorizedTTL` and `UnauthorizedTTL` fields behave like the "legacy" `--authorization-webhook-cache-{un}authorized-ttl` flags. Yet, since they are not pointer types, they get defaulted if they are unset or explicitly set to `0` (which isn't the case when using the flags): https://github.com/kubernetes/kubernetes/blob/16da2955d0ffeb7fcdfd7148ef2fb6c1ce1a9ef5/staging/src/k8s.io/apiserver/pkg/apis/apiserver/v1/defaults.go#L52-L59
This prevents disabling caching, since setting it to `0` leads to defaulting. A workaround is to set a very low duration (`1ns` or similar), but it would be cleaner if the fields could simply be set to `0`.
Shouldn't we make them `*metav1.Duration` so that we can properly distinguish the cases where they are unset (and should be defaulted) and where they were explicitly set to `0`?
_Edit: Since this proposal is a breaking API change ([ref](https://github.com/kubernetes/kubernetes/pull/129237#issuecomment-2590367515)), another approach is [introducing dedicated fields](https://github.com/kubernetes/kubernetes/pull/129237#issuecomment-2591263513) for disabling caching explicitly._
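A minimal sketch of the pointer-based variant (the type and defaulting function only mirror the existing config API for illustration; the 5m/30s values are the legacy flag defaults):
```go
package v1

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Sketch only – not the actual upstream type.
type WebhookConfiguration struct {
	// nil means "unset" and gets defaulted; an explicit 0 disables caching,
	// matching the legacy --authorization-webhook-cache-{un}authorized-ttl flags.
	AuthorizedTTL   *metav1.Duration `json:"authorizedTTL,omitempty"`
	UnauthorizedTTL *metav1.Duration `json:"unauthorizedTTL,omitempty"`
}

func SetDefaults_WebhookConfiguration(obj *WebhookConfiguration) {
	if obj.AuthorizedTTL == nil {
		obj.AuthorizedTTL = &metav1.Duration{Duration: 5 * time.Minute}
	}
	if obj.UnauthorizedTTL == nil {
		obj.UnauthorizedTTL = &metav1.Duration{Duration: 30 * time.Second}
	}
}
```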
### Why is this needed?
Prevent caching the decisions at all, similar to how it was possible before by setting the legacy `--authorization-webhook-cache-{un}authorized-ttl` flags to `0`. | kind/feature,sig/auth,needs-triage | low | Minor |
2,742,673,023 | pytorch | `bias=False` fails in `Transformer` when `batch_first=True` and in eval mode | ### 🐛 Describe the bug
```
import torch
from torch import nn
transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12, bias=False, batch_first=True)
src = torch.rand((10, 32, 512))
tgt = torch.rand((10, 32, 512))
transformer_model.eval()
out = transformer_model(src, tgt)
```
```
Traceback (most recent call last):
File "/Users/adam.amster/Library/Application Support/JetBrains/PyCharm2024.3/scratches/scratch_35.py", line 9, in <module>
out = transformer_model(src, tgt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 206, in forward
memory = self.encoder(src, mask=src_mask, src_key_padding_mask=src_key_padding_mask,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 391, in forward
output = mod(output, src_mask=mask, is_causal=is_causal, src_key_padding_mask=src_key_padding_mask_for_layers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 676, in forward
elif not all((x.device.type in _supported_device_type) for x in tensor_args):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/adam.amster/PycharmProjects/seq2seq translation/.venv/lib/python3.11/site-packages/torch/nn/modules/transformer.py", line 676, in <genexpr>
elif not all((x.device.type in _supported_device_type) for x in tensor_args):
^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'device'
```
Note that it works fine in train mode.
### Versions
```
PyTorch version: 2.2.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (x86_64)
GCC version: Could not collect
Clang version: 14.0.3 (clang-1403.0.22.14.1)
CMake version: version 3.24.0
Libc version: N/A
Python version: 3.11.9 (main, Apr 2 2024, 08:25:04) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime)
Python platform: macOS-14.5-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.2.2
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.4.0.post0
[pip3] torchtext==0.17.2
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,742,708,706 | vscode | onDidChangeConfiguration event fires for a window scope setting | 1. Create a `window` scope configuration setting
```json
"git.decorations.enabled": {
"type": "boolean",
"default": true,
"description": "%config.decorations.enabled%"
},
```
2. Use the following snippet to listen for configuration changes
```ts
const onEnablementChange = filterEvent(workspace.onDidChangeConfiguration, e => e.affectsConfiguration('git.decorations.enabled'));
onEnablementChange(this.update, this, this.disposables);
```
3. Open the user settings file and add the following JSON
```json
"git.decorations.enabled": true
```
Actual: event fires even though the value did not change
Expected: event should not fire since the value did not change | api,config,under-discussion | low | Minor |
2,742,808,585 | vscode | Setting http.proxy in Client VSCode will affect Copilot running in SSH - Remote server. |
Setting `http.proxy` in the client VS Code will affect Copilot running on an SSH - Remote server.
- Copilot Chat Extension Version: v1.219.0
- VSCode Version: 1.93.0-insider (user setup)
- Logs:
```
2024-08-05 11:28:35.616 [error] [auth] auth: FetchError: tunneling socket could not be established, cause=connect ECONNREFUSED 127.0.0.1:7890
at fetch (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/node_modules/@adobe/helix-fetch/src/fetch/index.js:99:11)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at cachingFetch (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/node_modules/@adobe/helix-fetch/src/fetch/index.js:288:16)
at XO.fetch (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/lib/src/network/helix.ts:96:22)
at fetchCopilotToken (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/lib/src/auth/copilotToken.ts:197:16)
at authFromGitHubToken (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/lib/src/auth/copilotToken.ts:130:22)
at auth (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/extension/src/auth.ts:40:25)
at authShowWarnings (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/extension/src/auth.ts:131:25)
at EM.getCopilotToken (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/extension/src/auth.ts:174:33)
at attemptAuthentication (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/extension/src/auth.ts:58:9) {
type: 'system',
_name: 'FetchError',
code: 'ECONNREFUSED',
errno: -111,
erroredSysCall: 'connect'
}
2024-08-05 11:28:35.737 [error] [auth] tunneling socket could not be established, cause=connect ECONNREFUSED 127.0.0.1:7890
2024-08-05 11:28:35.907 [error] [default] Error sending telemetry FetchError: tunneling socket could not be established, cause=connect ECONNREFUSED 127.0.0.1:7890
at fetch (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/node_modules/@adobe/helix-fetch/src/fetch/index.js:99:11)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at cachingFetch (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/node_modules/@adobe/helix-fetch/src/fetch/index.js:288:16)
at XO.fetch (/home/ubuntu/.vscode-server-insiders/extensions/github.copilot-1.219.0/lib/src/network/helix.ts:96:22) {
type: 'system',
_name: 'FetchError',
code: 'ECONNREFUSED',
errno: -111,
erroredSysCall: 'connect'
}
```
Steps to Reproduce:
1. Install VSCode Insiders and start with pure profile.
2. Set `"http.proxy": "http://127.0.0.1:7890"` in client VSCode
3. Prepare an test SSH - Remote server, and Connect to it
4. Install Copilot on server VSCode
5. It throws error: `Error sending telemetry FetchError: tunneling socket could not be established, cause=connect ECONNREFUSED 127.0.0.1:7890`
6. Unset `"http.proxy": "http://127.0.0.1:7890"` in client VSCode
7. The error disappeared.
Here is the video:
https://github.com/user-attachments/assets/659836d0-78ab-4517-af4d-7902ef535bdd
| feature-request,network | low | Critical |
2,742,821,482 | rust | Function parameter type resolution incorrect when multiple generics used | I tried this code:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=9788298bae9b348de0833cd9b125c70e
```rust
use smallvec::SmallVec;
use std::iter::repeat_n;
use std::iter::RepeatN;
use std::ops::Index;
#[derive(Debug, Copy, Clone)]
pub enum EndMasks {
L(u8),
R(u8),
BOTH(u8, u8),
}
#[derive(Debug, Copy, Clone)]
pub struct FindResults {
index: usize,
end_masks: EndMasks,
}
pub enum FindEnds<F>
where
F: Fn(u8, u8) -> Option<(u8, u8)>,
{
Either(u8, u8),
Both(F),
}
pub trait LenTrait {
fn len(&self) -> usize;
}
impl LenTrait for [u8] {
fn len(&self) -> usize {
self.len()
}
}
impl LenTrait for Vec<u8> {
fn len(&self) -> usize {
self.len()
}
}
#[derive(Debug, Clone, Copy)]
pub struct BMRepeatPattern {
byte: u8,
len: usize,
}
impl BMRepeatPattern {
#[inline]
pub fn new(byte: u8, len: usize) -> BMRepeatPattern {
BMRepeatPattern { byte, len }
}
}
impl LenTrait for BMRepeatPattern {
#[inline]
fn len(&self) -> usize {
self.len
}
}
impl Index<usize> for BMRepeatPattern {
type Output = u8;
#[inline]
fn index(&self, idx: usize) -> &u8 {
assert!(idx < self.len);
&self.byte
}
}
impl<'a> IntoIterator for &'a BMRepeatPattern {
type Item = &'a u8;
type IntoIter = RepeatN<&'a u8>;
fn into_iter(self) -> Self::IntoIter {
repeat_n(&self.byte, self.len)
}
}
#[derive(Debug, Clone, Copy)]
pub struct BMRepeatBadCharShiftMap {
pattern: BMRepeatPattern,
}
impl BMRepeatBadCharShiftMap {
#[inline]
pub fn new(pattern: BMRepeatPattern) -> Self {
Self { pattern }
}
}
impl Index<u8> for BMRepeatBadCharShiftMap {
type Output = usize;
fn index(&self, index: u8) -> &Self::Output {
if self.pattern.byte == index {
&0
} else {
&self.pattern.len
}
}
}
#[derive(Debug)]
pub struct BMRepeat {
bad_char_shift_map: BMRepeatBadCharShiftMap,
pattern: BMRepeatPattern,
}
impl BMRepeat {
pub fn new(byte: u8, len: usize) -> BMRepeat {
let pattern = BMRepeatPattern::new(byte, len);
let bad_char_shift_map = BMRepeatBadCharShiftMap { pattern };
BMRepeat {
bad_char_shift_map,
pattern,
}
}
}
impl BMRepeat {
pub fn find_first_in<'a, T, F>(&'a self, text: &'a T, e: FindEnds<F>) -> Option<FindResults>
where
T: Index<usize, Output = u8> + LenTrait + ?Sized,
&'a T: IntoIterator<Item = &'a u8>,
<&'a T as IntoIterator>::IntoIter: Sized + DoubleEndedIterator + ExactSizeIterator,
F: Fn(u8, u8) -> Option<(u8, u8)>,
{
find_spec(text, &self.pattern, &self.bad_char_shift_map, 1, e)
.first()
.copied()
}
}
pub fn find_spec<'a, TT: 'a, TP: 'a, F>(
text: &'a TT,
pattern: &'a TP,
bad_char_shift_map: &BMRepeatBadCharShiftMap,
limit: usize,
e: FindEnds<F>,
) -> SmallVec<[FindResults; 1]>
where
TT: Index<usize, Output = u8> + LenTrait + ?Sized,
&'a TT: IntoIterator<Item = &'a u8>,
<&'a TT as IntoIterator>::IntoIter: Sized + DoubleEndedIterator + ExactSizeIterator,
TP: Index<usize, Output = u8> + LenTrait + ?Sized,
&'a TP: IntoIterator<Item = &'a u8>,
<&'a TP as IntoIterator>::IntoIter: Sized + DoubleEndedIterator + ExactSizeIterator,
F: Fn(u8, u8) -> Option<(u8, u8)>,
{
unimplemented!()
}
fn main() {
}
```
I expected to see this happen:
Compiler correctly identifies the types used in the function
Instead, this happened:
```
error[E0308]: mismatched types
--> src/main.rs:136:25
|
129 | pub fn find_first_in<'a, T, F>(&'a self, text: &'a T, e: FindEnds<F>) -> Option<FindResults>
| - expected this type parameter
...
136 | find_spec(text, &self.pattern, &self.bad_char_shift_map, 1, e)
| --------- ^^^^^^^^^^^^^ expected `&T`, found `&BMRepeatPattern`
| |
| arguments to this function are incorrect
|
= note: expected reference `&T`
found reference `&BMRepeatPattern`
note: function defined here
--> src/main.rs:142:8
|
142 | pub fn find_spec<'a, TT: 'a, TP: 'a, F>(
| ^^^^^^^^^
143 | text: &'a TT,
144 | pattern: &'a TP,
| ---------------
```
This error is fixed if I explicitly specify the type parameters for the call:
```rust
pub fn find_first_in<'a, T, F>(&'a self, text: &'a T, e: FindEnds<F>) -> Option<FindResults>
where
T: Index<usize, Output = u8> + LenTrait + ?Sized,
&'a T: IntoIterator<Item = &'a u8>,
<&'a T as IntoIterator>::IntoIter: Sized + DoubleEndedIterator + ExactSizeIterator,
F: Fn(u8, u8) -> Option<(u8, u8)>,
{
find_spec::<T, BMRepeatPattern, F>(text, &self.pattern, &self.bad_char_shift_map, 1, e)
.first()
.copied()
}
```
### Meta
`rustc --version --verbose`:
```
rustc --version --verbose
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: aarch64-apple-darwin
release: 1.83.0
LLVM version: 19.1.1
```
This behavior also exists in Beta and Nightly.
| T-compiler,C-bug,T-types,needs-triage | low | Critical |
2,742,822,253 | TypeScript | Organize Import removes React import on React JS files |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.96.0
- OS Version: Windows_NT x64 10.0.22631
Version: 1.96.0 (user setup)
Commit: 138f619c86f1199955d53b4166bef66ef252935c
Date: 2024-12-11T02:29:09.626Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.22631
Steps to Reproduce:
1. Open a JS file containing a React import
2. Have Organize Imports as a code action on save
3. See the React import get removed, despite it being used
Looks like this got re-introduced in latest update microsoft/vscode#47287 | Needs More Info | low | Critical |
2,742,825,299 | go | runtime: deadlock detection does not work properly in js/wasm | ### Go version
go version go1.23.3 windows/amd64
### Output of `go env` in your module/workspace:
```shell
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\zxilly\AppData\Local\go-build
set GOENV=C:\Users\zxilly\AppData\Roaming\go\env
set GOEXE=.exe
set GOEXPERIMENT=
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOINSECURE=
set GOMODCACHE=C:\Users\zxilly\go\pkg\mod
set GONOPROXY=1
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\zxilly\go
set GOPRIVATE=
set GOPROXY=https://proxy.golang.org,direct
set GOROOT=C:\Program Files\Go
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLCHAIN=auto
set GOTOOLDIR=C:\Program Files\Go\pkg\tool\windows_amd64
set GOVCS=
set GOVERSION=go1.23.3
set GODEBUG=
set GOTELEMETRY=local
set GOTELEMETRYDIR=C:\Users\zxilly\AppData\Roaming\go\telemetry
set GCCGO=gccgo
set GOAMD64=v1
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=NUL
set GOWORK=
set CGO_CFLAGS=-O2 -g
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-O2 -g
set CGO_FFLAGS=-O2 -g
set CGO_LDFLAGS=-O2 -g
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=C:\Users\zxilly\AppData\Local\Temp\go-build192904976=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
Compile and run the following code:
```go
package main
func main() {
select {}
}
```
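For reference, the build/run sequence for the `js/wasm` case looks like this (the output name matches the log below; on Windows the environment variables would be set via `$env:` or `set` instead of inline):
```sh
GOOS=js GOARCH=wasm go build -o t.wasm test.go
node "$(go env GOROOT)/misc/wasm/wasm_exec_node.js" t.wasm
```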
### What did you see happen?
This will lead to
```
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [select (no cases)]:
main.main()
T:/wasm/test.go:4 +0xf
```
on `windows/amd64`.
However, on a `js/wasm` build, it will lead to
```
PS T:\wasm> node "C:\Program Files\Go\misc\wasm\wasm_exec_node.js" t.wasm
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x0 addr=0x0 pc=0x0]
goroutine 5 [running]:
panic({0xbc20, 0xc2280})
C:/Program Files/Go/src/runtime/panic.go:804 +0x23
runtime.panicmem(...)
C:/Program Files/Go/src/runtime/panic.go:262
runtime.sigpanic()
C:/Program Files/Go/src/runtime/os_wasm.go:30 +0xb
runtime.handleEvent()
C:/Program Files/Go/src/runtime/lock_js.go:289 +0x14
runtime.goexit({})
C:/Program Files/Go/src/runtime/asm_wasm.s:434 +0x1
```
### What did you expect to see?
Similar output on the wasm platform | NeedsInvestigation,arch-wasm,compiler/runtime | low | Critical |
2,742,867,505 | flutter | Creating an empty `Vertices` object causes an exception on web (canvaskit) | ## Description
If you attempt to create a Vertices object which doesn't have any vertices in it, the following exception is thrown, which didn't use to be the case. This is currently blocking the Flutter roll, see cl/705912913 for details.
```
DartError: TypeError: null: type 'Null' is not a subtype of type 'JavaScriptObject'
dart-sdk/lib/_internal/js_dev_runtime/private/ddc_runtime/errors.dart 307:3 throw_
errors.dart:307
dart-sdk/lib/_internal/js_dev_runtime/private/profile.dart 117:39 _failedAsCheck
profile.dart:117
dart-sdk/lib/_internal/js_shared/lib/rti.dart 1554:3 _generalAsCheckImplementation
rti.dart:1554
dart-sdk/lib/_internal/js_shared/lib/js_util_patch.dart 110:3 callMethod$
js_util_patch.dart:110
lib/_engine/engine/canvaskit/canvaskit_api.dart 95:8 CanvasKitExtension.MakeVertices
canvaskit_api.dart:95
lib/_engine/engine/canvaskit/vertices.dart 82:45 __
vertices.dart:82
lib/_engine/engine/canvaskit/vertices.dart 66:23 raw
vertices.dart:66
lib/_engine/engine/canvaskit/renderer.dart 124:18 createVerticesRaw
renderer.dart:124
lib/ui/canvas.dart 45:28 raw
canvas.dart:45
packages/manual_tests/simple_test.dart 7:41 main
simple_test.dart:7
web_entrypoint.dart 24:31 <fn>
web_entrypoint.dart:24
lib/ui_web/ui_web/initialization.dart 41:15 <fn>
initialization.dart:41
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 622:19 <fn>
async_patch.dart:622
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 647:23 <fn>
async_patch.dart:647
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 593:31 <fn>
async_patch.dart:593
dart-sdk/lib/async/zone.dart 1849:54 runUnary
zone.dart:1849
dart-sdk/lib/async/future_impl.dart 208:18 handleValue
future_impl.dart:208
dart-sdk/lib/async/future_impl.dart 932:44 handleValueCallback
future_impl.dart:932
dart-sdk/lib/async/future_impl.dart 961:13 _propagateToListeners
future_impl.dart:961
dart-sdk/lib/async/future_impl.dart 712:5 [_completeWithValue]
future_impl.dart:712
dart-sdk/lib/async/future_impl.dart 792:7 callback
future_impl.dart:792
dart-sdk/lib/async/schedule_microtask.dart 40:11 _microtaskLoop
schedule_microtask.dart:40
dart-sdk/lib/async/schedule_microtask.dart 49:5 _startMicrotaskLoop
schedule_microtask.dart:49
dart-sdk/lib/_internal/js_dev_runtime/patch/async_patch.dart 186:7 <fn>
async_patch.dart:186
```
Here's a simple reproduction case:
```dart
import 'dart:typed_data';
import 'dart:ui' as ui;
import 'package:flutter/material.dart';
void main() {
final ui.Vertices verts = ui.Vertices.raw(
VertexMode.triangles,
Float32List(0),
indices: Uint16List(0),
textureCoordinates: Float32List(0),
);
print('$verts');
runApp(const SizedBox.shrink());
}
```
| c: regression,c: crash,engine,platform-web,c: rendering,e: web_canvaskit,P1,team-web,triaged-web | medium | Critical |
2,742,878,405 | next.js | Fragment surrounding metatag component breaks scroll behavior | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/5fqq5c
### To Reproduce
1. start the application
2. scroll to the bottom of the preview
3. click the "Bar" link and observe scroll position is at the top. This page contains a metatag inside a `<div>`
4. Scroll to the bottom of the preview
5. click the "Foo" link and observe scroll position is still at the bottom. This page contains a metatag inside a fragement `<>`
### Current vs. Expected behavior
Using React 19's metatag behaviors, it is expected that the scroll position would be at the top for both "Foo" and "Bar" pages. The position of a metatag inside a fragment should not alter the scroll position behavior when navigating between pages.
i.e.:
```TypeScript
<>
<title>Foo</title>
<h1>Foo</h1>
</>
```
breaks the scroll, while the below scrolls appropriately.
```TypeScript
<div>
<title>Bar</title>
<h1>Bar</h1>
</div>
```
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.12.0
npm: 10.5.0
Yarn: 1.22.19
pnpm: 8.15.6
Relevant Packages:
next: 15.1.1-canary.6 // Latest available version is detected (15.1.1-canary.6).
eslint-config-next: 15.0.3
react: 18.3.1
react-dom: 18.3.1
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed)
### Additional context
_No response_ | Navigation | low | Minor |
2,742,951,645 | deno | `deno outdated` flag clash | I've found another confusing behavior of `deno outdated`
```
: deno outdated
```
```
: deno outdated --compatible
┌─────────────┬────────────────┬────────────────┬─────────┐
│ Package │ Current │ Update │ Latest │
├─────────────┼────────────────┼────────────────┼─────────┤
│ npm:daisyui │ 5.0.0-alpha.47 │ 5.0.0-alpha.48 │ 4.12.22 │
└─────────────┴────────────────┴────────────────┴─────────┘
Run deno outdated --update to update to the latest compatible versions,
or deno outdated --help for more information.
```
```
: deno outdated --compatible -u
error: the argument '--compatible' cannot be used with '--update'
Usage: deno outdated --compatible [filters]...
```
```
: deno outdated -u
Updated 1 dependency:
- npm:daisyui 5.0.0-alpha.47 -> 5.0.0-alpha.48
```
I would expect `deno outdated -u` to do nothing if `deno outdated` outputs nothing, and `deno outdated --compatible -u` to update daisyui to the "Update" version output by `deno outdated --compatible`, but `-u` doesn't work with `--compatible`.
_Originally posted by @kuchta in https://github.com/denoland/deno/issues/27025#issuecomment-2543772117_
| outdated | low | Critical |
2,742,972,732 | rust | Better error reporting for `T: ?Sized` types when `impl Receiver for MyType<T>` is implicitly sized | Originally from @adetaylor:
---
I'm looking for advice on the diagnostics around the `Sized`ness of a receiver, and I'm hoping @estebank can advise (or of course anyone else @wesleywiser @compiler-errors ).
The background (for Esteban):
* This is the tracking issue for "arbitrary self types v2", which allows methods like this:
```rust
impl Foo {
fn bar(self: MySmartPtr<Self>) {} // note type of self
}
```
* The key part of this is that types (such as `MySmartPtr`, above) which wish to act as method receivers must implement a new trait `Receiver`
* Before writing the [RFC](https://github.com/rust-lang/rfcs/pull/3519), I happened to make a mistake which I think might be quite common, so in the [Diagnostics section of the RFC](https://github.com/rust-lang/rfcs/blob/master/text/3519-arbitrary-self-types-v2.md#diagnostics) I proposed adding a diagnostic specifically for this case (the second bullet in that linked section).
The case I want to generate an error for: [see this code](https://github.com/rust-lang/rust/commit/a3a43b079b6ff6593dc824b2d1329a5649ba2197#diff-fef56aa988c6838c2a99e5b21e1c0986d7dd101c97b4941079b278cfa9183827R6) but, in short, someone implements `Receiver` for `T` but not `T: ?Sized`.
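For illustration, a minimal sketch of the shape of that mistake (not the linked test code; exact trait path and feature gate as per the RFC, so treat this as illustrative):
```rust
#![feature(arbitrary_self_types)]
use std::ops::Receiver;

struct MySmartPtr<T: ?Sized>(*const T);

// Mistake: `T` is implicitly `Sized` here, so `MySmartPtr<Self>` only works as a
// receiver when `Self: Sized`; unsized `Self` (e.g. trait objects) gets no impl.
impl<T> Receiver for MySmartPtr<T> {
    type Target = T;
}

// Fix: relax the bound so unsized targets are covered.
// impl<T: ?Sized> Receiver for MySmartPtr<T> {
//     type Target = T;
// }

fn main() {}
```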
Questions. Even partial answers to some questions might point me in the right direction.
### Overall approach
1. I can quite easily get hold of an unmet obligation for why the type of `self` doesn't implement `Receiver`. But how can I determine if some missing `?Sized` bound is the problem?
a. Re-run the resolution process with some simulated fake sized `Self` type? See if the obligation resolves in that case, and if so, show a diagnostic.
b. Simulate that some `impl<T> Receiver for T` block is actually `impl <T: ?Sized> Receiver for T`, elsewhere in the program or even in another crate. See if the obligation resolves in that case, and if so, show a diagnostic.
c. Suggest "ensure any Receiver implementations cover !Sized" without actually checking that this is the problem. This might give lots of false positives.
---
@adetaylor: I've split this out into its own issue. Let's try to avoid discussion on tracking issues. It's really not the purpose of a tracking issue, and we've locked tracking issues in the past for exactly the same reason (it often pings like... 40 people who are subscribed to the issue).
These days tracking issues carry the note:
> As with all tracking issues for the language, please file anything unrelated to implementation history, that is: bugs and design questions, as separate issues as opposed to leaving comments here. The status of the feature should also be covered by the feature gate label. Please do not ask about developments here. | A-diagnostics,F-arbitrary_self_types,D-confusing,C-discussion | low | Critical |
2,743,022,648 | flutter | App fails to rerender when launched with an External Accessory [EAAccessory] in iOS | ### Steps to reproduce
1. Create a hello world flutter app
2. [Add supported external accessories](https://developer.apple.com/documentation/bundleresources/information-property-list/uisupportedexternalaccessoryprotocols)
3. [Enable External Accessory background mode](https://developer.apple.com/documentation/xcode/configuring-background-execution-modes) (see the Info.plist sketch after this list)
4. App must be built/run using `--release` (because it will be launched in an unattached state via the accessory)
5. Launch app via connecting external accessory
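For reference, the Info.plist entries from steps 2 and 3 look roughly like this (the protocol string below is a placeholder, not a real accessory protocol):
```xml
<key>UISupportedExternalAccessoryProtocols</key>
<array>
    <string>com.example.myaccessory</string> <!-- placeholder protocol name -->
</array>
<key>UIBackgroundModes</key>
<array>
    <string>external-accessory</string>
</array>
```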
---
> May be related to https://github.com/flutter/flutter/issues/55969
> I have confirmed that this launch method works on flutter `1.12` pre-metal as per the above issue
### Expected results
App should launch and render hello world `flutter create` screen
### Actual results
App fails to render hello world screen and gets stuck at launch/blank page
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// Try running your application with "flutter run". You'll see the
// application has a blue toolbar. Then, without quitting the app, try
// changing the primarySwatch below to Colors.green and then invoke
// "hot reload" (press "r" in the console where you ran "flutter run",
// or simply save your changes to "hot reload" in a Flutter IDE).
// Notice that the counter didn't reset back to zero; the application
// is not restarted.
primarySwatch: Colors.blue,
// This makes the visual density adapt to the platform that you run
// the app on. For desktop platforms, the controls will be smaller and
// closer together (more dense) than on mobile platforms.
visualDensity: VisualDensity.adaptivePlatformDensity,
),
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
_counter++;
});
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Invoke "debug painting" (press "p" in the console, choose the
// "Toggle Debug Paint" action from the Flutter Inspector in Android
// Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
// to see the wireframe for each widget.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headline4,
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: Icon(Icons.add),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.0, on macOS 15.0 24A335 darwin-arm64, locale
en-US)
• Flutter version 3.27.0 on channel stable at /usr/local/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 8495dee1fd (6 days ago), 2024-12-10 14:23:39 -0800
• Engine revision 83bacfc525
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/davewesterhoff/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• macOS (desktop) • macos • darwin-arm64
• macOS 15.0 24A335 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin
• macOS 15.0 24A335 darwin-arm64
• Chrome (web) • chrome •
web-javascript • Google Chrome 131.0.6778.140
[✓] Network resources
• All expected network resources are available.
```
</details>
| platform-ios,P2,team-ios,triaged-ios | low | Critical |
2,743,024,805 | godot | mipmap are not applied at runtime load glb file | ### Tested versions
4.3 stable
### System information
Windows 11 - Godot 4.3 Stable
### Issue description
1. editor imported *.glb model - mipmap ok
2. separate model (*.gltf+*.bin+*.png) - mipmap ok
3. embed texture model (*.glb) - mipmap not ok
When loading a glTF file with embedded textures at runtime, mipmaps are not applied. I tried using the `GLTFState.get_images()` function to retrieve the images, calling `generate_mipmaps()` on them, and then setting them back with `GLTFState.set_images()`, but it still doesn't work.
Looking through past issues, it seems that mipmaps are not applied to glTF files with embedded textures. Is there any plan to support mipmap generation for *.glb files with embedded textures when loaded at runtime?
I couldn’t find anything related in the milestones.
best regards.
### Steps to reproduce
```gdscript
extends Node3D
var loaded_model
func _ready() -> void:
var gltf_document = GLTFDocument.new()
var gltf_state = GLTFState.new()
var error = gltf_document.append_from_file("res://Model/TestModel.glb", gltf_state)
if error == OK:
# generate mipmap
var images = gltf_state.get_images()
var new_images_array = []
for img in images:
var src_img = img.get_image()
src_img.generate_mipmaps()
new_images_array.append(ImageTexture.create_from_image(src_img))
print("src_img.get_size() -> ", src_img.get_size())
print("src_img.get_mipmap_count() -> ", src_img.get_mipmap_count())
gltf_state.set_images(new_images_array)
loaded_model = gltf_document.generate_scene(gltf_state)
add_child(loaded_model)
```
### Minimal reproduction project (MRP)
[mipmaptest.zip](https://github.com/user-attachments/files/18154749/mipmaptest.zip)
| bug,topic:import,topic:3d | low | Critical |
2,743,045,093 | vscode | During open search bar leave cursor in editor if before search open some text was selected (and put into search field) | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->

| bug,editor-find,confirmation-pending | low | Minor |
2,743,047,451 | vscode | Clear search field value if I open it (again, after search) | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
| feature-request,search | low | Minor |
2,743,090,559 | flutter | Improve message and/or functionality for Material banner with no actions | ### Steps to reproduce
1. Show a `MaterialBanner` with an empty `actions` list (`actions: []`) and trigger it at runtime (see the sketch below)
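A minimal sketch of the trigger, assuming a standard `ScaffoldMessenger`/`Scaffold` context (illustrative, not the exact app code):
```dart
import 'package:flutter/material.dart';

void main() => runApp(const MaterialApp(home: BannerRepro()));

class BannerRepro extends StatelessWidget {
  const BannerRepro({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: ElevatedButton(
          onPressed: () {
            // Empty actions list trips the `widget.actions.isNotEmpty` assertion.
            ScaffoldMessenger.of(context).showMaterialBanner(
              const MaterialBanner(
                content: Text('Something happened'),
                actions: <Widget>[],
              ),
            );
          },
          child: const Text('Show banner'),
        ),
      ),
    );
  }
}
```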
### Expected results
1. `MaterialBanner` should, in my opinion, work with no actions (behaving like a snack bar at the top with material banner styling), or have a default "Okay" action to dismiss, or similar
2. At the very least, make it easy to find the offending usage: the stack trace is 103 lines long and I can't see where I called it from
### Actual results
```
Gives an exception with not very useful info and "please file a bug" message
======== Exception caught by widgets library =======================================================
The following assertion was thrown building MaterialBanner-[#f2347](dirty, dependencies: [MediaQuery], state: _MaterialBannerState#82133):
'package:flutter/src/material/banner.dart': Failed assertion: line 329 pos 12: 'widget.actions.isNotEmpty': is not true.
Either the assertion indicates an error in the framework itself, or we should provide substantially more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new?template=2_bug.yml
```
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
======== Exception caught by widgets library =======================================================
The following assertion was thrown building MaterialBanner-[#f2347](dirty, dependencies: [MediaQuery], state: _MaterialBannerState#82133):
'package:flutter/src/material/banner.dart': Failed assertion: line 329 pos 12: 'widget.actions.isNotEmpty': is not true.
Either the assertion indicates an error in the framework itself, or we should provide substantially more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new?template=2_bug.yml
The relevant error-causing widget was:
Scaffold Scaffold:file:///Users/nwarner/youversion-flutter/apps/bible-app/lib/app/home/home.dart:212:37
When the exception was thrown, this was the stack:
#2 _MaterialBannerState.build (package:flutter/src/material/banner.dart:329:12)
#3 StatefulElement.build (package:flutter/src/widgets/framework.dart:5729:27)
#4 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5617:15)
#5 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5780:11)
#6 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#7 ComponentElement._firstBuild (package:flutter/src/widgets/framework.dart:5599:5)
#8 StatefulElement._firstBuild (package:flutter/src/widgets/framework.dart:5771:11)
#9 ComponentElement.mount (package:flutter/src/widgets/framework.dart:5593:5)
... Normal element mounting (13 frames)
#22 Element.inflateWidget (package:flutter/src/widgets/framework.dart:4468:16)
#23 MultiChildRenderObjectElement.inflateWidget (package:flutter/src/widgets/framework.dart:7035:36)
#24 Element.updateChild (package:flutter/src/widgets/framework.dart:3963:18)
#25 Element.updateChildren (package:flutter/src/widgets/framework.dart:4150:32)
#26 MultiChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:7060:17)
#27 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#28 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#29 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#30 ProxyElement.update (package:flutter/src/widgets/framework.dart:5946:5)
#31 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#32 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#33 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5780:11)
#34 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#35 StatefulElement.update (package:flutter/src/widgets/framework.dart:5803:5)
#36 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#37 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#38 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5780:11)
#39 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#40 StatefulElement.update (package:flutter/src/widgets/framework.dart:5803:5)
#41 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#42 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#43 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#44 ProxyElement.update (package:flutter/src/widgets/framework.dart:5946:5)
#45 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#46 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#47 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5780:11)
#48 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#49 StatefulElement.update (package:flutter/src/widgets/framework.dart:5803:5)
#50 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#51 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6907:14)
#52 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#53 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#54 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#55 ProxyElement.update (package:flutter/src/widgets/framework.dart:5946:5)
#56 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#57 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6907:14)
#58 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#59 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#60 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5780:11)
#61 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#62 StatefulElement.update (package:flutter/src/widgets/framework.dart:5803:5)
#63 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#64 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#65 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5780:11)
#66 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#67 StatefulElement.update (package:flutter/src/widgets/framework.dart:5803:5)
#68 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#69 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#70 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#71 ProxyElement.update (package:flutter/src/widgets/framework.dart:5946:5)
#72 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#73 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#74 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#75 ProxyElement.update (package:flutter/src/widgets/framework.dart:5946:5)
#76 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#77 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#78 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#79 ProxyElement.update (package:flutter/src/widgets/framework.dart:5946:5)
#80 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#81 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#82 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5780:11)
#83 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#84 StatefulElement.update (package:flutter/src/widgets/framework.dart:5803:5)
#85 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#86 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#87 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#88 ProxyElement.update (package:flutter/src/widgets/framework.dart:5946:5)
#89 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#90 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5642:16)
#91 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5780:11)
#92 Element.rebuild (package:flutter/src/widgets/framework.dart:5333:7)
#93 BuildScope._tryRebuild (package:flutter/src/widgets/framework.dart:2693:15)
#94 BuildScope._flushDirtyElements (package:flutter/src/widgets/framework.dart:2752:11)
#95 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:3048:18)
#96 WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:1162:21)
#97 RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:468:5)
#98 SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:1397:15)
#99 SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:1318:9)
#100 SchedulerBinding._handleDrawFrame (package:flutter/src/scheduler/binding.dart:1176:5)
#101 _invoke (dart:ui/hooks.dart:312:13)
#102 PlatformDispatcher._drawFrame (dart:ui/platform_dispatcher.dart:419:5)
#103 _drawFrame (dart:ui/hooks.dart:283:31)
(elided 2 frames from class _AssertionError)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor
┌─────────────────────────────────────────────────────────┐
│ A new version of Flutter is available! │
│ │
│ To update to the latest version, run "flutter upgrade". │
└─────────────────────────────────────────────────────────┘
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel stable, 3.24.3, on macOS 14.7.1 23H222 darwin-arm64, locale en-US)
! Warning: `dart` on your path resolves to /opt/homebrew/Cellar/dart/3.5.4/libexec/bin/dart, which is not inside your current Flutter SDK checkout at /Users/nwarner/flutter. Consider adding
/Users/nwarner/flutter/bin to the front of your path.
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[✗] Xcode - develop for iOS and macOS
✗ Xcode installation is incomplete; a full installation is necessary for iOS and macOS development.
Download at: https://developer.apple.com/xcode/
Or install Xcode via the App Store.
Once installed, run:
sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
sudo xcodebuild -runFirstLaunch
✗ CocoaPods not installed.
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
For installation instructions, see https://guides.cocoapods.org/using/getting-started.html#installation
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[✓] IntelliJ IDEA Community Edition (version 2024.3)
[✓] VS Code (version 1.95.3)
[✓] Connected device (3 available)
[✓] Network resources
```
</details>
| c: new feature,framework,f: material design,a: error message,has reproducible steps,P3,team-design,triaged-design,found in release: 3.27,found in release: 3.28 | low | Critical |
2,743,119,695 | vscode | editor.rulers and column count are off by one | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.97.0-insider ce50bd4876af457f64d83cfd956bc916535285f4 x64 (fresh install)
- OS Version: Linux 6.12.1-arch1-1 x86_64
Steps to Reproduce:
1. Use the editor rulers feature by turning it on in settings, e.g.
```json
"editor.rulers": [
{
"column": 0,
"color": "#b3ff0070"
},
8,
{
"column": 10,
"color": "#ff000070"
},
12,
],
```
2. Observe that the column lines are consistently one off from the column count displayed elsewhere in vscode.

It appears that columns are 1-indexed in some places and 0-indexed in others?
Personally I would prefer seeing everything 0-indexed, since I want the column count to be equivalent to the line length (i.e. I like the rulers as they are, but the column count shown elsewhere is off). But any amount of consistency would be an improvement.
Semi-related:
- https://github.com/microsoft/vscode/issues/144942
- https://github.com/microsoft/vscode/issues/107869 | bug,editor-contrib | low | Critical |
2,743,124,583 | flutter | [desktop] create a way to specify the initial window size | ## Description
I'm creating a game with Flame and I want the window size to be a 16x9 aspect ratio when it opens up. There doesn't appear to be a way to set the default window size without the [usage of plugins](https://stackoverflow.com/questions/62549979/change-the-flutter-desktop-app-window-size) or editing the platform specific files (like `MainFlutterWindow.swift`).
I would like to be able to define this in a cross-platform way, preferably with Dart. If we can't do it in Dart, we should be able to do it in the pubspec.
## Example
```dart
void main() {
runApp(
SizedBox(
width: 512,
height: 288,
child: GameWidget(
game: Game(),
),
),
);
}
```
## Notes
- Sounds like there was some plan to do this as part of multi window support
- It was noted that there was some work done to make sure we don't have a window until we are drawing with Dart; I think it's only done on Windows though.
cc @cbracken @goderbauer | c: proposal,P3,team-framework,triaged-framework | low | Minor |
2,743,127,941 | godot | Thirdparty module compilation errors | ### Tested versions
- Reproducible in: master
- Not reproducible in : 4.3-stable, 4.2-stable.
### System information
windows 10 - mingw - architecture "x86_64"
### Issue description
The engine compilation fails due to linking errors in some third-party modules. It seems that path generation is broken. Unfortunately, I don't know the build system well enough to identify the source.
The impacted modules are (others may also be affected):
- freetype
- glslang
- jolt
- mbedtls
### Compilation Command
```
D:\godot-master> scons deprecated=no target=editor
scons: Reading SConscript files ...
Automatically detected platform: windows
Auto-detected 12 CPU cores available for build parallelism. Using 11 cores by default. You can override it with the `-j` or `num_jobs` arguments.
Using MinGW, arch x86_64
Building for platform "windows", architecture "x86_64", target "editor".
Checking for C header file mntent.h... (cached) no
NOTE: Performing initial build, progress percentage unavailable!
scons: done reading SConscript files.
scons: Building targets ...
```
### Compilation Output
```
scons: done reading SConscript files.
scons: Building targets ...
...
Linking Static Library modules\freetype\libfreetype_builtin.windows.editor.x86_64.a ...
ERROR: ar: thirdpartyfreetypesrcautofitautofit.windows.editor.x86_64.o: No such file or directory
...
Linking Static Library modules\libmodule_glslang.windows.editor.x86_64.a ...
ERROR: ar: thirdpartyglslangglslangGenericCodeGenCodeGen.windows.editor.x86_64.o: No such file or directory
...
Linking Static Library modules\libmodule_jolt_physics.windows.editor.x86_64.a ...
ERROR: ar: thirdpartyjolt_physicsJoltRegisterTypes.windows.editor.x86_64.o: No such file or directory
```
### Steps to reproduce
- On a Windows environment
- Clone project
- Switch to master branch
- Pull last changes
- Try to compile with `scons deprecated=no target=editor` command (I use default options)
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:buildsystem | low | Critical |
2,743,128,114 | pytorch | Add CPU scalar support in addcdiv | ### 🚀 The feature, motivation and pitch
Continuation of #143264.
Allow the user to pass in a CPU scalar to addcdiv. I can do this as soon as the mentioned PR is merged!
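For context, a sketch of the usage this would enable (hypothetical; assuming the same mixed-device pattern as the referenced PR):
```python
import torch

a = torch.randn(4, device="cuda")
b = torch.randn(4, device="cuda")
c = torch.tensor(2.0)  # 0-dim "CPU scalar" tensor

# addcdiv computes a + value * (b / c). Today mixing CUDA tensors with this CPU
# scalar is rejected; the request is for addcdiv to accept it, as the referenced
# PR does for its op.
out = torch.addcdiv(a, b, c, value=0.5)
```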
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD | triaged,enhancement,actionable,module: python frontend | low | Minor |
2,743,156,059 | vscode | Move Editor Actions should apply to selected tabs, and add sorting and reordering operations | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
[#119327](https://github.com/microsoft/vscode/issues/119327) is a terrific new feature which I use quite often. Sometimes I perform operations on the selected tabs, like the following, which already execute on all selected tabs by default:
> workbench.action.pinEditor
> workbench.action.unpinEditor
> workbench.action.moveEditorToNewWindow
> workbench.action.copyEditorToNewWindow
Other commands, however, like those listed in proposal 1, only apply to the active editor rather than all selected editors.
**Proposal 1**: make certain editor commands, e.g. the commands below, apply to all selected tabs/editors by default, or create a setting to control the behavior:
> workbench.action.editor.changeLanguageMode
> workbench.action.moveEditorLeftInGroup
> workbench.action.moveEditorRightInGroup
> workbench.action.moveEditorToFirstGroup
> workbench.action.moveEditorToLastGroup
> workbench.action.moveEditorToNextGroup
> workbench.action.moveEditorToPreviousGroup
**Proposal 2**: Introduce two new commands to perform sort and reordering operations on the selected editors, such as:
a) Reverse order of selected tabs
b) Sort selected tabs alphabetically
**Use case for (a) in Proposal 2**: I am about to perform work on each step in a multi-step form. Each step in the form is a react component, and I have just opened them all from references in a parent component. But now I wish I had opened them in the reverse order so that the steps in the form would be ordered in my workspace from beginning to end, left to right, instead of the reverse, so that I can work on them in a logical sequence from left to right. Instead of closing them all and reopening them in the desired order, or dragging them around one by one, it would be nice to be able to select them all quickly (easily done by clicking the first tab, then holding shift while clicking the last tab in the group) and perform a simple action from the command pallet like "Reverse order of selected tabs". | feature-request,workbench-tabs | low | Minor |
2,743,156,704 | vscode | Reference Files with CoPilot Edits | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Reference files for Copilot Edits. I've had several issues now where I'd like to add reference files (variables reference, logs, filestructure, etc) that the bot could use for the project, but I don't want edited. | feature-request,cross-file-editing | low | Minor |
2,743,166,223 | vscode | Switching chat sessions more easily | Navigating to previous chat conversations in Copilot Chat is difficult:
* Uses a weird time icon
* Takes the user to the quick pick with list of chats
Should we instead support Vertical tabs in chat? Similar to what we do in Terminal or what ChatGPT does.
The key is to not render this vertical space by default (since it would eat up too much real estate). But there needs to be an affordance to show it, and a layout should first hide the list of tabs when not enough horizontal space.
@roblourens was this already requested / discussed?
I am not convinced using Tabs is the right solution, but I do feel like the history is not accessible enough.
Alternative idea is to show a list of previous chats in the Chat Welcome view as a list.
fyi @joaomoreno

| feature-request,chat | low | Minor |
2,743,168,952 | vscode | Title bar too big in some screens, does not scale with zoom | In Ubuntu 24.04 under Wayland, in my main hidpi monitor, the title bar size is correct. When I connect a second (bigger) monitor, and other applications are moved over to the second screen, they are correctly automatically resized down (their size in pixel decreased, so their physical size stays more or less constant). VSCode is not resized, and therefore appears too big. I can zoom out inside VSCode so the size is corrected, but this does not affect the title bar, which, as can be seen in the image, stays way too big compared to other windows (next to a terminal in the example).
There could be two ways of fixing this behavior: allow zooming out the title bar, and/or make vscode correctly resize when moving to a monitor of different size. Ideally both.
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.0
- OS Version: Ubuntu 24.04 LTS
Steps to Reproduce:
1. Using vscode in Ubuntu (Wayland)
2. Use custom title bar

| bug,upstream,electron,titlebar | low | Minor |
2,743,172,941 | pytorch | [dynamo, guards] Implement FrameLocalsMapping version of check_verbose_nopybind | Follow up to https://github.com/pytorch/pytorch/pull/140063.
> Add FrameLocalsMapping version for check_verbose_nopybind in order to match behavior between check_nopybind and check_verbose_nopybind. This can prevent difficult debugging situations where guards fail (check_nopybind returns false) but no guard error message is generated (check_verbose_nopybind succeeds).
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,743,175,297 | pytorch | [dynamo, guards] Move SHAPE_ENV guard to C++ | Followup to https://github.com/pytorch/pytorch/pull/140063.
> Rewrite the SHAPE_ENV guard into C++ - it is a fairly common guard that results in FrameLocalsMapping needing to convert to a dict
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Minor |
2,743,202,244 | next.js | Invalid URL when returning a relative location header in middleware | ### Link to the code that reproduces this issue
https://github.com/blairmcalpine/next-invalid-url-repro
### To Reproduce
Clone the repo
Run `npm i`
Run `npm dev`
Visit `localhost:3000/broken`.
See the error message "Invalid URL".
### Current vs. Expected behavior
I expect to be properly redirected to the home page, but instead there is an error.
This is because we get the `location` header from the response from middleware ([which is allowed to be relative](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Location#:~:text=May%20be%20relative%20to%20the%20request%20URL%20or%20an%20absolute%20URL.)), and pass it directly into the `NextURL` class:
https://github.com/vercel/next.js/blob/7ae9c79044e30f0eb6ea9f31c27a560c8d45e6c2/packages/next/src/server/web/adapter.ts#L358
This, under the hood, calls `new URL` with that value (and no baseUrl), causing the error.
The fix here would be to resolve the (possibly relative) `location` value against the request's base URL before passing it to `new NextURL`.
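A rough sketch of that idea (illustrative only, not the actual Next.js internals):
```ts
// Hypothetical helper: resolve a possibly-relative Location header against the
// incoming request URL before building the NextURL, so `new URL(...)` never
// throws "Invalid URL".
function resolveLocation(location: string, requestUrl: string): string {
  // `new URL(value, base)` accepts both absolute and relative values.
  return new URL(location, requestUrl).toString();
}

// resolveLocation('/', 'http://localhost:3000/broken') === 'http://localhost:3000/'
```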
To be clear, the repro works by explicitly returning a `NextResponse.json` in middleware. While this particular case could be fixed by using `NextResponse.rewrite`, that doesn't work in the general case where we don't necessarily know whether it's a redirect, or where it redirects to.
### Provide environment information
```bash
Node.js v22.11.0
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Wed Jul 31 20:49:39 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 22.11.0
npm: 10.9.0
Yarn: 4.4.0
pnpm: N/A
Relevant Packages:
next: 15.1.0 // Latest available version is detected (15.1.0).
eslint-config-next: 15.1.0
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware, Navigation, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
_No response_ | Middleware,Navigation,Runtime,linear: next | low | Critical |
2,743,210,588 | flutter | Exclude .cxx folders from git as part of the flutter create templates. | `**/.cxx`
https://stackoverflow.com/questions/66129837/about-the-gitignore-question-cxx-folder-in-the-android-project
I believe that an unintended side effect of https://github.com/flutter/flutter/pull/160260 is that we are seeing more native bin directories when working with apps.
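For the template change itself, something along these lines would presumably go into the generated Android `.gitignore` (illustrative):
```gitignore
# CMake/NDK build intermediates generated under android/; safe to ignore.
**/.cxx/
```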
This bug tracks updating the templates. As an aside the blog post for the next release might want to warn about this and we want to update our own repos to exclude this folder. | platform-android,P1,team-android | medium | Critical |
2,743,216,348 | go | proposal: x/tools/cmd/auth: cleanup old code | ### Proposal Details
The [x/tools/cmd/auth](https://cs.opensource.google/go/x/tools/+/master:cmd/auth/?q=cmd%2Fauth) directory is obsolete. We've integrated its GOAUTH implementation directly into [src/cmd/go/internal/auth](https://cs.opensource.google/go/go/+/master:src/cmd/go/internal/auth/) as part of issue #26232. This means we no longer need to maintain the separate reference implementation in x/tools/cmd/auth.
Therefore, let's delete x/tools/cmd/auth | Proposal | low | Minor |
2,743,247,534 | langchain | Agent react cannot accept gpt4 or gpt 4o as input model | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The minimal reproduction code is:
```
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI
tools = []
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/react")
# Choose the LLM to use
llm = ChatOpenAI(temperature=0, model="gpt-4o")
# Construct the ReAct agent
agent = create_react_agent(llm, tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what is LangChain?"})
```
### Error Message and Stack Trace (if applicable)
File /home/tl688/.conda/envs/cell2sentence/lib/python3.10/site-packages/langchain/agents/output_parsers/react_single_input.py:75, in ReActSingleInputOutputParser.parse(self, text)
74 if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
---> 75 raise OutputParserException(
76 f"Could not parse LLM output: `{text}`",
77 observation=MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE,
78 llm_output=text,
79 send_to_llm=True,
80 )
81 elif not re.search(
82 r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
83 ):
OutputParserException: Could not parse LLM output: `LangChain is a framework designed to facilitate the development of applications powered by language models. It provides a suite of tools and components that help developers build applications that can interact with language models in a structured and efficient manner. LangChain is particularly useful for creating applications that require complex language processing tasks, such as chatbots, virtual assistants, and other AI-driven communication tools. It supports integration with various language models and offers features for managing conversations, handling inputs and outputs, and maintaining context across interactions.`
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/OUTPUT_PARSING_FAILURE
### Description
This error only happens with the gpt-4 and gpt-4o models, not with 3.5. Replacing `ChatOpenAI` with `OpenAI` works.
### System Info
The updated version of langchain. | 🤖:bug | low | Critical |
2,743,269,803 | flutter | Do we have tests around whether or not the new analytics/telemetry message is shown? | Part of work on https://github.com/flutter/flutter/issues/150575
I don't see any references to [`Analytics.getConsentMessage`](https://github.com/flutter/flutter/blob/72432c3f15b26fe55f8fd822e5fb3581260f75dd/packages/flutter_tools/lib/src/base/process.dart#L695) in any test | P1,team-tool,triaged-tool | medium | Minor |
2,743,411,515 | godot | CSGPolygon3D update in Inspector does not update the visual representation in the 3D editor | ### Tested versions
Reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - macOS 15.0.0 - Vulkan (Forward+) - integrated Apple M1 Pro - Apple M1 Pro (8 Threads)
### Issue description
When you update any point of the CSGPolygon3D PackedVectorArray in the Inspector the change is not reflected in the 3D editor. Only after deselecting the node and selecting it again you can see the updated shape.
See video for example:
https://youtu.be/AR9V4CMxpXo
### Steps to reproduce
-
### Minimal reproduction project (MRP)
- | bug,topic:editor,topic:3d | low | Minor |
2,743,430,197 | langchain | Observed latency in chain.invoke | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import os
from azure.identity import ClientSecretCredential, get_bearer_token_provider
from langchain_openai import AzureChatOpenAI
def llm_connection(model="gpt-4o",
temperature=0.2,
top_p=0.1,
max_tokens=2000,
max_retries=1):
credentials = {
"tenant_id": os.getenv("AZURE_TENANT_ID"),
"client_id": os.getenv("AZURE_CLIENT_ID"),
"client_secret": os.getenv("AZURE_CLIENT_SECRET"),
"openai_endpoint": os.getenv("API_BASE"),
"azure_api_version": os.getenv("AZURE_API_VERSION", "2024-04-01-preview"),
"subscription_key": os.getenv("SUBSCRIPTION_KEY"),
}
llm = instantiate_llm( credentials,
azure_deployment = model,
temperature = temperature,
top_p = top_p,
max_tokens = max_tokens,
max_retries = max_retries)
return llm
def instantiate_llm(
credentials: dict,
azure_deployment: str = "gpt-4o",
temperature: float = 0.2,
top_p=0.1,
max_tokens: int = 1000,
max_retries: int = 1,
):
"""
Instantiate llm model
"""
csc = ClientSecretCredential(
tenant_id=credentials["tenant_id"],
client_id=credentials["client_id"],
client_secret=credentials["client_secret"],
)
llm = AzureChatOpenAI(
azure_endpoint=credentials["openai_endpoint"],
api_version=credentials["azure_api_version"],
azure_deployment=azure_deployment,
azure_ad_token_provider=get_bearer_token_provider(
csc, "https://cognitiveservices.azure.com/.default"
),
default_headers={"Ocp-Apim-Subscription-Key": credentials["subscription_key"]},
temperature=temperature,
top_p=top_p,
max_tokens=max_tokens,
max_retries=max_retries,
)
return llm
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt_template = ChatPromptTemplate.from_messages(
    [("system", EMAIL_WRITER_SYSTEM)]
)

llm = llm_connection()  # instantiate the AzureChatOpenAI client defined above
chain = prompt_template | llm | StrOutputParser()
response = chain.invoke(<request_payload>)
### Error Message and Stack Trace (if applicable)
Retrying request to /chat/completions in 0.376881 seconds
### Description
We have observed intermittent latency in `chain.invoke`.
At times it takes a couple of minutes before the OpenAI HTTP POST request is made, and there is no logging for the operation that is taking the time.
With the same payload, the whole chain sometimes completes in 10 seconds, whereas at other times we observe the latency with no logging and the error message **Retrying request to /chat/completions in**.
We would like to understand why this latency is observed in the LangChain invoke call.
**Note**: both requests use the same payload.
Latency observed during this sequence of steps

No latency during this call -

### System Info
$ pip freeze
aiohappyeyeballs==2.4.3
aiohttp==3.10.10
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.6.2.post1
async-timeout==4.0.3
attrs==24.2.0
azure-core==1.32.0
azure-functions==1.21.3
azure-identity==1.19.0
beautifulsoup4==4.12.3
certifi==2024.8.30
cffi==1.17.1
charset-normalizer==3.4.0
click==8.1.7
colorama==0.4.6
coverage==7.6.4
cryptography==43.0.3
databricks-sql-connector==3.4.0
dataclasses-json==0.6.7
distro==1.9.0
et_xmlfile==2.0.0
exceptiongroup==1.2.2
fastapi==0.115.4
frozenlist==1.5.0
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.6
httpx==0.27.2
httpx-sse==0.4.0
idna==3.10
iniconfig==2.0.0
isodate==0.7.2
jiter==0.7.0
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.7
langchain-community==0.3.5
langchain-core==0.3.15
langchain-openai==0.2.6
langchain-text-splitters==0.3.2
langsmith==0.1.140
lxml==5.3.0
lz4==4.3.3
marshmallow==3.23.1
msal==1.31.0
msal-extensions==1.2.0
msrest==0.7.1
multidict==6.1.0
mypy-extensions==1.0.0
numpy==1.26.4
oauthlib==3.2.2
openai==1.54.3
openpyxl==3.1.5
orjson==3.10.11
packaging==24.1
pandas==2.2.0
pluggy==1.5.0
portalocker==2.10.1
propcache==0.2.0
py4j==0.10.9.5
pyarrow==16.1.0
pycparser==2.22
pydantic==2.9.2
pydantic-settings==2.6.1
pydantic_core==2.23.4
PyJWT==2.9.0
pyspark==3.2.2
pytest==8.3.3
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.2
pywin32==308
PyYAML==6.0.2
regex==2024.11.6
requests==2.32.3
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
six==1.16.0
sniffio==1.3.1
soupsieve==2.6
SQLAlchemy==2.0.35
starlette==0.41.2
tenacity==9.0.0
thrift==0.20.0
tiktoken==0.8.0
tomli==2.0.2
tqdm==4.67.0
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
uvicorn==0.32.0
yarl==1.17.1 | Ɑ: core | low | Critical |
2,743,498,757 | yt-dlp | [youtube] Getting AI Generated English Audio and not Original German Audio | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Germany
### Example URLs
https://www.youtube.de/watch?v=0j1jfNnvC6w
### Provide a description that is worded well enough to be understood
Hello everyone,
I've encountered a problem with yt-dlp that I couldn't find an issue for on GitHub, and I'm hoping someone here can help or that a feature suggestion might be possible.
YouTube now seems to automatically create AI-synthesized translations for the audio in some videos without this being explicitly selected by the user. As a result, when I download a video, I sometimes get the original audio and sometimes an AI-translated version (e.g. in AI English or German).
What I would like to see:
A way to save both the original audio and the AI-translated audio track in the final file (MP4 in my case).
Alternatively, a parameter with which I can explicitly specify that only the original audio is downloaded, if available.
Is there currently a solution for this in yt-dlp? If not, would it be conceivable to add such a feature in the future?
Many thanks for your support and best regards!
Be kind to me, this is my first issue at yt-dlp. If I have misunderstood or done something wrong, I am sorry.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vUi', '--merge-output-format', 'mp4', 'https://www.youtube.de/watch?v=0j1jfNnvC6w']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [542166962] (pip)
[debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.6.56-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotlicffi-1.1.0.0, certifi-2024.08.30, curl_cffi-0.7.2 (unsupported), mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.46.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.youtube.de/watch?v=0j1jfNnvC6w
[generic] watch?v=0j1jfNnvC6w: Downloading webpage
[redirect] Following redirect to https://www.youtube.com/watch?v=0j1jfNnvC6w&gl=DE
[youtube] Extracting URL: https://www.youtube.com/watch?v=0j1jfNnvC6w&gl=DE
[youtube] 0j1jfNnvC6w: Downloading webpage
[youtube] 0j1jfNnvC6w: Downloading ios player API JSON
[youtube] 0j1jfNnvC6w: Downloading mweb player API JSON
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig WmBnQoh91Q992g4-- => ewOSNX6_X-fDcA
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig Jmo2hbQT5uSDhtb6A => u0BVpr4gVKWxbA
[youtube] 0j1jfNnvC6w: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 0j1jfNnvC6w: Downloading 1 format(s): 401+251-1
[debug] Invoking http downloader on "https://rr3---sn-4g5lznek.googlevideo.com/videoplayback?expire=1734408141&ei=baNgZ5WAHdyDi9oPy4DLuQM&ip=2a02%3A908%3A186%3Ad8e1%3A7d3c%3A45d0%3A7ca6%3A4aaa&id=o-ANjbtrLyOSuZAts9U7kohTaiqANt4mBCkbojrV-_t3j7&itag=401&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1734386541%2C&mh=_O&mm=31%2C26&mn=sn-4g5lznek%2Csn-5hnednsz&ms=au%2Conr&mv=m&mvi=3&pl=46&rms=au%2Cau&initcwndbps=2408750&bui=AfMhrI9PgPvSRYxWkPHOIigQ8rQBWzZsZ2rmsWCxGVxUhXtUxvfCbNggTAvq0GKtxUz2pCa4FdwIBLv2&spc=x-caUCvN89n9uqG7By22EofXAob8CVuPydGFKeDRFNrTqPRdgw&vprv=1&svpuc=1&mime=video%2Fmp4&rqh=1&gir=yes&clen=840583662&dur=1197.400&lmt=1731966797785699&mt=1734385993&fvip=1&keepalive=yes&fexp=51326932%2C51335594%2C51347747&c=IOS&txp=4532434&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRAIgGB4IGn44ICYoe3Nxo0axsQJeUFd9uVNrFUf0SOMXXe8CIB-852qkRsJExcPjtL_YpaZSzXTEDV-lQC9b6agyPuGY&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRgIhANfBW-y6_SjGWobpeEaXc7PBPoa_n6QW4zJBMEYXaIelAiEAoAJ5BaeNeNwlvZ9JHCvpCPUpCmdz6wCval_h3-_IbN4%3D"
[download] Destination: Expedition zum Absturzort von Flug 571 - Schneegesellschaft - 72 Tage gefangen im Eis | Teil 1 [0j1jfNnvC6w].f401.mp4
[download] 100% of 801.64MiB in 00:01:22 at 9.71MiB/s
[debug] Invoking http downloader on "https://rr3---sn-4g5lznek.googlevideo.com/videoplayback?expire=1734408141&ei=baNgZ5WAHdyDi9oPy4DLuQM&ip=2a02%3A908%3A186%3Ad8e1%3A7d3c%3A45d0%3A7ca6%3A4aaa&id=o-ANjbtrLyOSuZAts9U7kohTaiqANt4mBCkbojrV-_t3j7&itag=251&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1734386541%2C&mh=_O&mm=31%2C26&mn=sn-4g5lznek%2Csn-5hnednsz&ms=au%2Conr&mv=m&mvi=3&pl=46&rms=au%2Cau&initcwndbps=2408750&bui=AfMhrI9PgPvSRYxWkPHOIigQ8rQBWzZsZ2rmsWCxGVxUhXtUxvfCbNggTAvq0GKtxUz2pCa4FdwIBLv2&spc=x-caUCvN89n9uqG7By22EofXAob8CVuPydGFKeDRFNrTqPRdgw&vprv=1&svpuc=1&xtags=acont%3Doriginal%3Alang%3Dde-DE&mime=audio%2Fwebm&rqh=1&gir=yes&clen=17399007&dur=1197.441&lmt=1731952038377256&mt=1734385993&fvip=1&keepalive=yes&fexp=51326932%2C51335594%2C51347747&c=IOS&txp=4532434&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cxtags%2Cmime%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRQIgMkSGlLj_MhGb__bd0gDvn_ayC-r70FCOSBIoXEFeH68CIQC-__Mvwf86cWqYbQpdV4J05LThp_cfaA86V5g5YaqsfQ%3D%3D&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRgIhANfBW-y6_SjGWobpeEaXc7PBPoa_n6QW4zJBMEYXaIelAiEAoAJ5BaeNeNwlvZ9JHCvpCPUpCmdz6wCval_h3-_IbN4%3D"
[download] Destination: Expedition zum Absturzort von Flug 571 - Schneegesellschaft - 72 Tage gefangen im Eis | Teil 1 [0j1jfNnvC6w].f251-1.webm
[download] 100% of 16.59MiB in 00:00:02 at 6.35MiB/s
[Merger] Merging formats into "Expedition zum Absturzort von Flug 571 - Schneegesellschaft - 72 Tage gefangen im Eis | Teil 1 [0j1jfNnvC6w].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:Expedition zum Absturzort von Flug 571 - Schneegesellschaft - 72 Tage gefangen im Eis | Teil 1 [0j1jfNnvC6w].f401.mp4' -i 'file:Expedition zum Absturzort von Flug 571 - Schneegesellschaft - 72 Tage gefangen im Eis | Teil 1 [0j1jfNnvC6w].f251-1.webm' -c copy -map 0:v:0 -map 1:a:0 -movflags +faststart 'file:Expedition zum Absturzort von Flug 571 - Schneegesellschaft - 72 Tage gefangen im Eis | Teil 1 [0j1jfNnvC6w].temp.mp4'
Deleting original file Expedition zum Absturzort von Flug 571 - Schneegesellschaft - 72 Tage gefangen im Eis | Teil 1 [0j1jfNnvC6w].f251-1.webm (pass -k to keep)
Deleting original file Expedition zum Absturzort von Flug 571 - Schneegesellschaft - 72 Tage gefangen im Eis | Teil 1 [0j1jfNnvC6w].f401.mp4 (pass -k to keep)
```
| site-bug,triage,site:youtube | medium | Critical |
2,743,510,474 | vscode | No notebook document for `file://*.ipynb` (wsl2) | ### Applies To
- [X] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
When adding cells, the error `No notebook document for file://*.ipynb` is shown. I am using a workspace under WSL2 (inside the `.vhdx`) as a WSL remote.
The notebook goes unresponsive.
### VS Code Version
Version: 1.91.0 (user setup) Commit: ea1445cc7016315d0f5728f8e8b12a45dc0a7286 Date: 2024-07-01T18:52:22.949Z Electron: 29.4.0 ElectronBuildId: 9728852 Chromium: 122.0.6261.156 Node.js: 20.9.0 V8: 12.2.281.27-electron.0 OS: Windows_NT x64 10.0.22631
### Jupyter Extension Version
v2024.6.0
### Jupyter logs
```shell
14:28:18.630 [info] Starting Kernel (Python Path: <repo>/.venv/bin/<interpreter>, Venv, 3.9.19) for '~/git/texconnect/DataTransformationApi/eolt_analysis.ipynb' (disableUI=false)
14:28:19.000 [info] Process Execution: <repo>/.venv/bin/<interpreter> -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
14:28:19.002 [info] Process Execution: <repo>/.venv/bin/<interpreter> -m ipykernel_launcher --f=/home/~/.local/share/jupyter/runtime/kernel-v2-369SMzJUcplhu2Z.json
> cwd: //home/~/... # this is unedited after ~
14:28:19.957 [info] Kernel successfully started
```
### Coding Language and Runtime Version
v3.9.19
### Language Extension Version (if applicable)
v2024.8.1
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
Local | bug,info-needed,notebook-workbench-integration | low | Major |
2,743,519,405 | vscode | Cell toolbars are not correct on newly opened notebook | Type: <b>Bug</b>
In a Jupyter notebook, toolbars are out of sync for some cells when the notebook is first opened.
## Reproduction
### Case 1
Create a Jupyter notebook with Python as the back end. (I don't know if this bug is Python-specific, but Python is what I'm using.) You will get an empty Python cell "for free."
Hover the cursor above that cell. Click the "+ Markdown" button.
Notice that the toolbar contains the Python buttons: "Run by Line", "Execute Above Cells", "Execute Cell and Below", "..." and the trash can.
Now, here is the fun part. Switch to another app window, then switch back. Voila! The buttons are now the Markdown buttons: Check mark, "Split Cell", "..." and the trash can.
Once you've switched to another app window like this, the problem will not recur in that same notebook editor. To reproduce the following, you'll need
a setup so you can see these instructions _without_ switching windows.
### Case 2
Close the notebook you just created. Open it again. The Markdown cell will be active, but only the "..." and trash can buttons appear in the toolbar.
Switch app windows. The Markdown buttons appear.
### Case 3
Close the test notebook and open it again. Since you did not click the check mark on the Markdown cell, the reopened notebook should still have that cell in edit mode, so we'd expect the Markdown edit buttons.
Hover the cursor before your Python cell (you should only have a Markdown cell and a Python cell at this point.)
Click the "+ Markdown" button. Your new Markdown cell will have the Python cell buttons.
Again, switching windows brings up the proper Markdown buttons.
### Case 4
Close the notebook and reopen it again. Click on each of your three cells. You will get a toolbar with just the "..." and trash can buttons.
This part gets weird. If you click from the bottom up, the bottom two
cells (Markdown at the bottom, Python in the middle) will have the generic buttons, but the top cell has the Markdown buttons. Sometimes all three cells have the generic buttons.
Now click around some more. At some point, you no longer get the generic buttons but instead start getting the proper buttons for the cell type.
Again, if you switch windows, you will always get the proper buttons after you switch back to your notebook.
### Case 5
One final case. Close your notebook and reopen it again.
Click in the last cell (Markdown) and type "something".
Click the checkmark to save the changes.
Hover the cursor below the Markdown cell. Click the "+ Markdown" button.
Notice that you get a new Markdown cell, in edit mode. You can type text. But the toolbar contains only the "..." and Trashcan icons.
Switch to another app window and back to VS Code. The toolbar now contains the check mark.
## Expected Behavior
It should go without saying, but the expected behavior is that the toolbar is consistent: I shouldn't have to switch app windows to get the proper toolbar.
The workaround is to either switch app windows, or click onto another cell and back again.
## Comment
It seems that something is off with the code that updates toolbars based on events. That code seems to be missing an event early on, but receives the correct event on loss of focus. Once whatever the problem is gets adjusted, that notebook stays in "working" mode thereafter.
## Details
Extension version: 1.0.17
VS Code version: Code 1.89.0 (b58957e67ee1e712cebf466b995adf4c5307b2bd, 2024-05-01T02:08:25.066Z)
OS version: Linux x64 5.15.0-101-generic
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz (8 x 1600)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|1, 1, 2|
|Memory (System)|31.31GB (7.80GB free)|
|Process Argv|--crash-reporter-id 5e232e7b-355a-468c-ac4a-02c300f898bd|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|cinnamon|
|XDG_CURRENT_DESKTOP|X-Cinnamon|
|XDG_SESSION_DESKTOP|cinnamon|
|XDG_SESSION_TYPE|x11|
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vstes627:30244334
vscorecescf:30445987
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonidxpt:30866567
pythonnoceb:30805159
asynctok:30898717
pythontestfixt:30902429
pythonregdiag2:30936856
pyreplss1:30897532
pythonmypyd1:30879173
pythoncet0:30885854
h48ei257:31000450
pythontbext0:30879054
accentitlementsc:30995553
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
ccp2r3:30993541
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
g316j359:31013175
pythoncenvpt:31022790
dwnewjupytercf:31035177
```
</details>
<!-- generated by issue reporter --> | bug,notebook-celltoolbar | low | Critical |
2,743,520,326 | pytorch | Applying python 'next' directly on a tensor is painfully slow | ### 🐛 Describe the bug
Applying the standard python `next` function directly to a pytorch tensor is mysteriously (and painfully) slow relative to iterating over the indices of that tensor and accessing each value independently. This is not because `next` fails to stop early: when printing the values being considered during the loop, only the expected values are evaluated.
Example: consider this code
```python
import torch
from time import process_time


def timed(f):  # will allow us to time the functions below
    def timed_f(*a, **kw):
        start = process_time()
        res = f(*a, **kw)
        total = process_time() - start
        print("TIME:", (" " * 4) + f.__name__, "took:", total, "s", flush=True)
        return res
    return timed_f


def predicate(loud):
    def _predicate(b):
        if loud:  # will allow us to check which items are being considered in the `next` run
            print(b)
        return b > 0.1 and b < 0.9
    return _predicate


@timed
def get_next_direct(a, pred):  # apply next directly to a flat tensor
    return next(v for v in a if pred(v))


@timed
def get_next_indirect(a, pred):  # apply next indirectly, by iterating over the positions in the tensor
    return next(a[i] for i in range(len(a)) if pred(a[i]))


a = torch.rand(int(1e7))

pred = predicate(False)  # quiet run, for clean timing
get_next_direct(a, pred), get_next_indirect(a, pred)

print("now with printing calls to the predicate")
pred = predicate(True)  # loud run, to see if the next is failing to break once the condition is satisfied - it is not, this is not the problem
get_next_direct(a, pred), get_next_indirect(a, pred)
```
The time difference between `get_next_direct` and `get_next_indirect` is enormous (demonstrated below with my specific run). As we can see when printing from the predicate, this is not due to `next` failing to break - only the expected evaluations are made:
```
TIME: get_next_direct took: 10.626148 s
TIME: get_next_indirect took: 0.00022200000000083264 s
now with printing calls to the predicate
testing: tensor(0.7398)
TIME: get_next_direct took: 10.294364999999992 s
testing: tensor(0.7398)
TIME: get_next_indirect took: 0.0006169999999912079 s
```
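For what it's worth, my guess (an assumption on my part, based on `Tensor.__iter__` appearing to delegate to `unbind(0)` in recent PyTorch releases) is that direct iteration materializes a view for *every* element before the first `next` call, while explicit indexing only builds the views that are actually touched. A rough sketch of the difference:

```python
import torch

a = torch.rand(1000)  # kept small here; the timings above use 1e7 elements

# Direct iteration: iter(a) is assumed to boil down to iter(a.unbind(0)),
# which eagerly creates one zero-dim view per element before anything is yielded.
views = a.unbind(0)
print(len(views))  # 1000 views created up front

# Index-based access: only the slices that are actually requested get created.
first = next(a[i] for i in range(len(a)) if a[i] > 0.5)
print(first)
```

If that is indeed the cause, the slowdown scales with the tensor length rather than with how early the predicate is satisfied, which matches the numbers above.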
### Versions
PyTorch version: 2.2.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.5 (main, Sep 11 2023, 08:31:25) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] msgpack-numpy==0.4.8
[pip3] numpy==1.26.3
[pip3] pytorch-lightning==2.0.3
[pip3] torch==2.2.0
[pip3] torchaudio==2.2.0
[pip3] torchmetrics==1.1.2
[pip3] torchvision==0.15.2a0
[conda] msgpack-numpy 0.4.8 pypi_0 pypi
[conda] numpy 1.26.3 py311he598dae_0
[conda] numpy-base 1.26.3 py311hfbfe69c_0
[conda] pytorch 2.2.0 py3.11_0 pytorch
[conda] pytorch-lightning 2.0.3 py311hca03da5_0
[conda] torchaudio 2.2.0 py311_cpu pytorch
[conda] torchmetrics 1.1.2 py311hca03da5_0
[conda] torchvision 0.15.2 cpu_py311he74fb5d_0
cc @msaroufim @albanD | triaged,enhancement,module: python frontend | low | Critical |
2,743,522,220 | vscode | finalize fileIsIgnored | We have this
```
export function fileIsIgnored(uri: Uri, token: CancellationToken): Thenable<boolean>;
```
as proposed API. This is generally useful for all extensions and should be finalized. | feature-request,api,chat | low | Minor |
2,743,526,065 | vscode | Native editor scroll bar too small to use [II] | ### Applies To
- [X] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
It does not seem possible to resize the scrollbar of the jupyter editor.
I have tried the following settings:
```
"editor.scrollbar.horizontalScrollbarSize": 64,
"editor.scrollbar.verticalScrollbarSize": 64,
"editor.scrollbar.vertical": "visible",
"notebook.editorOptionsCustomizations": {
"editor.scrollbar.horizontalScrollbarSize": 64,
"editor.scrollbar.verticalScrollbarSize": 64,
},
```
And this is the outcome:
| regular editor | .ipynb editor |
| ------------- | ------------- |
| <kbd><img width="400" src="https://github.com/microsoft/vscode/assets/73083942/ecaea2ad-5fe8-4382-ac88-e67bec529ddc"/></kbd> | <kbd><img width="400" src="https://github.com/microsoft/vscode/assets/73083942/4f1f116e-1cf8-4f58-a8cd-5917505c3a19"/></kbd> |
This was reported in #2373, but I do not see the settings being respected in the current version (Version: 1.85.2).
Thanks.
### VS Code Version
Version: 1.85.2 (Universal) Commit: 8b3775030ed1a69b13e4f4c628c612102e30a681 Date: 2024-01-18T06:40:32.531Z (2 wks ago) Electron: 25.9.7 ElectronBuildId: 26354273 Chromium: 114.0.5735.289 Node.js: 18.15.0 V8: 11.4.183.29-electron.0 OS: Darwin arm64 23.3.0
### Jupyter Extension Version
v2023.11.1100101639
### Jupyter logs
_No response_
### Coding Language and Runtime Version
_No response_
### Language Extension Version (if applicable)
_No response_
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
None | bug,notebook-layout | low | Minor |
2,743,531,784 | PowerToys | Unstable language loading on Quick Accent | ### Microsoft PowerToys version
0.87.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Quick Accent
### Steps to reproduce
Check available characters for "-"
Choose a character set
Disable Hebrew
Check available characters for "-"
### ✔️ Expected Behavior
En-dash (U+2013) should be visible.
### ❌ Actual Behavior
En-dash (U+2013) disappears from the list.
Would have expected it to be linked to special characters.
### Other Software
_No response_ | Issue-Bug,Resolution-Fix Committed | low | Major |
2,743,534,823 | flutter | [desktop] performing a hot reload should focus on the app | ## Observed result
1) Launch a macos flutter app from the terminal
1) Change the code
1) go to the terminal and perform a hot reload
1) Notice now that you have to find the app and bring it to focus to see the results with something like alt+tab.
## Expected result
I'd expect that performing a hot reload would bring the app into focus.
## Proposal
Focusing on the app should be the default behavior for the flutter tool when developing desktop apps. We can add a flag `--no-auto-focus` to override that behavior in case someone ever doesn't like it. | c: proposal,a: desktop,P3,team-tool,triaged-tool | low | Major |
2,743,535,987 | PowerToys | Keyboard Manager randomly stops working | ### Microsoft PowerToys version
0.87.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Keyboard Manager key bindings randomly stop working. They work after a laptop restart, but stop very quickly after I trigger a few shortcuts, then start working again after a few minutes, and stop again randomly.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
[PowerToysReport_2024-12-16-23-34-32.zip](https://github.com/user-attachments/files/18157840/PowerToysReport_2024-12-16-23-34-32.zip)
### Other Software
Bitwarden
StartAllBack
WinDynamicDesktop | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Minor |
2,743,560,059 | terminal | Duplicating tab is *significantly* slower than new tab | ### Windows Terminal version
1.21.3231.0
### Windows build number
10.0.26100.0
### Other Software
zsh 5.9 (x86_64-ubuntu-linux-gnu)
### Steps to reproduce
1. Set default profile to Ubuntu (might not be necessary, but I have only tested this on Ubuntu, with zsh as my shell)
2. Make sure that the PWD tracking is set up
```zsh
if test -n "$WT_SESSION"; then
  # See https://learn.microsoft.com/en-us/windows/terminal/tutorials/new-tab-same-directory
  __keep_current_path() { printf "\e]9;9;%s\e\\" "$(wslpath -w "$PWD")" }
  precmd_functions+=(__keep_current_path)
fi
```
3. Open a new tab---this is always _practically_ instant
4. Duplicate the tab---this _sometimes_ takes 30s to even a minute; most of the time, this is roughly as fast as opening a new tab, but other times it takes a _long_ time (obviously, this is not WSL spin-up time, because WSL is _already_ running, and even opening a new tab works immediately). No clear pattern to _when_ it is going to take longer to duplicate a tab though.
### Expected Behavior
Duplicating a tab should not be slower than opening a new tab and running `cd {prev tab working directory}`
### Actual Behavior
Duplicating the tab (sometimes) literally takes longer than me re-typing the whole path out by hand. | Needs-Repro,Issue-Bug,Needs-Triage,Needs-Attention,Product-Terminal,Priority-3,Area-Quality | low | Major |
2,743,571,254 | deno | The `ng test` command fails in Angular when using Deno | I'm trying to replace Node.js with Deno in my Angular project, but generating a new project (#27382) and running unit tests fails with Deno for the current/latest Angular version. With `ng test`, it runs with the Karma test runner, and it seems that the browser fails to load the test runner, and just stays loading. The `ng test` command will fail after restarting twice.
### Reproduction
```sh
> npx @angular/cli@latest new foo
> cd foo
> deno task --eval "rm -rf node_modules"
> deno install --allow-scripts
> deno task test
Task test ng test
✔ Browser application bundle generation complete.
16 12 2024 23:50:51.572:WARN [karma]: No captured browser, open http://localhost:9876/
16 12 2024 23:50:51.919:INFO [karma-server]: Karma v6.4.4 server started at http://localhost:9876/
16 12 2024 23:50:51.920:INFO [launcher]: Launching browsers Chrome with concurrency unlimited
16 12 2024 23:50:52.152:INFO [launcher]: Starting browser Chrome
16 12 2024 23:51:52.181:WARN [launcher]: Chrome has not captured in 60000 ms, killing.
16 12 2024 23:51:53.349:INFO [launcher]: Trying to start Chrome again (1/2).
16 12 2024 23:52:53.364:WARN [launcher]: Chrome has not captured in 60000 ms, killing.
16 12 2024 23:52:53.638:INFO [launcher]: Trying to start Chrome again (2/2).
16 12 2024 23:53:53.647:WARN [launcher]: Chrome has not captured in 60000 ms, killing.
16 12 2024 23:53:53.885:ERROR [launcher]: Chrome failed 2 times (timeout). Giving up.
```
Version: Deno 2.1.4
| bug,node compat | low | Critical |
2,743,575,458 | PowerToys | [PTRun] plugin activation commands in search results | ### Description of the new feature / enhancement
Add plugin activation commands to search results: searching a plugin's name will show its activation command as one of the results. Using the Calculator as an example, searching "Calculator" will give the activation command of the calculator as one of the results, and selecting this result will update the input to that command ("=") for further user input.
### Scenario when this would be used?
It's for those who have a lot of plugins enabled but cannot remember all the activation commands.
### Supporting information
_No response_ | Product-PowerToys Run,Needs-Triage,Run-Plugin | low | Minor |
2,743,590,742 | pytorch | An idea to improve Welford implementation in Inductor | ### 🐛 Describe the bug
Inductor implements a one-pass Welford algorithm to compute variance. The basic idea is to maintain 3 metrics (weight, mean, m2) for each group of items, and implement a combine function to merge these metrics for two group of items.
- Weight represents the number of elements in the group
- mean represents the mean value of the elements in the group
- M2 represents $\sum(x_i - mean) ^ 2$ summing over each element within the group
There are 3 places this can be improved
1. instead of maintaining mean, we can just maintain sum. By maintaining mean, we need keep adapting the denominator when we combine two groups since the number of elements get changed
2. instead of maintaining M2, we can just maintain sum_of_square $\sum{x_i ^ 2}$ (i.e. equivalent to temporarily pretending the mean is 0). By maintaining M2, we need keep adapting for the fact that 'mean' get changed when we combining two groups.
3. By doing 1 & 2, we don't need track weight anymore. That means we use **LESS REGISTERS**.
The end result is equivalent to leveraging the following equation to compute variance:
$$BiasedVariance = \frac{\sum(x_i - mean) ^ 2}{n} = \frac{\sum{x_i ^ 2}}{n} - mean ^ 2$$
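To make the difference concrete, here is a small sketch in plain Python (hypothetical helper functions for illustration only, not Inductor's actual codegen) of the two combine strategies:

```python
# Current Welford-style combine: 3 values of running state per group
# (groups assumed non-empty).
def welford_combine(g1, g2):
    w1, mean1, m2_1 = g1
    w2, mean2, m2_2 = g2
    w = w1 + w2
    delta = mean2 - mean1
    mean = mean1 + delta * w2 / w
    m2 = m2_1 + m2_2 + delta * delta * w1 * w2 / w
    return w, mean, m2

# Proposed combine: only 2 values of running state per group and no weight,
# since sums and sums-of-squares add directly.
def sum_combine(g1, g2):
    s1, ss1 = g1   # (sum, sum of squares)
    s2, ss2 = g2
    return s1 + s2, ss1 + ss2

def finalize(s, ss, n):
    mean = s / n
    return ss / n - mean * mean   # biased variance
```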
These optimizations may not improve perf for every kernel. But I think it is worth the effort for the following reasons
1. it simplifies the implementation quite a bit.
2. if the kernel is fused with surrounding kernels and there is register pressure, using less registers indeed helps perf.
cc @chauhang @penguinwu @jansel @Chillee @eellison @peterbell10 for comments
### Error logs
.
### Versions
. | triaged,oncall: pt2 | low | Critical |
2,743,593,120 | pytorch | c10::SmallVector unusable with gcc/g++ 12 and 13 with `-O3` | ### 🐛 Describe the bug
Recently, GitHub-hosted runners for `ubuntu-latest` are migrating from Ubuntu 22.04 to Ubuntu 24.04. As part of that migration, the default compiler has changed from gcc 11.4 to gcc 13.2. A lot of my CI workflows using libtorch are failing during this migration.
All errors look like the following
```
[build] /home/thu/micromamba/envs/neml2/lib/python3.12/site-packages/torch/include/c10/util/SmallVector.h:139:19: error: ‘net’ may be used uninitialized [-Werror=maybe-uninitialized]
[build] 139 | Base::grow_pod(getFirstEl(), MinSize, TSize);
[build] | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[build] /home/thu/micromamba/envs/neml2/lib/python3.12/site-packages/torch/include/c10/util/SmallVector.h: In function ‘neml2::TensorShape neml2::utils::add_shapes(S&& ...) [with S = {c10::SmallVector<long int, 6>, c10::ArrayRef<long int>&}]’:
[build] /home/thu/micromamba/envs/neml2/lib/python3.12/site-packages/torch/include/c10/util/SmallVector.h:73:8: note: by argument 2 of type ‘const void*’ to ‘void c10::SmallVectorBase<Size_T>::grow_pod(const void*, size_t, size_t) [with Size_T = unsigned int]’ declared here
[build] 73 | void grow_pod(const void* FirstEl, size_t MinSize, size_t TSize);
[build] | ^~~~~~~~
[build] /home/thu/projects/neml2/include/neml2/misc/utils.h:303:15: note: ‘net’ declared here
[build] 303 | TensorShape net;
[build] | ^~~
```
Not all builds fail, and after some digging, I can confirm that the following conditions are needed to reproduce this error:
1. libtorch that comes with pytorch==2.5.1
2. gcc/g++ 12.4/13.2/13.3
3. -O3
I have not checked newer versions of gcc/g++, and I have not checked previous versions of pytorch.
However, I can confirm that with `-O2` and `-O0` the code compiles. Also, with gcc/g++ 11.4, the code compiles with all optimization levels. This led me to believe that this is a compiler bug, not a fault on the pytorch side.
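For reference, here is a hypothetical minimal reconstruction (based only on the diagnostic above, not verified to reproduce the warning) of the kind of code involved: a function template that default-constructs a `c10::SmallVector` and appends each argument's elements before returning it.

```cpp
#include <cstdint>
#include <c10/util/SmallVector.h>

using TensorShape = c10::SmallVector<int64_t, 6>;

// Hypothetical reconstruction of the add_shapes pattern named in the diagnostic.
template <typename... S>
TensorShape add_shapes(S &&... shapes) {
  TensorShape net;  // gcc 12/13 at -O3 report "'net' may be used uninitialized" here
  (net.append(shapes.begin(), shapes.end()), ...);
  return net;
}
```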
### Versions
<details>
<summary>output</summary>
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.8 (++20240731025011+3b5b5c1ec4a3-1~exp1~20240731145104.143)
CMake version: version 3.30.5
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro M6000 24GB
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 2
Stepping: 1
CPU(s) scaling MHz: 44%
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4389.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 pti intel_ppin ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 10 MiB (40 instances)
L3 cache: 100 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-19
NUMA node1 CPU(s): 20-39
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch-tb-profiler==0.4.3
[pip3] triton==3.1.0
[conda] No relevant packages
```
</details>
cc @malfet @seemethere @jbschlosser | needs reproduction,module: build,module: cpp,triaged | low | Critical |
2,743,618,208 | vscode | Strip characters from `"onEnterRules"` string | currently only `brackets` are stripped before `"onEnterRules"` evaluates
> this could also lead to unintended side effects where a bracket was removed when it shouldn't have
> tho idk if there are any examples
it would be nice to manually define what characters via a set or regex
OR
an option to disable `"onEnterRules"` when inside a `string`/`comment`/`regex`/`other`
related https://github.com/microsoft/vscode/issues/209519#issuecomment-2198372691
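A purely hypothetical sketch of what either option could look like in a language configuration (neither setting exists today; the property names are made up):

```json
{
  "onEnterRules": [
    {
      // hypothetical: characters to strip before evaluation, replacing the hard-coded bracket stripping
      "stripCharacters": "[>|]",
      // hypothetical: contexts in which this rule should not apply
      "ignoreIn": ["string", "comment"],
      "beforeText": "([ \t]+|^)[|>][+-]?9",
      "action": { "indent": "none", "appendText": "         " }
    }
  ]
}
```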
Steps to Reproduce:
1. install pre-release [RedCMD.yaml-syntax](https://marketplace.visualstudio.com/items?itemName=RedCMD.yaml-syntax) extension
2. create YAML file with code snippet
```yaml
block-scalar:
bracket:
>9
pipe:
|9
string:
bracket
>9
pipe
|9
#comment
# bracket
# >9
# pipe
# |9
```

3. press enter on all lines ending with `9`
expected:
only the `block-scalar:` ones should indent with 9 spaces

actual:
all `|9` lines indent with 9 spaces

this is because `>` is marked as a bracket and is removed from the string before `"onEnterRules"` evaluates
however `|` is not removed, because I cannot define it as a bracket without side effects
`"previousLineText"` won't fix it because previous lines can be blank in all contexts
```json
{
  // indent 9 spaces after block-scalar with indentation-indicator
  "action": {
    "appendText": "         ",
    "indent": "none",
  },
  "afterText": "^(?![ \t]*#)",
  "beforeText": "([ \t]+|^)[|>][+-]?9"
}
```
cc @aiday-mar
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96.0
- OS Version: Windows 11
| feature-request,editor-autoindent | low | Minor |
2,743,625,402 | vscode | variable -> Rename symbol -> ctrl+enter (Preview); doesn't work, collides with Cell Execution shortcut | ### Applies To
- [x] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
When trying to Rename symbol (variable) in a cell, there is a CTRL+ENTER option for Preview.
This works in .py files,
but not in .ipynb files, since it obviously collides with the Cell Execution shortcut...
🤔
### VS Code Version
Version: 1.95.3
### Jupyter Extension Version
v2024.10.0
### Jupyter logs
```shell
```
### Coding Language and Runtime Version
_No response_
### Language Extension Version (if applicable)
_No response_
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
None | bug,notebook-commands | low | Minor |
2,743,628,632 | vscode | CTRL+A randomly breaks in Jupyter notebooks | Sometimes, while using a Python jupyter notebook, my CTRL+A will randomly stop working. I won't be able to select all text in a code block. I will be able to CTRL+A in the output and actually, sometimes, doing that AND pasting it a code block with then 'free' CTRL+A, but sometimes it won't and the only way out is to close my notebook and open it again which is really quite disruptive and annoying since I have to run everything again.
This bug started maybe 1-2 months ago now.
My build is:
```
Version: 1.90.0 (user setup)
Commit: 89de5a8d4d6205e5b11647eb6a74844ca23d2573
Date: 2024-06-04T19:33:54.889Z
Electron: 29.4.0
ElectronBuildId: 9593362
Chromium: 122.0.6261.156
Node.js: 20.9.0
V8: 12.2.281.27-electron.0
OS: Windows_NT x64 10.0.22621
```
My Jupyter version is `v2024.5.0`. | bug,notebook-commands | medium | Critical |
2,743,631,409 | ui | [bug]: Sidebar will not expand/collapse when using CSS variables. | ### Describe the bug
When using a vanilla NextJS 15.1 installation and initialising shadcn/ui with default values, including using CSS variables, the sidebar component does not expand or collapse. When you inspect the HTML using dev tools in the browser, you can see the appropriate `data-state` being updated between `expanded` and `collapsed`, but the sidebar itself does not slide in or out.
Interestingly, when using Tailwind utility classes, the sidebar behaves as expected and slides in and out on the click of the toggle.
### Affected component/components
sidebar
### How to reproduce
package.json:
```json
{
"name": "sapphire",
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "next dev --turbopack",
"build": "next build",
"start": "next start",
"lint": "next lint"
},
"dependencies": {
"@radix-ui/react-dialog": "^1.1.3",
"@radix-ui/react-separator": "^1.1.1",
"@radix-ui/react-slot": "^1.1.1",
"@radix-ui/react-tooltip": "^1.1.5",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"lucide-react": "^0.468.0",
"next": "15.1.0",
"react": "^19.0.0",
"react-dom": "^19.0.0",
"tailwind-merge": "^2.5.5",
"tailwindcss-animate": "^1.0.7"
},
"devDependencies": {
"@eslint/eslintrc": "^3",
"@types/node": "^20",
"@types/react": "^19",
"@types/react-dom": "^19",
"eslint": "^9",
"eslint-config-next": "15.1.0",
"postcss": "^8",
"tailwindcss": "^3.4.1",
"typescript": "^5"
}
}
```
components.json:
```json
{
"$schema": "https://ui.shadcn.com/schema.json",
"style": "new-york",
"rsc": true,
"tsx": true,
"tailwind": {
"config": "tailwind.config.ts",
"css": "src/app/globals.css",
"baseColor": "zinc",
"cssVariables": true,
"prefix": ""
},
"aliases": {
"components": "@/components",
"utils": "@/lib/utils",
"ui": "@/components/ui",
"lib": "@/lib",
"hooks": "@/hooks"
},
"iconLibrary": "lucide"
}
```
tailwind.config.ts:
```ts
import type { Config } from "tailwindcss";
import * as tailwindAnimate from "tailwindcss-animate";
export default {
darkMode: ["class"],
content: ["./src/app/**/*.{js,ts,jsx,tsx,mdx}"],
theme: {
extend: {
borderRadius: {
lg: 'var(--radius)',
md: 'calc(var(--radius) - 2px)',
sm: 'calc(var(--radius) - 4px)'
},
colors: {
background: 'hsl(var(--background))',
foreground: 'hsl(var(--foreground))',
card: {
DEFAULT: 'hsl(var(--card))',
foreground: 'hsl(var(--card-foreground))'
},
popover: {
DEFAULT: 'hsl(var(--popover))',
foreground: 'hsl(var(--popover-foreground))'
},
primary: {
DEFAULT: 'hsl(var(--primary))',
foreground: 'hsl(var(--primary-foreground))'
},
secondary: {
DEFAULT: 'hsl(var(--secondary))',
foreground: 'hsl(var(--secondary-foreground))'
},
muted: {
DEFAULT: 'hsl(var(--muted))',
foreground: 'hsl(var(--muted-foreground))'
},
accent: {
DEFAULT: 'hsl(var(--accent))',
foreground: 'hsl(var(--accent-foreground))'
},
destructive: {
DEFAULT: 'hsl(var(--destructive))',
foreground: 'hsl(var(--destructive-foreground))'
},
border: 'hsl(var(--border))',
input: 'hsl(var(--input))',
ring: 'hsl(var(--ring))',
chart: {
'1': 'hsl(var(--chart-1))',
'2': 'hsl(var(--chart-2))',
'3': 'hsl(var(--chart-3))',
'4': 'hsl(var(--chart-4))',
'5': 'hsl(var(--chart-5))'
},
sidebar: {
DEFAULT: 'hsl(var(--sidebar-background))',
foreground: 'hsl(var(--sidebar-foreground))',
primary: 'hsl(var(--sidebar-primary))',
'primary-foreground': 'hsl(var(--sidebar-primary-foreground))',
accent: 'hsl(var(--sidebar-accent))',
'accent-foreground': 'hsl(var(--sidebar-accent-foreground))',
border: 'hsl(var(--sidebar-border))',
ring: 'hsl(var(--sidebar-ring))'
}
}
}
},
plugins: [tailwindAnimate],
} satisfies Config;
```
layout.tsx:
```tsx
import type { Metadata } from "next";
import "./globals.css";
import { AppSidebar } from "@/components/app-sidebar";
import { SidebarProvider, SidebarTrigger } from "@/components/ui/sidebar";
export const metadata: Metadata = {
title: "Create Next App",
description: "Generated by create next app",
};
export default function RootLayout({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
return (
<html lang="en">
<body>
<SidebarProvider>
<AppSidebar />
<main>
<SidebarTrigger />
{children}
</main>
</SidebarProvider>
</body>
</html>
);
}
```
page.tsx:
```tsx
export default function Home() {
return (
<main>
<div>Hello world!</div>
</main>
);
}
```
globals.css:
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
:root {
--background: 0 0% 100%;
--foreground: 240 10% 3.9%;
--card: 0 0% 100%;
--card-foreground: 240 10% 3.9%;
--popover: 0 0% 100%;
--popover-foreground: 240 10% 3.9%;
--primary: 240 5.9% 10%;
--primary-foreground: 0 0% 98%;
--secondary: 240 4.8% 95.9%;
--secondary-foreground: 240 5.9% 10%;
--muted: 240 4.8% 95.9%;
--muted-foreground: 240 3.8% 46.1%;
--accent: 240 4.8% 95.9%;
--accent-foreground: 240 5.9% 10%;
--destructive: 0 84.2% 60.2%;
--destructive-foreground: 0 0% 98%;
--border: 240 5.9% 90%;
--input: 240 5.9% 90%;
--ring: 240 10% 3.9%;
--chart-1: 12 76% 61%;
--chart-2: 173 58% 39%;
--chart-3: 197 37% 24%;
--chart-4: 43 74% 66%;
--chart-5: 27 87% 67%;
--radius: 0.5rem;
--sidebar-background: 0 0% 98%;
--sidebar-foreground: 240 5.3% 26.1%;
--sidebar-primary: 240 5.9% 10%;
--sidebar-primary-foreground: 0 0% 98%;
--sidebar-accent: 240 4.8% 95.9%;
--sidebar-accent-foreground: 240 5.9% 10%;
--sidebar-border: 220 13% 91%;
--sidebar-ring: 217.2 91.2% 59.8%
}
.dark {
--background: 240 10% 3.9%;
--foreground: 0 0% 98%;
--card: 240 10% 3.9%;
--card-foreground: 0 0% 98%;
--popover: 240 10% 3.9%;
--popover-foreground: 0 0% 98%;
--primary: 0 0% 98%;
--primary-foreground: 240 5.9% 10%;
--secondary: 240 3.7% 15.9%;
--secondary-foreground: 0 0% 98%;
--muted: 240 3.7% 15.9%;
--muted-foreground: 240 5% 64.9%;
--accent: 240 3.7% 15.9%;
--accent-foreground: 0 0% 98%;
--destructive: 0 62.8% 30.6%;
--destructive-foreground: 0 0% 98%;
--border: 240 3.7% 15.9%;
--input: 240 3.7% 15.9%;
--ring: 240 4.9% 83.9%;
--chart-1: 220 70% 50%;
--chart-2: 160 60% 45%;
--chart-3: 30 80% 55%;
--chart-4: 280 65% 60%;
--chart-5: 340 75% 55%;
--sidebar-background: 240 5.9% 10%;
--sidebar-foreground: 240 4.8% 95.9%;
--sidebar-primary: 224.3 76.3% 48%;
--sidebar-primary-foreground: 0 0% 100%;
--sidebar-accent: 240 3.7% 15.9%;
--sidebar-accent-foreground: 240 4.8% 95.9%;
--sidebar-border: 240 3.7% 15.9%;
--sidebar-ring: 217.2 91.2% 59.8%
}
}
@layer base {
* {
@apply border-border;
}
body {
@apply bg-background text-foreground;
}
}
```
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Node v23.4.0
pnpm v9.15.0
MacOS 15.1.1
Safari Version 18.1.1 (20619.2.8.11.12)
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,743,647,171 | vscode | Command Center should shrink when not enough space | When Vscode is in English the menu bar contains the items "file", "edit", "selection", "view", "go", "run", "terminal" and "help".

When I change the display language to Spanish it shows like this:

Best regards | bug,titlebar,layout,command-center | low | Major |
2,743,659,368 | angular | ADEV: formatting of APIs with long argument types | ### Describe the problem that you experienced
Currently, when an API has some complex types, those types are displayed as a single line in the API docs, for example: https://angular.dev/api/core/afterNextRender. It'd be great to find a way to represent those types in a more readable form.
### Enter the URL of the topic with the problem
https://angular.dev/api/core/afterNextRender | area: dev-infra | low | Minor |
2,743,660,467 | ui | [bug]: duplication of toast manual installation | ### Describe the bug
All the code for manual installation in toast is duplicated



### Affected component/components
Toast
### How to reproduce
1. Go to the website https://ui.shadcn.com/docs/components/toast
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,743,664,466 | react-native | [iOS] PushNotification with DeepLinks doesn't work when the app is close | ### Description
When my iOS app receives a push notification while the app is closed, it only opens on the home screen and never navigates to the screen for the deep-link route; I always get url = null.
```
//this way is to detect universal deep link
Linking.getInitialURL()
  .then((url) => {
    console.log('DEEPLINK', url);
    if (url) {
      Linking.canOpenURL(url).then((supported) =>
        supported ? handleDeepLink(url) : null,
      );
    }
  })
  .catch((err) => {
    console.warn('An error occurred deepLink', err);
  });
```
The curious thing is that when the app is open or in the background it works perfectly. I have implemented the [documentation's setup](https://reactnative.dev/docs/linking#enabling-deep-links) with universal links.
I am currently working with React Native 0.73.11, and I updated to 0.74.6 and 0.75.1 because your [CHANGELOG](https://github.com/facebook/react-native/blob/main/CHANGELOG.md#ios-specific-18) mentions changes to Push Notifications on iOS, but the behavior is the same - NO FIX.
I have also searched online and haven't found any solution.
### Steps to reproduce
1. close the iOS app
2. send a push notification with deep links
3. the app should open in the correct screen but only open at home screen
### React Native Version
0.73.11
### Affected Platforms
Runtime - iOS, Other (please specify)
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
CPU: (8) arm64 Apple M2
Memory: 132.95 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.17.0
path: /usr/local/bin/node
Yarn:
version: 1.22.22
path: /usr/local/bin/yarn
npm:
version: 10.8.2
path: /usr/local/bin/npm
Watchman:
version: 2024.11.04.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK:
API Levels:
- "31"
- "34"
- "35"
Build Tools:
- 30.0.3
- 31.0.0
- 33.0.1
- 34.0.0
System Images:
- android-29 | Google Play ARM 64 v8a
- android-32 | Google APIs ARM 64 v8a
- android-32 | Google Play ARM 64 v8a
- android-TiramisuPrivacySandbox | Google Play ARM 64 v8a
- android-VanillaIceCream | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2411.12071903
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 3.3.6
path: /opt/homebrew/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.1
wanted: 0.75.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
not apply
```
### Reproducer
none
### Screenshots and Videos
none | Platform: iOS,API: Linking,Needs: Repro,Needs: Attention | low | Critical |
2,743,666,311 | yt-dlp | [SoundCloud] Some cover arts are downloaded in 100x100 resolution instead of original size | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Poland
### Provide a description that is worded well enough to be understood
Hi,
I am not entirely sure if it is a bug, but I’ve noticed that some SoundCloud cover arts ([sample song](https://soundcloud.com/shinpuru/miracle)) are downloaded in 100x100 dimensions, even though the original artwork is available in higher resolution.
To download the file, I use this command and obtain the following log:
```bash
❯ yt-dlp --add-metadata --parse-metadata "%(artists)l:%(meta_artist)s" --embed-thumbnail -o "%(artists)l - %(title)s.%(ext)s" https://soundcloud.com/shinpuru/miracle
[soundcloud] Extracting URL: https://soundcloud.com/shinpuru/miracle
[soundcloud] shinpuru/miracle: Downloading info JSON
[soundcloud] 1987169691: Downloading hls_aac format info JSON
[soundcloud] 1987169691: Downloading hls_mp3 format info JSON
[soundcloud] 1987169691: Downloading http_mp3 format info JSON
[soundcloud] 1987169691: Downloading hls_opus format info JSON
[MetadataParser] Parsed meta_artist from '%(artists)l': 'shinpuru'
[info] 1987169691: Downloading 1 format(s): hls_aac_160k
[info] Downloading video thumbnail 0 ...
[info] Writing video thumbnail 0 to: shinpuru - Miracle.png
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 11
[download] Destination: shinpuru - Miracle.m4a
[download] 100% of 2.16MiB in 00:00:00 at 5.86MiB/s
[FixupM4a] Correcting container of "shinpuru - Miracle.m4a"
[Metadata] Adding metadata to "shinpuru - Miracle.m4a"
[EmbedThumbnail] mutagen: Adding thumbnail to "shinpuru - Miracle.m4a"
```
The downloaded `shinpuru - Miracle.m4a` file has 100x100 cover art dimensions, while the [original art](https://i1.sndcdn.com/artworks-hh0yahMrXxlmwJKO-72s1hA-original.png) seems to be 500x500.
This is not an issue for some other songs, such as [this one](https://soundcloud.com/capturelight/one-second-per-second) being downloaded in the `.opus` format with 1999x1999 cover art.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [542166962] (pip)
[debug] Python 3.12.7 (CPython AMD64 64bit) - Windows-11-10.0.26100-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.1-essentials_build-www.gyan.dev (setts), ffprobe 7.1-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.17, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.47.0, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
```
| site-bug,patch-available | low | Critical |
2,743,680,020 | rust | error: compiler panic "could not compile `syn` (lib)" | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
Received a compiler error compiling `rsql` on Ubuntu 24.04; the output for the build can be found here: https://github.com/theseus-rs/rsql/actions/runs/12363542400/job/34505446462?pr=238
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
### Error output
```
thread 'rustc' panicked at compiler/rustc_symbol_mangling/src/v0.rs:244:17:
assertion `left == right` failed
left: std::option::Option<(lifetime::Lifetime, buffer::Cursor<'{erased}>)>
right: std::option::Option<(lifetime::Lifetime, buffer::Cursor<'{erased}>)>
stack backtrace:
0: rust_begin_unwind
1: core::panicking::panic_fmt
2: core::panicking::assert_failed_inner
3: core::panicking::assert_failed::<rustc_middle::ty::Ty, rustc_middle::ty::Ty>
4: <rustc_symbol_mangling::v0::SymbolMangler as rustc_middle::ty::print::Printer>::print_impl_path
5: <rustc_symbol_mangling::v0::SymbolMangler as rustc_middle::ty::print::Printer>::print_def_path
6: <rustc_symbol_mangling::v0::SymbolMangler as rustc_middle::ty::print::Printer>::print_def_path
7: rustc_symbol_mangling::v0::mangle
8: rustc_symbol_mangling::symbol_name_provider
[... omitted 1 frame ...]
9: rustc_monomorphize::partitioning::assert_symbols_are_distinct::<core::slice::iter::Iter<rustc_middle::mir::mono::MonoItem>>
10: rustc_monomorphize::partitioning::collect_and_partition_mono_items
[... omitted 2 frames ...]
11: rustc_codegen_ssa::back::symbol_export::exported_symbols_provider_local
[... omitted 2 frames ...]
12: <rustc_metadata::rmeta::encoder::EncodeContext>::encode_crate_root
13: rustc_metadata::rmeta::encoder::encode_metadata
14: rustc_metadata::fs::encode_and_write_metadata
15: <rustc_interface::queries::Linker>::codegen_and_build_linker
16: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.83.0 (90b35a623 2024-11-26) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type lib -C embed-bitcode=no -C instrument-coverage
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [symbol_name] computing the symbol for `<core::option::Option<(lifetime::Lifetime, buffer::Cursor<'_>)> as core::ops::try_trait::FromResidual<core::option::Option<core::convert::Infallible>>>::from_residual`
#1 [collect_and_partition_mono_items] collect_and_partition_mono_items
#2 [exported_symbols] collecting exported symbols for crate `0`
end of query stack
error: could not compile `syn` (lib)
Caused by:
process didn't exit successfully: `/home/bheineman/.rustup/toolchains/1.83.0-x86_64-unknown-linux-gnu/bin/rustc --crate-name syn --edition=[201](https://github.com/theseus-rs/rsql/actions/runs/12363542400/job/34505446462?pr=238#step:7:202)8 /home/bheineman/.cargo/registry/src/index.crates.io-6f17d22bba15001f/syn-1.0.109/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="extra-traits"' --cfg 'feature="full"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="quote"' --cfg 'feature="visit-mut"' --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values("clone-impls", "default", "derive", "extra-traits", "fold", "full", "parsing", "printing", "proc-macro", "quote", "test", "visit", "visit-mut"))' -C metadata=51f7b7ab389b85a5 -C extra-filename=-51f7b7ab389b85a5 --out-dir /home/bheineman/development/actions-runner/_work/rsql/rsql/target/llvm-cov-target/debug/deps -L dependency=/home/bheineman/development/actions-runner/_work/rsql/rsql/target/llvm-cov-target/debug/deps --extern proc_macro2=/home/bheineman/development/actions-runner/_work/rsql/rsql/target/llvm-cov-target/debug/deps/libproc_macro2-2f42fe115b49d460.rmeta --extern quote=/home/bheineman/development/actions-runner/_work/rsql/rsql/target/llvm-cov-target/debug/deps/libquote-3f4b264c9933d899.rmeta --extern unicode_ident=/home/bheineman/development/actions-runner/_work/rsql/rsql/target/llvm-cov-target/debug/deps/libunicode_ident-fd3aa6049a7e9b09.rmeta --cap-lints allow -C instrument-coverage --cfg=coverage --cfg=trybuild_no_target --cfg syn_disable_nightly_tests` (exit status: 101)
warning: build failed, waiting for other jobs to finish...
error: process didn't exit successfully: `/home/bheineman/.rustup/toolchains/1.83.0-x86_64-unknown-linux-gnu/bin/cargo test --tests --manifest-path /home/bheineman/development/actions-runner/_work/rsql/rsql/Cargo.toml --target-dir /home/bheineman/development/actions-runner/_work/rsql/rsql/target/llvm-cov-target --workspace --jobs 2` (exit status: 101)
Error: Process completed with exit code 1.
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
stack backtrace:
0: rust_begin_unwind
1: core::panicking::panic_fmt
2: core::panicking::assert_failed_inner
3: core::panicking::assert_failed::<rustc_middle::ty::Ty, rustc_middle::ty::Ty>
4: <rustc_symbol_mangling::v0::SymbolMangler as rustc_middle::ty::print::Printer>::print_impl_path
5: <rustc_symbol_mangling::v0::SymbolMangler as rustc_middle::ty::print::Printer>::print_def_path
6: <rustc_symbol_mangling::v0::SymbolMangler as rustc_middle::ty::print::Printer>::print_def_path
7: rustc_symbol_mangling::v0::mangle
8: rustc_symbol_mangling::symbol_name_provider
[... omitted 1 frame ...]
9: rustc_monomorphize::partitioning::assert_symbols_are_distinct::<core::slice::iter::Iter<rustc_middle::mir::mono::MonoItem>>
10: rustc_monomorphize::partitioning::collect_and_partition_mono_items
[... omitted 2 frames ...]
11: rustc_codegen_ssa::back::symbol_export::exported_symbols_provider_local
[... omitted 2 frames ...]
12: <rustc_metadata::rmeta::encoder::EncodeContext>::encode_crate_root
13: rustc_metadata::rmeta::encoder::encode_metadata
14: rustc_metadata::fs::encode_and_write_metadata
15: <rustc_interface::queries::Linker>::codegen_and_build_linker
16: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
</p>
</details>
| I-ICE,T-compiler,C-bug,S-needs-repro | low | Critical |
2,743,692,646 | deno | Deno Test Internal Data Access | I have a suite of tests that I am interested in running every 24H, but not earlier than that (unless the last run failed).
The reason for that is that they are expensive (both computationally and monetarily as they access paying services).
My current strategy is hacky (a rough sketch follows the list below):
1. On the last test of the suite, store the date on `localStorage` with `import.meta.filename` as the key
2. At the beginning of the test file, check the date and, if less than 24H have passed, set a const `ignore` to `true`
3. Add `{ ignore: ignore }` to all tests so that they are ignored if 24H have not gone by
4. Remember to add `--fail-fast` (so that the date is NOT stored on failure)
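For reference, a minimal sketch of that hack (assuming Deno's built-in `localStorage` and the options form of `Deno.test`; the test names are illustrative):

```ts
const KEY = `last-pass:${import.meta.filename}`;
const DAY_MS = 24 * 60 * 60 * 1000;
const lastPass = Number(localStorage.getItem(KEY) ?? 0);
const ignore = Date.now() - lastPass < DAY_MS;

Deno.test({ name: "expensive check 1", ignore, fn: async () => { /* ... */ } });
Deno.test({ name: "expensive check 2", ignore, fn: async () => { /* ... */ } });

// Must remain the last test, and the suite must run with --fail-fast, so that
// this only records a "pass" when everything before it succeeded.
Deno.test({
  name: "record last successful run",
  ignore,
  fn: () => {
    localStorage.setItem(KEY, String(Date.now()));
  },
});
```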
As you can see, many things can go wrong with this approach.
I know that I could set a scheduler to run it every 24H or something like that, but I do not want to reach for "cron" oriented solutions.
My ask is to have access to the current suite of tests. How many have passed? How many have failed? Having read-only access to this info in user land could enable me to not depend on `--fail-fast`. And it would be nice to have a `beforeAll` `afterAll` without the additional structure of building steps.
Cheers.
| suggestion,testing | low | Critical |
2,743,772,809 | rust | Large files containing many tokens of `const` data compile very slowly and use a lot of memory (in MIR_borrow_checking and expand_crate) | [ICU4X](https://github.com/unicode-org/icu4x) has a concept of "baked data", a way of "baking" locale data into the source of a program in the form of consts. This has a bunch of performance benefits: loading data from the binary is essentially free and doesn't involve any sort of deserialization.
However, we have been facing issues with cases where a single crate contains a lot of data.
I have a minimal testcase here: https://github.com/Manishearth/icu4x_compile_sample. It removes most of the cruft whilst still having an interesting-enough AST in the const data. `cargo build` in the `demo` folder takes 51s, using almost a gigabyte of RAM. [Removing the macro](https://github.com/Manishearth/icu4x_compile_sample/tree/rm-macro) does improve things slightly, but it is still quite slow.
Some interesting snippets of `time-passes`:
```
...
time: 1.194; rss: 52MB -> 595MB ( +543MB) expand_crate
time: 1.194; rss: 52MB -> 595MB ( +543MB) macro_expand_crate
...
time: 3.720; rss: 682MB -> 837MB ( +155MB) type_check_crate
...
time: 55.505; rss: 837MB -> 1058MB ( +221MB) MIR_borrow_checking
...
time: 0.124; rss: 1080MB -> 624MB ( -456MB) free_global_ctxt
```
<details><summary>Full time-passes </summary>
```
time: 0.001; rss: 47MB -> 49MB ( +1MB) parse_crate
time: 0.001; rss: 50MB -> 50MB ( +0MB) incr_comp_prepare_session_directory
time: 0.000; rss: 50MB -> 51MB ( +1MB) setup_global_ctxt
time: 0.000; rss: 52MB -> 52MB ( +0MB) crate_injection
time: 1.194; rss: 52MB -> 595MB ( +543MB) expand_crate
time: 1.194; rss: 52MB -> 595MB ( +543MB) macro_expand_crate
time: 0.013; rss: 595MB -> 595MB ( +0MB) AST_validation
time: 0.008; rss: 595MB -> 597MB ( +1MB) finalize_macro_resolutions
time: 0.285; rss: 597MB -> 642MB ( +45MB) late_resolve_crate
time: 0.012; rss: 642MB -> 642MB ( +0MB) resolve_check_unused
time: 0.020; rss: 642MB -> 642MB ( +0MB) resolve_postprocess
time: 0.326; rss: 595MB -> 642MB ( +46MB) resolve_crate
time: 0.011; rss: 610MB -> 610MB ( +0MB) write_dep_info
time: 0.011; rss: 610MB -> 611MB ( +0MB) complete_gated_feature_checking
time: 0.058; rss: 765MB -> 729MB ( -35MB) drop_ast
time: 1.213; rss: 610MB -> 681MB ( +71MB) looking_for_derive_registrar
time: 1.421; rss: 610MB -> 682MB ( +72MB) misc_checking_1
time: 0.086; rss: 682MB -> 690MB ( +8MB) coherence_checking
time: 3.720; rss: 682MB -> 837MB ( +155MB) type_check_crate
time: 0.000; rss: 837MB -> 837MB ( +0MB) MIR_coroutine_by_move_body
time: 55.505; rss: 837MB -> 1058MB ( +221MB) MIR_borrow_checking
time: 1.571; rss: 1058MB -> 1068MB ( +10MB) MIR_effect_checking
time: 0.217; rss: 1068MB -> 1067MB ( -1MB) module_lints
time: 0.217; rss: 1068MB -> 1067MB ( -1MB) lint_checking
time: 0.311; rss: 1067MB -> 1068MB ( +0MB) privacy_checking_modules
time: 0.607; rss: 1068MB -> 1068MB ( +0MB) misc_checking_3
time: 0.000; rss: 1136MB -> 1137MB ( +1MB) monomorphization_collector_graph_walk
time: 0.778; rss: 1068MB -> 1064MB ( -4MB) generate_crate_metadata
time: 0.005; rss: 1064MB -> 1085MB ( +22MB) codegen_to_LLVM_IR
time: 0.007; rss: 1076MB -> 1085MB ( +10MB) LLVM_passes
time: 0.014; rss: 1064MB -> 1085MB ( +22MB) codegen_crate
time: 0.257; rss: 1084MB -> 1080MB ( -4MB) encode_query_results
time: 0.270; rss: 1084MB -> 1080MB ( -4MB) incr_comp_serialize_result_cache
time: 0.270; rss: 1084MB -> 1080MB ( -4MB) incr_comp_persist_result_cache
time: 0.271; rss: 1084MB -> 1080MB ( -4MB) serialize_dep_graph
time: 0.124; rss: 1080MB -> 624MB ( -456MB) free_global_ctxt
time: 0.000; rss: 624MB -> 624MB ( +0MB) finish_ongoing_codegen
time: 0.127; rss: 624MB -> 653MB ( +29MB) link_rlib
time: 0.135; rss: 624MB -> 653MB ( +29MB) link_binary
time: 0.138; rss: 624MB -> 618MB ( -6MB) link_crate
time: 0.139; rss: 624MB -> 618MB ( -6MB) link
time: 65.803; rss: 32MB -> 187MB ( +155MB) total
```
</details>
Even [without the intermediate macro](https://github.com/Manishearth/icu4x_compile_sample/tree/rm-macro), `expand_crate` still increases RAM significantly, though the increase is halved:
```
time: 0.715; rss: 52MB -> 254MB ( +201MB) expand_crate
time: 0.715; rss: 52MB -> 254MB ( +201MB) macro_expand_crate
```
I understand that to some extent, we are simply feeding Rust a file that is megabytes in size and we cannot expect it to be _too_ fast. Still, it's interesting that MIR borrow checking is slowed down so much by this (there is relatively little to actually borrow check; I suspect MIR construction is happening under that pass as well). The memory usage is also somewhat concerning: the problematic source file is 7MB, yet compiling it takes around a gigabyte of RAM. Paired with the fact that we have [many such data files per crate](https://github.com/unicode-org/icu4x/tree/main/provider/data/experimental/data) (some of which are large), we end up hitting CI limits.
With the actual problem we were facing (https://github.com/unicode-org/icu4x/issues/5230#issuecomment-2344984844), our time-passes numbers were:
```
...
time: 1.013; rss: 51MB -> 1182MB (+1130MB) expand_crate
time: 1.013; rss: 51MB -> 1182MB (+1131MB) macro_expand_crate
...
time: 6.609; rss: 1308MB -> 1437MB ( +128MB) type_check_crate
time: 36.802; rss: 1437MB -> 2248MB ( +811MB) MIR_borrow_checking
time: 2.214; rss: 2248MB -> 2270MB ( +22MB) MIR_effect_checking
...
```
I'm hoping there is at least some low hanging fruit that can be improved here, or advice on how to avoid this problem. So far we've managed to stay within CI limits by reducing the number of tokens, converting stuff like `icu::experimental::dimension::provider::units::UnitsDisplayNameV1 { patterns: icu::experimental::relativetime::provider::PluralPatterns { strings: icu::plurals::provider::PluralElementsPackedCow { elements: alloc::borrow::Cow::Borrowed(unsafe { icu::plurals::provider::PluralElementsPackedULE::from_byte_slice_unchecked(b"\0\x01 acre") }) }, _phantom: core::marker::PhantomData } },` into `icu::experimental::dimension::provider::units::UnitsDisplayNameV1::new_baked(b"\0\x01 acre")`. This works to some extent but the problems remain in the same order of magnitude and can recur as we add more data. | I-slow,T-compiler,I-compilemem | medium | Major |
2,743,792,299 | pytorch | [ROS Jazzy + WSL2] No module named 'torch' when launching a ROS node with virtualenv | Hello, I am currently working on a ROS Jazzy project inside WSL2 (Ubuntu). I am using a Python virtual environment to install libraries like torch. However, when I launch my node using roslaunch, I encounter this error:
[ERROR] [launch]: Caught exception in launch (see debug for traceback): No module named 'torch'
---
System setup:
OS: WSL2 (Ubuntu 22.04)
ROS version: ROS 2 Jazzy
Python version: Python 3.10
Virtual Environment: Created with venv
Installed libraries:
torch 2.5.1
torchvision 0.20.1
torchaudio 2.5.1
---
Steps I followed:
1. Created a virtual environment:
python3 -m venv ~/.venv
source ~/.venv/bin/activate
pip install torch torchvision torchaudio
2. Edited the shebang line in my Python script to point to the virtual environment:
#!/home/nora/.venv/bin/python3
3. Tested torch manually inside the virtual environment, and it works fine:
python3 -c "import torch; print(torch.__version__)"
4. However, when I use roslaunch to run the node, the error appears:
No module named 'torch'
---
What I have tried so far:
1. Verified that environment variables like PYTHONPATH and PATH point to the virtual environment.
2. Added the virtual environment path to PYTHONPATH inside the launch file:
<env name="PYTHONPATH" value="/home/nora/.venv/lib/python3.10/site-packages"/>
3. Checked the default Python interpreter used by ROS using which python3.
---
Question:
How can I ensure that ROS uses the virtual environment’s Python interpreter and libraries (like torch) when launching nodes?
Is there a standard way to make roslaunch work with virtual environments?
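For reference, this is roughly the shape of the launch file I am trying to get working, expressed with the ROS 2 Python launch API. This is only a sketch: `my_package` and `my_node` are placeholder names rather than my real package, and I am assuming the standard `launch` / `launch_ros` actions here.
```python
import os

from launch import LaunchDescription
from launch.actions import SetEnvironmentVariable
from launch_ros.actions import Node

# Venv site-packages path from my setup (see the PYTHONPATH attempt above)
VENV_SITE_PACKAGES = '/home/nora/.venv/lib/python3.10/site-packages'


def generate_launch_description():
    return LaunchDescription([
        # Prepend the venv's site-packages so the node process can import torch
        SetEnvironmentVariable(
            name='PYTHONPATH',
            value=VENV_SITE_PACKAGES + os.pathsep + os.environ.get('PYTHONPATH', ''),
        ),
        Node(
            package='my_package',    # placeholder package name
            executable='my_node',    # placeholder executable name
            output='screen',
        ),
    ])
```
Even with something like this, it is not clear to me whether setting PYTHONPATH is enough, or whether the node has to be started with the venv's interpreter itself (e.g. via the shebang I edited above).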
---
Tags:
#ros2
#python
#virtualenv
#wsl2
#torch
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | module: windows,triaged,module: wsl | low | Critical |
2,743,798,008 | youtube-dl | Seems like video.ibm.com is still broken |
## Checklist
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running youtube-dl version **2021.12.17**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
```
$ youtube-dl --verbose https://video.ibm.com/recorded/134138822
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--verbose', 'https://video.ibm.com/recorded/134138822']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] youtube-dl version 2024.12.17 [d55d1f423] (single file build)
[debug] ** This version was built from the latest master code at https://github.com/ytdl-org/youtube-dl.
[debug] ** For support, visit the main site.
[debug] Python 3.11.2 (CPython x86_64 64bit) - Linux-6.1.0-26-amd64-x86_64-with-glibc2.36 - OpenSSL 3.0.15 3 Sep 2024 - glibc 2.36
[debug] exe versions: ffmpeg 5.1.6-0, ffprobe 5.1.6-0
[debug] Proxy map: {}
[ustream] 134138822: Downloading JSON metadata
[ustream] 134138822: Downloading connection info
[ustream] 134138822: Downloading stream info
[ustream] 134138822: Downloading connection info (try 2)
[ustream] 134138822: Downloading stream info (try 2)
[ustream] 134138822: Downloading connection info (try 3)
[ustream] 134138822: Downloading stream info (try 3)
ERROR: No video formats found; please report this issue on https://github.com/ytdl-org/youtube-dl/issues , using the appropriate issue template. Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose option and include the complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 879, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 975, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 571, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/ustream.py", line 220, in _real_extract
self._sort_formats(formats)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 1558, in _sort_formats
raise ExtractorError('No video formats found')
youtube_dl.utils.ExtractorError: No video formats found; please report this issue on https://github.com/ytdl-org/youtube-dl/issues , using the appropriate issue template. Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose option and include the complete output.
```
## Description
While there are some old issue reports about UStream, the UStream brand appears to be completely gone now and has been rebranded to video.ibm.com. I found an old issue about video.ibm.com, but it contains an invalid link that doesn't work today. Honestly, I think all the old UStream issues should probably be closed, since they are very stale now.
This appears to be the same issue as #32088, but due to the age of that issue it appears IBM has changed their URL format since then. So here is a fresh report with a fresh log and a fresh video URL that is alive today and available for testing.
Please let me know if there are any further details I can provide.
Thanks! | broken-IE | low | Critical |