id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,713,264,489 | storybook | [Bug] Preview API hooks aren't behaving as expected in portable stories | Apart from `useArgs` not working due to Storybook channel limitations (#29773), other hooks from `@storybook/preview-api` also don't behave as expected, such as `useState` and `useEffect`.
💡 Solutions/Action items:
- Research how feasible it is to implement support for this
- Document this limitation
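If we document the limitation, the docs could also suggest a fallback for stories like the one below: use the renderer's own hooks instead of the `@storybook/preview-api` ones. This is a sketch of that idea (it assumes plain React state is acceptable for the story, and it gives up two-way args syncing with the controls panel):
```tsx
import { useState } from 'react'

export const Default = {
  render: function Render() {
    // React's own hooks run fine when the story is rendered as a component,
    // unlike the preview-api hooks discussed in this issue.
    const [value, setValue] = useState(true)
    return (
      <button onClick={() => setValue((v) => !v)}>
        The value is {JSON.stringify(value)}
      </button>
    )
  },
}
```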
---
Currently, portable stories are not compatible with Storybook hooks from `@storybook/preview-api`. This makes it hard for users who have stories like so to use the Storybook Test feature:
```tsx
import { useArgs } from '@storybook/preview-api'
import { expect, userEvent } from '@storybook/test'
export const Default = {
args: {
value: true,
},
render: function Render() {
const [{ value }, updateArgs] = useArgs()
const onValueChange = () => updateArgs({ value: !value })
return <button onClick={onValueChange}>The value is {JSON.stringify(value)}</button>
},
play: async ({ canvas }) => {
const button = await canvas.findByText('The value is true')
await expect(button).toBeInTheDocument()
await userEvent.click(button)
await expect(await canvas.findByText('The value is false')).toBeInTheDocument()
}
}
``` | bug,sev:S3,portable stories | low | Critical |
2,713,272,831 | storybook | [Bug] Vitest plugin does not detect custom vite config paths | Storybook, by default, loads a `vite.config` file in the project root. The same goes for Vitest. However, users can customize the vite config path like so:
```tsx
// .storybook/main.js
export default {
core: {
builder: {
name: '@storybook/builder-vite',
options: {
viteConfigPath: '.storybook/vite.config.ts',
},
},
},
};
```
Vitest **will not** know about this unless users manually extend that Vite config file in their Vitest config like so:
```tsx
// vitest.config.ts
import { defineWorkspace } from 'vitest/config'
export default defineWorkspace([
{ extends: '.storybook/vite.config.ts', plugins: [ storybookTest() ] ... }
])
```
Some people have a vite config file for their app and another for Storybook. The `sb add` command won't detect a vite config file in the Storybook dir (though we could), so it can end up referring to the wrong config file when setting things up.
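A configDir-first lookup could be sketched roughly like this (the function name and candidate file list are hypothetical, not an existing API):

```typescript
import { existsSync } from 'node:fs';
import { join } from 'node:path';

// Hypothetical resolver: prefer a Vite config living next to the Storybook
// config (configDir) before falling back to the project root.
function resolveViteConfig(configDir: string, projectRoot: string): string | undefined {
  const names = ['vite.config.ts', 'vite.config.mts', 'vite.config.js', 'vite.config.mjs'];
  for (const dir of [configDir, projectRoot]) {
    for (const name of names) {
      const candidate = join(dir, name);
      if (existsSync(candidate)) return candidate;
    }
  }
  return undefined;
}
```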
💡 Solutions/Action items:
- We can start checking the vite config file from the storybook configDir first
- We should make this clear in our docs, if it isn't already | bug,sev:S4,portable stories | low | Critical |
2,713,279,678 | storybook | [Bug] manual setup required for addon annotations in portable stories | *This is an issue that has existed forever in portable stories (and will likely be fixed by CSF4).*
To make it simple:
Storybook composes annotations that might come explicitly from a user's files, e.g. `.storybook/preview`, or implicitly from addons registered in main.js. If an addon has entrypoints such as `/preview`, these annotations are added automatically by Storybook:
```tsx
// my-addon/preview
export default {
decorators: [withRouter],
};
```
Additionally, some addons/frameworks can have annotations inside of a `/preset` entrypoint like so:
```tsx
// my-framework/preset
export default {
previewAnnotations: (entries = []) => [...entries, ...yourEntry],
};
```
That works because Storybook has an async node layer that starts up first, evaluates main.js, and then provides that info to the browser.
Previously in portable stories (let's call it raw portable stories), you would go straight from running Vitest → portable stories in a test file. That means there is no in-between async node layer that would allow portable stories to figure out things such as these implicit annotations. That's why we ask users to write a setup file, e.g.
```tsx
// vitest.setup.ts
import { beforeAll } from 'vitest';
import { setProjectAnnotations } from '@storybook/react';
import * as projectAnnotations from './preview';
const project = setProjectAnnotations([projectAnnotations]);
beforeAll(project.beforeAll);
```
The problem with that is that, in order for stories to include the annotations from addons, the user has to know that those addons ship preset annotations, and then manually add them in the setup file:
```tsx
// .storybook/vitest.setup.ts
import { setProjectAnnotations } from '@storybook/react';
import * as projectAnnotations from './preview';
import * as actionsAnnotations from '@storybook/addon-actions/preview';
const project = setProjectAnnotations([projectAnnotations, actionsAnnotations]);
```
This becomes a bigger problem when we start implementing solutions that solely rely on these annotations, like the accessibility reporting API. If users don't have addon-a11y annotations in their setup file, there is no way to report anything.
As I mentioned, “raw portable stories” don't have a middle layer. However, with the Storybook test plugin we now do have such a middle layer, which can execute async code and evaluate main.js and its presets. So theoretically we could solve this issue in an automated fashion, and in fact a very early version of the Storybook vitest plugin [used to](https://github.com/storybookjs/vitest-plugin/commit/067b6076a681cd8dfd50ee925b8a0d2f0c76b1c8#diff-563b131f99dc0a01a333cef61fa317c5a61a5e6bf9e617b9016ae1cec6207976R30) create the setup file automatically (though without taking these annotations into account). That was later removed in favor of letting users handle it themselves, so they keep the power to choose which annotations they want.
💡 Solutions/Action items:
- Revisit automating preset handling as part of the Storybook test plugin.
- Acknowledge that this will forever be a problem in raw portable stories.
- Implement CSF4, which would remove the need for any setup file altogether.
- However, even with CSF4, annotations from the `previewAnnotations` property of `/preset` entrypoints won't be taken into account. Hopefully this won't be that common or needed. | bug,sev:S3,portable stories | low | Critical |
2,713,281,168 | storybook | [Bug] framework and addon preview annotations not automatically applied in portable stories | Apart from setting annotations in a setup file by importing them, users, addons, or frameworks can also programmatically add preview annotations via the `previewAnnotations` preset. Here's an example of what a user can do (though I think it's not so common):
```tsx
// .storybook/main.js
export default {
previewAnnotations: [
"./src/stories/components",
"./template-stories/core/preview.ts",
],
}
```
And here's an example taken from our React renderer:
```tsx
// Example taken from @storybook/react/preset
export const previewAnnotations: PresetProperty<'previewAnnotations'> = async (
input = [],
options
) => {
const docsConfig = await options.presets.apply('docs', {}, options);
const features = await options.presets.apply('features', {}, options);
const docsEnabled = Object.keys(docsConfig).length > 0;
const result: string[] = [];
return result
.concat(input)
.concat([join(__dirname, 'entry-preview.mjs')])
.concat(docsEnabled ? [join(__dirname, 'entry-preview-docs.mjs')] : [])
.concat(features?.experimentalRSC ? [join(__dirname, 'entry-preview-rsc.mjs')] : []);
};
```
The React renderer example is interesting because the renderer has a portable stories entrypoint that already adds the necessary annotations (and excludes the docs ones, which are not needed), so the problem is solved when users call `setProjectAnnotations` from `@storybook/react`. However, this pattern would need to be documented so that community framework authors can do the same thing.
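That documentation could include a sketch along these lines (the entrypoint path, file names, and export shape are assumptions, not an existing convention):
```tsx
// my-framework/portable — hypothetical entrypoint. Re-export the annotations
// that the `previewAnnotations` preset would otherwise inject, so portable
// stories users can pass them to setProjectAnnotations themselves:
export * from './entry-preview';

// In the user's vitest.setup.ts:
// import * as frameworkAnnotations from 'my-framework/portable';
// setProjectAnnotations([frameworkAnnotations, projectAnnotations]);
```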
💡 Solutions/Action items:
- Do nothing, acknowledge this limitation
- Document this properly. Users need to solve it by hand, framework authors need to provide the annotations (solved by CSF4 if the defineConfig function comes from frameworks)
- Take this into account as part of a “preset handling” solution | bug,sev:S3,portable stories | low | Critical |
2,713,281,296 | flutter | InteractiveViewer jumps to incorrect transform on mouse scroll during pan | ### Steps to reproduce
- Run the code sample on Windows or Web while using a mouse.
- Pan the InteractiveViewer grid with left or right mouse button.
- While the pan is in motion, place the mouse cursor on the blue square and use the mouse scroll wheel to scroll in or out.
- The zoom will NOT be focused on the current mouse location; rather, the gesture will be centered on the point where the pan was initiated.
### Expected results
When the scroll wheel is used to zoom in or out, I would expect the InteractiveViewer to zoom in or out on the current location of the mouse cursor. In the sample code, any time I try to zoom in or out while the cursor is on the blue square, I would expect the blue square to remain on screen and under my mouse cursor.
### Actual results
If there is momentum from a pan gesture while scroll wheel is used to zoom in or out on my target (the blue square), the results will be unpredictable, and my desired zoom target (the blue square) will go offscreen.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key});
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
late final TransformationController _controller = TransformationController(
Matrix4(
0.5, 0, 0, 0, //
0, 0.5, 0, 0, //
0, 0, 0.5, 0, //
-2000, -2000, 0, 1, //
),
);
@override
void dispose() {
_controller.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: InteractiveViewer(
transformationController: _controller,
constrained: false,
minScale: 0.01,
scaleFactor: 500,
child: Stack(
children: [
const MapGrid(
size: Size(
10000,
10000,
),
),
Positioned(
left: 4500,
top: 4500,
child: Container(
color: Colors.blue,
width: 1000,
height: 1000,
),
),
],
),
),
);
}
}
class MapGrid extends StatelessWidget {
const MapGrid({super.key, required this.size});
final Size size;
@override
Widget build(BuildContext context) {
final theme = Theme.of(context);
return CustomPaint(
painter: MapGridPainter(
color: theme.colorScheme.primary,
),
size: size,
);
}
}
class MapGridPainter extends CustomPainter {
final Color color;
final Paint _paint;
MapGridPainter({super.repaint, required this.color})
: _paint = Paint()
..color = color
..strokeWidth = 1;
@override
void paint(Canvas canvas, Size size) {
for (double x = 0; x < size.width; x += 100) {
canvas.drawLine(Offset(x, 0), Offset(x, size.height), _paint);
}
for (double y = 0; y < size.height; y += 100) {
canvas.drawLine(Offset(0, y), Offset(size.width, y), _paint);
}
}
@override
bool shouldRepaint(covariant CustomPainter oldDelegate) {
return true;
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4460], locale en-US)
• Flutter version 3.24.3 on channel stable at D:\GitHub\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (3 months ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\sheri\AppData\Local\Android\sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Professional 2022 17.9.2)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Professional
• Visual Studio Professional 2022 version 17.9.34622.214
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2024.2)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[√] VS Code (version 1.95.3)
• VS Code at C:\Users\sheri\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4460]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.69
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.70
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,platform-web,a: desktop,a: mouse,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27 | low | Major |
2,713,282,198 | storybook | [Bug] Storybook experimental features ignored in portable stories | In Storybook we normally use the `features` property in main.js to let users toggle certain experimental features like `experimentalRSC`; however, those are not taken into account in portable stories:
```tsx
// .storybook/main.js
export default {
features: {
experimentalRSC: true,
},
}
```
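One way to make these flags visible to portable stories would be to generate a virtual setup module that injects them globally (the `FEATURES` global name and its shape here are assumptions):

```javascript
// Sketch of a generated (virtual) setup module: the plugin would parse
// `features` from main.js at config time and inline the result, so the
// browser-less test run sees the same flags the Storybook runtime would.
globalThis.FEATURES = {
  experimentalRSC: true,
};
```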
💡 Solutions/Action items:
- We can add support for these by parsing that property in main.js and injecting it into a `FEATURES` object on globalThis during a test run (in a virtual setup file or the like)
- We can discourage use of the `features` property whenever possible and use parameters instead. The `experimentalRSC` flag was actually moved to `parameters.react.rsc`, and that is automatically handled in portable stories | bug,sev:S3,portable stories,rsc | low | Critical |
2,713,395,380 | react | Bug: useId not stable on hydration with mid-render state update (React 18) | React version: 18.3.1, 18.0.0
Does not occur in React 19, but we can't use React 19.
## Summary
I am fully expecting this to be programmer error, but this behavior deeply puzzles me and I can't find any documented reason for it. If this isn't a bug in React, I would definitely consider it a bug in React documentation :)
As far as I can tell, this code follows all of the rules of hooks and documented invariants:
```jsx
function App() {
const _id = useId() // If commented out, no warning.
const [prevValue, setPrevValue] = useState(false)
if (prevValue === false) setPrevValue(true) // If commented out, no warning.
return <Inner />
}
// If this component's body is copied into `App`, no warning.
function Inner() {
const id = useId() // <---- NOT STABLE!
return <div id={id} />
}
```
However, it fails with a hydration mismatch error:
```
client.js:2427 Warning: Prop `id` did not match. Server: ":R6:" Client: ":R2:" Error Component Stack
at div (<anonymous>)
at Inner (client.js:23592:40)
at App (client.js:23596:41)
at body (<anonymous>)
at html (<anonymous>)
at Html (<anonymous>)
```
## Steps To Reproduce
I created a GitHub repo with a minimal reproduction. Steps to run are in the README: https://github.com/kognise/react-reproduction
Alternately, you should be able to see the same behavior by copying the top code sample into any server-rendered (and hydrated) React app.
## The current behavior
React prints the warning ``Prop `id` did not match. Server: ":R6:" Client: ":R2:"``.
## The expected behavior
I would expect this to just work with no issues.
Modifying the snippet as commented makes the warning go away.
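Given the observation above that copying `Inner`'s body into `App` avoids the warning, one hedged workaround sketch is to call `useId` in `App` and pass the id down. This is an inference from the repro, not a documented fix:
```jsx
function App() {
  const _id = useId()
  const [prevValue, setPrevValue] = useState(false)
  if (prevValue === false) setPrevValue(true)
  const id = useId() // calling useId here appears stable, per the notes above
  return <Inner id={id} />
}

function Inner({ id }) {
  return <div id={id} />
}
```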
Thanks! | Status: Unconfirmed | low | Critical |
2,713,456,044 | pytorch | Test failures when migrating to SM89 | ### 🐛 Describe the bug
See https://github.com/pytorch/pytorch/pull/140305
For some reasons, while running tests on L4 some tests started to fail, while other require increased tolerances
### Versions
CI
cc @ptrblck @msaroufim @eqy @mruberry @ZainRizvi | module: cuda,module: tests,triaged | low | Critical |
2,713,459,783 | terminal | Trigger Redraw Crash | ### Windows Terminal version
1.21.3231.0
### Windows build number
10.0.26335.0
### Other Software
_No response_
### Steps to reproduce
Unsure 😅
### Expected Behavior
To not crash
### Actual Behavior
`Access Violation` exception, see Watson Bucket 20f45815-12a1-7b5a-eadf-5a6297346f30 | Area-Rendering,Issue-Bug,Severity-Crash,Product-Terminal | low | Critical |
2,713,464,472 | vscode | Win+R code ENTER opens irritating conhost.exe console window | If you run code from the Explorer "Run" dialog (press Win+R), two windows open: the `code` editor and a `conhost.exe` console.
conhost takes precious space in the taskbar and occupies an extra slot in Alt+Tab.
`Win+R code Enter` is a fast way to open an editor.
Workaround: instead of running `c:\Users\user\AppData\Local\Programs\Microsoft VS Code\bin\code.cmd`, one can use `c:\Users\user\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Visual Studio Code\Visual Studio Code.lnk` by typing `Win c o d e Enter` - basically launching from the Start menu.
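Another workaround with the same effect (the path assumes a default per-user install): point Win+R at the GUI executable itself, which skips `bin\code.cmd` and therefore the console host:
```
%LocalAppData%\Programs\Microsoft VS Code\Code.exe
```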
Related bugs:
* https://github.com/microsoft/vscode/issues/66750
I don't understand why the Code installer puts `c:\Users\user\AppData\Local\Programs\Microsoft VS Code\bin` on `PATH` but not the parent directory... Maybe to avoid polluting `PATH` with DLLs...
I wish I didn't see an irritating extra `conhost` window when I type `Win+R c o d e Enter` | bug,windows,workbench-os-integration,confirmed | low | Critical |
2,713,496,392 | go | crypto/tls: TestHandshakeClientECDHEECDSAAESGCM/TLSv12 failures | ```
#!watchflakes
default <- pkg == "crypto/tls" && test == "TestHandshakeClientECDHEECDSAAESGCM/TLSv12"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8729631309123220881)):
=== RUN TestHandshakeClientECDHEECDSAAESGCM/TLSv12
=== PAUSE TestHandshakeClientECDHEECDSAAESGCM/TLSv12
=== CONT TestHandshakeClientECDHEECDSAAESGCM/TLSv12
panic: test timed out after 6m0s
running tests:
TestHandshakeClientECDHEECDSAAESGCM/TLSv12 (0s)
goroutine 392 gp=0x10c000105800 m=12 mp=0x10c000103808 [running]:
panic({0xbebfa0?, 0x10c000026ba0?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:806 +0x168 fp=0x10c000341f10 sp=0x10c000341e60 pc=0x54eac8
...
crypto/tls.runTestAndUpdateIfNeeded.func1(0x10c00040c200)
/home/swarming/.swarming/w/ir/x/w/goroot/src/crypto/tls/handshake_test.go:60 +0x7f fp=0x10c0003edf38 sp=0x10c0003edf00 pc=0xa0097f
testing.tRunner(0x10c00040c200, 0x10c0003121b0)
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1764 +0x1db fp=0x10c0003edfc0 sp=0x10c0003edf38 pc=0x6cb8db
testing.(*T).Run.gowrap1()
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1823 +0x25 fp=0x10c0003edfe0 sp=0x10c0003edfc0 pc=0x6cd8c5
runtime.goexit({})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_amd64.s:1700 +0x1 fp=0x10c0003edfe8 sp=0x10c0003edfe0 pc=0x556641
created by testing.(*T).Run in goroutine 327
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1823 +0x94b
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,713,497,578 | rust | Bad `+ use<>` suggestion in nested macro_rules | Failure when migrating Rocket 0.5.1 to the 2024 edition, due to the [_typed_stream](https://github.com/rwf2/Rocket/blob/v0.5.1/core/lib/src/response/stream/mod.rs#L292-L301) macro getting modified to:
```rust
macro_rules! _typed_stream {
($S:ident, $($t:tt)*) => (
$crate::__typed_stream! {
$crate::response::stream::$S,
$crate::response::stream::stream,
$crate::futures::stream::Stream,
$($t)*
} + use<'b>
)
}
```
where that is invalid syntax. Just removing the `+ use<'b>` seems to make things work?
Unraveling this into a minimal example seems to be quite a bit of work, since there are many layers of macros involved. This could also just be an issue with the rocket_codegen proc-macro respanning, but I did not unravel it that far.
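For context, `use<..>` precise-capture bounds (stable since Rust 1.82) are only valid as part of an `impl Trait` type, e.g. on a function's return type — not appended after an arbitrary expression like the macro invocation above. A minimal valid example (names are mine, not Rocket's):

```rust
// `use<'b>` attaches to the `impl Trait` return type and restricts the
// captured lifetimes to exactly 'b.
fn chars_of<'b>(s: &'b str) -> impl Iterator<Item = char> + use<'b> {
    s.chars()
}
```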
### Meta
```
rustc 1.85.0-nightly (5e1440ae5 2024-12-01)
binary: rustc
commit-hash: 5e1440ae514d98ddfcbf1607acb64d41e07ef616
commit-date: 2024-12-01
host: aarch64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.4
``` | A-lints,A-macros,T-compiler,C-bug,A-suggestion-diagnostics,E-needs-mcve,D-invalid-suggestion,D-edition,A-edition-2024,L-impl_trait_overcaptures,I-edition-triaged | low | Critical |
2,713,503,171 | pytorch | rms_norm forward performance analysis | ### 🐛 Describe the bug
In our previous tests, the performance of the rms_norm operator compiled by inductor was worse than the liger implementation. Here is the root cause analysis.
## Reproduce tests
```
% python run.py --op rms_norm --mode fwd --precision fp32 --metrics latency,speedup --cudagraph
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:04<00:00, 1.24it/s]
(M, H) llama_rms-latency liger_rms-speedup liger_rms-latency inductor_rms-speedup inductor_rms-latency
------------- ------------------- ------------------- ------------------- ---------------------- ----------------------
(2048, 1024) 0.041888 3.50938 0.011936 3.20833 0.013056
(2048, 2048) 0.06192 3.01402 0.020544 2.94521 0.021024
(2048, 4096) 0.135552 3.80251 0.035648 3.76533 0.036
(2048, 8192) 0.25376 3.83092 0.06624 3.81985 0.066432
(2048, 16384) 0.482496 3.71929 0.129728 3.37996 0.142752
(2048, 32768) 0.926784 3.44909 0.268704 2.5852 0.358496
```
## Obtain NCU Profile Reports
```
python run.py --op rms_norm --mode fwd --precision fp32 --metrics ncu_rep
```
## Report Analysis
### For input id 1-5
For input id 5, which is the largest input in our tests, the execution times are 0.35ms and 0.26ms for inductor and liger respectively.
The inductor compiled kernel is the following.
```
# kernel path: /tmp/torchinductor_yhao/lc/clcsnr7zzaoxdu2qmftawo3xj5jh3rzkjdfrno47offog26pko6z.py
# Topologically Sorted Source Nodes: [pow_1, variance, add, rsqrt, hidden_states_1, mul_1], Original ATen: [aten.pow, aten.mean, aten.add, aten.rsqrt, aten.mul]
# Source node to ATen node mapping:
# add => add
# hidden_states_1 => mul
# mul_1 => mul_1
# pow_1 => pow_1
# rsqrt => rsqrt
# variance => mean
# Graph fragment:
# %pow_1 : [num_users=1] = call_function[target=torch.ops.aten.pow.Tensor_Scalar](args = (%primals_1, 2), kwargs = {})
# %mean : [num_users=1] = call_function[target=torch.ops.aten.mean.dim](args = (%pow_1, [-1], True), kwargs = {})
# %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%mean, 1e-06), kwargs = {})
# %rsqrt : [num_users=2] = call_function[target=torch.ops.aten.rsqrt.default](args = (%add,), kwargs = {})
# %mul : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%primals_1, %rsqrt), kwargs = {})
# %mul_1 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%primals_2, %mul), kwargs = {})
triton_red_fused_add_mean_mul_pow_rsqrt_0 = async_compile.triton('triton_red_fused_add_mean_mul_pow_rsqrt_0', '''
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
@triton_heuristics.reduction(
size_hints=[2048, 32768],
reduction_hint=ReductionHint.INNER,
filename=__file__,
triton_meta={'signature': {'in_out_ptr0': '*fp32', 'in_ptr0': '*fp32', 'in_ptr1': '*fp32', 'out_ptr0': '*fp32', 'xnumel': 'i32', 'rnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, cc=90, major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, multi_processor_count=132, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1, 2, 3, 4, 5), equal_to_1=())]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_red_fused_add_mean_mul_pow_rsqrt_0', 'mutated_arg_names': ['in_out_ptr0'], 'optimize_mem': False, 'no_x_dim': False, 'num_load': 3, 'num_reduction': 1, 'backend_hash': '1D985D9C0CD26A9BB5BAD8A051C971598DEF7D09C84A1EB5695128BF97C731E4', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False}
)
@triton.jit
def triton_red_fused_add_mean_mul_pow_rsqrt_0(in_out_ptr0, in_ptr0, in_ptr1, out_ptr0, xnumel, rnumel, XBLOCK : tl.constexpr, RBLOCK : tl.constexpr):
xnumel = 2048
rnumel = 32768
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
xmask = xindex < xnumel
rbase = tl.arange(0, RBLOCK)[None, :]
x0 = xindex
_tmp3 = tl.full([XBLOCK, RBLOCK], 0, tl.float32)
for roffset in range(0, rnumel, RBLOCK):
rindex = roffset + rbase
rmask = rindex < rnumel
r1 = rindex
tmp0 = tl.load(in_ptr0 + (r1 + (32768*x0)), rmask & xmask, eviction_policy='evict_last', other=0.0)
tmp1 = tmp0 * tmp0
tmp2 = tl.broadcast_to(tmp1, [XBLOCK, RBLOCK])
tmp4 = _tmp3 + tmp2
_tmp3 = tl.where(rmask & xmask, tmp4, _tmp3)
tmp3 = tl.sum(_tmp3, 1)[:, None]
tmp5 = 32768.0
tmp6 = tmp3 / tmp5
tmp7 = 1e-06
tmp8 = tmp6 + tmp7
tmp9 = libdevice.rsqrt(tmp8)
tl.debug_barrier()
tl.store(in_out_ptr0 + (x0), tmp9, xmask)
for roffset in range(0, rnumel, RBLOCK):
rindex = roffset + rbase
rmask = rindex < rnumel
r1 = rindex
tmp10 = tl.load(in_ptr1 + (r1), rmask, eviction_policy='evict_last', other=0.0)
tmp11 = tl.load(in_ptr0 + (r1 + (32768*x0)), rmask & xmask, eviction_policy='evict_first', other=0.0)
tmp12 = tmp11 * tmp9
tmp13 = tmp10 * tmp12
tl.store(out_ptr0 + (r1 + (32768*x0)), tmp13, rmask & xmask)
''', device_str='cuda')
```
Line `tmp11` is a redundant load instruction, because all of the data was already loaded at line `tmp0`. However, `tmp0` cannot be reused directly because it lives inside a loop. This is the root cause of the bad performance compared with liger's implementation.
The following are the memory charts for the inductor and liger kernels.
[Memory chart: inductor version]
[Memory chart: liger version]
As you can see, the total memory load from device memory to L2 cache is 506MB for the inductor version versus 268MB for the liger version.
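A quick back-of-envelope check shows the gap is almost exactly one extra full read of the input (assuming an fp32 input of shape (2048, 32768), matching input id 5):

```python
# One full read of the input tensor, in MB: reading it twice (inductor)
# vs once (liger) accounts for most of the 506MB vs 268MB difference,
# with the remainder coming from weight/output traffic and cache effects.
M, H, BYTES_PER_FP32 = 2048, 32768, 4
one_read_mb = M * H * BYTES_PER_FP32 / 2**20
print(one_read_mb)      # 256.0
print(2 * one_read_mb)  # 512.0
```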
We observed the same pattern for input ids 1-4 too. This can be a serious problem even for other ops we didn't test.
The equivalent implementation in the liger kernel is [here](https://github.com/linkedin/Liger-Kernel/blob/7e0f459149d298c84f162363cc6f1347494b80f2/src/liger_kernel/ops/rms_norm.py#L44). The equivalent line for loading the input is [`X_row = tl.load(X_ptr + col_offsets, mask=mask, other=0)`](https://github.com/linkedin/Liger-Kernel/blob/7e0f459149d298c84f162363cc6f1347494b80f2/src/liger_kernel/ops/rms_norm.py#L76C5-L76C61). This `X_row` is reused, so it doesn't cause redundant loads.
Both the first load in the inductor implementation and the load in the liger implementation are unrolled into multiple LDG.E instructions, but the loop in inductor prevents the data from being reused. The second, redundant load in the inductor implementation is not unrolled, I think because of the store instruction at the end of the loop.
### For input id 0
For the input id 0, which is the smallest input, its inductor compiled code is the following.
```
# kernel path: /tmp/torchinductor_yhao/d7/cd7zxtkrvvmyfsqgvi26o2eurcvl2bael2advqf5r6rruhxxl62f.py
# Topologically Sorted Source Nodes: [pow_1, variance, add, rsqrt, hidden_states_1, mul_1], Original ATen: [aten.pow, aten.mean, aten.add, aten.rsqrt, aten.mul]
# Source node to ATen node mapping:
# add => add
# hidden_states_1 => mul
# mul_1 => mul_1
# pow_1 => pow_1
# rsqrt => rsqrt
# variance => mean
# Graph fragment:
# %pow_1 : [num_users=1] = call_function[target=torch.ops.aten.pow.Tensor_Scalar](args = (%primals_1, 2), kwargs = {})
# %mean : [num_users=1] = call_function[target=torch.ops.aten.mean.dim](args = (%pow_1, [-1], True), kwargs = {})
# %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%mean, 1e-06), kwargs = {})
# %rsqrt : [num_users=2] = call_function[target=torch.ops.aten.rsqrt.default](args = (%add,), kwargs = {})
# %mul : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%primals_1, %rsqrt), kwargs = {})
# %mul_1 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%primals_2, %mul), kwargs = {})
triton_per_fused_add_mean_mul_pow_rsqrt_0 = async_compile.triton('triton_per_fused_add_mean_mul_pow_rsqrt_0', '''
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
@triton_heuristics.persistent_reduction(
size_hints=[2048, 1024],
reduction_hint=ReductionHint.INNER,
filename=__file__,
triton_meta={'signature': {'in_out_ptr0': '*fp32', 'in_ptr0': '*fp32', 'in_ptr1': '*fp32', 'out_ptr0': '*fp32', 'xnumel': 'i32', 'rnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, cc=90, major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, multi_processor_count=132, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1, 2, 3, 4, 5), equal_to_1=())]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_per_fused_add_mean_mul_pow_rsqrt_0', 'mutated_arg_names': ['in_out_ptr0'], 'optimize_mem': False, 'no_x_dim': True, 'num_load': 2, 'num_reduction': 1, 'backend_hash': '1D985D9C0CD26A9BB5BAD8A051C971598DEF7D09C84A1EB5695128BF97C731E4', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False}
)
@triton.jit
def triton_per_fused_add_mean_mul_pow_rsqrt_0(in_out_ptr0, in_ptr0, in_ptr1, out_ptr0, xnumel, rnumel):
xnumel = 2048
XBLOCK: tl.constexpr = 1
rnumel = 1024
RBLOCK: tl.constexpr = 1024
xoffset = tl.program_id(0) * XBLOCK
xindex = tl.full([1], xoffset, tl.int32)
xmask = tl.full([RBLOCK], True, tl.int1)
rindex = tl.arange(0, RBLOCK)[:]
roffset = 0
rmask = tl.full([RBLOCK], True, tl.int1)
r1 = rindex
x0 = xindex
tmp0 = tl.load(in_ptr0 + (r1 + (1024*x0)), None)
tmp10 = tl.load(in_ptr1 + (r1), None, eviction_policy='evict_last')
tmp1 = tmp0 * tmp0
tmp2 = tl.broadcast_to(tmp1, [RBLOCK])
tmp4 = triton_helpers.promote_to_tensor(tl.sum(tmp2, 0))
tmp5 = 1024.0
tmp6 = tmp4 / tmp5
tmp7 = 1e-06
tmp8 = tmp6 + tmp7
tmp9 = libdevice.rsqrt(tmp8)
tmp11 = tmp0 * tmp9
tmp12 = tmp10 * tmp11
tl.debug_barrier()
tl.store(in_out_ptr0 + (x0), tmp9, None)
tl.store(out_ptr0 + (r1 + (1024*x0)), tmp12, None)
''', device_str='cuda')
```
The tmp0 line loads all of the input, and there are no redundant loads when we do the computation for tmp11. However, its performance is still slightly worse than the Liger version.
The tmp0 line is compiled to a handful of SASS instructions (CUDA binary assembly), the most important being the LDG (load from global memory) instruction `LDG.E.128 R8, desc[UR6][R8.64]`. However, the line with the same semantics in the Liger version, `X_row = tl.load(X_ptr + col_offsets, mask=mask, other=0)`, is compiled to the following two LDG instructions.
```
@!P2 LDG.E.128 R8, desc[UR6][R22.64]
@!P1 LDG.E.128 R12, desc[UR6][R22.64+0x800]
```
I think these two separate instructions increase the flexibility of instruction scheduling and allow more instruction overlap. This is the most suspicious difference I can find for input0.
### Versions
main branch
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @eellison @shunting314 @Chillee @jansel | triaged,oncall: pt2,module: inductor | low | Critical |
2,713,543,494 | yt-dlp | Embedding subtitles in preferred formats or convert other formats | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
I am looking to download subtitles in given formats. I prefer ASS first, then SRT if they have it, and anything else should be converted into ASS where possible.
Issue #57 asks a similar question in that the user wanted to download into SRT and convert anything else (though they were archiving from YouTube so dropped the format selection), but this extends it to look for other formats before resorting to conversion.
It seems a possible set of options would be `--sub-format "ass/srt" --convert-subs ass`, but I'm not sure if this is what I'm looking for, and I'm worried it'll also convert the SRT into ASS as well.
So my question is: is it possible to download the ASS format as-is, otherwise download the SRT format as-is, and otherwise download the best version and convert it to ASS? And if so, how do I put that into the command line?
I should point out it's downloading everything fine, so this is just icing on the cake.
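For concreteness, the command being asked about would look something like this (the URL is a placeholder; whether `--convert-subs` leaves subtitles that are already ASS untouched is exactly the open question):

```
yt-dlp --write-subs --sub-format "ass/srt/best" --convert-subs ass "https://example.com/some-video"
```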
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | enhancement | low | Critical |
2,713,548,502 | rust | [rustc] SIGSEGV maximum backtrace depth reached | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
Use JavaScript `x=10000; out="compile_error!(" + "concat!(".repeat(x) + '"A"' + ")".repeat(x) + ");"` to create the Rust code.
Full code in [evil3.rs.txt](https://github.com/user-attachments/files/17984702/evil3.rs.txt)
```Rust
compile_error!(concat!(concat!(concat!(concat!(concat!(concat!(concat!(.....)))))));
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26) (Arch Linux rust 1:1.83.0-1)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 18.1.8
```
### Error output
```
error: rustc interrupted by SIGSEGV, printing backtrace
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x1373cb9) [0x79575ff73cb9]
/usr/lib/libc.so.6(+0x3d1d0) [0x79575ea4c1d0]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x45e3d38) [0x7957631e3d38]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x45e522e) [0x7957631e522e]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(_RNvMs_Cs1pTYV0gRL8Y_11rustc_lexerNtNtB4_6cursor6Cursor13advance_token+0xadd) [0x7957631e491d]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3db789d) [0x7957629b789d]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e04e6c) [0x795762a04e6c]
### cycle encountered after 7 frames with period 8
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e05187) [0x795762a05187]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e05187) [0x795762a05187]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e05187) [0x795762a05187]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e05187) [0x795762a05187]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e05187) [0x795762a05187]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e05187) [0x795762a05187]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e05187) [0x795762a05187]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e05187) [0x795762a05187]
### recursed 31 times
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3e05187) [0x795762a05187]
note: rustc unexpectedly overflowed its stack! this is a bug
note: maximum backtrace depth reached, frames may have been lost
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=16777216
note: backtrace dumped due to SIGSEGV! resuming signal
Segmentation fault (core dumped)
```
| I-crash,P-low,T-compiler,C-bug | low | Critical |
2,713,556,804 | rust | [rustc_ast/src/ast_traits.rs:301] Stack overflow for nested expressions | ### Code
```Rust
fn main() {
// please see attached files for full code
let x = 1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+ (repeat 100k times) +1;
}
```
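A small generator for such an input (a sketch; the attached evil8.rs presumably differs in exact size and variable name):

```js
// Each '+' adds one level of left-nested binary expression to the AST,
// so rustc's recursive AST visitors recurse roughly n frames deep.
function makeEvil(n) {
  return "fn main() {\n    let x = " + Array(n).fill("1").join("+") + ";\n}\n";
}

// Example: write a 100k-term input to disk with
//   require("node:fs").writeFileSync("evil8.rs", makeEvil(100000));
```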
### Affected release channels
- [ ] Previous Stable
- [x] Current Stable
- [ ] Current Beta
- [ ] Current Nightly
### Rust Version
```Shell
rustc 1.83.0 (90b35a623 2024-11-26) (Arch Linux rust 1:1.83.0-1)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 18.1.8
```
### Current error output
```Shell
```
### Backtrace
```Shell
$ RUST_BACKTRACE=full rustc evil8.rs
error: rustc interrupted by SIGSEGV, printing backtrace
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x1373cb9) [0x7524d7373cb9]
/usr/lib/libc.so.6(+0x3d1d0) [0x7524d5e4c1d0]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(_RNvXsm_NtCsi33IywiuLMr_12rustc_expand6expandNtB5_19InvocationCollectorNtNtCs2HYUzTpzXjS_9rustc_ast9mut_visit10MutVisitor10visit_expr+0x1b7) [0x7524d9af1057]
### cycle encountered after 3 frames with period 4
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3bc9e47) [0x7524d9bc9e47]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(_RNvXsm_NtCsi33IywiuLMr_12rustc_expand6expandNtB5_19InvocationCollectorNtNtCs2HYUzTpzXjS_9rustc_ast9mut_visit10MutVisitor10visit_expr+0x779) [0x7524d9af1619]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3bc9e47) [0x7524d9bc9e47]
/usr/lib/librustc_driver-37bf60d83001ffbc.so(_RNvXsm_NtCsi33IywiuLMr_12rustc_expand6expandNtB5_19InvocationCollectorNtNtCs2HYUzTpzXjS_9rustc_ast9mut_visit10MutVisitor10visit_expr+0x779) [0x7524d9af1619]
### recursed 63 times
/usr/lib/librustc_driver-37bf60d83001ffbc.so(+0x3bc9e47) [0x7524d9bc9e47]
note: rustc unexpectedly overflowed its stack! this is a bug
note: maximum backtrace depth reached, frames may have been lost
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=16777216
Segmentation fault (core dumped)
(gdb) run
Starting program: /usr/bin/rustc evil8.rs
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7fffe85ff6c0 (LWP 1346636)]
[New Thread 0x7fffe7dff6c0 (LWP 1346637)]
Thread 3 "rustc" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe7dff6c0 (LWP 1346637)]
Downloading 11.41 K source file /usr/src/debug/rust/rustc-1.83.0-src/compiler/rustc_ast/src/ast_traits.rs
0x00007ffff6ef1057 in rustc_ast::ast_traits::{impl#8}::visit_attrs<rustc_ast::ptr::P<rustc_ast::ast::Expr>, rustc_expand::expand::{impl#23}::take_first_attr::{closure_env#1}<rustc_ast::ptr::P<rustc_ast::ast::Expr>>> (self=0x7fffe54bfa50, f=<error reading variable: access outside bounds of object referenced via synthetic pointer>) at compiler/rustc_ast/src/ast_traits.rs:301
301 self.ast_deref_mut().visit_attrs(f)
(gdb)
```
### Anything else?
This is a different SIGSEGV than the one sent earlier. Might not be exploitable, but rustc should be more robust for long inputs.
Also works in macros. Rustfmt also crashes, while rust-analyzer does not.
Please see [evil8.rs.txt](https://github.com/user-attachments/files/17984754/evil8.rs.txt) and [evil9.rs.txt](https://github.com/user-attachments/files/17984753/evil9.rs.txt) for full example code.
| I-crash,P-low,T-compiler,C-bug | low | Critical |
2,713,584,774 | go | x/benchmarks/sweet/cmd/sweet: TestSweetEndToEnd failures | ```
#!watchflakes
default <- pkg == "golang.org/x/benchmarks/sweet/cmd/sweet" && test == "TestSweetEndToEnd"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8729636098417253009)):
=== RUN TestSweetEndToEnd
integration_test.go:36: phase setup @0s (duration: 1.690814682s)
integration_test.go:36: phase sweet-get @1.690814682s (duration: 25.079331613s)
integration_test.go:131: phase sweet-run-tile38 @26.77226649s (duration: 2m39.396455735s)
integration_test.go:131: phase sweet-run-etcd @26.772219613s (duration: 2m46.225737556s)
integration_test.go:131: phase sweet-run-go-build @26.772238392s (duration: 3m32.888872074s)
integration_test.go:131: phase sweet-run-bleve-index @3m12.998030008s (duration: 1m13.089130537s)
integration_test.go:131: phase sweet-run-gopher-lua @3m59.661207171s (duration: 39.735472257s)
integration_test.go:162: no go.results results
integration_test.go:176: output for esbuild:
...
panic: Internal error
goroutine 1 [running]:
github.com/evanw/esbuild/internal/helpers.(*Timer).Log(0x0?, {0xc0070700e0, 0xc007bb2240, 0xc007bb2258, 0xc006bb0080, 0x3, 0xc000b6a3f0})
/home/swarming/.swarming/w/ir/x/w/targetrepo385663626/sweet/tmp/tmp-4/esbuild/src/internal/helpers/timer.go:83 +0x6ee
github.com/evanw/esbuild/pkg/api.rebuildImpl({0xc009f761e0, {0xc007bb21f8, 0x1, 0x1}, {0x0, 0x0, 0x0}, {0x6, 0x1, 0x0, ...}, ...}, ...)
/home/swarming/.swarming/w/ir/x/w/targetrepo385663626/sweet/tmp/tmp-4/esbuild/src/pkg/api/api_impl.go:1635 +0x143a
github.com/evanw/esbuild/pkg/api.(*internalContext).rebuild(_)
/home/swarming/.swarming/w/ir/x/w/targetrepo385663626/sweet/tmp/tmp-4/esbuild/src/pkg/api/api_impl.go:990 +0x29c
github.com/evanw/esbuild/pkg/api.(*internalContext).Rebuild(...)
...
/home/swarming/.swarming/w/ir/x/w/targetrepo385663626/sweet/tmp/tmp-4/esbuild/src/cmd/esbuild/main.go:368 +0x9bc
exit status 2
[sweet] error: error preparing PGO profiles: failed to execute any profile benchmarks, see logs for more details
integration_test.go:197: exit status 1
integration_test.go:131: phase sweet-run-esbuild @3m6.168859846s (duration: 1m41.91694443s)
integration_test.go:131: phase sweet-run-markdown @4m26.087241639s (duration: 22.346004884s)
integration_test.go:131: phase sweet-run-gvisor @4m39.396760271s (duration: 2m35.249026287s)
integration_test.go:131: phase sweet-run-cockroachdb @26.77230934s (duration: 32m12.812836519s)
integration_test.go:36: phase sweet-run @26.770146295s (duration: 32m12.815137315s)
--- FAIL: TestSweetEndToEnd (1965.68s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,713,601,884 | material-ui | [Textfield] TS2322 with slotProps inputLabel | ### Steps to reproduce
Steps:
1. Using Textfield with shrink to true
```
<TextField
slotProps={{
inputLabel: { shrink: true },
}}
/>
```
2. Build with TypeScript (with strict)
Returns
```
error TS2322: Type '{ shrink: true; } | { shrink: true; } | { id?: string | undefined; role?: AriaRole | undefined; form?: string | undefined; property?: string | undefined; key?: Key | null | undefined; ... 265 more ...; shrink: true; } | ... 7 more ... | { ...; }' is not assignable to type 'SlotProps<ElementType<InputLabelProps, keyof IntrinsicElements>, {}, BaseTextFieldProps> | undefined'.
Object literal may only specify known properties, and 'shrink' does not exist in type '(Partial<Omit<DetailedHTMLProps<LabelHTMLAttributes<HTMLLabelElement>, HTMLLabelElement>, "ref"> & { ...; }> & SlotCommonProps) | ... 5 more ... | (Partial<...> & SlotCommonProps)'.'.
```
### Current behavior
Error on build
### Expected behavior
It is supposed to build, like:
`<TextField InputLabelProps={{ shrink: true }} />`
The docs still suggest using it this way even though `InputLabelProps` is deprecated: https://mui.com/material-ui/react-text-field/#shrink
### Context
_No response_
### Your environment
_No response_
**Search keywords**: TS2322 inputLabel slotProps Textfield | component: text field,typescript | low | Critical |
2,713,618,324 | PowerToys | The PowerToys.Run is taking almost 1GB of ram | ### Microsoft PowerToys version
0.86.0
Running in Windows 11 x64
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
I only enabled PowerToys Run for quick search.

### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Product-PowerToys Run,Needs-Triage | low | Major |
2,713,618,688 | node | fileHandle.readableWebStream sometimes crashes on large inputs | ### Version
v22.10.0
### Platform
Reproducible on:
```text
Linux smooreswork 6.11.8-300.fc41.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 14 20:37:39 UTC 2024 x86_64 GNU/Linux
Linux 04d2433b1ce0 6.1.49-Unraid #1 SMP PREEMPT_DYNAMIC Wed Aug 30 09:42:35 PDT 2023 x86_64 x86_64 x86_64 GNU/Linux
Linux server 6.8.0-49-generic #49-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 4 02:06:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
### Subsystem
fs/promises
### What steps will reproduce the bug?
```js
import { open } from "node:fs/promises"
/**
* Like readFile, but streams the file into memory in chunks
* to avoid hard-coded 2GB limit on file I/O operations in
* Node.js/libuv https://github.com/libuv/libuv/pull/1501
*
* @param {string} path
* @returns {Promise<Uint8Array>}
*/
export async function streamFile(path) {
const fileHandle = await open(path)
try {
const stats = await fileHandle.stat()
const fileData = new Uint8Array(stats.size)
let i = 0
for await (const chunk of fileHandle.readableWebStream()) {
const chunkArray = new Uint8Array(chunk)
fileData.set(chunkArray, i)
i += chunkArray.byteLength
}
return fileData
} finally {
await fileHandle.close()
}
}
```
Run the above function on any relatively large file in the Node.js REPL. The larger the file, the more consistent the failure. With a ~100MB file, I get the error maybe 1 in every 20 attempts on my laptop. With a 600MB file, it's more like every other attempt.
### How often does it reproduce? Is there a required condition?
The file must be sufficiently large. I have never seen this occur with files smaller than 50MB. With files less than 200MB, it occurs occasionally. With files larger than 500MB, it occurs very consistently.
I attempted to reproduce this in a simple node.js ESM script, but it does not seem to reproduce when the above function is run in a script. I can reproduce consistently in the REPL, and this same issue occurs consistently in the [Storyteller](https://gitlab.com/smoores/storyteller) Next.js app, where that code sample is from.
### What is the expected behavior? Why is that the expected behavior?
The `streamFile` function should successfully read files of any length into memory. It reads files one chunk at a time to avoid libuv's hard limit on 2GB file I/O operations. There should be no crash or warning about closing a file descriptor on garbage collection, because the file descriptor is closed in a finally block.
### What do you see instead?
```
> await streamFile('./large-test-file.m4b')
(node:460551) Warning: Closing file descriptor 42 on garbage collection
(node:460551) [DEP0137] DeprecationWarning: Closing a FileHandle object on garbage collection is deprecated. Please close FileHandle objects explicitly using FileHandle.prototype.close(). In the future, an error will be thrown if a file descriptor is closed during garbage collection.
```
After this error log, the Node REPL itself exits with status code `139`
### Additional information
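For completeness, a sketch of the working variant (hypothetical name; `createReadStream` is passed `autoClose: false` here so the explicit `close()` in the `finally` block remains the descriptor's only owner):

```js
import { open } from "node:fs/promises"

// Same shape as streamFile above, but iterating fileHandle.createReadStream()
// instead of fileHandle.readableWebStream().
async function streamFileViaReadStream(path) {
  const fileHandle = await open(path)
  try {
    const stats = await fileHandle.stat()
    const fileData = new Uint8Array(stats.size)
    let i = 0
    // Chunks are Buffers (Uint8Array subclasses), so set() copies them directly.
    for await (const chunk of fileHandle.createReadStream({ autoClose: false })) {
      fileData.set(chunk, i)
      i += chunk.byteLength
    }
    return fileData
  } finally {
    await fileHandle.close()
  }
}
```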
If I swap `fileHandle.readableWebStream()` out for `fileHandle.createReadStream()`, this works without issue for files of any size. | confirmed-bug,stream,web streams | low | Critical |
2,713,626,027 | deno | Store npm packument in only the http cache | Right now we're storing it in the npm cache and the http cache depending on what people do. I think it makes sense to consolidate to the http cache. This would also solve https://github.com/denoland/deno/issues/24566 for free. | bug,refactor | low | Minor |
2,713,629,830 | godot | Breakpoints work unexpectedly when the script is modified at runtime without hotpatching it. | ### Tested versions
Reproduced in master (893bbdf)
### System information
Godot v4.4.dev (893bbdfde) - Windows 10.0.22631 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 32.0.15.6094) - AMD Ryzen 5 3600 6-Core Processor (12 threads)
### Issue description
When I modify the script at runtime and don't save it, breakpoints behave unexpectedly. It appears the compiled script's line numbers and the editor's line numbers are out of sync. Stepping through the code is also janky.
Restarting the game, or hot patching the script, fixes the issue.
### Steps to reproduce
1. Start the game
2. Set a breakpoint in your script

3. Remove a line BEFORE the breakpoint, so the breakpoint now exists on a different absolute line. (e.g. moved from Line 115 -> Line 114)
4. Do not save the script nor hot reload the game!
* The inconsistency comes from the editor and the compiled script being out of sync
6. Notice that the breakpoint is triggered even though this code is unreachable!

### Minimal reproduction project (MRP)
N / A | discussion,topic:gdscript,topic:editor | low | Minor |
2,713,658,925 | rust | rustdoc-json: Document why Id can't just be DefId | > NIT: This comment could be more useful. The problem it's actually trying to solve is that a single `pub use` with 1 defid can expand to multiple rustdoc-json items it the name exists in multiple namespaces: https://godbolt.org/z/jEnGxoWx6.
>
> At some point in the future, we should maybe change how we do this on the JSON side, but not in this PR.
_Originally posted by @aDotInTheVoid in https://github.com/rust-lang/rust/pull/130078#discussion_r1796986306_
Rustdoc-json's Id handling is a bit subtle. There should be a nice doc-comment to explain it.
[It came up on zulip here](https://rust-lang.zulipchat.com/#narrow/channel/266220-t-rustdoc/topic/linking.20to.20assoc.20fn.20of.20dyn.20trait/near/485729187) | T-rustdoc,C-enhancement,A-docs,A-contributor-roadblock,A-rustdoc-json | low | Minor |
2,713,675,057 | react | hydration error | ## Summary
<!--
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a
repository on GitHub, or provide a minimal code example that reproduces the
problem. You may provide a screenshot of the application if you think it is
relevant to your bug report. Here are some tips for providing a minimal
example: https://stackoverflow.com/help/mcve.
-->
Hydration failed because the server rendered HTML didn't match the client. As a result this tree will be regenerated on the client. This can happen if a SSR-ed Client Component used
- A server/client branch `if (typeof window !== 'undefined')`.
- Variable input such as `Date.now()` or `Math.random()` which changes each time it's called.
- Date formatting in a user's locale which doesn't match the server.
- External changing data without sending a snapshot of it along with the HTML.
- Invalid HTML tag nesting.
It can also happen if the client has a browser extension installed which messes with the HTML before React loaded.
See more info here: https://nextjs.org/docs/messages/react-hydration-error
- data-np-intersection-state="observed"
| Resolution: Needs More Information | medium | Critical |
2,713,728,375 | go | crypto/tls: TestGetClientCertificate/TLSv13 failures | ```
#!watchflakes
default <- pkg == "crypto/tls" && test == "TestGetClientCertificate/TLSv13"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8729618149014962353)):
=== RUN TestGetClientCertificate/TLSv13
panic: test timed out after 6m0s
running tests:
TestGetClientCertificate (0s)
TestGetClientCertificate/TLSv13 (0s)
goroutine 513 gp=0x10c000431400 m=0 mp=0x19a6860 [running]:
panic({0xbe7fc0?, 0x10c000027d00?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:806 +0x168 fp=0x10c000075f10 sp=0x10c000075e60 pc=0x54eac8
testing.(*M).startAlarm.func1()
...
crypto/tls.TestGetClientCertificate.func2(0x10c000431000?)
/home/swarming/.swarming/w/ir/x/w/goroot/src/crypto/tls/handshake_client_test.go:2365 +0x18 fp=0x10c000277f38 sp=0x10c000277f18 pc=0xa3bb38
testing.tRunner(0x10c000431000, 0xcb8370)
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1764 +0x1db fp=0x10c000277fc0 sp=0x10c000277f38 pc=0x6cb8db
testing.(*T).Run.gowrap1()
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1823 +0x25 fp=0x10c000277fe0 sp=0x10c000277fc0 pc=0x6cd8c5
runtime.goexit({})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_amd64.s:1700 +0x1 fp=0x10c000277fe8 sp=0x10c000277fe0 pc=0x556641
created by testing.(*T).Run in goroutine 484
/home/swarming/.swarming/w/ir/x/w/goroot/src/testing/testing.go:1823 +0x94b
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,713,751,871 | vscode | [Feature Request] Record file opened by navigation bar into recent file list | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
One of my uses of vscode is viewing logs. Log files are usually only kept for a limited time: I opened `c:\SomeApp\logs\20241202.log` today, and it was recorded. Tomorrow I want to see `xxx\20241203.log`, so I can press `ctrl+r` and type `someapp` to find `xxx\20241202.log`, open it, and navigate to `xxx\20241203.log` via the navigation bar.
But `xxx\20241203.log` never record into recent, I only can reopen `xxx\20241202.log` every day after that, but `xxx\20241202.log` will gone one day, then I will get:

Then I had to reopen `xxx\xxx.log` in windows way.
It makes no sense not to record from navigation bar, WHY?? Is `open file ...` more noble and `navigation bar` inferior? IMO, recent files should record any opened, at least provide a option for navigation way, Or, give a chance to edit the path from `ctrl+r`. | feature-request,workbench-history | low | Major |
2,713,756,681 | deno | SDK Send Events not working or erroring | OS and version used: Windows 11
Node.js version: 22.11.0, Deno v2.0.6
npm version: 10.9.0
list of installed packages: N/A
Version: Deno 2.0.6
When using the Node SDK in Deno to attempt to sent events and batch events to the cloud, my callback function is not getting called, no error is thrown and no messages are going through. Example code below:
```js
import { Client, Message} from 'npm:[email protected]';
import { clientFromConnectionString } from 'npm:azure-iot-device-http';
import '@std/dotenv/load';
const deviceConnectionString: string = Deno.env.get('IOTHUB_DEVICE_CONNECTION_STRING')!;
// parseConnectionString is assumed to be defined elsewhere in the repro
// (e.g. a small helper over azure-iot-common's connection-string parsing).
const { DeviceID, HostName, SharedAccessKey } = parseConnectionString(
deviceConnectionString,
);
const client: Client = clientFromConnectionString(deviceConnectionString);
try {
client.sendEvent(new Message('some data from my device'), printResultFor("send"));
} catch (error) {
console.log(error);
}
function printResultFor(op: any): (err: any, res: any) => void {
return function printResult(err: any, res: any): void {
try {
if (err) console.log('Failed to send message with status ' + op + ' error: ' + err.toString());
if (res) console.log('completed batch SDK ' + op + ' status: ' + res.transportObj.statusCode + ' ' + res.transportObj.statusMessage);
} catch (error) {
console.log(error);
}
};
}
``` | bug,needs investigation,node compat | low | Critical |
2,713,761,483 | deno | specs::node::worker_threads::message_port is flaky | ```
---- specs::node::worker_threads::message_port ----
command D:\a\deno\deno\target\debug\deno.exe run --allow-env --allow-read message_port.mjs
command cwd D:\a\deno\deno\tests\specs\node\worker_threads
output path D:\a\deno\deno\tests\specs\node\worker_threads\message_port.out
-- OUTPUT START --
worker: Hello from worker on parentPort!
mainPort: Hello from worker on workerPort!
mainPort closed
-- OUTPUT END --
-- EXPECTED START --
[UNORDERED_START]
worker port closed
worker: Hello from worker on parentPort!
mainPort: Hello from worker on workerPort!
mainPort closed
[UNORDERED_END]
-- EXPECTED END --
-- DEBUG START --
==== HAD WRONG NUMBER OF UNORDERED LINES ====
# ACTUAL
mainPort·closed
mainPort:·Hello·from·worker·on·workerPort!
worker:·Hello·from·worker·on·parentPort!
# EXPECTED
mainPort·closed
mainPort:·Hello·from·worker·on·workerPort!
worker·port·closed
worker:·Hello·from·worker·on·parentPort!
-- DEBUG END --
```
https://github.com/denoland/deno/actions/runs/12126795075/job/33809663303 | flaky | low | Critical |
2,713,872,377 | pytorch | [Dynamo] Symbolic variables are reduced to Ints in compiled graph inputs | ### 🐛 Describe the bug
```
import torch

def test_select_scatter():
    # (input_shape, shape_src, dim, index)
    input_shapes = [
        ((16, 16, 1, 2, 3), (16, 16, 2, 3), 2, 0),
        ((17, 17, 1, 2, 3), (17, 17, 1, 3), 3, 1),
        ((18, 18, 1, 2, 3), (18, 18, 1, 2), 4, 2),
        ((19, 19, 1, 2, 3), (19, 19, 2, 3), 2, 0),
        ((20, 20, 1, 2, 3), (20, 20, 1, 3), 3, 1),
    ]

    def wrapper_fn(t, t_src, dim, indices):
        t2 = t.select_scatter(t_src, dim, indices)
        return t2

    compiled_fn = torch.compile(wrapper_fn, backend="inductor", dynamic=True)
    for shape in input_shapes:
        print("Input shape: ", shape)
        input_tensor = torch.rand(shape[0], requires_grad=False, device='cpu')
        src_tensor = torch.rand(shape[1], requires_grad=False, device='cpu')
        y_cpu = compiled_fn(input_tensor, src_tensor, shape[2], shape[3])
```
When I run the test above with `TORCH_LOGS=+dynamo`, I see the log line `set_replacement s3 = 2 (range_refined_to_singleton) VR[2, 2]`. This leads to one of the graph inputs, s3, being replaced with just the integer 2, so it is just Sym(2) in the graph below. How can a symbolic variable be a value? Please help fix this issue.
```
TRACED GRAPH
===== __compiled_fn_1 =====
/home/sshanmugam/venv/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, s0: "Sym(s0)", s1: "Sym(s1)", s2: "Sym(s2)", L_t_: "f32[s0, s0, 1, s1, s2][s0*s1*s2, s1*s2, s1*s2, s2, 1]cpu", L_t_src_: "f32[s0, s0, s1, s2][s0*s1*s2, s1*s2, s2, 1]cpu", L_dim_: "Sym(2)", L_indices_: "Sym(s4)"):
l_t_ = L_t_
l_t_src_ = L_t_src_
l_dim_ = L_dim_
l_indices_ = L_indices_
# File: /home/sshanmugam/qnpu/pt/src/pytorch-integration/tests/pytest_working/compile/dynamic/test_hpu_select_scatter_dynamic.py:98 in wrapper_fn, code: t2 = t.select_scatter(t_src, dim, indices)
t2: "f32[s0, s0, 1, s1, s2][s0*s1*s2, s1*s2, s1*s2, s2, 1]cpu" = l_t_.select_scatter(l_t_src_, l_dim_, l_indices_); l_t_ = l_t_src_ = l_dim_ = l_indices_ = None
return (t2,)
```
### Versions
[pip3] flake8==6.0.0
[pip3] flake8-bugbear==23.5.9
[pip3] habana-torch-dataloader==1.20.0+git02ff1597f
[pip3] habana-torch-plugin==1.20.0+git02ff1597f
[pip3] mypy==1.3.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.26.4
[pip3] torch==2.5.1a0+git3ba5690
[pip3] torch_tb_profiler==0.4.0
[pip3] torchaudio==2.2.0+08901ad
[pip3] torchdata==0.7.1+5e6f7b7
[pip3] torchtext==0.17.0+400da5c
[pip3] torchvision==0.19.0a0+48b1edf
[pip3] triton==3.0.0
cc @chauhang @penguinwu @ezyang @bobrenjc93
cc @sujoysaraswati @joyalbin | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,713,901,394 | electron | Add `canUndo` and `canRedo` as instance properties of the `webContents` object. | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
It is not possible to know the values of `canUndo` and `canRedo` of a `webContents` object.
### Proposed Solution
Rather than relying on a `context-menu` event to be invoked to see the parameters of `canUndo` and `canRedo`, allow the user at any time to inspect their values.
### Alternatives Considered
If one could manually trigger the context menu, then they could inspect these values; this is a kludge at best.
### Additional Information
_No response_ | enhancement :sparkles: | low | Minor |
2,713,921,548 | terminal | Be able to switch the pane "focus" of my cursor programmatically | ### Description of the new feature
I want to be able to run a command in one pane and then have my cursor's focus shift to the other pane. This is mainly for AI Shell: after someone uses the command `/code post` to post code responses to their working shell, their cursor should also shift to the left pane, to help speed up running the suggested commands. Related issue: https://github.com/PowerShell/AIShell/issues/311
In the GIF below, I have to click on the left side to shift my focus and be able to execute the inserted command.

### Proposed technical implementation details
_No response_ | Issue-Feature,Product-Terminal,Area-Commandline | low | Minor |
2,713,947,684 | deno | fails to do relative dynamic import inside workspace package | Version: Deno 2.1.2
## Reproduction
1. create a deno workspace.
2. initialize [lume](https://github.com/lumeland/lume) with CMS in workspace package (the "`lume/`" dir)
3. run `cd lume` && `deno task cms`
4. it fails because the cms module invokes `await import("lume/cms/adapters/lume.ts")`, and deno cannot resolve `lume/cms/adapters/lume.ts` even though it was added to the `"imports"` section (`lume/deno.json`) in step 2
folder structure
```
- deno.json "workspace": ["lume"]
- lume/
- deno.json "imports": { "/lume/cms/": "..." }
``` | question | low | Minor |
2,713,984,322 | TypeScript | Support declaring multiple setter overloads | ### 🔍 Search Terms
setter overload, multiple setter signatures, multiple setter input types
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
TypeScript allows declaring multiple different argument signatures for a function, i.e. "function overloads": https://www.typescriptlang.org/docs/handbook/2/functions.html#function-overloads.
We should be able to do the same for property setters, since they are modeled much like a function taking a single argument.
> (Note that unlike some other languages, TypeScript's overloads are only bare type signatures: there is only a single method body implementation shared by all of them, making this a pure typechecking feature with no impact on the generated runtime code. This proposal works the same way, just extending this syntax from functions/methods to setters as well).
### 📃 Motivating Example
A setter might want to separate different types of inputs for improved clarity of documentation:
```ts
/**
* Set startTime to a specific timestamp, specified as a Date object or number of ms since epoch.
*/
set startTime(date: Date | number);
/**
* Set startTime to the start of the most recent activity matching the given category name.
*/
set startTime(category: string);
set startTime(dateOrCategory: Date | number | string) {
// ...
}
```
Note that these differ not just in documentation but also in the value argument's name, which often appears in generated docs output too.
### 💻 Use Cases
Although setter overloads are necessarily less versatile than function overloads with multiple parameters, some of the same rationales for the overload feature still apply to setters – as seen in the example above.
**Workaround**
As with functions before overloading is supported, the workaround is just to glom all the docs together with some additional verbiage, e.g.:
```ts
/**
* Set startTime:
* - If given a Date object or number of ms, sets to a specific timestamp.
* - If given a category name string, sets to the start of the most recent activity matching that name.
*/
set startTime(dateOrCategory: Date | number | string) {
// ...
}
``` | Suggestion,Awaiting More Feedback | low | Minor |
2,714,000,287 | neovim | `WinNew` event seems to ignore file patterns in autocmds | ### Problem
I'm trying to create an autocmd for the `WinNew` event that only fires when certain files are opened in new windows (specifically "help" files, i.e. `*/doc/*.txt`). However, when I specify a filepath pattern for the autocmd, it never fires.
### Steps to reproduce
```
nvim --clean
:autocmd WinNew "*/doc/*.txt" echo "hi"
:help help
```
### Expected behavior
I would expect the above autocmd to fire, but it doesn't. I have no idea if this is intended behavior or not; as far as I can tell the docs don't say anything about what the pattern for `WinNew` matches on, but matching on the file path if the new window is for a file buffer seems like an obvious choice (without any knowledge of implementation details).
### Nvim version (nvim -v)
NVIM v0.10.2 Build type: Release LuaJIT 2.1.1713484068
### Vim (not Nvim) behaves the same?
yes, tested version 9.1
### Operating system/version
Windows 11
### Terminal name/version
wezterm, version 20240203-110809-5046fc22
### $TERM environment variable
xterm-256color
### Installation
winget | documentation | low | Minor |
2,714,119,105 | godot | [Multiple monitors] Window disapears when borderless fullscreen and window mode maximized | ### Tested versions
- Reproducible in 4.3 stable, 4.4 dev branch
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4070 Ti (NVIDIA; 32.0.15.6094) - 13th Gen Intel(R) Core(TM) i7-13700K (24 Threads)
### Issue description
When setting a borderless window to maximized, it will disappear.
### Steps to reproduce
If `borderless = true` and the window mode is then set to maximized (the bug occurs ONLY at the moment of setting), the window will disappear. Click the button in the MRP to test.
### Minimal reproduction project (MRP)
[windowedMvp.zip](https://github.com/user-attachments/files/17988396/windowedMvp.zip)
| bug,needs testing,topic:gui | low | Major |
2,714,119,285 | pytorch | torch._dynamo.exc.Unsupported: builtin: eq [<class 'torch._dynamo.variables.misc.GetAttrVariable'>, <class 'torch._dynamo.variables.constant.EnumVariable'>] | ### 🐛 Describe the bug
Mini reproducer:
Note: this is not the same issue as https://github.com/pytorch/pytorch/issues/135573
```python
# import os
# os.environ["TORCH_LOGS"] = "+dynamo"
import torch
# torch._dynamo.config.inline_inbuilt_nn_modules = False
import enum
from collections import OrderedDict
DEVICE = "cpu"
class ParametersModule(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear1 = torch.nn.Linear(2, 2)
self.scale = torch.nn.Parameter(torch.randn(2, 2))
self.scale_dup = self.scale
def forward(self, x):
counter = 0
for param in self.parameters():
counter += 1
return x * self.scale * counter
class ZeroParamStatus(enum.Enum):
# parameters are fully present and ready for use on all processes
AVAILABLE = 1
# parameters are either partitioned or remote in some or all process
NOT_AVAILABLE = 2
# parameters are being gathered.
INFLIGHT = 3
class ZeROOrderedDict(OrderedDict):
def __init__(self, parent_module=None, *args, **kwargs):
super().__init__(*args, **kwargs)
self._parent_module = parent_module
def __getitem__(self, key):
param = super().__getitem__(key)
if param.ds_status == ZeroParamStatus.NOT_AVAILABLE:
pass
return param
def inject_parameters(module, cls):
for module in module.modules():
if cls == ZeROOrderedDict:
new_param = cls(parent_module=module)
else:
new_param = cls()
for key, param in module._parameters.items():
# just a hack to set the status
param.ds_status = ZeroParamStatus.NOT_AVAILABLE
new_param[key] = param
module._parameters = new_param
model = ParametersModule()
inject_parameters(model, ZeROOrderedDict)
model= model.to(DEVICE)
model = torch.compile(model, backend="inductor", fullgraph=False)
x = torch.ones(2).to(DEVICE)
y = model(x)
print(y)
```
### Versions
➜ python torch/utils/collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+git3357348
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.12.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] intel-extension-for-pytorch==2.2.0
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] optree==0.11.0
[pip3] torch==2.6.0a0+git3357348
[pip3] torchaudio==2.2.0+cpu
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,714,130,368 | tensorflow | How to load partial parameters from subclass model in TF2? | Hi, all,
I have constructed a Keras model in subclass mode in TensorFlow 2.x, trained it on a dataset named dataset1 (which has 4000 classes, i.e. a vocabulary size of 4000), and saved the checkpoint with tf.train.Checkpoint and tf.train.CheckpointManager. Now I want to finetune it on a new dataset named dataset2 (which has 500 classes, a vocabulary size of 500). Restoring the parameters directly from the saved checkpoint fails because of mismatched shapes (i.e. the embedding layer, the output layers of the encoder and decoder, etc.).
So, how can I restore partial parameters from the checkpoint, or ignore the shape-mismatched parameters when restoring? Here is a simplified example:
```python
import tensorflow as tf

# custom design layer
class CustomLayer(tf.keras.layers.Layer):
    def __init__(self, unit1, unit2, **kwargs):
        super(CustomLayer, self).__init__(**kwargs)
        self.dense1 = tf.keras.layers.Dense(unit1, activation='relu')
        self.dense2 = tf.keras.layers.Dense(unit2, activation='softmax')

    def call(self, inputs):
        x = self.dense1(inputs)
        return self.dense2(x)

# keras model
class CustomModel(tf.keras.Model):
    def __init__(self, unit1, unit2):
        super(CustomModel, self).__init__()
        self.layer = tf.keras.layers.Dense(128, activation='relu')
        self.custom_layer = CustomLayer(unit1, unit2)

    def call(self, inputs):
        x = self.layer(inputs)
        x = self.custom_layer(x)
        return x

# original model
old_model = CustomModel(64, 10)
optimizer = tf.keras.optimizers.Adam(1e-4, beta_1=0.9, beta_2=0.98, epsilon=1e-9)
ckpt = tf.train.Checkpoint(step=tf.Variable(0), model=old_model, optimizer=optimizer)
for sample in dataset1:
    pass  # training and checkpoint saving ...

# new model
new_model = CustomModel(64, 5)
```

**How can I restore partial parameters from the checkpoint saved before and start a new training run on dataset2?**
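A common approach (a sketch, not a verified answer for the exact model above) is to restore by variable name and copy only the variables whose shapes match, leaving the mismatched ones (embedding, output layers) freshly initialized. The shape-matching filter idea can be illustrated framework-free in plain Python; all names here are illustrative:

```python
def partial_restore(checkpoint_vars, model_vars):
    """Copy checkpoint values into model_vars, skipping shape mismatches.

    checkpoint_vars / model_vars: dicts mapping variable name -> value,
    where each value is a nested list whose structure encodes the shape.
    Returns the list of names that were restored.
    """
    def shape_of(v):
        # crude shape probe for nested lists (stands in for tensor.shape)
        s = []
        while isinstance(v, list):
            s.append(len(v))
            v = v[0]
        return tuple(s)

    restored = []
    for name, new_val in model_vars.items():
        old_val = checkpoint_vars.get(name)
        if old_val is not None and shape_of(old_val) == shape_of(new_val):
            model_vars[name] = old_val  # keep only shape-compatible weights
            restored.append(name)
    return restored
```

In TF2 itself, the analogous step would read the checkpoint with `tf.train.load_checkpoint(path)`, compare `reader.get_variable_to_shape_map()` against the variables of `new_model`, and `assign` only the compatible entries. `ckpt.restore(path).expect_partial()` silences warnings about unmatched objects but still fails on shape mismatches, so the explicit name-and-shape filter is the safer route; treat the exact reader API usage as something to verify against your TF version.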
| stat:awaiting tensorflower,type:support,comp:apis | medium | Minor |
2,714,147,650 | pytorch | [RFC] Disable CMake find_library(libm) on Windows, and solve libm conflict to MSVC runtime lib(ucrt.lib). | ### 🚀 The feature, motivation and pitch
Original issue is https://github.com/pytorch/pytorch/issues/134989
After debugging, I found that `libm` should not exist on Windows: Microsoft already provides the math functions in its runtime library. If we call find_library(libm) from a CMake file on Windows, the `libm` math functions conflict with the MSVC runtime lib (ucrt.lib).
### Solution:
I debugged and found that `XNNPACK` and `sleef` call find_library(libm) from their CMake files, so I submitted PRs to add a CMake build option that turns off find_library(libm).
**XNNPACK:**
1. I submitted a PR to add a build option to disable `libm`: https://github.com/google/XNNPACK/pull/7456, and it was accepted and merged.
2. I ran into a build issue with `static_assert`: https://github.com/google/XNNPACK/issues/7534; I worked with Google engineers and it is fixed now: https://github.com/google/XNNPACK/issues/7534#issuecomment-2513677937
3. We have a PyTorch PR to update `XNNPACK` and turn off `libm` on Windows: https://github.com/pytorch/pytorch/pull/141943
**Sleef:**
1. I have submitted a PR to add a build option to disable `libm`: https://github.com/shibatch/sleef/pull/603, and am still discussing it with the maintainer.
2. We have a PyTorch PR to update `Sleef` and turn off `libm` on Windows: https://github.com/pytorch/pytorch/pull/142245
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
```[tasklist]
### Tasks
- [ ] https://github.com/pytorch/pytorch/pull/142245
- [ ] https://github.com/pytorch/pytorch/pull/141943
- [ ] https://github.com/pytorch/pytorch/pull/144644
```
| module: build,module: windows,triaged | low | Critical |
2,714,164,857 | kubernetes | Fake client does not validate if the namespace exists when creating resources | ### What happened?
When using the fake client provided by the Kubernetes client-go library, I observed that it does not validate whether the namespace exists before creating resources.
For example, in the test case below, I attempted to create a Deployment in a non-existent namespace (test-namespace). However, no error was returned, and the operation succeeded without the namespace being created or verified:
```go
import (
	"context"
	"fmt"
	"testing"

	v1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientFake "k8s.io/client-go/kubernetes/fake"
)
func Test1(t *testing.T) {
k8sClient := clientFake.NewSimpleClientset()
_, err := k8sClient.AppsV1().Deployments("test-namespace").Create(context.Background(), &v1.Deployment{
ObjectMeta: metav1.ObjectMeta{Name: "test-deployment"},
}, metav1.CreateOptions{})
fmt.Println(err)
}
```
Running this test produces the following output:
```bash
<nil>
PASS
ok portrait/offline/pkg/prometheus 0.018s
```
I expected the fake client to return an error indicating that the namespace does not exist. However, it silently allowed the creation of the resource.
### What did you expect to happen?
The fake client should validate whether the specified namespace exists before allowing the creation of a resource within it. If the namespace does not exist, it should return an error similar to the behavior of a real Kubernetes API server.
### How can we reproduce it (as minimally and precisely as possible)?
* Create a fake client using clientFake.NewSimpleClientset().
* Attempt to create a resource (e.g., Deployment) in a non-existent namespace.
* Observe that no error is returned, and the resource is created without namespace validation.
### Anything else we need to know?
* The fake client is commonly used for unit testing in Kubernetes-related projects. If it does not mimic the real Kubernetes API server's behavior accurately, it can lead to tests passing incorrectly and bugs being introduced into production code.
* I understand that the fake client is not a full Kubernetes API server implementation. However, basic validations like namespace existence checks are critical for accurate testing.
* I am happy to submit a PR to address this issue if it is agreed that this is a bug or missing feature. Please let me know if this behavior is intentional or if I should proceed with the fix.
* If the maintainers decide this is not a bug, I would appreciate clarification on whether this behavior is by design or if there are alternative approaches to handle namespace validation in unit tests.
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.21.1
WARNING: version difference between client (1.31) and server (1.21) exceeds the supported minor version skew of +/-1
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ uname -a
Linux MI-20240719YHKN 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/api-machinery,triage/accepted | low | Critical |
2,714,215,716 | kubernetes | [Flaky Test] TestRegistrationHandler/manage-resource-slices of kubelet/cm/dra plugin is Flaking | ### Which jobs are flaking?
k8s Unit test Job.
### Which tests are flaking?
`TestRegistrationHandler/manage-resource-slices` in `k8s.io/kubernetes/pkg/kubelet/cm/dra: plugin`
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/dra/plugin/registration_test.go#L115
### Reason for failure (if possible)
```
--- FAIL: TestRegistrationHandler (3.38s)
--- FAIL: TestRegistrationHandler/manage-resource-slices (3.38s)
registration_test.go:149:
Error Trace: /root/kubernetes/pkg/kubelet/cm/dra/plugin/registration_test.go:149
/root/kubernetes/staging/src/k8s.io/client-go/testing/fixture.go:882
/root/kubernetes/staging/src/k8s.io/client-go/testing/fake.go:145
/root/kubernetes/staging/src/k8s.io/client-go/gentype/fake.go:234
/root/kubernetes/pkg/kubelet/cm/dra/plugin/registration.go:116
/root/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:154
/root/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/wait/backoff.go:485
/root/kubernetes/pkg/kubelet/cm/dra/plugin/registration.go:102
/usr/local/go/src/runtime/asm_amd64.s:1700
Error: Not equal:
expected: "spec.nodeName=worker"
actual : "spec.driver=pluginB,spec.nodeName=worker"
Diff:
--- Expected
+++ Actual
@@ -1 +1 @@
-spec.nodeName=worker
+spec.driver=pluginB,spec.nodeName=worker
Test: TestRegistrationHandler/manage-resource-slices
Messages: field selector in DeleteCollection
FAIL
```
### Anything else we need to know?
`TestRegistrationHandler` failure
```
[root@raji-x86-workspace1 kubernetes]# stress ./plugin.test -test.run TestRegistrationHandler
5s: 16 runs so far, 0 failures, 16 active
10s: 32 runs so far, 0 failures, 16 active
15s: 48 runs so far, 0 failures, 16 active
20s: 64 runs so far, 0 failures, 16 active
25s: 80 runs so far, 0 failures, 16 active
30s: 96 runs so far, 0 failures, 16 active
35s: 113 runs so far, 0 failures, 16 active
40s: 131 runs so far, 0 failures, 16 active
45s: 160 runs so far, 0 failures, 16 active
50s: 176 runs so far, 0 failures, 16 active
55s: 193 runs so far, 0 failures, 16 active
1m0s: 210 runs so far, 0 failures, 16 active
1m5s: 226 runs so far, 0 failures, 16 active
1m10s: 242 runs so far, 0 failures, 16 active
1m15s: 258 runs so far, 0 failures, 16 active
1m20s: 275 runs so far, 0 failures, 16 active
1m25s: 303 runs so far, 0 failures, 16 active
/tmp/go-stress-20241202T035110-241152069
I1202 03:52:35.497329 1309458 registration.go:154] "Register new DRA plugin" logger="DRA registration handler" pluginName="pluginA" endpoint="endpointA"
I1202 03:52:35.497841 1309458 registration.go:154] "Register new DRA plugin" logger="DRA registration handler" pluginName="pluginA" endpoint="endpointA"
I1202 03:52:35.497928 1309458 registration.go:184] "DRA plugin already registered, the old plugin was replaced and will be forgotten by the kubelet till the next kubelet restart" logger="DRA registration handler" pluginName="pluginA" oldEndpoint="endpointA"
I1202 03:52:35.497993 1309458 registration.go:154] "Register new DRA plugin" logger="DRA registration handler" pluginName="pluginB" endpoint="endpointB"
I1202 03:52:35.498078 1309458 registration.go:226] "Deregister DRA plugin" logger="DRA registration handler" pluginName="pluginB" endpoint="endpointB"
I1202 03:52:35.498146 1309458 registration.go:237] "Deregister DRA plugin not necessary, was already removed" logger="DRA registration handler"
I1202 03:52:35.498436 1309458 registration.go:154] "Register new DRA plugin" logger="DRA registration handler" pluginName="pluginA" endpoint="endpointA"
I1202 03:52:35.498518 1309458 registration.go:184] "DRA plugin already registered, the old plugin was replaced and will be forgotten by the kubelet till the next kubelet restart" logger="DRA registration handler" pluginName="pluginA" oldEndpoint="endpointA"
I1202 03:52:35.498578 1309458 registration.go:154] "Register new DRA plugin" logger="DRA registration handler" pluginName="pluginB" endpoint="endpointB"
I1202 03:52:35.498659 1309458 registration.go:226] "Deregister DRA plugin" logger="DRA registration handler" pluginName="pluginB" endpoint="endpointB"
I1202 03:52:35.498715 1309458 registration.go:237] "Deregister DRA plugin not necessary, was already removed" logger="DRA registration handler"
I1202 03:52:35.498967 1309458 registration.go:154] "Register new DRA plugin" logger="DRA registration handler" pluginName="pluginA" endpoint="endpointA"
I1202 03:52:35.499036 13
…
1m30s: 322 runs so far, 1 failures (0.31%), 16 active
```
### Relevant SIG(s)
/sig testing | sig/node,kind/flake,triage/accepted,wg/device-management | low | Critical |
2,714,262,567 | godot | Using too many tiles and a `Camera2D` will make `TileMapLayer` vanish on camera `zoom` + `offset` | ### Tested versions
Version: 4.3.stable.mono
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 2080 SUPER (NVIDIA; 32.0.15.6603) - AMD Ryzen 7 3700X 8-Core Processor (16 Threads)
### Issue description
At a specific number of tiles (in my test it's 128 * 192 = 24 576 tiles) I have a problem when using a Camera2D.
When I zoom out of the `TileMapLayer` and then move the camera around via the `offset` property, at specific positions in a direction (for example, moving down) the `TileMapLayer` vanishes. If I keep moving a bit further, it appears again, then vanishes again, as if at specific breakpoints.
### Steps to reproduce
Download the example project, zoom out to max zoom, then keep holding the down arrow. You will then see the effect.
### Minimal reproduction project (MRP)
[TestTilemap.zip](https://github.com/user-attachments/files/17989255/TestTilemap.zip)
| bug,needs testing,topic:2d | low | Minor |
2,714,292,101 | vscode | tools menu |
Type: <b>Feature Request</b>
Hello,
please integrate a Tools menu into Visual Studio Code,
for example Tools > NuGet Package Manager
Thanks.
HR
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.22000
Modes:
<!-- generated by issue reporter --> | feature-request,menus | low | Minor |
2,714,299,783 | vscode | no copy and paste |
Type: <b>Bug</b>
Hello,
after all these updates you don't even have a copy-and-paste feature in the Issue Reporter; that's poor.
What are you doing the whole day?
Thanks.
HR
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.22000
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD E-350D APU with Radeon(tm) HD Graphics (2 x 1597)|
|GPU Status|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: disabled_off<br>opengl: disabled_off<br>rasterization: disabled_software<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: disabled_software<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: unavailable_software<br>webgl2: unavailable_software<br>webgpu: unavailable_software<br>webnn: unavailable_software|
|Load (avg)|undefined|
|Memory (System)|3.60GB (0.68GB free)|
|Process Argv|--crash-reporter-id 88eee6e2-f681-494d-99e7-f471763957bc|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (26)</summary>
Extension|Author (truncated)|Version
---|---|---
npm-intellisense|chr|1.4.5
path-intellisense|chr|2.10.0
vscode-eslint|dba|3.0.10
vscode-solution-explorer|fer|0.8.8
linter-gfortran|for|3.2.0
copilot|Git|1.246.0
copilot-chat|Git|0.22.4
gc-excelviewer|Gra|4.2.62
csdevkit|ms-|1.9.55
csharp|ms-|2.39.29
vscode-dotnet-runtime|ms-|2.2.3
debugpy|ms-|2024.12.0
python|ms-|2024.14.0
vscode-pylance|ms-|2024.8.1
cpptools|ms-|1.21.6
vscode-versionlens|pfl|1.14.5
java|red|1.33.0
node-pack|Swe|0.1.16
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
0ee40948:31013168
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
impr_priority:31102340
nativerepl1:31139838
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
j44ff735:31181874
```
</details>
<!-- generated by issue reporter --> | feature-request,issue-reporter | low | Critical |
2,714,309,453 | excalidraw | TypeError: Failed to construct 'URL': Invalid URL | # Excalidraw TypeError: Failed to construct 'URL': Invalid URL
## Expected Outcome:
The code provided in the Excalidraw docs for Next.js works without error
The Excalidraw node is mounted in the HTML tree

## Issue: Unhandled Runtime Error
I read thru [Excalidraw examples for Next js in Excalidraw's docs ](https://docs.excalidraw.com/docs/@excalidraw/excalidraw/integration#nextjs) & I cannot find a solution to **invalid URL** for weeks

I read thru different example from:
- [Lexical](https://github.com/facebook/lexical/blob/main/packages/lexical-playground/src/nodes/ExcalidrawNode/ExcalidrawImage.tsx)
- [Excalidraw example](https://github.com/excalidraw/excalidraw/blob/master/examples/excalidraw/with-nextjs/src/excalidrawWrapper.tsx)
Still have the same issue of **ERROR: Invalid URL + 'ReactCurrentDispatcher'**
## My attampts
1. I dowgraded excalidraw to `0.17.0` still not works
2. I deleted node_modules & re-install still no avail
3. I Try different version of next js , still not work (maybe I did't change the react version idk..)
4. I haven't try `Yarn` package manager , could be npm package manager issue
I am not smart enough .. some help will be nice😊!
## How to recreate the error?
1. [Install next js](https://nextjs.org/docs/app/getting-started/installation) <br> `npx create-next-app@latest`
2. I use app router <br> `Would you like to use App Router? (recommended) "Yes"` <- Yes/Y
3. [Install Excalidraw](https://docs.excalidraw.com/docs/@excalidraw/excalidraw/installation) <br> `npm install react react-dom @excalidraw/excalidraw` or just copy my `package.json`
4. Install packages <br> `npm i`
5. Copy my component, or go to the Excalidraw docs & follow them [here](https://docs.excalidraw.com/docs/@excalidraw/excalidraw/integration#nextjs)
6. The error should occur 
### My package.json:
```json
{
"name": "learn-excalidraw",
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "NODE_OPTIONS='--inspect' next dev",
"build": "next build",
"start": "next start",
"lint": "next lint"
},
"dependencies": {
"@excalidraw/excalidraw": "^0.17.0",
"@lexical/react": "^0.20.0",
"@radix-ui/react-dialog": "^1.1.2",
"@radix-ui/react-slot": "^1.1.0",
"class-variance-authority": "^0.7.0",
"clsx": "^2.1.1",
"lexical": "^0.20.0",
"lucide-react": "^0.461.0",
"next": "15.0.3",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0",
"tailwind-merge": "^2.5.5",
"tailwindcss-animate": "^1.0.7"
},
"devDependencies": {
"@types/node": "^20",
"@types/react": "^18",
"@types/react-dom": "^18",
"eslint": "^8",
"eslint-config-next": "15.0.3",
"postcss": "^8",
"tailwindcss": "^3.4.1",
"typescript": "^5"
},
"peerDependencies": {
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0",
"react-dom": "^16.8 || ^17.0 || ^18.0 || ^19.0"
}
}
```
### My Components:
#### ExcalidrawWrapper.tsx
My excalidraw wrapper according to the docs [here](https://docs.excalidraw.com/docs/@excalidraw/excalidraw/integration#nextjs)
```tsx
"use client";
import { Excalidraw } from "@excalidraw/excalidraw";
const ExcalidrawWrapper: React.FC = () => {
return (
<div style={{ height: "500px", width: "500px" }}>
<Excalidraw />
</div>
);
};
export default ExcalidrawWrapper;
```
#### My page.tsx
```tsx
"use client";
import { Suspense } from "react";
import dynamic from "next/dynamic";
import Script from "next/script";
// Dynamically import ExcalidrawWrapper with SSR disabled
const ExcalidrawWrapperClient = dynamic(
async () => (await import("../components/Excalidraw/ExcalidrawWrapper")).default,
{ ssr: false }
);
export default function Page() {
return (
<Suspense fallback={<div>Loading...</div>}>
<Script id="load-env-variables" strategy="beforeInteractive">
{`
window["EXCALIDRAW_ASSET_PATH"] = window.origin;
console.log(window["EXCALIDRAW_ASSET_PATH"]);
`}
</Script>
<ExcalidrawWrapperClient />
</Suspense>
);
}
```
The `console.log(window["EXCALIDRAW_ASSET_PATH"])` is used to test whether the URL path is correct; in my case it correctly resolves to `127.0.0.1:3000`.
I added the `<Script />` to test out a solution for `TypeError: ReactCurrentDispatcher`.
## TypeError: Cannot read properties of undefined (reading 'ReactCurrentDispatcher')

This solution come from [here](https://github.com/excalidraw/excalidraw/blob/master/examples/excalidraw/with-nextjs/src/app/page.tsx)
| bug,font | low | Critical |
2,714,335,958 | vscode | Attached text hover paddings | Testing #234962
The line between title and content is too close to the content:

| bug,verification-found,panel-chat | low | Minor |
2,714,341,938 | vscode | Color attached code in hover | Testing #234962
The syntax coloring is missing:
 | bug,panel-chat | low | Minor |
2,714,348,742 | vscode | Wrong file type icon for attached code | Testing #234962
It shows the plaintext icon instead of the TS one:

| bug,panel-chat | low | Minor |
2,714,356,652 | vscode | Undo show hide paste menu | Testing #234962
- paste code into chat panel
- undo
- paste menu stays 🐛
| bug,panel-chat | low | Minor |
2,714,363,174 | vscode | Selecting a completion entry doesn't do anything | Testing #235021
* trigger completions with ctrl+space
* arrow down to an entry
* select Enter
* :bug: suggestions close but nothing is inserted
https://github.com/user-attachments/assets/fcb573a5-1ce3-4292-802a-a40fcda7bc9f
| feature-request,terminal-suggest | medium | Critical |
2,714,393,247 | langchain | langchain_chroma similarity_search_with_score pass two metadata It's not OK | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_chroma import Chroma
from conf import vectorStores_dir
from embeddings.BgeEmbeddings import BgeEmbedding
a = Chroma(
collection_name="test_meta",
embedding_function=BgeEmbedding,
persist_directory=vectorStores_dir + "test_meta"
)
# It's not OK
# q = a.similarity_search_with_score("home", 2, {"pageId": "test", "id": "1"})
# It's OK
# q = a.similarity_search_with_score("家", 2, {"pageId": "test"})
# It's OK to pass one metadata, but it's not OK to pass two metadata.
```
### Error Message and Stack Trace (if applicable)
raise ValueError(f"Expected where to have exactly one operator, got {where}")
ValueError: Expected where to have exactly one operator, got {'pageId': 'test', 'id': '1'} in query.
### Description
langchain and langchain-chroma are both the latest versions.
Passing two metadata fields to `langchain_chroma`'s `similarity_search_with_score` fails, while passing a single field works.
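A workaround, assuming Chroma's documented filter syntax: multiple metadata conditions must be combined with an explicit `$and` operator. The `to_chroma_where` helper below is hypothetical (not part of LangChain), shown only as a sketch:

```python
def to_chroma_where(metadata: dict) -> dict:
    """Build a Chroma `where` filter from a flat metadata dict.

    Chroma accepts a flat dict only when it contains exactly one field;
    two or more fields must be wrapped in an explicit `$and` list.
    """
    if len(metadata) <= 1:
        return dict(metadata)
    return {"$and": [{key: value} for key, value in metadata.items()]}

# q = a.similarity_search_with_score(
#     "home", 2, to_chroma_where({"pageId": "test", "id": "1"})
# )
```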
### System Info
embedding:BAAI/bge-large-zh-v1.5
langchain:0.3.9
langchain-chroma:0.1.4 | Ɑ: vector store | low | Critical |
2,714,402,580 | rust | `allow_internal_unstable` valid on non proc-macros | This bug was technically noticed before, in #69399. That issue was closed without fixing the issue (instead fixing another issue).
Found the bug looking at this gem of a function:

if it's a function, it looks at whether it's a proc macro and returns;
if it's not a proc macro, it also returns.....
Anyway, I'll fix this with the attribute handling rework (#131229), I'm just opening the issue to track the fact that this is now a known bug.
@rustbot claim
@rustbot label +A-Attributes
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"jdonszelmann"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-attributes,T-compiler,C-bug | low | Critical |
2,714,421,691 | vscode | Allow to configure the editor split on drop target threshold | I want to reopen #116502 that was closed a few years ago as it is still highly relevant. I find the default behavior of vscode annoying.
---
I would like an option to configure the split-offering threshold on drag. In the Atom editor I was used to a value around 66%.
This setting could be named "splitOnDragOfferingThreshold".
I think the code below implements this behavior. Currently the threshold is fixed at 33%:
[src/vs/workbench/browser/parts/editor/editorDropTarget.ts](https://github.com/microsoft/vscode/blob/42221c900ba1b5a06ccf7d439104381a5b13d885/src/vs/workbench/browser/parts/editor/editorDropTarget.ts#L406)
```ts
const splitWidthThreshold = editorControlWidth / 3; // offer to split left/right at 33%
const splitHeightThreshold = editorControlHeight / 3; // offer to split up/down at 33%
```

A bigger threshold could provide a much smoother user experience for someone who splits tabs very often by dragging.
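A sketch of how the fixed `/ 3` divisor could be parameterized by a user setting. The setting name "splitOnDragOfferingThreshold" is the one proposed above, not an existing VS Code setting, and the clamp range is my assumption:

```typescript
// Sketch only: assumes a hypothetical "workbench.splitOnDragOfferingThreshold"
// setting whose value would be passed in as `thresholdRatio`.
function splitThresholds(
  editorControlWidth: number,
  editorControlHeight: number,
  thresholdRatio: number = 1 / 3, // today's hard-coded 33%
): { splitWidthThreshold: number; splitHeightThreshold: number } {
  // Clamp so a misconfigured value can neither hide the split zones
  // nor make them cover the whole editor.
  const ratio = Math.min(Math.max(thresholdRatio, 0.05), 0.5);
  return {
    splitWidthThreshold: editorControlWidth * ratio,
    splitHeightThreshold: editorControlHeight * ratio,
  };
}
```

A ratio of `1 / 3` reproduces today's behavior; larger values widen the edge zones that offer a split on drop.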
| feature-request,workbench-editor-grid | low | Minor |
2,714,454,806 | vscode | Unclear why filing an issue affordance is so prominent | Testing #235051
* run a NB cell
* hover over execution times
* the message lists v e r y prominently that I should file an issue
* IMO that's too strong because generally times are just interesting and nothing that should be followed up with an issue report. Can this be just a little icon after the extension/renderer?

| under-discussion,notebook-statusbar | low | Minor |
2,714,565,971 | vscode | Extensions Warning: Dismiss button is not using custom hover | Testing #234993

You can add a custom hover with: `hoverService.showDelayedHover(...)` | debt,extensions | low | Major |
2,714,568,450 | svelte | Svelte's usage of the `:where` CSS selector breaks sites in Safari 12 and 13 | ### Describe the bug
Svelte uses the `:where` CSS selector when you use a CSS selector like `li.main-menu:hover ul`. Svelte turns that into `.main-menu.svelte-13eihuy:hover ul:where(.svelte-13eihuy)`.
This `:where` selector has only been added to Safari in version 14.
The fix is to change `li.main-menu:hover ul` into `li.main-menu:hover :global(ul)`, but this is not obvious and you won't notice that your CSS doesn't work in older browsers unless you specifically test your site in them.
Now obviously I know that Safari 12 and 13 are quite old, but aren't they supported by Svelte? What is the list of supported browsers anyway?
### Reproduction
I have an extremely simple version here in the playground:
https://svelte.dev/playground/ca3a0989871744e7b2d43959c9d6f574?version=5.4.0
When you inspect the generated css, you will see this:
``` css
.main-menu.svelte-13eihuy ul:where(.svelte-13eihuy) {
display: none;
}
.main-menu.svelte-13eihuy:hover ul:where(.svelte-13eihuy) {
display: block;
}
```
This will not work in Safari 12 or 13.
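The workaround mentioned above, applied to the reproduction's selector, can be sketched as follows (note it opts the `ul` out of Svelte's scoping, so the rule could now also match `ul` elements rendered by child components):

```svelte
<style>
  /* Compiles to `.main-menu.svelte-xxx:hover ul` with no `:where()`,
     so it also works in Safari 12/13. */
  li.main-menu:hover :global(ul) {
    display: block;
  }
</style>
```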
### Logs
_No response_
### System Info
```shell
System:
OS: macOS 15.1.1
CPU: (10) arm64 Apple M1 Max
Memory: 161.03 MB / 32.00 GB
Shell: 3.7.1 - /opt/homebrew/bin/fish
Binaries:
Node: 20.17.0 - ~/Library/pnpm/node
npm: 10.8.2 - ~/Library/pnpm/npm
pnpm: 9.13.2 - ~/Library/pnpm/pnpm
Browsers:
Safari: 18.1.1
npmPackages:
svelte: ^5.2.4 => 5.2.4
```
### Severity
annoyance | css | low | Critical |
2,714,582,944 | deno | Support compile `deno serve --parallel server.ts` into a single executable | I am currently unable to compile the serve function into a single executable with the `parallel` option enabled. | suggestion,compile,serve | low | Minor |
2,714,594,201 | godot | [Windows on Snapdragon ARM] GUI goes double in popup windows | ### Tested versions
v4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.26100 - GLES3 (Compatibility) - D3D12 (Qualcomm(R) Adreno(TM) X1-85 GPU) - Snapdragon(R) X Elite - X1E78100 - Qualcomm(R) Oryon(TM) CPU (12 Threads) - 16GB RAM - 10 touchscreen points - 2944x1840 resolution with 200% GUI scaling
### Issue description
I think this may be a resolution or CPU architecture issue, as on my gaming PC (which doesn't use ARM, I think; it has an RTX 3050 IIRC and runs at 1920x1080 with 100% scaling) there are no GUI bugs.
On popup windows, whenever I click/tap on an interactable element, the GUI renders doubled:



### Steps to reproduce
How to replicate: open Godot and open any popup window (e.g. Create New Project), then click on an interactable element (e.g. a button or a text box).
### Minimal reproduction project (MRP)
N/A. There is no reproduction project because this happens on all projects. | bug,platform:windows,topic:rendering,topic:thirdparty,needs testing,topic:gui | low | Critical |
2,714,601,730 | vscode | Test filter is hard to follow | re https://github.com/microsoft/vscode/issues/228195
* vscode sources, all test extensions
* disable fuzzy match
* search for `IPC, Child Process`
* :bug: I cannot make any sense of the results
We have one suite that's called "IPC, Child Process" and I expect that to be the only result. There seem to be totally unrelated results from terminal and way too many from IPC
https://github.com/user-attachments/assets/36fb1be4-5761-4287-92e6-fd7d043aed96
| bug,testing | low | Critical |
2,714,615,002 | ant-design | Tabs: UI flickers when using renderTabBar, and the right-side extra actions area is not pinned | ### Reproduction link
[https://codesandbox.io/p/sandbox/fglsnz](https://codesandbox.io/p/sandbox/fglsnz)
### Steps to reproduce
https://codesandbox.io/p/sandbox/fglsnz
### What is expected?
After using a custom tab bar renderer, when the tabs overflow, the extra actions area on the right should stay pinned, and adding a new tab should scroll to the newest tab by default.
### What is actually happening?
After using the custom renderer, when the tabs overflow, the extra actions area on the right is not pinned and only appears after manually scrolling to the end; also, adding a new tab scrolls back to the first tab by default.
| Environment | Info |
| --- | --- |
| antd | 5.22.3 |
| React | 18.0 |
| System | mac |
| Browser | 128.0.6613.138 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | low | Major |
2,714,621,906 | vscode | Allow List: Should target-specific versions be supported when the extension has not been released target-specific? | Testing #234993
What if an extension releases a version without specific targets? Should adding a target to that specific version still work? Currently it doesn't, and I do understand why; however, we could still support it so that companies can specify that a version of an extension can only be used on a specific platform. This is currently not possible if the extension did not publish a version for each target.
In the example below, `dbaeumer.vscode-eslint` published version `3.0.8` without being target-specific. Because of this, it is not possible to specify that this extension should only be allowed on Windows and not on other platforms:

| feature-request,extensions | low | Minor |
2,714,625,618 | vscode | Opening find widget in editor throws an exception | Latest VS Code insiders, macOS
1. Open Dev Tools
2. Editor > cmd+f -> find widget is opened
3. Notice exception in Dev Tools console 🐛
"Blocked aria-hidden on a <textarea> element because the element that just received focus must not be hidden from assistive technology users. Avoid using aria-hidden on a focused element or its ancestor. Consider using the inert attribute instead, which will also prevent focus. For more details, see the aria-hidden section of the WAI-ARIA specification at https://w3c.github.io/aria/#aria-hidden. "
```html
<textarea class="input" autocorrect="off" autocapitalize="off" spellcheck="false" wrap="off" aria-label="Find" placeholder="Find" style="background-color: inherit; color: var(--vscode-input-foreground); width: calc(100% + 0px); height: 23px;"></textarea>
``` | bug,accessibility,editor-find | low | Minor |
2,714,632,591 | vscode | No default agent contributed: CodeExpectedError: No default agent contributed | 1. VS Code, open dev tools, sign-out of GH Copilot
2. Reload. Notice error in dev tools console
" ERR No default agent contributed: CodeExpectedError: No default agent contributed
at Lqe.R (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:1951:25105)"
Here's the minified code place where it throws

If I am not logged in to GH Copilot ideally we would not throw any console errors. | bug | low | Critical |
2,714,686,701 | vscode | Copilot Chat - attached code block hover feedback | Testing #234962
* It would be nice if we could include the file-type icon in front of the file name
* The file name does not seem to be vertically centered in the header of the hover
* Not sure about using bold for the file name. I think that being in the header is distinct enough
* The main body of the hover should have consistent padding at the top/bottom
* Would be nice to remove the white-space from the beginning of the line

* There should be a visual indicator (ex: ...) if the hover does not contain the complete code block
 | feature-request,panel-chat | low | Minor |
2,714,713,364 | vscode | `notebook.addFindMatchToSelection` keeps looping | Testing #235047
When pressing <kbd>Cmd D</kbd> in a regular editor, find results keep getting added to the multiple selection collection. It will be a noop, once all find results are covered by selections.
🐛 In notebooks, <kbd>Cmd D</kbd> will never stop looping around find results.
https://github.com/user-attachments/assets/70dccdb2-1d20-4c52-95bf-d7733e64d97e
| bug,editor-highlight,notebook-cell-editor | low | Minor |
2,714,716,909 | deno | X509Certificate.prototype.publicKey fails for Ed25519 keys eventho KeyObject supports them | Version: Deno 2.1.2
In the provided script, the X.509 certificate's SPKI is the very same SPKI. The SPKI itself can be passed to `createPublicKey` but cannot be obtained through `X509Certificate`.
This is unexpected.
The very fact that `process.getBuiltinModule` is used to **again** ship incomplete `node:crypto` implementations is to Deno users' own detriment, since it is used by modules to dynamically target Node.js APIs for their performance and ease of use when a pure JS fallback would be used otherwise.
```js
const spki =
'-----BEGIN PUBLIC KEY-----\n' +
'MCowBQYDK2VwAyEAlBZ9GShvbQEtyyaGs+0Nd4aurH7ERq6UOIvXGb5/tXA=\n' +
'-----END PUBLIC KEY-----\n'
const x509 =
'-----BEGIN CERTIFICATE-----\n' +
'MIIBoTCCAVOgAwIBAgIUde5G4y+mtbb0eRISc7vnINRbSXkwBQYDK2VwMEUxCzAJ\n' +
'BgNVBAYTAkNaMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5l\n' +
'dCBXaWRnaXRzIFB0eSBMdGQwIBcNMjIxMDExMTIyMTUzWhgPMjEyMjA5MTcxMjIx\n' +
'NTNaMEUxCzAJBgNVBAYTAkNaMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQK\n' +
'DBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwKjAFBgMrZXADIQCUFn0ZKG9tAS3L\n' +
'Joaz7Q13hq6sfsRGrpQ4i9cZvn+1cKNTMFEwHQYDVR0OBBYEFAATzoAtBcYTcOdY\n' +
'jkcQqsWXipnSMB8GA1UdIwQYMBaAFAATzoAtBcYTcOdYjkcQqsWXipnSMA8GA1Ud\n' +
'EwEB/wQFMAMBAf8wBQYDK2VwA0EApfw+9jSO0x0IorDfdr5ZVGRBVgrfrd9XhxqQ\n' +
'Krphj6cA4Ls9aMYAHf5w+OW9D/t3a9p6mYm78AKIdBsPEtT1AQ==\n' +
'-----END CERTIFICATE-----\n'
const crypto = globalThis.process.getBuiltinModule('node:crypto')
const publicKeyObject = crypto.createPublicKey(spki)
const x509Object = new crypto.X509Certificate(x509)
console.log('got publicKeyObject\n', publicKeyObject)
console.log('got x509Object\n', x509Object)
try {
console.log('got x509Object.publicKey', x509Object.publicKey)
// Not implemented in deno, but it never gets here anyway
console.log(
'x509Object.publicKey.equals(publicKeyObject)',
x509Object.publicKey.equals(publicKeyObject),
)
} catch (err) {
console.log('x509Object.publicKey error', err)
}
```
Node.js console out
```
got publicKeyObject
PublicKeyObject [KeyObject] { [Symbol(kKeyType)]: 'public' }
got x509Object
X509Certificate {
subject: 'C=CZ\nST=Some-State\nO=Internet Widgits Pty Ltd',
subjectAltName: undefined,
issuer: 'C=CZ\nST=Some-State\nO=Internet Widgits Pty Ltd',
infoAccess: undefined,
validFrom: 'Oct 11 12:21:53 2022 GMT',
validTo: 'Sep 17 12:21:53 2122 GMT',
validFromDate: 2022-10-11T12:21:53.000Z,
validToDate: 2122-09-17T12:21:53.000Z,
fingerprint: 'FC:35:B7:FC:EE:99:AA:A3:32:4B:EA:93:DB:4F:E5:FB:1B:26:57:41',
fingerprint256: '1D:96:32:84:05:BA:54:8C:04:2C:64:8E:9B:46:7A:36:E8:90:10:87:5B:D5:BE:9B:9E:DE:37:C9:26:87:30:59',
fingerprint512: 'BE:F8:CE:6E:EC:C9:B7:DA:69:FD:B5:DE:89:48:E2:41:AB:6F:A7:71:AD:E5:9E:7F:BB:D2:8A:BF:13:41:7D:80:5B:DA:0E:DE:CF:F8:53:0E:4F:A9:C7:78:AC:1F:04:68:41:9A:9E:69:DC:EC:3F:9D:09:F9:7E:BE:3E:48:AF:A5',
keyUsage: undefined,
serialNumber: '75EE46E32FA6B5B6F479121273BBE720D45B4979'
}
got x509Object.publicKey PublicKeyObject [KeyObject] { [Symbol(kKeyType)]: 'public' }
x509Object.publicKey.equals(publicKeyObject) true
```
Deno console out
```
got publicKeyObject
PublicKeyObject [KeyObject] {
[Symbol(kKeyType)]: "public",
[Symbol(kHandle)]: {}
}
got x509Object
X509Certificate {}
x509Object.publicKey error Error: unsupported x509 public key type
at X509Certificate.get publicKey (ext:deno_node/internal/crypto/x509.ts:84:20)
at file:///Users/panva/repo/jose/some.mjs:28:54
``` | bug,node compat,crypto | low | Critical |
2,714,720,830 | vscode | Selection Highlighting in notebooks doesn't work after a few scroll passes | Testing #235047
1. Select a word. Notice it's correctly highlighted in other editors.
2. Scroll around. All is good until...
🐛 ... it isn't. 🙈
https://github.com/user-attachments/assets/50bfa81e-57f1-42c3-a9fb-7d86a13edf2d
| bug,editor-highlight,notebook-cell-editor | low | Minor |
2,714,722,563 | vscode | Remote SSH: Participant is not aware of the SSH remotes I have setup/configured in VS Code | Testing #235037
Maybe having a tool which lets the participant query the SSH remotes I have configured could help it answer some questions.

| feature-request,ssh | low | Minor |
2,714,722,685 | flutter | Multiple flutters with same engine group freeze with platform view | ### Steps to reproduce
1. Clone the sample repo
2. Run `flutter pub get` inside `freeze`
3. Run `flutter precache --ios` inside `freeze`
3. Run `pod install` in the root of the repo
4. Open `FlutterExperiments.xcworkspace`
5. Run the app
6. Scroll the home tab
7. Go to the profile tab
8. Scroll the profile tab
9. Repeat 6 to 8 a couple of times. The home screen should freeze
The issue is only reproducible on iOS
### Expected results
Scroll keeps working
### Actual results
The screen freezes
### Code sample
We are providing a full repo because it is better suited to showcase the integration between multiple Flutter instances and native views: https://github.com/motain/flutter-experiments
But in any case, here is the minimal code example
<details open>
<summary>iOS driver</summary>
```swift
class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    let engines = FlutterEngineGroup(name: "multiple-flutters", project: nil)
    var window: UIWindow?

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        guard let scene = (scene as? UIWindowScene) else { return }
        let window = UIWindow(windowScene: scene)
        let tabBar = UITabBarController()
        window.rootViewController = tabBar
        tabBar.viewControllers = [
            SingleFlutterViewController(withRoute: "/home", withEngineGroup: engines),
            SingleFlutterViewController(withRoute: "/profile", withEngineGroup: engines),
        ]
        window.makeKeyAndVisible()
        self.window = window
    }
}

final class SingleFlutterViewController: FlutterViewController {
    private var channel: FlutterMethodChannel?

    init(withRoute initialRoute: String, withEngineGroup engineGroup: FlutterEngineGroup) {
        let newEngine = engineGroup.makeEngine(withEntrypoint: nil, libraryURI: nil, initialRoute: initialRoute)
        GeneratedPluginRegistrant.register(with: newEngine)
        super.init(engine: newEngine, nibName: nil, bundle: nil)
        tabBarItem = UITabBarItem(title: initialRoute, image: nil, tag: initialRoute.hashValue)
    }

    required init(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```
</details>
<details open>
<summary>iOS Plugin</summary>
```swift
public class FreezerPlugin: NSObject, FlutterPlugin {
    public static func register(with registrar: FlutterPluginRegistrar) {
        let factory = FLNativeViewFactory(messenger: registrar.messenger())
        registrar.register(factory, withId: "freezer")
    }
}

class FLNativeViewFactory: NSObject, FlutterPlatformViewFactory {
    private var messenger: FlutterBinaryMessenger

    init(messenger: FlutterBinaryMessenger) {
        self.messenger = messenger
        super.init()
    }

    func create(
        withFrame frame: CGRect,
        viewIdentifier viewId: Int64,
        arguments args: Any?
    ) -> FlutterPlatformView {
        return FLNativeView(
            frame: frame,
            viewIdentifier: viewId,
            arguments: args,
            binaryMessenger: messenger)
    }

    public func createArgsCodec() -> FlutterMessageCodec & NSObjectProtocol {
        return FlutterStandardMessageCodec.sharedInstance()
    }
}

class FLNativeView: NSObject, FlutterPlatformView {
    private var _view: UIView

    init(
        frame: CGRect,
        viewIdentifier viewId: Int64,
        arguments args: Any?,
        binaryMessenger messenger: FlutterBinaryMessenger?
    ) {
        _view = UIView()
        super.init()
        createNativeView(view: _view)
    }

    func view() -> UIView {
        return _view
    }

    func createNativeView(view _view: UIView) {
        _view.backgroundColor = UIColor.blue
        let nativeLabel = UILabel()
        nativeLabel.text = "Native text from iOS"
        nativeLabel.textColor = UIColor.white
        nativeLabel.textAlignment = .center
        nativeLabel.frame = CGRect(x: 0, y: 0, width: 180, height: 48.0)
        _view.addSubview(nativeLabel)
    }
}
```
</details>
<details open>
<summary>Flutter</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

Future<void> main() async {
  runApp(MyApp());
}

get fakeClubId => "1094";
get fakeNationalId => "1";

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Home',
      initialRoute: '/home',
      routes: {
        // When navigating to the "/" route, build the FirstScreen widget.
        '/home': (context) => Scaffold(
            body: SingleChildScrollView(
              child: Column(
                children: List<Widget>.generate(
                  1000,
                  (index) => Text('this is home dummy text index $index'),
                ),
              ),
            )),
        // When navigating to the "/second" route, build the SecondScreen widget.
        '/profile': (context) => Scaffold(
            body: SingleChildScrollView(
              child: Column(children: [
                const SizedBox(
                  width: 100,
                  height: 50,
                  child: UiKitView(
                    viewType: "freezer",
                    creationParamsCodec: StandardMessageCodec(),
                  ),
                ),
                ...List<Widget>.generate(
                  1000,
                  (index) => Text('this is profile dummy text index $index'),
                ),
              ]),
            )),
      },
      debugShowCheckedModeBanner: false,
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/b0ad3e28-ba93-4283-a13d-b6455508e604
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale en-DE)
• Flutter version 3.24.3 on channel stable at /Users/bruno.pastre/fvm/versions/3.24.3
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (3 months ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/bruno.pastre/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (5 available)
• Bruno’s iPhone (mobile) • 00008120-00141D201412201E • ios • iOS 18.1.1 22B91
• iPhone SE (3rd generation) (mobile) • C645237F-3C82-4732-9801-C431210747AA • ios • com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.120
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| platform-ios,engine,a: platform-views,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.27 | low | Critical |
2,714,731,017 | vscode | Selection Highlighting in notebooks is indistinguishable from actual multi cursor | Testing #235047
In the regular editor, selection highlights are a faded-out color of the selection decorations. This lets the user keep track of which selection highlights are already part of the selection.
🐛 There seems to be no distinction in notebooks:
https://github.com/user-attachments/assets/3ad9648f-5f23-4b37-a3ef-d3318be20c65
| bug,editor-highlight,notebook-cell-editor | low | Minor |
2,714,735,852 | vscode | `notebook.addFindMatchToSelection` needs one extra invocation | Testing #235047
When pressing <kbd>Cmd D</kbd> in a regular editor after selecting a word, the next find match will be added to the selection.
🐛 In notebooks, the first <kbd>Cmd D</kbd> will do nothing. Only the second one will select the next find match.
| bug,editor-highlight,notebook-cell-editor | low | Minor |
2,714,735,983 | deno | Deno fetch gives TLS related "reset by peer" error, on a site with what appears to be strong security | Using `fetch` in Lume v2.4.2 (Deno Typescript static site generator app) on Deno 2.1.2, I get this error when fetching to our ops db REST API:
> fetch TypeError: "client error connection reset by peer"
I see a lot of people have had this same or a similar problem in various GH issues, and it appears to be because the underlying Rust [library](https://github.com/ctz/rustls) being used is extra strict about TLS connections. For example:
https://github.com/denoland/deno/issues/6427
https://github.com/denoland/deno/issues/6197
https://github.com/denoland/deno/issues/7528
When I test the connection using `curl -v`, you can see the result below; and the well-known assessment site https://www.ssllabs.com/ssltest/analyze.html?d=pro.dbflex.net returns an A rating, yet Deno (well, the underlying library) rejects the connection.
As a workaround, I used a different service to pull the json I am interested in, and represent it as an URL that does not require any auth / bearer tokens etc.
I am confused by this, because although TLS 1.2 is getting old, it seems like the A rating from SSLLabs should be good enough?
Are there any workarounds on the code side, like using a different or 3rd party fetch library?

```sh
❯ curl -v https://pro.dbflex.net/secure
* Host pro.dbflex.net:443 was resolved.
* IPv6: (none)
* IPv4: 208.100.33.100
* Trying 208.100.33.100:443...
* Connected to pro.dbflex.net (208.100.33.100) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-SHA384 / [blank] / UNDEF
* ALPN: server did not agree on a protocol. Uses default.
* Server certificate:
* subject: CN=dbflex.net
* start date: Feb 15 00:00:00 2024 GMT
* expire date: Mar 17 23:59:59 2025 GMT
* subjectAltName: host "pro.dbflex.net" matched cert's "*.dbflex.net"
* issuer: C=GB; ST=Greater Manchester; L=Salford; O=Sectigo Limited; CN=Sectigo RSA Domain Validation Secure Server CA
* SSL certificate verify ok.
* using HTTP/1.x
> GET /secure HTTP/1.1
> Host: pro.dbflex.net
> User-Agent: curl/8.7.1
> Accept: */*
``` | tls,needs investigation | low | Critical |
2,714,764,792 | vscode | Why does running code the first time produce no output? The command seems to run before the VS Code terminal is loaded |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3
- OS Version: Sequoia 15.1.1
Steps to Reproduce:
1. Click the `Run Python File` button when the terminal is closed.
- There is no output. It looks like the command runs before the terminal is ready.
- This problem does not happen on Windows.
- See the picture below.

2. Click the `Run Python File` button a second time, while the terminal is open.
- The same command runs and produces output.

**It is very strange that running code in VS Code on macOS needs two clicks.** | bug,terminal-input | low | Critical |
2,714,774,132 | pytorch | Support SDPA flash attention/ memory efficant attn on ROCm gfx908 | ### 🚀 The feature, motivation and pitch
Currently SDPA has a flash attention backend via AOTriton for gfx90a and gfx942, but not gfx908.
The CK flash attention backend used by https://github.com/ROCm/flash-attention does work on gfx908, but it is not used by PyTorch.
For fp16, the gfx908 ISA supports all the same instructions as gfx90a, so the gfx90a kernels should work on gfx908 with no modification. For bf16, only the mfma bf16_1k instruction is missing, which can easily be substituted.
It would thus be only a limited amount of work to extend support to gfx908, while bringing significant benefit to those devices.
### Alternatives
the apis provided by https://github.com/ROCm/flash-attention can be used directly.
### Additional context
gfx908 is the ISA of the MI100 compute accelerator released in 2020
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: rocm,triaged,module: sdpa | low | Minor |
2,714,796,175 | bitcoin | qa: Broken `wallet_multiwallet.py` | On the master branch @ ebe4cac38bf6c510b00b8e080acab079c54016d6, the `wallet_multiwallet.py` test has several issues:
1. This code: https://github.com/bitcoin/bitcoin/blob/ebe4cac38bf6c510b00b8e080acab079c54016d6/test/functional/wallet_multiwallet.py#L132-L140 checks for the "Error scanning" message in the `debug.log` caused by processing the `no_access` directory. However, the same message can also be generated when parsing the `self_walletdat_symlink` directory. As a result, the current implementation is prone to producing false-positive results.
2. Parsing the `self_walletdat_symlink` directory with `bitcoind.exe` depends on how it was built. When cross-compiling, the parsing completes without system errors. On the other hand, when building natively, it raises an "unknown error" exception and logs the "Error scanning" message.
3. This code: https://github.com/bitcoin/bitcoin/blob/ebe4cac38bf6c510b00b8e080acab079c54016d6/test/functional/wallet_multiwallet.py#L132-L139 is not portable due to its use of [`os.chmod`](https://docs.python.org/3/library/os.html#os.chmod):
> **Note:** Although Windows supports [`chmod()`](https://docs.python.org/3/library/os.html#os.chmod), you can only set the file’s read-only flag with it (via the `stat.S_IWRITE` and `stat.S_IREAD` constants or a corresponding integer value). | Tests | low | Critical |
2,714,860,290 | svelte | add a way to force update a state variable | ### Describe the problem
I'm using a custom-made data store with realtime updates. In Svelte 4, when the value got updated, I was just doing `value = value` to refresh the different bindings. In Svelte 5 this is not possible anymore, as before updating bindings the system checks whether the value is the same. What's more, inside some of the objects we have our own classes that are filled automatically (think relationships), so it seems like Svelte is not detecting the change in values.
I'm not sure if I'm clear in what I'm explaining, but basically, I would need something to inform the system to update bindings forcefully.
To be a bit more precise:
The system we have in my company needs to be usable without Svelte, so we have our own data system. We have so-called "Resource" and "ResourceCollection" objects (which are basically relationships). The system handles realtime updates across clients using WebSockets. To be informed of these changes, we subscribe to the resource. The only way I found to make the refresh work is this (`subscribeAndDo` is internal to our system):
```js
let unsub
$effect(() => {
unsub?.unsubscribe?.()
unsub = account.subscribeAndDo(onDestroy, () => {
const newAccount = account
account = null
account = newAccount
})
})
```
### Describe the proposed solution
The goal here is: whenever the `account` prop changes, we subscribe to it. Then, whenever something changes inside `account`, the callback is called. Here I'm using the "trick" to make sure that Svelte updates the bindings. It would be great if we could have a `$forceUpdate(account)` instead, if that makes sense.
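The "reassign through null" trick used above can be sketched framework-agnostically. A hypothetical `forceUpdate` helper might look like this (`get`/`set` stand in for reading and writing the reactive variable; they are assumptions for illustration, not a Svelte API):

```javascript
// Hypothetical helper wrapping the "null then restore" workaround.
function forceUpdate(get, set) {
  const current = get();
  set(null);     // make the value observably different...
  set(current);  // ...then restore it, so bindings re-run
}

// Plain-JS usage sketch (in Svelte, `set` would assign to the $state variable):
let account = { balance: 42 };
forceUpdate(
  () => account,
  (value) => { account = value; }
);
```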
### Importance
would make my life easier | feature request | medium | Critical |
2,714,867,204 | electron | Developer Tools doesn't open after opening and quitting second app instance | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.1
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11 Pro (26100.1742)
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
Opening the Developer Tools should be possible, even after opening and quitting multiple instances of the app.
### Actual Behavior
After opening and closing a second instance of the packaged app, the Developer Tools don't open as expected. Instead, some unformatted code is displayed.

### Testcase Gist URL
_No response_
### Additional Information
This happens only on Windows 11 - tried on at least 3 different PCs to validate this.
I can also reproduce this with https://github.com/electron/electron-quick-start with the following steps:
1. Add electron-builder to devDependencies ```"electron-builder": "^25.1.8"```
2. Add script: ```"build": "electron-builder build --win"```
3. Add electron-builder.json with content:
````
{
"appId": "electron-quick-start",
"productName": "electron-quick-start",
"directories": { "output": "release-builds/" },
"files": ["**/*"],
"win": {
"target": [
{ "target": "portable", "arch": ["x64"] }
]
}
}
````
4. Run: ```npm install && npm run build```
5. Open release-builds/electron-quick-start 1.0.0.exe --> Developer Tools open as expected.
6. Open a second instance --> While the app is loading, the Developer Tools in the existing window don't open...
7. After the second window has opened, the Developer Tools work again as expected.
8. Close one of the windows --> The Developer Tools in the remaining window cannot be opened anymore.
I'm happy to provide any other debug data, if you instruct me how to get them :) | platform/windows,bug :beetle:,has-repro-comment,33-x-y | low | Critical |
2,714,884,218 | react | [React 19] dangerouslySetInnerHTML causes repaint | ## Summary
Given the following component
```jsx
function App() {
const [count, setCount] = useState(0);
return (
<>
<button onClick={() => setCount((count) => count + 1)}>{count}</button>
<div dangerouslySetInnerHTML={{ __html: "I should only paint once" }} />
</>
);
}
```
Clicking the button should not trigger a repaint of "I should only paint once". This works as expected in react@18 but not in react@19.
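One thing that may be worth ruling out (an assumption, not a confirmed cause of the regression) is prop identity: the inline object literal passed to `dangerouslySetInnerHTML` is a new reference on every render, whereas a hoisted object is stable:

```javascript
// New object literal on every call -- fails a reference-equality check:
const renderInline = () => ({ __html: "I should only paint once" });
console.log(renderInline() === renderInline()); // false

// Hoisted once -- the reference is stable across calls:
const html = { __html: "I should only paint once" };
const renderHoisted = () => html;
console.log(renderHoisted() === renderHoisted()); // true
```

If hoisting (or memoizing) the object avoids the repaint, that would narrow the report down to how React 19 handles a freshly created `__html` object.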
* Browser: Chrome Version 131.0.6778.86 (Official Build) (arm64)
* OS: MacOS 15.1.1 (24B91)
## react@18
https://github.com/user-attachments/assets/3912ae19-f0b5-4425-abd4-0c376d9f8bb7
## react@19
https://github.com/user-attachments/assets/d486783d-819a-40fe-a9f8-8c537009e173
[Source Code](https://github.com/lukahartwig/react-dangerouslysetinnerhtml)
| React 19 | low | Major |
2,714,910,648 | tauri | [bug][linux] Decorations are unresponsive if WebviewWindow is built with visible(false), then shown | ### Describe the bug
**Reproduced on Gnome on both Fedora and Ubuntu** (fresh installs). I don't know about other DEs. Windows/macOS work well.
When a `WebviewWindow` is built manually via its `Builder` with visibility set to `false`, and `getCurrentWindow().show()` is then called once the UI is ready, the window decorations are unresponsive and do nothing (panning is still possible).
Weirdly enough, interactivity comes back after double-clicking somewhere in the titlebar, e.g. on the maximize button.
### Reproduction
- remove window creation from `tauri.conf.json`
- in app setup (`main.rs`):
```rust
let window_builder =
WebviewWindowBuilder::new(app, "main", WebviewUrl::App("index.html".into()))
.title("Museeks")
.visible(false) // IMPORTANT
.inner_size(900.0, 550.0)
.min_inner_size(900.0, 550.0)
.fullscreen(false)
.resizable(true)
.disable_drag_drop_handler()
.zoom_hotkeys_enabled(true)
.build()?;
```
- in the UI, call at some point `getCurrentWindow().show()`
- try to manipulate the decorations
### Expected behavior
The decoration should be responsive.
### Full `tauri info` output
```text
parallels@ubuntu-linux-2404:~/dev/museeks$ bun tauri info
$ tauri info
[✔] Environment
- OS: Ubuntu 24.4.0 aarch64 (X64)
✔ webkit2gtk-4.1: 2.46.3
✔ rsvg2: 2.58.0
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-unknown-linux-gnu (default)
- bun: 1.1.38
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-log 🦀: 2.0.3
- @tauri-apps/plugin-log : not installed!
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : not installed!
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : not installed!
- tauri-plugin-notification 🦀: 2.0.1
- @tauri-apps/plugin-notification : not installed!
- tauri-plugin-fs 🦀: 2.1.0
- @tauri-apps/plugin-fs : not installed!
- tauri-plugin-single-instance 🦀: 2.0.2
- @tauri-apps/plugin-single-instance : not installed!
- tauri-plugin-dialog 🦀: 2.0.4
- @tauri-apps/plugin-dialog : not installed!
- tauri-plugin-window-state 🦀: 2.0.2
- @tauri-apps/plugin-window-state : not installed!
[-] App
- build-type: bundle
- CSP: default-src 'none'; img-src 'self' asset: data:; media-src 'self' blob: asset: https://asset.localhost http://asset.localhost; child-src 'self'; object-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; connect-src ipc: asset: https://asset.localhost http://asset.localhost http://ipc.localhost 'self' https://api.github.com; font-src 'self' data:
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,714,929,414 | react-native | App is crashing after upgrading native version 0.74.3 to 0.76.3. | ### Description
In my existing project I upgraded all dependencies together with the React Native version, and now the app is crashing.
Here is the `package.json` file:
{
"name": "wellness360",
"version": "0.0.1",
"IOS_VERSION": "1.22",
"IOS_BUILD_NUMBER": "0",
"ANDROID_VERSION": "1.0",
"ANDROID_BUILD_NUMBER": "38",
"private": true,
"scripts": {
"check-dependencies": "rnx-align-deps",
"fix-dependencies": "rnx-align-deps --write",
"prepare": "husky",
"postinstall": "husky",
"lint-staged": "lint-staged",
"android": "react-native run-android",
"ios": "react-native run-ios",
"lint": "eslint .",
"lint-error": "eslint --quiet .",
"code-format": "prettier . --write",
"start": "react-native start",
"test": "jest",
"type-check": "tsc",
"test:report": "jest --collectCoverage --coverageDirectory=\"./coverage\" --ci --reporters=default --reporters=jest-junit --coverage",
"pod-install": "cd ios && RCT_NEW_ARCH_ENABLED=1 bundle exec pod install && cd ..",
"bundle:ios": "react-native bundle --entry-file='index.js' --bundle-output='./ios/main.jsbundle' --dev=false --platform='ios' --assets-dest='./ios'",
"check-updates": "yarn dlx npm-check-updates"
},
"lint-staged": {
"*.{js,jsx,ts,tsx}": [
"eslint",
"prettier --write"
],
"*.{css,scss,md,html}": [
"prettier --write"
]
},
"dependencies": {
"@datadog/mobile-react-native": "^2.4.4",
"@datadog/mobile-react-navigation": "^2.4.4",
"@gluestack-style/react": "^1.0.57",
"@gluestack-ui/config": "^1.1.20",
"@gluestack-ui/themed": "^1.1.61",
"@notifee/react-native": "^7.8.2",
"@react-native-async-storage/async-storage": "^2.0.0",
"@react-native-community/slider": "^4.5.5",
"@react-native-firebase/analytics": "^21.6.1",
"@react-native-firebase/app": "^21.6.1",
"@react-native-firebase/crashlytics": "^21.6.1",
"@react-native-firebase/messaging": "^21.6.1",
"@react-native-masked-view/masked-view": "^0.3.0",
"@react-navigation/bottom-tabs": "^7.0.13",
"@react-navigation/drawer": "^7.0.13",
"@react-navigation/native": "^6.0.8",
"@react-navigation/native-stack": "^7.1.9",
"@react-navigation/stack": "^6.3.21",
"@reduxjs/toolkit": "^2.3.0",
"async-mutex": "^0.5.0",
"base-64": "^1.0.0",
"formik": "^2.4.6",
"i18next": "^24.0.2",
"lodash": "^4.17.21",
"lottie-react-native": "^7.1.0",
"lucide-react-native": "^0.462.0",
"mixpanel-react-native": "^3.0.8",
"moment": "^2.30.1",
"react": "18.3.1",
"react-dom": "^18.3.1",
"react-fast-compare": "^3.2.2",
"react-i18next": "^15.1.2",
"react-native": "^0.76.0",
"react-native-blob-util": "^0.19.11",
"react-native-calendars": "^1.1307.0",
"react-native-circular-progress": "^1.4.1",
"react-native-date-picker": "^5.0.7",
"react-native-device-info": "^14.0.1",
"react-native-document-picker": "^9.3.1",
"react-native-file-viewer": "^2.1.5",
"react-native-fs": "^2.18.0",
"react-native-gesture-handler": "^2.20.0",
"react-native-health": "^1.19.0",
"react-native-health-connect": "^3.3.1",
"react-native-htmlview": "^0.17.0",
"react-native-image-crop-picker": "^0.41.6",
"react-native-inappbrowser-reborn": "^3.7.0",
"react-native-keyboard-aware-scroll-view": "^0.9.5",
"react-native-mmkv": "^3.1.0",
"react-native-pdf": "^6.7.5",
"react-native-reanimated": "^3.16.1",
"react-native-render-html": "^6.1.0",
"react-native-safe-area-context": "^4.12.0",
"react-native-screens": "^3.34.0",
"react-native-svg": "^15.8.0",
"react-native-track-player": "^4.1.1",
"react-native-vector-icons": "^10.2.0",
"react-native-video": "^6.8.2",
"react-native-webview": "^13.12.2",
"react-native-youtube-iframe": "^2.3.0",
"react-redux": "^9.1.2",
"react-test-renderer": "18.3.1",
"redux-persist": "^6.0.0",
"rn-samsung-health": "github:wellness360inc/rn_shealth",
"url": "^0.11.4",
"victory-native": "^36.9.2-next.3",
"yup": "^1.4.0"
},
"resolutions": {
"react-dom": "18.3.1",
"react-native-reanimated": "3.14.0"
},
"devDependencies": {
"@babel/core": "^7.25.2",
"@babel/preset-env": "^7.25.3",
"@babel/runtime": "^7.25.0",
"@react-native-community/cli": "^15.0.0",
"@react-native-community/cli-platform-android": "^15.0.0",
"@react-native-community/cli-platform-ios": "^15.0.0",
"@react-native/babel-preset": "^0.76.0",
"@react-native/eslint-config": "0.76.3",
"@react-native/metro-config": "^0.76.0",
"@react-native/typescript-config": "0.76.3",
"@react-navigation/devtools": "^7.0.9",
"@rnx-kit/align-deps": "^3.0.2",
"@tsconfig/react-native": "^3.0.5",
"@types/base-64": "^1.0.2",
"@types/lodash": "^4.17.13",
"@types/react": "^18.2.6",
"@types/react-native-htmlview": "^0.16.5",
"@types/react-native-vector-icons": "^6.4.18",
"@types/react-native-video": "^5.0.20",
"@types/react-redux": "^7.1.34",
"@types/react-test-renderer": "^18.0.0",
"@typescript-eslint/eslint-plugin": "^8.16.0",
"@typescript-eslint/eslint-plugin-tslint": "^7.0.2",
"@typescript-eslint/parser": "^8.16.0",
"babel-jest": "^29.6.3",
"babel-plugin-inline-dotenv": "^1.7.0",
"babel-plugin-module-resolver": "^5.0.2",
"dotenv": "^16.4.5",
"eslint": "^9.15.0",
"eslint-config-prettier": "^9.1.0",
"eslint-config-react-app": "^7.0.1",
"eslint-config-react-native": "^4.1.0",
"eslint-define-config": "^2.1.0",
"eslint-plugin-ft-flow": "^3.0.11",
"eslint-plugin-import": "^2.31.0",
"eslint-plugin-jest": "^28.9.0",
"eslint-plugin-prettier": "^5.2.1",
"eslint-plugin-react": "^7.37.2",
"eslint-plugin-react-hooks": "^5.0.0",
"husky": "^9.1.7",
"jest": "^29.2.1",
"lint-staged": "^15.2.10",
"metro-runtime": "^0.81.0",
"prettier": "3.4.1",
"react-test-renderer": "18.3.1",
"typescript": "^5.7.2"
},
"engines": {
"node": ">=18"
},
"packageManager": "[email protected]",
"rnx-kit": {
"kitType": "app",
"alignDeps": {
"requirements": [
"[email protected]"
],
"capabilities": [
"animation",
"babel-preset-react-native",
"core",
"core-android",
"core-ios",
"core/metro-config",
"filesystem",
"gestures",
"html",
"jest",
"masked-view",
"navigation/native",
"navigation/stack",
"react",
"react-dom",
"react-test-renderer",
"safe-area",
"screens",
"storage",
"svg",
"webview"
]
}
}
}
### Steps to reproduce
Please go through the attached video.
### React Native Version
0.76.3
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6.1
CPU: (16) x64 Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Memory: 59.26 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.17.0
path: /usr/local/bin/node
Yarn:
version: 3.6.4
path: /usr/local/bin/yarn
npm:
version: 10.8.2
path: /usr/local/bin/npm
Watchman:
version: 2024.11.04.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /Users/kapil/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.19072.14.2412.12360217
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /Library/Java/JavaVirtualMachines/jdk-17.jdk/Contents/Home/bin/javac
Ruby:
version: 3.2.2
path: /Users/kapil/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.2
wanted: ^15.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: ^0.76.0
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
[Firebase/Crashlytics] Version 11.5.0
+[RNFBSharedUtils getConfigBooleanValue:key:defaultValue:] [Line 155] RNFBCrashlyticsInit crashlytics_debug_enabled via RNFBJSON: 1
+[RNFBSharedUtils getConfigBooleanValue:key:defaultValue:] [Line 165] RNFBCrashlyticsInit crashlytics_debug_enabled final value: 1
+[RNFBCrashlyticsInitProvider isCrashlyticsCollectionEnabled] [Line 79] RNFBCrashlyticsInit specific key crashlytics_auto_collection_enabled not set, falling back to general key app_data_collection_default_enabled with default 1 if it does not exist.
+[RNFBSharedUtils getConfigBooleanValue:key:defaultValue:] [Line 162] RNFBCrashlyticsInit app_data_collection_default_enabled via RNFBMeta: 1
+[RNFBSharedUtils getConfigBooleanValue:key:defaultValue:] [Line 165] RNFBCrashlyticsInit app_data_collection_default_enabled final value: 1
+[RNFBCrashlyticsInitProvider componentsToRegister]_block_invoke [Line 129] RNFBCrashlyticsInit initialization successful
11.5.0 - [FirebaseCrashlytics][I-CLS000000] Crashlytics skipped rotating the Install ID during urgent mode because it is run on the main thread, which can't succeed. This can happen if the app crashed the last run and Crashlytics is uploading urgently.
11.5.0 - [FirebaseMessaging][I-FCM001000] FIRMessaging Remote Notifications proxy enabled, will swizzle remote notification receiver handlers. If you'd prefer to manually integrate Firebase Messaging, add "FirebaseAppDelegateProxyEnabled" to your Info.plist, and set it to NO. Follow the instructions at:
https://firebase.google.com/docs/cloud-messaging/ios/client#method_swizzling_in_firebase_messaging
to ensure proper integration.
[HealthKit] Background observers added to the app
11.5.0 - [FirebaseAnalytics][I-ACS023007] Analytics v.11.5.0 started
11.5.0 - [FirebaseAnalytics][I-ACS023008] To enable debug logging set the following application argument: -FIRAnalyticsDebugEnabled (see http://goo.gl/RfcP7r)
[HealthKit] Background delivery enabled for ActiveEnergyBurned
[HealthKit] Background observer set up for ActiveEnergyBurned
Sending `healthKit:ActiveEnergyBurned:setup:success` with no listeners registered.
libc++abi: terminating due to uncaught exception of type std::runtime_error: Feature flags were accessed before being overridden: fuseboxEnabledDebug
```
### Reproducer
-
### Screenshots and Videos
https://github.com/user-attachments/assets/baab0314-b95e-42d4-8ffd-ead2d0a97024
| Needs: Repro,Needs: Attention | medium | Critical |
2,714,930,379 | rust | False positive dead code warning when using combination of array bound with constant function and trait implementation | ### Code
```Rust
const fn test() -> usize {
0
}
trait Test {}
impl Test for [u8; test()] {}
// NOTE: this constitutes a use
//trait _Test2 {
// type Ty;
//}
//
//impl _Test2 for String {
// type Ty = [u8; test()];
//}
fn test2<T: Test>() {}
fn main() {
test2::<[u8; 0]>();
// NOTE: this constitutes a use
//let _x: [u8; test()] = [];
//
}
```
### Current output
```Shell
Compiling playground v0.0.1 (/playground)
warning: function `test` is never used
--> src/main.rs:1:10
|
1 | const fn test() -> usize {
| ^^^^
|
= note: `#[warn(dead_code)]` on by default
warning: `playground` (bin "playground") generated 1 warning
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.89s
Running `target/debug/playground`
```
### Desired output
```Shell
None.
```
### Rationale and extra context
Seems related to #128617, but that problem has been resolved in the meantime.
### Other cases
The code is used, so no warning should be displayed. Note that the two commented-out blocks fix the warning and *are* considered uses, so there seems to be a specific interaction with trait implementations.
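For convenience, here is a standalone version of the reported snippet with one of the commented-out silencing uses enabled (this variant compiles without the dead-code warning):

```rust
const fn test() -> usize {
    0
}

trait Test {}
impl Test for [u8; test()] {}

fn test2<T: Test>() {}

fn main() {
    // The array-type annotation uses `test()` in a way the dead-code
    // pass recognizes, silencing the false-positive warning.
    let _x: [u8; test()] = [];
    test2::<[u8; 0]>();
    println!("{}", test());
}
```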
### Rust Version
The warning only starts showing up in Rust 1.78, and it appears in every build since. Note the difference between
https://rust.godbolt.org/z/43vh6vv7s
vs.
https://rust.godbolt.org/z/djP9qM7xa
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,714,946,606 | vscode | Chinese characters / Emojis input in the VS code terminal panel repeat twice | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version:
- version: 1.95.2
- commit: `e8653663e8840adaf45af01eab5c627a5af81807`
- date: 2024-11-07T11:07:22.054Z
- Electron: 32.2.1
- ElectronBuildId: 10427718
- Chromium: 128.0.6613.186
- Node.js: 20.18.0
- V8: 12.8.374.38-electron.
- OS Version:
- kernel: Linux x64 6.12.1-zen1-1-zen
- os: Arch Linux x86_64
- Maybe related software Version
- fcitx5: 5.1.11
- hyprland:
- Hyprland 0.45.2 built from branch at commit 12f9a0d0b93f691d4d9923716557154d74777b0a ([gha] Nix: update inputs).
- Date: Tue Nov 19 21:47:18 2024
- Tag: v0.45.2, commits: 5451
- built against aquamarine 0.5.0
- kitty: kitty 0.37.0 created by Kovid Goyal
Steps to Reproduce
1. Open the terminal in VS Code. I tested both bash and fish: they have the same problem when typing Chinese characters and emojis in the VS Code terminal panel, but there is no problem when I open the terminal from the desktop.
2. Type some Chinese characters or emojis using fcitx5 (I haven't tested other IMEs), for example:
``` bash
git commit -m "Update : 这是用于测试的一条提交😀"
```
If I type `这是`, `用于`, `测试`, `的`, `一条提交` and `😀` separately, the terminal will behave like:
``` bash
git commit -m "Update : 这是这是用于用于测试测试的的一条提交一条提交😀😀"
```
The video:
https://github.com/user-attachments/assets/5c9f53af-ab49-4c3f-8715-e77dd9362941 | bug,terminal-input | low | Critical |
2,715,045,763 | vscode | Task command hangs and doesn't recognize completion of detached processes |
Type: <b>Bug</b>
## Issue
While trying to setup a debugging launch configuration in VS Code, I noticed an issue where the task system hangs indefinitely when using a shell task command that starts a detached process. My setup involved:
- A Docker Compose service with a custom entrypoint script that performs setup tasks (e.g., uv sync) before starting the main process.
- The task shell command I was using was `docker compose up -d`. This task was used as a `preLaunchTask` in a launch configuration meant to attach a Python debugger to the running service started by the task, but since the task never completes, the launch configuration never runs.
The issue seems to appear when there's a delay between when the process starts and when it fully detaches. The task system seems unable to recognize that the command has completed, even though the process has been successfully detached (as evidenced by the container running correctly). I'm not sure if the issue is related to vscode getting confused by the process hierarchy or not properly handling compose's detach signal, but either way it seems like Code's task system might be holding onto the process handle or waiting for some specific signal that never comes.
As a workaround, adding a longer delay after docker compose up -d in the task’s command seems to mitigate the issue:
`{ "command": "docker compose up -d && sleep 3" }`
_Note_: the delay in the task command has to be longer than any delay in the entrypoint script.
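As a possibly less brittle alternative to a fixed sleep (assuming Docker Compose v2.1+, where `up` supports a `--wait` flag that blocks until containers are running/healthy), the task could let Compose itself signal completion:

```json
{
  "label": "Docker Compose Up",
  "type": "shell",
  "command": "docker compose up -d --wait",
  "problemMatcher": []
}
```

Note that `--wait` waits for containers to reach the running (or healthy, if a healthcheck is defined) state, which a `--wait-for-client` debug server does reach, so this should not block indefinitely.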
## Expected
The task system should handle the container’s initialization period gracefully, even with an entrypoint delay, without requiring explicit workarounds.
## Minimal Repro Steps:
Build the following image, then start a Python Debugger using launch.json. Ensure the service is not up and running before trying to reproduce the hang using the "Debug using launch.json" option and selecting "Python Debugger: Attach".
<details>
<summary>Code</summary>
`requirements.txt`
```
debugpy
pytest
```
`text_example.py`
```
def test_example():
assert True
```
`entrypoint.sh`
```
#!/bin/bash
sleep 2 # simulate init delay
exec "$@"
```
`docker-compose.yml`
```
services:
test_runner:
build: .
command: "python -m debugpy --listen 0.0.0.0:5679 --wait-for-client -m pytest"
ports:
- "5679:5679"
```
`Dockerfile`
```
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY test_example.py .
COPY entrypoint.sh /usr/bin/entrypoint.sh
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
```
`launch.json`
```
{
"version": "0.2.0",
"configurations": [
{
"name": "Python Debugger: Attach",
"type": "debugpy",
"request": "attach",
"preLaunchTask": "Docker Compose Up",
"connect": {
"host": "0.0.0.0",
"port": 5679
},
"pathMappings": [
{
"localRoot": "${workspaceFolder}",
"remoteRoot": "/app"
}
]
}
]
}
```
`tasks.json`
```
{
"version": "2.0.0",
"tasks": [
{
"label": "Docker Compose Up",
"type": "shell",
"command": "docker compose up -d",
"problemMatcher": []
}
]
}
```
Version:
```
Version: 1.95.3
Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813
Date: 2024-11-13T14:50:04.152Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0
```
I also verified the issue happens in Visual Studio Code - Insider's Edition:
```
Version: 1.96.0-insider
Commit: a40fbb1ac10ad5c797d331eb75d1d99332ef9cbe
Date: 2024-12-03T10:12:24.933Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0
```
</details>
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Darwin arm64 24.1.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Max (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|16, 19, 27|
|Memory (System)|32.00GB (0.40GB free)|
|Process Argv|--crash-reporter-id 63f58570-0bcd-43a0-b6c7-3b3c84beb9ea|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (17)</summary>
Extension|Author (truncated)|Version
---|---|---
ruff|cha|2024.56.0
gitlens|eam|16.0.4
copilot|Git|1.246.0
copilot-chat|Git|0.22.4
vscode-docker|ms-|1.29.3
debugpy|ms-|2024.12.0
python|ms-|2024.20.0
vscode-pylance|ms-|2024.11.3
jupyter|ms-|2024.10.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.388.0
vscode-typescript-next|ms-|5.8.20241202
one-dark-theme|msk|1.14.2
vim|vsc|1.28.1
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | bug,tasks | low | Critical |
2,715,065,678 | next.js | Loading.tsx and Suspense do not work on second page load and on | ### Link to the code that reproduces this issue
https://github.com/Vovch/test-next-2
### To Reproduce
1. pnpm i
2. add .env file for DB connection (Postgres)
3. pnpm dev
4. navigate to localhost:3000/dashboard
5. reload the page
6. reload the page again
7. comment out or delete the Suspense wrapper in app\dashboard\page.tsx
8. reload the page
9. reload the page again
### Current vs. Expected behavior
Expected:
On steps 4 and 5 I expect the Suspense component to be rendered correctly without delaying the page load.
Steps 8 and 9 - Expected component from loading.tsx to be shown on the screen
Current:
The page is not loaded into the browser at all before all the components are rendered on the server (3+ seconds). Suspense or loading.tsx components are not shown.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Enterprise
Available memory (MB): 32257
Available CPU cores: 8
Binaries:
Node: 18.20.3
npm: 10.8.1
Yarn: N/A
pnpm: 9.11.0
Relevant Packages:
next: 15.0.4-canary.36 // Latest available version is detected (15.0.4-canary.36).
eslint-config-next: N/A
react: 18.2.0
react-dom: 18.2.0
typescript: 5.5.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Lazy Loading
### Which stage(s) are affected? (Select all that apply)
next dev (local)
next start (local)
### Additional context
I was also able to reproduce this issue with next 15.0.3 and 14.2.18.
The code was created by following the tutorial up to and including chapter 9 https://nextjs.org/learn/dashboard-app/streaming and then trying to find the issue (since I couldn't make it work while following the tutorial).
in app\dashboard\page.tsx there is a commented out string with `await new Promise(...)`. If enabled it causes the app to work correctly. The effect depends on milliseconds value passed to the timeout, higher values (1000 and on) make the best result, lower values like `500` makes the defect reproducible again.
Intermittently I'm unable to get the loaders even when navigating between the pages by links in the app even if the first loaded page is not the dashboard itself. | bug,Lazy Loading | low | Major |
2,715,069,918 | pytorch | Do an audit on skipfiles and mark more files as inline | One of the largest pain points for developers trying to use torch.compile seems to be files in torch.* being skipped. For example in
https://github.com/pytorch/pytorch/pull/138555.
Most of these files should not be skipped; a lot of them should just be inlined. New modules should be inlined by default (they shouldn't be skipped unless there's a good reason to), and we should audit any existing modules in skiplists to see if they should actually be skipped.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @ZainRizvi | module: tests,triaged,better-engineering | low | Major |
2,715,071,846 | pytorch | Handling custom context managers | ### 🐛 Describe the bug
The coverage of custom context managers doesn't seem to be very extensive at the moment. With tensordict, we often run into the limitations below.
## Context
1. **TD as a context manager**
TensorDict relies on context managers for many "temporary" operations. If, for example, it is convenient to represent a tensordict as a flat structure for a period of time, you could do
```python
td = TensorDict({"a": 0, "b": {"c": 1}})
with td.flatten_keys() as td_flat:
c = td_flat["b.c"]
assert c == td["b", "c"]
```
The ops that work in this way are `permute`, `flatten_keys`, `transpose` and a bunch of others. The most used is `to_module` which allows us to make functional calls:
```python
td = TensorDict.from_module(module)
# eg, detach the parameters and cast them in the module
with td.detach().to_module(module):
output = module(input)
assert not output.requires_grad
```
2. **TensorDictParams**
Similar to [`ParameterList`](https://pytorch.org/docs/stable/generated/torch.nn.ParameterList.html), we use `TensorDictParams` to group tensors that are parameters into a single data structure. The main purpose is that setting such an object as an attribute of an `nn.Module` exposes the content of the tensordict to the `nn.Module` methods, such as `buffers()`, `parameters()`, or anything accessed through `_apply`.
3. **Non-tensor data**
It is useful for tensordict to carry non-tensor data. For one, it is a requirement whenever you want to use tensordict instead of a plain state dict (as the `state_dict()` signature is `Dict[str, Any]`).
Secondly, using tensordict for data preprocessing requires you to carry non-tensor data along the way (e.g., you could store a prompt or a list of prompts along with the tokenized version).
4. **Locking tensordicts**
There is an option to lock/unlock a tensordict. A "locked" tensordict is read-only: you can modify tensors in-place, but you will not be able to add or remove an entry. This helps us cache the results of some operations in eager mode if they're cheap to store, and it's also a safety measure when content is shared between nodes or workers (so that users don't expect writing a new entry to automatically synchronize the tensordict across workers, for example).
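A rough pure-Python sketch of the lock semantics described above (a toy stand-in for illustration, not the real tensordict implementation):

```python
class LockableDict(dict):
    """Toy model of TensorDict's lock_/unlock_ behavior: a locked
    container allows in-place updates of existing entries but rejects
    adding new entries."""

    def __init__(self, *args, **kwargs):
        self._locked = False
        super().__init__(*args, **kwargs)

    def lock_(self):
        self._locked = True
        return self

    def unlock_(self):
        self._locked = False
        return self

    def __setitem__(self, key, value):
        if self._locked and key not in self:
            raise RuntimeError("Cannot add entries to a locked tensordict")
        super().__setitem__(key, value)


td = LockableDict(a=0.0).lock_()
td["a"] = 1.0  # modifying an existing entry: allowed
try:
    td["b"] = 2.0  # adding a new entry while locked: rejected
except RuntimeError:
    pass
td.unlock_()
td["b"] = 2.0  # allowed again after unlocking
```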
## Where it breaks
Combining 1, 2, 3 and 4 can lead to graph breaks with lengthy error stacks. The best way to reproduce these is to install the tensordict nightly build. I'm reporting a bunch of these in this issue, but I'm also happy to split it into several issues if that's easier to track.
Let's start with what works
```python
from tensordict import from_module, TensorDictParams, TensorDict
import torch.nn
module = torch.nn.Module()
module.params = torch.nn.Parameter(torch.randn(3))
params2 = from_module(module, as_module=True) # as_module produces a TensorDictParams
@torch.compile(fullgraph=True)
def func(z, params2):
with params2.to_module(module):
out = z + module.params
return out
print(func(torch.zeros(()), params2))
```
We can also work with a plain tensordict under #141376:
```python
# Isolate the inner tensordict
params2 = params2._param_td
print(func(torch.zeros(()), params2))
```
(Hereafter I assume that the tests are executed on that branch.)
Now for where it breaks: if a module contains a `TensorDictParams` instance, things start to fall apart
```python
from tensordict import from_module, TensorDictParams, TensorDict
import torch.nn
module = torch.nn.Module()
module.params = TensorDictParams(
TensorDict(a=0.0)
)
params2 = from_module(module, as_module=True)
@torch.compile(fullgraph=True)
def func(z, params2):
with params2.to_module(module):
out = z + module.params["a"]
return out
print(func(torch.zeros(()), params2))
```
<details>
<summary>To reproduce</summary>
```
$ pip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu -U
$ pip install tensordict-nightly
$ python -c """<code above>"""
```
</details>
Error: see below
Working with a plain tensordict works fine
<details>
<summary>This works ok</summary>
```python
from tensordict import from_module, TensorDictParams, TensorDict
import torch.nn
module = torch.nn.Module()
module.params = TensorDict(a=0.0)
params2 = from_module(module, as_module=True)
@torch.compile(fullgraph=True)
def func(z, params2):
with params2.to_module(module):
out = z + module.params["a"]
return out
print(func(torch.zeros(()), params2))
```
</details>
In TorchRL, we want to use TensorDictParams to register distribution parameters as buffers such that their device is updated when the policy is sent to a given device (the distribution is built on-the-fly during the forward call of the policy).
Here is how it's done:
https://github.com/pytorch/rl/blob/b7840d7dcf8f67179be2641fb67ee6d2a213418b/sota-implementations/cql/utils.py#L223-L232
in [this PR](https://github.com/pytorch/rl/pull/2553)
Here, for some reason, things work fine until a `NonTensorData` is registered in the distribution kwargs, at which point the following graph break occurs:
<details>
<summary>To reproduce</summary>
```
$ git clone https://github.com/pytorch/rl
$ cd rl
$ pip3 install gymnasium wandb tqdm
$ pip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu -U
$ pip install tensordict-nightly
$ ghstack checkout https://github.com/pytorch/rl/pull/2553
$ python setup.py develop
$ TORCH_LOGS="+graph_breaks" TORCHDYNAMO_VERBOSE=1 python sota-implementations/cql/cql_online.py compile.compile=1
```
</details>
The funny thing is that this example is actually a more convoluted version of the previous one, yet things work fine until non-tensor data comes into play.
### Error logs
<details>
<summary>Error 1</summary>
```
Traceback (most recent call last):
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3070, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3196, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 969, in step
self.exception_handler(e)
File "torch/_dynamo/symbolic_convert.py", line 1554, in exception_handler
raise raised_exception
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3070, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3196, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 969, in step
self.exception_handler(e)
File "torch/_dynamo/symbolic_convert.py", line 1554, in exception_handler
raise raised_exception
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1750, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3070, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3196, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 969, in step
self.exception_handler(e)
File "torch/_dynamo/symbolic_convert.py", line 1554, in exception_handler
raise raised_exception
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3070, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3196, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 969, in step
self.exception_handler(e)
File "torch/_dynamo/symbolic_convert.py", line 1554, in exception_handler
raise raised_exception
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1750, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3070, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3196, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 969, in step
self.exception_handler(e)
File "torch/_dynamo/symbolic_convert.py", line 1554, in exception_handler
raise raised_exception
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1446, in RAISE_VARARGS
self._raise_exception_variable(inst)
File "torch/_dynamo/symbolic_convert.py", line 1439, in _raise_exception_variable
raise exc.ObservedException(f"raised exception {val}")
torch._dynamo.exc.ObservedException: raised exception ExceptionVariable()
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "torch/_dynamo/variables/ctx_manager.py", line 174, in exit
).call_function(
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3070, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3196, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3070, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3196, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 969, in step
self.exception_handler(e)
File "torch/_dynamo/symbolic_convert.py", line 1514, in exception_handler
unimplemented(
File "torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: exception is raised when top of the block stack is not exception handler (e.g. try .. with .. except). Current TOS is Instruction(opcode=143, opname='SETUP_WITH', arg=21, argval=66, offset=22, starts_line=333, is_jump_target=True, positions=None, target=Instruction(opcode=49, opname='WITH_EXCEPT_START', arg=None, argval=None, offset=66, starts_line=333, is_jump_target=True, positions=None, target=None, exn_tab_entry=None), exn_tab_entry=None)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/ctx_manager.py", line 1222, in call_function
return self.ctx.exit(tx, *args)
File "torch/_dynamo/variables/ctx_manager.py", line 184, in exit
unimplemented(
File "torch/_dynamo/exc.py", line 316, in unimplemented
raise Unsupported(msg, case_name=case_name) from from_exc
torch._dynamo.exc.Unsupported: Unsupported context manager TensorDict(
...)'s __exit__ function
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/vmoens/Library/Application Support/JetBrains/PyCharm2023.3/scratches/scratch_11.py", line 43, in <module>
print(func(torch.zeros(()), params2))
File "torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1379, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1349, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2866, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 667, in wrapper
unimplemented("Graph break under GenericContextWrappingVariable")
File "torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Graph break under GenericContextWrappingVariable
from user code:
File "/Users/vmoens/Library/Application Support/JetBrains/PyCharm2023.3/scratches/scratch_11.py", line 36, in func
with params2.to_module(module):
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
</details>
TorchRL example:
<details>
<summary>Error 2</summary>
```
File "torch/_dynamo/variables/ctx_manager.py", line 174, in exit
).call_function(
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1750, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1750, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1750, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1750, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1750, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1750, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1750, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/user_defined.py", line 600, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/base.py", line 405, in call_function
unimplemented(f"call_function {self} {args} {kwargs}")
File "torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: call_function UserDefinedClassVariable(<class 'tensordict.tensorclass.NonTensorData'>) [] {'data': ConstantVariable(bool: False), '_metadata': ConstantVariable(NoneType: None), '_is_non_tensor': ConstantVariable(bool: True), 'batch_size': SizeVariable(length=0), 'device': ConstantVariable(NoneType: None), 'names': ConstantVariable(NoneType: None)}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/ctx_manager.py", line 1222, in call_function
return self.ctx.exit(tx, *args)
File "torch/_dynamo/variables/ctx_manager.py", line 184, in exit
unimplemented(
File "torch/_dynamo/exc.py", line 316, in unimplemented
raise Unsupported(msg, case_name=case_name) from from_exc
torch._dynamo.exc.Unsupported: Unsupported context manager TensorDict(
...)'s __exit__ function
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/vmoens/Repos/rl/rl/sota-implementations/cql/cql_online.py", line 185, in main
loss_td = update(sampled_tensordict)
File "torch/_dynamo/eval_frame.py", line 573, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1379, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 547, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 986, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 715, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 750, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1349, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 231, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2865, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/lazy.py", line 170, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "torch/_dynamo/variables/nn_module.py", line 914, in call_function
return variables.UserFunctionVariable(fn, source=source).call_function(
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 661, in wrapper
return inner_fn(self, inst)
File "torch/_dynamo/symbolic_convert.py", line 1660, in CALL_FUNCTION
self.call_function(fn, args, {})
File "torch/_dynamo/symbolic_convert.py", line 899, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "torch/_dynamo/variables/functions.py", line 389, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 328, in call_function
return super().call_function(tx, args, kwargs)
File "torch/_dynamo/variables/functions.py", line 129, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "torch/_dynamo/symbolic_convert.py", line 905, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3069, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "torch/_dynamo/symbolic_convert.py", line 3195, in inline_call_
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 1054, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 964, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 667, in wrapper
unimplemented("Graph break under GenericContextWrappingVariable")
File "torch/_dynamo/exc.py", line 317, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Graph break under GenericContextWrappingVariable
from user code:
File "/Users/vmoens/Repos/rl/rl/sota-implementations/cql/cql_online.py", line 121, in update
loss_td = loss_module(sampled_tensordict)
File "torch/nn/modules/module.py", line 1844, in _call_impl
return inner()
File "torch/nn/modules/module.py", line 1793, in inner
result = forward_call(*args, **kwargs)
File "torchrl/objectives/common.py", line 55, in new_forward
return func(self, *args, **kwargs)
File "/Users/vmoens/Repos/rl/tensordict/tensordict/nn/common.py", line 316, in wrapper
return func(_self, tensordict, *args, **kwargs)
File "torchrl/objectives/cql.py", line 518, in forward
q_loss, metadata = self.q_loss(tensordict)
File "torchrl/objectives/common.py", line 55, in new_forward
return func(self, *args, **kwargs)
File "torchrl/objectives/cql.py", line 686, in q_loss
target_value = self._get_value_v(
File "torchrl/objectives/cql.py", line 631, in _get_value_v
with set_exploration_type(ExplorationType.RANDOM), actor_params.data.to_module(
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
</details>
### Versions
nightlies
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames

Labels: triaged, oncall: pt2, module: dynamo
## PowerToys v0.86.0: replacing keyboard keys not working in new release
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce

Remapping the "\" key to "Ctrl" fails, even though it worked in previous releases.
### ✔️ Expected Behavior
When "\" is pressed on the wireless keyboard, the key should be remapped to Right Control (Ctrl).
### ❌ Actual Behavior
PowerToys keeps typing "\"; the remapping is not applied.
### Other Software
_No response_

Labels: Issue-Bug, Product-Keyboard Shortcut Manager
## [video_player_android] Flutter player widget does not scale correctly if video uses rotation correction
video_player (2.9.2)
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Steps to reproduce
1. Run the example app with updated remote video link on Android device (see Code Sample).
2. Wait for the video to load (about 7 MB; I could not reduce it, because every compression tool I tried stripped the rotation correction).
3. Play the video.
4. Verify the actual video size and player widget size.
### Expected results
Player widget size matches video size.
### Actual results
The player widget is smaller and does not match the actual video size. The rotation correction of this video is 270 degrees.
The issue was not observed on iOS or macOS.
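For reference, when a stream carries a rotation correction of 90 or 270 degrees, the displayed width and height must be swapped before the widget's aspect ratio is computed; the log below shows the decoder surface set up as `1280x720` with `rotation 270`, which should be presented as 720x1280 portrait. A minimal sketch of that sizing rule (illustrative only, not the plugin's actual implementation; the function name is invented):

```python
def display_size(coded_width: int, coded_height: int, rotation_degrees: int) -> tuple[int, int]:
    """Return the (width, height) a player widget should use for a video
    whose container stores a rotation correction in degrees."""
    if rotation_degrees % 180 == 90:
        # 90/270: the frame is rotated onto its side, so the axes swap.
        return (coded_height, coded_width)
    return (coded_width, coded_height)

# Matches the log's 1280x720 surface with rotation 270:
print(display_size(1280, 720, 270))
```

If the Android implementation reports the unrotated coded size to Dart, the widget ends up laid out with the wrong aspect ratio, which matches the mismatch seen in the video above.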
### Code sample
Use the example project code (`video_player/example/lib/main.dart`) with a different link for the remote video. Instead of `https://flutter.github.io/assets-for-api-docs/assets/videos/bee.mp4` use `https://drive.google.com/uc?export=download&id=1j8xYdGEYZ_mnfYTnDc4ULxMkc7EBw3AD`
### Screenshots or Videos
<details open>
<summary>Video demonstration</summary>
https://github.com/user-attachments/assets/e337123e-d0ca-4f80-86e5-b41a96962fb3
</details>
### Logs
<details><summary>Output</summary>
```console
Restarted application in 870ms.
I/ViewRootImpl( 7263): updatePointerIcon pointerType = 1000, calling pid = 7263
D/InputManager( 7263): setPointerIconType iconId = 1000, callingPid = 7263
I/ViewRootImpl( 7263): updatePointerIcon pointerType = 1000, calling pid = 7263
D/InputManager( 7263): setPointerIconType iconId = 1000, callingPid = 7263
I/ExoPlayerImpl( 7263): Init 75844ee [AndroidXMedia3/1.4.1] [beyond0, SM-G970F, samsung, 31]
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 0
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 1
D/BufferPoolAccessor2.0( 7263): bufferpool2 0x70b01bbc78 : 0(0 size) total buffers - 0(0 size) used buffers - 361/376 (recycle/alloc) - 14/366 (fetch/transfer)
D/BufferPoolAccessor2.0( 7263): evictor expired: 1, evicted: 1
I/ViewRootImpl( 7263): updatePointerIcon pointerType = 1000, calling pid = 7263
D/InputManager( 7263): setPointerIconType iconId = 1000, callingPid = 7263
I/AudioManager( 7263): getParameters keys = offloadVariableRateSupported
I/DMCodecAdapterFactory( 7263): Creating an asynchronous MediaCodec adapter for track type video
I/ACodec ( 7263): [] Now uninitialized
I/ACodec ( 7263): [] onAllocateComponent
I/OMXClient( 7263): IOmx service obtained
I/ACodec ( 7263): [OMX.Exynos.avc.dec] Now Loaded
I/MediaCodec( 7263): MediaCodec will operate in async mode
D/MediaCodec( 7263): flushMediametrics
D/SurfaceUtils( 7263): connecting to surface 0x715028d510, reason connectToSurface
I/MediaCodec( 7263): [OMX.Exynos.avc.dec] setting surface generation to 7437325
D/SurfaceUtils( 7263): disconnecting from surface 0x715028d510, reason connectToSurface(reconnect)
D/SurfaceUtils( 7263): connecting to surface 0x715028d510, reason connectToSurface(reconnect)
I/ACodec ( 7263): app-pid(7263)
W/ACodec ( 7263): [OMX.Exynos.avc.dec] setting HDRStaticInfo failed even though codec advertises support
W/ACodec ( 7263): [OMX.Exynos.avc.dec] getting HDRStaticInfo failed even though codec advertises support
D/MediaCodec( 7263): keep callback message for reclaim
I/ACodec ( 7263): [OMX.Exynos.avc.dec] Now Loaded->Idle
D/SurfaceUtils( 7263): set up nativeWindow 0x715028d510 for 1280x720, color 0x105, rotation 270, usage 0x402900
I/ACodec ( 7263): [OMX.Exynos.avc.dec] configureOutputBuffersFromNativeWindow setBufferCount : 13, minUndequeuedBuffers : 9
I/MediaCodec( 7263): setCodecState state(0), called in 6
I/ACodec ( 7263): [OMX.Exynos.avc.dec] Now Idle->Executing
I/ACodec ( 7263): [OMX.Exynos.avc.dec] Now Executing
I/ACodec ( 7263): [OMX.Exynos.avc.dec] calling emptyBuffer 1 w/ codec specific data, size : 21
I/DMCodecAdapterFactory( 7263): Creating an asynchronous MediaCodec adapter for track type audio
I/CCodec ( 7263): state->set(ALLOCATING)
I/CCodec ( 7263): allocate(c2.android.aac.decoder)
I/ACodec ( 7263): [OMX.Exynos.avc.dec] calling emptyBuffer 2 w/ codec specific data, size : 8
I/CCodec ( 7263): setting up 'default' as default (vendor) store
I/CCodec ( 7263): Created component [c2.android.aac.decoder]
I/CCodec ( 7263): state->set(ALLOCATED)
D/CCodecConfig( 7263): read media type: audio/mp4a-latm
D/ReflectedParamUpdater( 7263): extent() != 1 for single value type: algo.buffers.max-count.values
D/ReflectedParamUpdater( 7263): extent() != 1 for single value type: output.subscribed-indices.values
D/ReflectedParamUpdater( 7263): extent() != 1 for single value type: input.buffers.allocator-ids.values
D/ReflectedParamUpdater( 7263): extent() != 1 for single value type: output.buffers.allocator-ids.values
D/ReflectedParamUpdater( 7263): extent() != 1 for single value type: algo.buffers.allocator-ids.values
D/ReflectedParamUpdater( 7263): extent() != 1 for single value type: output.buffers.pool-ids.values
D/ReflectedParamUpdater( 7263): extent() != 1 for single value type: algo.buffers.pool-ids.values
W/ACodec ( 7263): [OMX.Exynos.avc.dec] getting HDRStaticInfo failed even though codec advertises support
I/CCodecConfig( 7263): query failed after returning 19 values (BAD_INDEX)
D/CCodecConfig( 7263): c2 config diff is Dict {
D/CCodecConfig( 7263): c2::u32 coded.aac-packaging.value = 0
D/CCodecConfig( 7263): c2::u32 coded.bitrate.value = 64000
D/CCodecConfig( 7263): c2::u32 coded.pl.level = 0
D/CCodecConfig( 7263): c2::u32 coded.pl.profile = 8192
D/CCodecConfig( 7263): c2::i32 coding.drc.album-mode.value = 0
D/CCodecConfig( 7263): c2::float coding.drc.attenuation-factor.value = 1
D/CCodecConfig( 7263): c2::float coding.drc.boost-factor.value = 1
D/CCodecConfig( 7263): c2::i32 coding.drc.compression-mode.value = 3
D/CCodecConfig( 7263): c2::i32 coding.drc.effect-type.value = 3
D/CCodecConfig( 7263): c2::float coding.drc.encoded-level.value = 0.25
D/CCodecConfig( 7263): c2::float coding.drc.reference-level.value = -16
D/CCodecConfig( 7263): c2::u32 input.buffers.max-size.value = 8192
D/CCodecConfig( 7263): c2::u32 input.delay.value = 0
D/CCodecConfig( 7263): string input.media-type.value = "audio/mp4a-latm"
D/CCodecConfig( 7263): c2::u32 output.delay.value = 2
D/CCodecConfig( 7263): c2::float output.drc.output-loudness.value = 0.25
D/CCodecConfig( 7263): string output.media-type.value = "audio/raw"
D/CCodecConfig( 7263): c2::u32 raw.channel-count.value = 1
D/CCodecConfig( 7263): c2::u32 raw.max-channel-count.value = 8
D/CCodecConfig( 7263): c2::u32 raw.sample-rate.value = 44100
D/CCodecConfig( 7263): }
I/MediaCodec( 7263): MediaCodec will operate in async mode
D/MediaCodec( 7263): flushMediametrics
D/CCodec ( 7263): [c2.android.aac.decoder] buffers are bound to CCodec for this session
I/CCodec ( 7263): appPid(7263) width(0) height(0)
D/CCodecConfig( 7263): no c2 equivalents for log-session-id
D/CCodecConfig( 7263): no c2 equivalents for flags
D/CCodecConfig( 7263): config failed => CORRUPTED
D/CCodecConfig( 7263): c2 config diff is c2::u32 raw.channel-count.value = 2
D/CCodecConfig( 7263): c2::u32 raw.sample-rate.value = 48000
W/Codec2Client( 7263): query -- param skipped: index = 1107298332.
D/CCodec ( 7263): client requested max input size 885, which is smaller than what component recommended (8192); overriding with component recommendation.
W/CCodec ( 7263): This behavior is subject to change. It is recommended that app developers double check whether the requested max input size is in reasonable range.
D/CCodec ( 7263): setup formats input: AMessage(what = 0x00000000) = {
D/CCodec ( 7263): int32_t aac-drc-album-mode = 0
D/CCodec ( 7263): int32_t aac-drc-boost-level = 127
D/CCodec ( 7263): int32_t aac-drc-cut-level = 127
D/CCodec ( 7263): int32_t aac-drc-effect-type = 3
D/CCodec ( 7263): int32_t aac-encoded-target-level = -1
D/CCodec ( 7263): int32_t aac-max-output-channel_count = 8
D/CCodec ( 7263): int32_t aac-target-ref-level = 64
D/CCodec ( 7263): int32_t bitrate = 64000
D/CCodec ( 7263): int32_t channel-count = 2
D/CCodec ( 7263): int32_t level = 0
D/CCodec ( 7263): int32_t max-input-size = 8192
D/CCodec ( 7263): string mime = "audio/mp4a-latm"
D/CCodec ( 7263): int32_t profile = 2
D/CCodec ( 7263): int32_t sample-rate = 48000
D/CCodec ( 7263): int64_t durationUs = 0
D/CCodec ( 7263): }
D/CCodec ( 7263): setup formats output: AMessage(what = 0x00000000) = {
D/CCodec ( 7263): int32_t aac-drc-album-mode = 0
D/CCodec ( 7263): int32_t aac-drc-boost-level = 127
D/CCodec ( 7263): int32_t aac-drc-cut-level = 127
D/CCodec ( 7263): int32_t aac-drc-effect-type = 3
D/CCodec ( 7263): int32_t aac-drc-output-loudness = -1
D/CCodec ( 7263): int32_t aac-encoded-target-level = -1
D/CCodec ( 7263): int32_t aac-max-output-channel_count = 8
D/CCodec ( 7263): int32_t aac-target-ref-level = 64
D/CCodec ( 7263): int32_t channel-count = 2
D/CCodec ( 7263): string mime = "audio/raw"
D/CCodec ( 7263): int32_t sample-rate = 48000
D/CCodec ( 7263): }
I/CCodecConfig( 7263): query failed after returning 19 values (BAD_INDEX)
I/ACodec ( 7263): [OMX.Exynos.avc.dec] Now handling output port settings change
D/MediaCodec( 7263): keep callback message for reclaim
I/CCodec ( 7263): state->set(STARTING)
W/Codec2Client( 7263): query -- param skipped: index = 1342179345.
W/Codec2Client( 7263): query -- param skipped: index = 2415921170.
W/Codec2Client( 7263): query -- param skipped: index = 1610614798.
D/CCodecBufferChannel( 7263): [c2.android.aac.decoder#599] Created input block pool with allocatorID 16 => poolID 29 - OK (0)
D/BufferPoolAccessor2.0( 7263): Destruction - bufferpool2 0x70b01bbc78 cached: 0/0M, 0/0% in use; allocs: 376, 96% recycled; transfers: 366, 96% unfetched
D/SurfaceUtils( 7263): set up nativeWindow 0x715028d510 for 1280x720, color 0x105, rotation 270, usage 0x402900
I/ACodec ( 7263): [OMX.Exynos.avc.dec] configureOutputBuffersFromNativeWindow setBufferCount : 17, minUndequeuedBuffers : 9
I/CCodecBufferChannel( 7263): [c2.android.aac.decoder#599] Created output block pool with allocatorID 16 => poolID 523 - OK
D/CCodecBufferChannel( 7263): [c2.android.aac.decoder#599] Configured output block pool ids 523 => OK
I/CCodec ( 7263): state->set(RUNNING)
I/CCodecBufferChannel( 7263): [c2.android.aac.decoder#599] 4 initial input buffers available
I/ACodec ( 7263): [OMX.Exynos.avc.dec] Now Executing
I/MediaCodec( 7263): setCodecState state(0), called in 6
D/AudioTrack( 7263): setVolume(1.000000, 1.000000) pid : 7263
W/ACodec ( 7263): [OMX.Exynos.avc.dec] getting HDRStaticInfo failed even though codec advertises support
I/ACodec ( 7263): [OMX.Exynos.avc.dec] OMX_EventPortSettingsChanged 0x7f030010
W/MediaCodec( 7263): mapFormat: no mediaType information
I/MediaCodec( 7263): setCodecState state(1), called in 6
I/MediaCodec( 7263): setCodecState state(0), called in 6
D/BufferPoolAccessor2.0( 7263): evictor expired: 1, evicted: 0
D/BufferPoolAccessor2.0( 7263): bufferpool2 0x70b025a548 : 5(40960 size) total buffers - 0(0 size) used buffers - 24/29 (recycle/alloc) - 5/29 (fetch/transfer)
D/BufferPoolAccessor2.0( 7263): evictor expired: 1, evicted: 1
D/ImageReaderSurfaceProducer( 7263): ImageTextureEntry can't wait on the fence on Android < 33
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 0
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 1
D/AudioTrack( 7263): getTimestamp_l(1575): device stall time corrected using current time 411490888101286
I/MediaCodec( 7263): setCodecState state(1), called in 6
I/ViewRootImpl( 7263): updatePointerIcon pointerType = 1000, calling pid = 7263
D/InputManager( 7263): setPointerIconType iconId = 1000, callingPid = 7263
D/BufferPoolAccessor2.0( 7263): bufferpool2 0x70b025a548 : 5(40960 size) total buffers - 4(32768 size) used buffers - 95/105 (recycle/alloc) - 10/102 (fetch/transfer)
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 0
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 1
I/ViewRootImpl( 7263): updatePointerIcon pointerType = 1000, calling pid = 7263
D/InputManager( 7263): setPointerIconType iconId = 1000, callingPid = 7263
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 0
I/MediaCodec( 7263): setCodecState state(0), called in 6
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 1
D/AudioTrack( 7263): getTimestamp_l(1575): device stall time corrected using current time 411494977105206
I/MediaCodec( 7263): setCodecState state(1), called in 6
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 0
I/ViewRootImpl@613a070[FlutterActivity]( 7263): ViewPostIme pointer 1
I/ViewRootImpl( 7263): updatePointerIcon pointerType = 1000, calling pid = 7263
D/InputManager( 7263): setPointerIconType iconId = 1000, callingPid = 7263
I/MediaCodec( 7263): setCodecState state(0), called in 6
```
</details>
### Flutter Doctor output
<details><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.4.1 23E224 darwin-arm64, locale en-PL)
• Flutter version 3.24.3 on channel stable at /Users/paweljakubowski/Development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (3 months ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/paweljakubowski/Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• ANDROID_HOME = /Users/paweljakubowski/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• SM G970F (mobile) • RF8MB0MKYEK • android-arm64 • Android 12 (API 31)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.4.1 23E224 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.4.1 23E224 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.86
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| platform-android,p: video_player,package,has reproducible steps,P2,team-android,triaged-android,found in release: 3.27,found in release: 3.28 | low | Critical |
2,715,162,491 | ui | [bug]: Dropdown issue with sticky header | ### Describe the bug
When I'm working with a sticky header that has a dropdown on it, I get a layout shift and some other odd behavior.

It might be caused by the header's sticky behavior.
```
<header className="sticky top-0 z-40 m-auto w-full border-b border-b-border bg-background px-3 py-3 align-middle sm:flex sm:h-[var(--header-height)] sm:items-center sm:justify-between sm:px-6 sm:py-0">
<div className="flex flex-1 items-center justify-between">
<ChatButton />
<DropdownMenu>
<DropdownMenuTrigger asChild>
<UserInfoButton />
</DropdownMenuTrigger>
<DropdownMenuContent className="rounded-t-0 m-0 -mt-1 me-3 w-64 rounded-none rounded-bl-xl rounded-br-xl bg-white sm:me-0">
<DropdownMenuGroup>
<DropdownMenuItem className="rounded-md text-sm">
<UserIcon size={18} color="black" />
<span>Profile</span>
</DropdownMenuItem>
<DropdownMenuItem className="rounded-md text-sm">
<Settings size={18} color="black" />
<span>Settings</span>
</DropdownMenuItem>
<DropdownMenuItem className="rounded-md text-sm">
<Bell size={18} color="black" />
<span>Notification Setting</span>
</DropdownMenuItem>
</DropdownMenuGroup>
<DropdownMenuSeparator />
<DropdownMenuItem
className="rounded-md text-sm"
onClick={handleSignOut}
>
<LogOut size={18} color="black" />
<span>Logout</span>
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
</div>
</header>
```
Please have a look into it.
### Affected component/components
Dropdown
### How to reproduce
Check the code above.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Window 11
@radix-ui/react-dropdown-menu: "^2.1.2"
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,715,183,966 | godot | Very long shader/pipeline compilation | ### Tested versions
Reproducible: 4.4-dev4, 4.4-dev5
Not Reproducible: <= 4.4-dev3
### System information
Godot v4.4.dev5 - Ubuntu 24.04.1 LTS 24.04 on X11 - X11 display driver, Multi-window, 2 monitors - Vulkan (Forward+) - integrated Intel(R) UHD Graphics (CML GT2) - Intel(R) Core(TM) i7-10850H CPU @ 2.70GHz (12 threads)
### Issue description
Starting from 4.4-dev4, shader (or pipeline?) compilation takes a very long time, around 6s, sometimes 12s.
Before it was just a small stutter, ~250ms maybe.
To reproduced it reliably, I've tried to remove shader caches from different places.
- ~/.local/share/godot/app_userdata/my_godot_project/shader_cache/
- ~/.local/share/godot/app_userdata/my_godot_project/vulkan/
- ~/.local/share/godot/shader_cache/
- ~/my_godot_project/.godot/shader_cache/
- ~/.cache/godot/
- ~/.cache/mesa_shader_cache/
The one that triggers the long freeze when cleaned is `~/.cache/mesa_shader_cache/`.
It can happen in many different situations:
- at first display of a material
- scene loading
- project loading in the editor
- scene opening in the editor
- selecting a mesh instance in the editor
- changing a parameter of a material in the editor
When it happens on first display of the material (for example by setting a mesh material to `StandardMaterial3D.new()` by script), it's possible to monitor it: it shows 1 additional Compilation Specialization and 1 additional Compilation Surface. (circled in red in the screenshot below)
Subsequent displays of another similar material will also show 1 additional Compilation Specialization and 1 additional Compilation Surface, but will not have a long freeze.



### Steps to reproduce
1. Remove shader caches (`~/.cache/mesa_shader_cache/` in my case)
2. run a scene with a mesh
3. change the material to `StandardMaterial3D.new()`
### Minimal reproduction project (MRP)
[shader_compilation.zip](https://github.com/user-attachments/files/17994403/shader_compilation.zip)
| bug,topic:rendering,regression,performance | low | Minor |
2,715,225,578 | transformers | SequenceClassification for all Model types should have the option to add weights in Cross Entropy loss | ### Feature request
`Qwen2ForSequenceClassification` and all other `SequenceClassification` wrappers for transformer models don't offer a way to pass class weights to the cross-entropy loss, which keeps us from using these implementations.
### Motivation
We need it because we have highly imbalanced data.
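For reference, the math this request boils down to — a plain-Python sketch of weighted cross-entropy that mirrors PyTorch's `CrossEntropyLoss(weight=..., reduction="mean")` semantics. This is not transformers code, and the function name is made up for illustration:

```python
import math

def weighted_cross_entropy(logits, labels, class_weights):
    """Each example's negative log-likelihood is scaled by the weight of
    its true class, and the sum is normalized by the total weight of the
    targets (PyTorch's weighted-mean reduction)."""
    total_loss = 0.0
    total_weight = 0.0
    for row, label in zip(logits, labels):
        m = max(row)  # shift for a numerically stable log-sum-exp
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        nll = log_z - row[label]
        w = class_weights[label]
        total_loss += w * nll
        total_weight += w
    return total_loss / total_weight
```

Upweighting the minority class makes its misclassifications dominate the loss, which is exactly the knob the `SequenceClassification` heads currently expose no hook for.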
### Your contribution
I could submit a PR but this is probably a bigger architectural topic | Feature request | low | Minor |
2,715,231,636 | deno | LSP picks up `@types/*` package only with "manual" node_modules dir handling | Version: Deno 2.1.2
When working on a project with the Deno LSP, Deno is not picking up `@types/*` packages automatically unless the `nodeModulesDir` setting is set to `"manual"`. That means that when `nodeModulesDir` is explicitly set to `"auto"`, the typings for npm packages are not found automatically.
With no `nodeModulesDir` or with `nodeModulesDir` set to `"manual"`:

With `nodeModulesDir` set to `"auto"`:

In other words, the LSP should look inside the `node_modules` directory for `@types/*` packages and always use them if they are available. | bug,lsp,dx | low | Minor |
2,715,258,733 | langchain | [Bug] Azure cosmos db no sql vector store similarity search method "mmr" | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import AzureOpenAIEmbeddings
from langchain_community.vectorstores.azure_cosmos_db import AzureCosmosDBVectorSearch
azure_endpoint = "here_goes_azure_endpoint"
openai_api_version = "2023-12-01-preview"
openai_api_key = "here_goes_api_key"
embeddings = AzureOpenAIEmbeddings(
model="text-embedding-ada-002",
deployment="text-embedding-ada-002",
azure_endpoint=azure_endpoint,
openai_api_version=openai_api_version,
openai_api_key=openai_api_key
)
azure_cosmos_db_connection_string = "here_goes_connection string"
namespace = "here_goes_namespace"
index_name = "here_goes_index_name"
vs = AzureCosmosDBVectorSearch.from_connection_string(azure_cosmos_db_connection_string,
namespace, embeddings,
index_name=index_name)
docs_with_scores = vs.max_marginal_relevance_search(query, k=8)
```
### Error Message and Stack Trace (if applicable)
```txt
File \site-packages\langchain_community\vectorstores\azure_cosmos_db_no_sql.py:363, in AzureCosmosDBNoSqlVectorSearch.max_marginal_relevance_search_by_vector(self, embedding, k, fetch_k, lambda_mult, **kwargs)
[361](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:361) if kwargs["with_embedding"]:
[362](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:362) with_embedding = kwargs["with_embedding"]
--> [363](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:363) docs = self._similarity_search_with_score(
[364](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:364) embeddings=embedding,
[365](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:365) k=fetch_k,
[366](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:366) pre_filter=pre_filter,
[367](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:367) with_embedding=with_embedding,
[368](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:368) )
[370](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:370) # Re-ranks the docs using MMR
[371](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:371) mmr_doc_indexes = maximal_marginal_relevance(
[372](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:372) np.array(embedding),
[373](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:373) [doc.metadata[self._embedding_key] for doc, _ in docs],
[374](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:374) k=k,
[375](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:375) lambda_mult=lambda_mult,
[376](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:376) )
File \site-packages\langchain_community\vectorstores\azure_cosmos_db_no_sql.py:308, in AzureCosmosDBNoSqlVectorSearch._similarity_search_with_score(self, embeddings, k, pre_filter, with_embedding)
[306](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:306) score = item["SimilarityScore"]
[307](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:307) if with_embedding:
--> [308](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:308) metadata[self._embedding_key] = item[self._embedding_key]
[309](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:309) metadata['SimilarityScore'] = score
[310](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:310) docs_and_scores.append(
[311](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:311) (Document(page_content=text, metadata=metadata), score)
[312](/langchain_community/vectorstores/azure_cosmos_db_no_sql.py:312) )
KeyError: 'embedding'
```
### Description
The problem here is twofold:
* the `with_embedding` kwargs argument is not working ([issues/26097](https://github.com/langchain-ai/langchain/issues/26097))
* this in turn causes the "mmr" search type method to fail, because it requires the embedding in the metadata of the resulting documents
Note: `with_embedding` needs to be true for MMR search to work properly.
Why did this issue happen? An update to the query made to Cosmos DB resulted in items that no longer have `embedding` as a key. [pull/27377](https://github.com/langchain-ai/langchain/pull/27377)
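For illustration only — a defensive version of the metadata assembly from the traceback above. `build_metadata` and the item shape are hypothetical (not langchain's actual code); the sketch shows how tolerating items whose query projection dropped the embedding key avoids the `KeyError`:

```python
def build_metadata(item, embedding_key, score, with_embedding):
    """Assemble result metadata without assuming the query projection
    returned the embedding field (the source of the KeyError above)."""
    metadata = dict(item.get("metadata", {}))
    if with_embedding:
        embedding = item.get(embedding_key)
        if embedding is not None:
            metadata[embedding_key] = embedding
    metadata["SimilarityScore"] = score
    return metadata
```

A real fix would also have to restore the embedding to the query projection, since MMR re-ranking needs the vectors, not just their absence handled gracefully.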
### System Info
```
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.0 (tags/v3.12.0:0fb18b0, Oct 2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.9
> langchain_community: 0.3.9
> langsmith: 0.1.147
> langchain_cohere: 0.3.3
> langchain_experimental: 0.3.3
> langchain_openai: 0.2.10
> langchain_text_splitters: 0.3.2
> langgraph_sdk: 0.1.40
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.9
> async-timeout: Installed. No version info available.
> cohere: 5.13.0
> dataclasses-json: 0.6.7
> httpx: 0.28.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.56.0
> orjson: 3.10.12
> packaging: 24.2
> pandas: 2.2.3
> pydantic: 2.10.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tabulate: 0.9.0
> tenacity: 9.0.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2
``` | Ɑ: vector store,🤖:bug | low | Critical |
2,715,318,557 | vscode | Number-of-files-found Badge is hard to see on a selected list item | Testing #234889
It's hard to see that there's a badge, and very hard to read the contents of the badge, on a selected item with the Light+ theme.

| file-explorer,themes,polish | low | Minor |
2,715,324,047 | vscode | Weird rendering after response comes in for terminal inline chat | 
| bug,terminal-chat | low | Minor |
2,715,333,074 | vscode | Number-of-files-found Badge is not at the center of the list item vertically | Testing #234889
## Actual

## Expected

Version: 1.96.0-insider
Commit: 5d6531aabba3abfbc263c1e17e965db97df9210f
Date: 2024-12-03T05:13:15.471Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0 | bug,file-explorer | low | Minor |
2,715,356,350 | vscode | With explorer file search, is it possible to jump to the found file without opening each folder recursively first? | Testing #234889
Currently, to go through found files, one has to open folders recursively (AFAIU).
Would it be helpful to add a gesture to go through each file with folders uncollapsed automatically? Something like `ctrl/cmd+up/down`
| feature-request,file-explorer | low | Minor |
2,715,381,743 | vscode | git blame feedback! | Testing #234997


I actually used git blame (the extension) and got used to seeing those decorations. Personally, I'd like it if they were more inline with the rest of the code and/or had less space in front of them? Not really sure, but I see how it could be annoying if users are trying to type. | git,under-discussion | low | Minor |
2,715,383,538 | angular | provideAnimationsAsync breaks leave animation applied to component host via HostBinding | ### Which @angular/* package(s) are the source of the bug?
animations
### Is this a regression?
No
### Description
I was trying to use the lazy-loaded animation provider `provideAnimationsAsync()`, but I encountered an issue with the `:leave` animation not being triggered on component removal. The animation was configured directly on a component by applying the trigger via `@HostBinding()`.
The problem disappeared when
* reverting to `provideAnimations()`
* changing the approach to configure the animation outside the component and applying the animation trigger to the component host element instead
In order to demonstrate the defect, open the provided Stackblitz and switch between the lazy `provideAnimationsAsync()` and the eager `provideAnimations()`.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/angular-provideanimationsasync-removal-bug?file=src%2Fapp%2Fapp.module.ts
### Please provide the exception or error you saw
```
```
### Please provide the environment you discovered this bug in (run `ng version`)
```
Angular CLI: 18.2.8
Node: 20.18.0
Package Manager: npm 10.8.2
OS: linux x64
Angular: 18.2.8
... animations, cdk, cli, common, compiler, compiler-cli, core
... forms, language-service, platform-browser
... platform-browser-dynamic, platform-server, router, ssr
... youtube-player
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.8
@angular-devkit/build-angular 18.2.8
@angular-devkit/core 18.2.8
@angular-devkit/schematics 18.2.8
@schematics/angular 18.2.8
ng-packagr 18.2.1
rxjs 7.8.1
typescript 5.5.4
zone.js 0.15.0
```
### Anything else?
_No response_ | area: animations,animations: async | low | Critical |
2,715,388,221 | vscode | Find widget in file explorer should progress and show number of found elements | Testing #234889
We could include search-progress info and the number of files found in the find widget in the file explorer. Because of the async nature of search, I think a spinner would help show that there's UI feedback on typing in the find widget's input box.
See how the find widget in the editor shows the number of items found:

I got a bit lost with the find widget in the explorer not showing the number of found items:

| feature-request,file-explorer | low | Minor |
2,715,394,699 | godot | Crash with SDFGI and very large znear or zfar | ### Tested versions
4.4.dev (893bbdfd)
4.4-dev5
### System information
Godot v4.4.dev (b09b8ef1a) - Ubuntu 24.04.1 LTS 24.04 on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3050 (nvidia; 535.183.01) - Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz (8 threads)
### Issue description
Crash when SDFGI is active and the camera's znear or zfar is set to a very high value (above 1E20 or so).
A `VK_ERROR_DEVICE_LOST` is raised here :
https://github.com/godotengine/godot/blob/5f7dab636a2dfcf054412fa028474773e5ea61b9/drivers/vulkan/rendering_device_driver_vulkan.cpp#L2525-L2528
### Steps to reproduce
In Editor : New scene > Add `Camera3D` > Add `Environment` > Enable SDFGI.
Set the camera's znear or zfar to anything above 1e20.
I'm aware such large far / near values can't be used in practice right now, but I'm currently working on making this possible.
### Minimal reproduction project (MRP)
[crash_sdfgi.zip](https://github.com/user-attachments/files/17995548/crash_sdfgi.zip)
Open scene "crash_sdfgi".
Godot will crash as soon as you preview the camera.
| bug,topic:rendering,crash,topic:3d | low | Critical |