id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,745,966,011 | flutter | onDidRemovePage is called when deleting a page below. | ### Steps to reproduce
1. Clone the [RubigoNavigator](https://github.com/jsroest/rubigo_navigator) repository.
2. Start the example app.
3. Navigate from S100 to S200 by pressing the button with label "Push S200".
4. Navigate from S200 to S300 by pressing the button with label "Push S300".
5. Note the following lines in the debug console:
```
flutter: Screen stack: s100
flutter: push(s200) called
flutter: Screen stack: s100 => s200
flutter: push(s300) called
flutter: Screen stack: s100 => s200 => s300
```
6. Delete S200 from the screen stack by pressing the button with label "Remove S200".
7. Note the following lines in the debug console:
```
flutter: remove(s200) called
flutter: Screen stack: s100 => s300
flutter: onDidRemovePage(s300) called by Flutter framework
flutter: and redirected to pop().
flutter: pop() called
flutter: Screen stack: s100
flutter: onDidRemovePage(s300) called by Flutter framework
flutter: but ignored by us.
```
### Expected results
onDidRemovePage should only be called when the topmost page is removed by the Flutter framework, for example when the stack changes because the user presses or uses one of:
1. Android BackButton
2. Android predictive back gesture
3. iOS swipe back gesture
4. .....?
onDidRemovePage should not be called when our code removes pages. Why should we be informed when we remove the page ourselves in code?
### Actual results
It looks like every page that is not in the new set of pages is reported back to the router delegate by calling onDidRemovePage. You might also want to try navigating from S300 directly to S100 by pressing "PopTo S100".
This produces the following console output:
```
flutter: Screen stack: s100 => s200 => s300
flutter: popTo(s100) called
flutter: Screen stack: s100
flutter: onDidRemovePage(s300) called by Flutter framework
flutter: but ignored by us.
flutter: onDidRemovePage(s200) called by Flutter framework
flutter: but ignored by us.
```
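For clarity, here is a sketch of the guard the "ignored by us" lines refer to (a hedged illustration; `_screens` and `pop()` are assumed names, not the actual RubigoNavigator API):
```dart
// Only treat the callback as a user-initiated pop when the reported page
// is still the top of our own stack; otherwise we removed it in code.
void onDidRemovePage(Page<Object?> page) {
  final isTopOfOurStack = _screens.isNotEmpty && _screens.last == page;
  if (isTopOfOurStack) {
    pop(); // platform-initiated: back button, predictive back, swipe back
  }
  // Otherwise: ignore, the removal originated from our own navigation code.
}
```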
### Code sample
See the steps to reproduce for a complete sample.
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/46d00704-83c6-4030-8d93-8ad0d9bf9167
</details>
### Logs
<details open><summary>Logs</summary>
```console
Syncing files to device macOS...
flutter: Screen stack: s100
flutter: push(s200) called
flutter: push(s200) called
flutter: Screen stack: s100 => s200 => s200
flutter: push(s300) called
flutter: Screen stack: s100 => s200 => s200 => s300
flutter: remove(s200) called
flutter: Screen stack: s100 => s200 => s300
flutter: onDidRemovePage(s300) called by Flutter framework
flutter: and redirected to pop().
flutter: pop() called
flutter: Screen stack: s100 => s200
flutter: onDidRemovePage(s300) called by Flutter framework
flutter: but ignored by us.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
➜ rubigo_navigator git:(main) fvm flutter doctor -v
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.1.1 24B91 darwin-x64, locale en-NL)
• Flutter version 3.27.1 on channel stable at /Users/johannesroest/fvm/versions/3.27.1
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (26 hours ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/johannesroest/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (3 available)
• iPhone van Sander (mobile) • 00008120-001635020E43A01E • ios • iOS 18.1.1 22B91
• macOS (desktop) • macos • darwin-x64 • macOS 15.1.1 24B91 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.140
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,f: routes,has reproducible steps,P3,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Critical |
2,746,001,584 | go | runtime: Swiss Table maps can double size multiple times when deleting/adding elements | ### Go version
go version go1.24rc1
### Output of `go env` in your module/workspace:
```shell
N/A
```
### What did you do?
When repeatedly deleting and adding elements from a Swiss Table map but without increasing the count of elements, the map can grow multiple times (e.g., from 128 slots to 1024 slots in a ~30s test).
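A rough sketch of the access pattern (illustrative only; the linked playground tests below are the real thing):
```go
// Keep the element count constant while cycling keys through the map.
m := make(map[int]int)
for i := 0; i < 14; i++ {
	m[i] = i
}
next := 14
for i := 0; i < 1_000_000; i++ {
	delete(m, next-14) // drop the oldest key...
	m[next] = next     // ...and insert a fresh one; len(m) stays at 14
	next++
}
// len(m) is still 14, but the underlying table may have grown several times.
```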
I think there is [a simplification](https://github.com/golang/go/blob/b2c0168893a7f27927630198cdf63911374035c3/src/internal/runtime/maps/table.go#L1010-L1023) in the current implementation (compared to the Abseil and CockroachDB implementations) such that some growth is expected in lieu of a same-sized grow or rehashing in place, but it seemed worth a tracking bug given that tables can end up growing substantially larger.
Here's a sample test demonstrating this:
https://go.dev/play/p/RITVDebV5op?v=gotip (original)
https://go.dev/play/p/xiWudCQADt5?v=gotip (**edit**: more reliable test from [below](https://github.com/golang/go/issues/70886#issuecomment-2549943801))
It's runnable on the playground, where it sometimes fails or passes, though the main intent is to run locally.
Using that test, here's a sample run that starts with a ~10% load (14 elements in a map with an underlying table size of 128), then loops 1M times deleting and adding a different element (while never going above 14 elements in the map). The map's underlying table grows from 128 slots to 512 slots while doing that delete/add cycle 1M times:
```
$ go1.24rc1 test -count=3 -v -loop=1000000 -run=TestTombstoneGrow/tableSize=128/elems=14
=== RUN TestTombstoneGrow
=== RUN TestTombstoneGrow/tableSize=128/elems=14/load=0.109
main_test.go:33: before delete/add loop: len(m)=14, underlying table size=128, map=0xc00002b140
table: growing: old size=128, new size=256, map=0xc00002b140
table: growing: old size=256, new size=512, map=0xc00002b140
main_test.go:53: [after delete/add loop] len(m)=14, underlying table size=512, map=0xc00002b140
main_test.go:56: got 2 allocations per run
--- FAIL: TestTombstoneGrow (0.34s)
--- FAIL: TestTombstoneGrow/tableSize=128/elems=14/load=0.109 (0.34s)
```
Those results above include using a minor hack into the runtime to report the underlying table size and print when tables grow.
If we instead loop 100M times on that same test, the map grows from 128 table slots to 1024 table slots:
```
$ go1.24rc1 test -count=3 -v -loop=100000000 -run=TestTombstoneGrow/tableSize=128/elems=14
=== RUN TestTombstoneGrow
=== RUN TestTombstoneGrow/tableSize=128/elems=14/load=0.109
main_test.go:33: before delete/add loop: len(m)=14, underlying table size=128, map=0xc00002b140
table: growing: old size=128, new size=256, map=0xc00002b140
table: growing: old size=256, new size=512, map=0xc00002b140
table: growing: old size=512, new size=1024, map=0xc00002b140
main_test.go:53: [after delete/add loop] len(m)=14, underlying table size=1024, map=0xc00002b140
main_test.go:56: got 2 allocations per run
--- FAIL: TestTombstoneGrow (33.86s)
--- FAIL: TestTombstoneGrow/tableSize=128/elems=14/load=0.109 (33.86s)
```
If we just loop, say, 100 times, the table does not grow, as expected:
```
$ go1.24rc1 test -count=3 -v -loop=100 -run=TestTombstoneGrow/tableSize=128/elems=14
=== RUN TestTombstoneGrow
=== RUN TestTombstoneGrow/tableSize=128/elems=14/load=0.109
main_test.go:33: before delete/add loop: len(m)=14, underlying table size=128, map=0xc00002b140
main_test.go:53: [after delete/add loop] len(m)=14, underlying table size=128, map=0xc00002b140
--- PASS: TestTombstoneGrow (0.00s)
--- PASS: TestTombstoneGrow/tableSize=128/elems=14/load=0.109 (0.00s)
```
One note of caution regarding the accuracy of this as a bug report -- test pass/failure here is reported using testing.AllocsPerRun to see if an alloc occurs, but either I'm holding it wrong, or it seems to be flaky, or both. (I was purposefully not using a more conventional runs number like 100, but maybe that's a mistake.)
CC @prattmic
### What did you see happen?
8x memory used.
### What did you expect to see?
Less than 8x. Using an extra ~2x memory might be OK as a near-term simplification, but 8x seems high, and the memory can grow further.
| NeedsInvestigation,compiler/runtime | low | Critical |
2,746,022,117 | ollama | When models don't fit in VRAM, Issue alert/confirmation instead of running and freezing computer for hours | ### What is the issue?
When a model is selected that does not fit in VRAM, it runs on the CPU. This is a ridiculous fallback that freezes the whole computer; it should just fail, or actually use the GPU with shared memory instead of falling back to the CPU only.
### OS
Windows 11 Pro
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.14 | bug,needs more info | medium | Major |
2,746,026,381 | pytorch | higher rank convolution | ### 🚀 The feature, motivation and pitch
Would it be possible to add official PyTorch support for higher-rank convolution? Thanks!
### Alternatives
_No response_
### Additional context
Working at a higher rank can be useful, depending on the application!
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Minor |
2,746,059,997 | electron | browserWindow.getBounds() inconsistent between platforms for a minimized, maximized window | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.3.0
### What operating system(s) are you using?
Windows, macOS
### Operating System Version
macOS Sequoia 15.1.1, Windows 10
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
macOS and Windows should be consistent on the bounds returned for a minimized, maximized window.
### Actual Behavior
My vote is for the macOS behavior to be the one to standardize to.
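A minimal sketch of the scenario (assumed from the description above; the actual repro is in the gist below):
```js
const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow();
  win.maximize();
  win.minimize();
  // macOS: logs the maximized bounds.
  // Windows: logs the normal (restored) bounds.
  console.log(win.getBounds());
});
```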
### Testcase Gist URL
https://gist.github.com/00784ca3419def0d6e3d9eb0c5f6a652
### Additional Information
On macOS, `getBounds()` returns the bounds of the maximized window.
On Windows, `getBounds()` returns the normal bounds. | platform/windows,platform/macOS,bug :beetle:,status/confirmed,has-repro-gist,33-x-y | low | Critical |
2,746,071,293 | vscode | Key binding is unable to match when clause of a TreeItem | - VS Code Version: 1.96.0
- OS Version: MacOS 14.6.1 (23G93)
Steps to Reproduce:
1. Implement a `vscode.TreeDataProvider`, where a `treeItem` has a `contextValue`.
2. In package.json, contribute a keybinding that uses a `when` clause referencing this `contextValue`.
3. Reload VS Code with the extension active, select the tree item in the custom view, and press the assigned key.
4. Observe that the command does not run, even though the context menu command (with the same when condition) works as expected.
**Additional Context:**
I’m trying to replicate the behavior of the built-in File Explorer’s delete operation. I have a custom tree view displaying workspace files in a different structure. The right-click “Delete File” option works correctly since the viewItem contextValue matches and the context menu appears. However, the corresponding keybinding with the same when condition never triggers the command. The KeybindingService logs indicate that the when clause does not match, despite the contextValue being identical to the one used for the context menu.
Relevant package.json snippet:
```
"contributes": {
"keybindings": [
{
"key": "alt+3",
"mac": "alt+3",
"command": "bit.delete-file",
"when": "viewItem == compFile"
}
],
"menus": {
"view/item/context": [
{
"command": "bit.delete-file",
"when": "viewItem == compFile",
"group": "navigation"
}
]
}
```
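As a point of comparison, a hedged workaround sketch: `viewItem` is generally only populated while a tree item's context menu is open, so the keybinding may need to scope on the focused view instead (the view id below is hypothetical) and resolve the target item inside the command:
```json
{
  "key": "alt+3",
  "mac": "alt+3",
  "command": "bit.delete-file",
  "when": "focusedView == bitCompExplorer"
}
```
The command implementation can then read the selected item from the `TreeView` returned by `vscode.window.createTreeView` rather than relying on `viewItem`.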
Relevant TreeItem code snippet:
```
export class CompFileTreeItem extends vscode.TreeItem {
constructor(
public readonly label: string,
uri: vscode.Uri
) {
super(label, vscode.TreeItemCollapsibleState.None);
this.resourceUri = uri;
this.contextValue = 'compFile';
}
}
```
Keybinding Inspector Output:
```
2024-12-17 15:37:45.239 [info] [KeybindingService]: | Resolving alt+[Digit3]
2024-12-17 15:37:45.239 [info] [KeybindingService]: \ From 1 keybinding entries, no when clauses matched the context.
```
Question:
Why does the viewItem == compFile condition work for the context menu but fail for the keybinding, even though the item is selected and the same context value is set? | feature-request | low | Critical |
2,746,098,889 | vscode | Grouping/sorting of notebook commands | The grouping and sorting of the notebook layout commands is fairly random; all layout commands should be in one group, all kernel commands in another, and export in its own group. I have tried to color-code the groups in the screenshot below
<img width="469" alt="Screen Shot 2021-10-12 at 12 19 09 (1)" src="https://user-images.githubusercontent.com/1794099/137174183-842bba2c-9824-4d91-a437-9954aa37f41b.png">
| debt,notebook-commands,notebook-globaltoolbar | low | Minor |
2,746,138,703 | vscode | Custom Keyboard Shortcuts don't work | ### Applies To
- [X] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
I added a new keyboard shortcut Cmd+Shift+A - in Command mode, this should "run all cells above". This is what my keybindings.json looks like. However, when I hit this combination, nothing happens.
```
[
{
"key": "shift+cmd+a",
"command": "jupyter.runallcellsabove",
"when": "editorTextFocus && jupyter.hascodecells && !jupyter.webExtension && !notebookEditorFocused"
}
]
```
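One possible cause (an assumption based on the `when` clause, not verified): inside an .ipynb editor, `notebookEditorFocused` is true, so the `!notebookEditorFocused` guard above can never match there. A hedged sketch of an adjusted binding:
```json
[
  {
    "key": "shift+cmd+a",
    "command": "jupyter.runallcellsabove",
    "when": "notebookEditorFocused && jupyter.hascodecells"
  }
]
```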
### VS Code Version
1.89.1
### Jupyter Extension Version
v2024.4.0
### Jupyter logs
_No response_
### Coding Language and Runtime Version
3.10.12
### Language Extension Version (if applicable)
v2024.6.0
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
Local | bug,notebook-commands | low | Minor |
2,746,141,663 | electron | UtilityProcess.kill() blocks main process and therefore freezes renderer process | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
[33.3.0](https://releases.electronjs.org/release/v33.3.0)
### What operating system(s) are you using?
macOS
### Operating System Version
Sequoia 15.1.1 (24B91)
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
When calling the `kill` method on a `UtilityProcess`, I expected the main process to handle the graceful killing of the process as described in [the docs](https://www.electronjs.org/docs/latest/api/utility-process#childkill).
It would be nice for the actual result of the method to arrive via `utilityProcess.on('exit', () => {})` without locking up the main process, or to have a `killAsync` method available.
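A sketch of the pattern in use (paths and names illustrative):
```js
const path = require('path');
const { utilityProcess } = require('electron');

const child = utilityProcess.fork(path.join(__dirname, 'worker.js'));
child.once('exit', (code) => {
  console.log(`worker exited with code ${code}`); // where the result should surface
});
child.kill(); // expected to return immediately; currently blocks the main process
```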
### Actual Behavior
The main process freezes until the process has been shut down.
### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/macOS,bug :beetle:,component/utilityProcess,33-x-y | low | Critical |
2,746,154,001 | rust | Compilation issue [resolved with cargo clean] | ### Code
```Rust
pub fn easy_vec_ui(
mut easy_vec_ui_resource: ResMut<EasyVecUi>,
connected_players: Res<ConnectedPlayers>,
run_trigger: Res<RunTrigger>,
) {
let right_data_vec = vec![
String::from(format!("( Shift + E ) <--- Client Run Trigger Index [{}] ---> ( Shift + D )", run_trigger.get_trigger_idx())),
String::from(format!("( Shift + F ) All Clients Run Trigger: [{}]", run_trigger.get_triggers_ref()[run_trigger.get_trigger_idx()])),
];
easy_vec_ui_resource.inject_vec_right(right_data_vec);
let mut left_data_vec: Vec<String> = Vec::new();
let players_guard = connected_players.players.lock().unwrap(); // Lock the connected players to read player data
for (uuid, player_status) in players_guard.iter() { // Iterate over each player and create a row for each one
left_data_vec.push(String::from(format!("Player ID: [{}] Last heartbeat: [{:?}]", uuid, player_status.last_heartbeat)));
}
left_data_vec.push(String::from("_____________________________________________"));
left_data_vec.push(String::from("Heart Beat Interface: Connected Players Above"));
easy_vec_ui_resource.inject_vec_left(left_data_vec);
}
```
### Meta
`rustc --version --verbose`:
```
<version>
```
### Error output
```
Compiling minigolf_backend_server v0.1.0 (D:\bevy\donut\minigolf_backend_server)
thread 'rustc' panicked at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\compiler\rustc_middle\src\middle\exported_symbols.rs:41:58:
invalid enum variant tag while decoding `ExportedSymbol`, expected 0..6, actual 197
stack backtrace:
0: 0x7ffd9e0e0f51 - std::backtrace_rs::backtrace::dbghelp64::trace
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\..\..\backtrace\src\backtrace\dbghelp64.rs:91
1: 0x7ffd9e0e0f51 - std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: 0x7ffd9e0e0f51 - std::sys::backtrace::_print_fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\sys\backtrace.rs:66
3: 0x7ffd9e0e0f51 - std::sys::backtrace::impl$0::print::impl$0::fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\sys\backtrace.rs:39
4: 0x7ffd9e112629 - core::fmt::rt::Argument::fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/core\src\fmt\rt.rs:177
5: 0x7ffd9e112629 - core::fmt::write
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/core\src\fmt\mod.rs:1178
6: 0x7ffd9e0d7117 - std::io::Write::write_fmt<std::sys::pal::windows::stdio::Stderr>
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\io\mod.rs:1823
7: 0x7ffd9e0e4069 - std::panicking::default_hook::closure$1
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\panicking.rs:266
8: 0x7ffd9e0e3bec - std::panicking::default_hook
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\panicking.rs:293
9: 0x7ffd9f635e40 - memchr
10: 0x7ffd9e0e4a7b - alloc::boxed::impl$50::call
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/alloc\src\boxed.rs:2245
11: 0x7ffd9e0e4a7b - std::panicking::rust_panic_with_hook
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\panicking.rs:805
12: 0x7ffd9e0e4886 - std::panicking::begin_panic_handler::closure$0
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\panicking.rs:671
13: 0x7ffd9e0e1a0f - std::sys::backtrace::__rust_end_short_backtrace<std::panicking::begin_panic_handler::closure_env$0,never$>
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\sys\backtrace.rs:170
14: 0x7ffd9e0e4496 - std::panicking::begin_panic_handler
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\panicking.rs:662
15: 0x7ffda0ea47d4 - core::panicking::panic_fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/core\src\panicking.rs:74
16: 0x7ffd9f1b228f - <rustc_metadata[6691fbe6571c11b4]::creader::alloc_error_handler_spans::Finder as rustc_ast[fb8a06d0c4c179b8]::visit::Visitor>::visit_item
17: 0x7ffd9dccef4e - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
18: 0x7ffd9dcade4d - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
19: 0x7ffd9dc7ffb2 - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
20: 0x7ffd9dcf95a0 - rustc_query_impl[1cbfc9fc16052fcf]::query_system
21: 0x7ffd9f0b8f45 - rustc_codegen_ssa[64d598cc62316a48]::back::symbol_export::upstream_monomorphizations_provider
22: 0x7ffd9dcd0e93 - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
23: 0x7ffd9dc4560a - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
24: 0x7ffda0341cc6 - <rustc_span[8754bfbf43f7048e]::def_id::DefIndex as rustc_query_impl[1cbfc9fc16052fcf]::profiling_support::SpecIntoSelfProfilingString>::spec_to_self_profile_string
25: 0x7ffd9de4a7a3 - rustc_codegen_ssa[64d598cc62316a48]::back::symbol_export::upstream_monomorphizations_for_provider
26: 0x7ffd9dcd11a0 - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
27: 0x7ffd9eb777ed - rustc_ty_utils[67287b256ac86568]::ty::self_ty_of_trait_impl_enabling_order_dep_trait_object_hack
28: 0x7ffd9dcf1925 - rustc_query_impl[1cbfc9fc16052fcf]::query_system
29: 0x7ffd9f4b2b55 - <rustc_middle[c5ee28e0fcbaddd6]::ty::instance::Instance>::upstream_monomorphization
30: 0x7ffd9e4af583 - rustc_monomorphize[6d2945be1fb3e4b3]::polymorphize::unused_generic_params
31: 0x7ffd9e4ac0eb - rustc_monomorphize[6d2945be1fb3e4b3]::polymorphize::unused_generic_params
32: 0x7ffd9e4a1ad1 - rustc_monomorphize[6d2945be1fb3e4b3]::partitioning::collect_and_partition_mono_items
33: 0x7ffd9dcd1267 - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
34: 0x7ffd9dcb88ed - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
35: 0x7ffd9dc4024a - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
36: 0x7ffda0314058 - <rustc_query_impl[1cbfc9fc16052fcf]::plumbing::QueryCtxt as rustc_query_system[9660e097ca113b0a]::query::QueryContext>::depth_limit_error
37: 0x7ffda02db45f - <rustc_ty_utils[67287b256ac86568]::opaque_types::OpaqueTypeCollector as rustc_type_ir[20c798ba42da02c9]::visit::TypeVisitor<rustc_middle[c5ee28e0fcbaddd6]::ty::context::TyCtxt>>::visit_ty
38: 0x7ffd9eb51d66 - rustc_ty_utils[67287b256ac86568]::ty::self_ty_of_trait_impl_enabling_order_dep_trait_object_hack
39: 0x7ffd9eb51a1e - rustc_ty_utils[67287b256ac86568]::ty::self_ty_of_trait_impl_enabling_order_dep_trait_object_hack
40: 0x7ffd9dc7ff22 - rustc_ty_utils[67287b256ac86568]::ty::adt_sized_constraint
41: 0x7ffd9dcf95a0 - rustc_query_impl[1cbfc9fc16052fcf]::query_system
42: 0x7ffd9f160fd3 - <rustc_metadata[6691fbe6571c11b4]::rmeta::encoder::EncodeContext as rustc_span[8754bfbf43f7048e]::SpanEncoder>::encode_symbol
43: 0x7ffd9de8c5de - rustc_metadata[6691fbe6571c11b4]::rmeta::encoder::encode_metadata
44: 0x7ffd9de94801 - rustc_metadata[6691fbe6571c11b4]::fs::encode_and_write_metadata
45: 0x7ffd9b1f2793 - <rustc_interface[ab63cb8dd572d7f1]::queries::Linker>::codegen_and_build_linker
46: 0x7ffd9b1a6f70 - _rust_alloc_error_handler
47: 0x7ffd9b1a2a16 - _rust_alloc_error_handler
48: 0x7ffd9b1ac5db - _rust_alloc_error_handler
49: 0x7ffd9e0f640d - alloc::boxed::impl$48::call_once
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/alloc\src\boxed.rs:2231
50: 0x7ffd9e0f640d - alloc::boxed::impl$48::call_once
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/alloc\src\boxed.rs:2231
51: 0x7ffd9e0f640d - std::sys::pal::windows::thread::impl$0::new::thread_start
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14\library/std\src\sys\pal\windows\thread.rs:55
53: 0x7ffe5545cc91 - RtlUserThreadStart
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.82.0 (f6e511eec 2024-10-15) running on x86_64-pc-windows-msvc
note: compiler flags: --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [exported_symbols] collecting exported symbols for crate `443`
#1 [upstream_monomorphizations] collecting available upstream monomorphizations
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 exported_symbols(minigolf_backend_server[1ad1])
end of try_mark_green dep node stack
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
I don't know enough to understand what backtrace was looking for but I was able to resolve the issue with a cargo clean and cargo run.
```
</p>
</details>
| I-ICE,T-compiler,A-incr-comp,C-bug,S-needs-repro | low | Critical |
2,746,177,352 | pytorch | Locale issues in colab: after tensor(1j).cuda().abs() !commands cannot be executed. | ### 🐛 Describe the bug
Running the following in colab (T4 runtime):
```
import torch
a=torch.tensor(1j,device="cuda")
a.abs()
!echo "cake is a lie"
```
results in a `NotImplementedError: A UTF-8 locale is required. Got ANSI_X3.4-1968`.
It has to be (a) complex, (b) `abs`, and (c) on CUDA; otherwise, the final command succeeds.
It seems like the complex abs CUDA kernel modifies the locale?
Related: https://github.com/googlecolab/colabtools/issues/3409
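A workaround discussed in the linked colabtools issue is to pin the preferred encoding before running shell commands (a monkeypatch, so treat it as a stopgap rather than a fix):
```python
import locale

# Force a UTF-8 answer regardless of what the CUDA kernel did to the C locale.
locale.getpreferredencoding = lambda do_setlocale=True: "UTF-8"
```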
### Versions
default google colab
torch 2.5.1+cu121
python 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
cc @albanD | triaged,module: third_party,module: python frontend | low | Critical |
2,746,185,271 | vscode | Jupyter notebook cell menu incorrect for new cells | Jupyter notebook cells provide a pop-up mention at the top-left corner of cell when focus is on the cell.
However, the content of the menu fails to update to reflect the current cell. This seems to be a UI event issue.
To reproduce, read these instructions on your phone or a separate window so you don't have to change focus from VS code while following along.
1. Create a new notebook.
2. In the top cell, type, say `import sys`. Notice that the cell menu contains "... <trashcan>".
3. Switch focus away from VS code to another application window. Switch back. Now you see the full Python menu.
4. Use the "+ Markdown" button to add a markdown cell. Type, "example". Notice that the cell menu contains the "<run by line>, <execute cells below>, <execute cells above>, ..., <trashcan>" items.
5. Again switch focus to another application, then back to VS Code. Now you see the "<checkmark>, <split cell>, ..., <trashcan>" menu items.
6. Use "+ Code" to add a Python cell. Notice that the menu again contains only "... <trashcan>".
7. Again switch to another window and back. The proper Python menu appears.
I have found that, sometimes, I can get the menus to reset by clicking onto another cell and back. I find I must frequently do this when adding a Markdown cell.
Here is another variation:
1. Add a Python cell. Type `a = 10`. Switch away from VS code and back again to get the proper menu.
2. Add a Markdown cell below. Notice that the menu contains the Python cell menu items. Switch away and back to get the Markdown cell menu items.
Yet another variation. The above examples all added cells at the bottom. This time, we'll add one in the middle.
1. Select your top Python cell. Add a Markdown cell. You will see the "<run by line>, <execute cells below>, <execute cells above>, ..., <trashcan>" menu.
2. Switch away and back to get the Markdown menu.
3. Add a Code cell below you Markdown cell. You'll get only the "..., <trashcan>" menu.
4. Switch and and back to get the Python menu.
5. Add another Markdown cell. This time you get the Markdown menu immediately.
6. Add a Code cell. You'll get the proper Python menu.
7. Add another Code cell. You'll get the Python menu but without the "Split Cell" item.
8. Add another Code cell. Again the truncated menu.
9. Click on the cell created in 6. Back to the Python menu.
10. Click on the cell added in 7. Still the truncated menu.
11. Click on the cell created in 6. Switch to another app and back to VS Code.
12. Click on the cell added in 7. This time you get the full Python menu.
This behavior does not happen all the time. It only happens for new cells. If you click around your existing cells, you'll see that the cell menu is correct.
In short, it seems that the UI is not receiving some kind of UI event that tells it to switch the hover menu to the version for a newly created cell. The menu seems to lag a bit. We can force an update by switching application focus.
### Workaround
Having incorrect menus is a nuisance. One can often get the correct menu by clicking on another cell, then back. Sometimes switching to another app and VS Code is necessary.
## Environment data
<details>
About dialog information:
```text
Version: 1.91.1
Commit: f1e16e1e6214d7c44d078b1f0607b2388f29d729
Date: 2024-07-09T22:08:12.169Z
Electron: 29.4.0
ElectronBuildId: 9728852
Chromium: 122.0.6261.156
Node.js: 20.9.0
V8: 12.2.281.27-electron.0
OS: Linux x64 5.15.0-113-generic
```
- Jupyter Extension version (available under the Extensions sidebar): v2024.6.0
- Python Extension version (available under the Extensions sidebar): v2024.10.0
- OS (Windows | Mac | Linux distro) and version: Linux Mint 21.3 (also see above)
- Python and/or Anaconda version: Python 3.10.12
- Type of virtual environment used (N/A | venv | virtualenv | conda | ...): N/A
- Jupyter server running: N/A (VS-Code provided server)
</details>
| bug,notebook-commands,notebook-celltoolbar | low | Minor |
2,746,225,890 | pytorch | [ONNX] Rename dynamic shapes produced by ExportedProgram to dynamic_axes | `torch.export.export` names dynamic shapes to be s0, s1, s2, s3, ..., etc. However, in ONNX, users could pass in the naming through `dynamic_axes` and `input_names`. We need to rename them to what users request. | module: onnx,triaged,onnx-triaged | low | Minor |
2,746,233,346 | langchain | ChatSambaNovaCloud with_structured_output json_mode is always failing since kwargs are always not None if they are not used | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following:
```
from langchain_community.chat_models.sambanova import ChatSambaNovaCloud
from pydantic import BaseModel
llm = ChatSambaNovaCloud(
sambanova_api_key="37d19d6b-2c13-4658-a157-bb9bb0362096",
model="Qwen2.5-72B-Instruct",
temperature=0,
)
structured_llm = llm.with_structured_output(method="json_mode")
```
Raises exception:
raise ValueError(f"Received unsupported arguments {kwargs}")
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
```
ValueError
Traceback (most recent call last) Cell In[15], line 10
2 from pydantic import BaseModel
4 llm = ChatSambaNovaCloud(
5 sambanova_api_key="37d19d6b-2c13-4658-a157-bb9bb0362096",
6 model="Qwen2.5-72B-Instruct",
7 temperature=0,
8 )
---> 10 structured_llm = llm.with_structured_output(method="json_mode")
12 structured_llm.invoke(
13 "Answer the following question. "
14 "Make sure to return a JSON blob with keys 'answer' and 'justification'.\n\n"
15 "What's heavier a pound of bricks or a pound of feathers?"
16 )
File ~/.pyenv/versions/3.10.0/lib/python3.10/site-packages/langchain_community/chat_models/sambanova.py:620, in ChatSambaNovaCloud.with_structured_output(self, schema, method, include_raw, **kwargs)
382 """Model wrapper that returns outputs formatted to match the given schema.
383
384 Args:
(...)
617 # }
618 """ # noqa: E501
619 if kwargs is not None:
--> 620 raise ValueError(f"Received unsupported arguments {kwargs}")
621 is_pydantic_schema = _is_pydantic_class(schema)
622 if method == "function_calling":
ValueError: Received unsupported arguments {}
```
### Description
* I'm trying to use json_mode of ChatSambaNovaCloud using method with_structured_output
* I think the patch should be as below: when no extra arguments are passed, `kwargs` is an empty dict `{}`, which is not `None`, so the `is not None` check always fires. Testing truthiness instead fixes it:
```
if kwargs:
raise ValueError(f"Received unsupported arguments {kwargs}")
```
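A two-line illustration of why the original check always fires:
```python
def f(**kwargs):
    return kwargs is not None, bool(kwargs)

print(f())  # (True, False): kwargs arrives as {}, which is never None
```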
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:37:36 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6020
> Python Version: 3.12.6 (main, Sep 6 2024, 19:03:47) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.25
> langchain: 0.3.12
> langchain_community: 0.3.12
> langsmith: 0.2.3
> langchain_chroma: 0.1.2
> langchain_google_vertexai: 2.0.9
> langchain_groq: 0.2.1
> langchain_openai: 0.2.12
> langchain_text_splitters: 0.3.3
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.10
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: Installed. No version info available.
> chromadb: 0.5.23
> dataclasses-json: 0.6.7
> fastapi: 0.115.6
> google-cloud-aiplatform: 1.74.0
> google-cloud-storage: 2.19.0
> groq: 0.13.1
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langchain-mistralai: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> numpy: 2.2.0
> openai: 1.58.1
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.9.2
> pydantic-settings: 2.7.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
``` | 🤖:bug | low | Critical |
2,746,243,066 | material-ui | Material UI: Pressing Escape key closes Dialog before Snackbar/Alert | ### Steps to reproduce
Steps:
1. Open this link to live example: [MUI-Dialog-Toast-Escape-Example](https://codesandbox.io/p/sandbox/mui-dialog-toast-escape-example-forked-8h7r93)
2. Press "Open Modal" to open the Dialog
3. Press "Open Toast" to open the Snackbar Alert
4. Press the Escape key
### Current behavior
When the Escape key is pressed, the `Dialog` is closed first and the `Alert` is still displayed.
### Expected behavior
Not sure what behavior _should_ be enabled by default. Was expecting the layers to be closed in the order the user opened them (Toast on the first Escape key press, then the Modal on the second press). Ideally, we'd like to see a quick and easy way to configure this behavior.
### Context
I am trying to implement a feature for a common use case in a production-facing React application. There are 50+ pages where a user can have both MUI `Alert`s and `Dialog`s open simultaneously. We need a solution that doesn't cause components that use `Dialog`s to be dependent on the open state of `Snackbar` or `Alert` components. We have been searching the MUI documentation and StackOverflow for this particular use case, but it has not come up in searches.
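A sketch of a possible stopgap (it couples the dialog to the toast state, which we would prefer to avoid, but `Dialog`'s `onClose` does report an `escapeKeyDown` reason; the state hooks are assumed to exist in the enclosing component):
```tsx
<Dialog
  open={dialogOpen}
  onClose={(event, reason) => {
    // Let the first Escape dismiss the toast instead of the dialog.
    if (reason === 'escapeKeyDown' && toastOpen) {
      setToastOpen(false);
      return;
    }
    setDialogOpen(false);
  }}
>
  {/* dialog content */}
</Dialog>
```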
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: Linux 5.15 Ubuntu 20.04.6 LTS (Focal Fossa)
Binaries:
Node: 18.18.0 - ~/.nvm/versions/node/v18.18.0/bin/node
npm: 9.8.1 - ~/.nvm/versions/node/v18.18.0/bin/npm
pnpm: Not Found
Browsers:
Chrome: Version 131.0.6778.140 (Official Build) (64-bit)
npmPackages:
@emotion/react: ^11.13.3 => 11.13.3
@emotion/styled: ^11.13.0 => 11.13.0
@mui/base: 5.0.0-beta.58
@mui/core-downloads-tracker: 6.1.3
@mui/icons-material: ^6.1.1 => 6.1.3
@mui/lab: ^6.0.0-beta.10 => 6.0.0-beta.11
@mui/material: ^6.1.1 => 6.1.3
@mui/material-nextjs: ^6.1.1 => 6.1.3
@mui/private-theming: 6.1.3
@mui/styled-engine: 6.1.3
@mui/system: ^6.1.1 => 6.1.3
@mui/types: 7.2.18
@mui/utils: 5.16.6
@mui/x-data-grid: ^7.18.0 => 7.19.0
@mui/x-data-grid-pro: ^7.18.0 => 7.19.0
@mui/x-date-pickers: 7.19.0
@mui/x-date-pickers-pro: ^7.18.0 => 7.19.0
@mui/x-internals: 7.18.0
@mui/x-license: 7.18.0
@mui/x-tree-view: ^7.18.0 => 7.19.0
@types/react: ^18.3.9 => 18.3.11
react: 18.3.1 => 18.3.1
react-dom: ^18.2.0 => 18.3.1
typescript: ^5.6.2 => 5.6.3
```
</details>
**Search keywords**: Dialog Snackbar Alert Escape | docs | low | Minor |
2,746,264,883 | kubernetes | Recreate strategy doesn't create new replicaset on its own | ### What happened?
We've switched some of our deployments to the Recreate strategy, and as a result we're seeing long delays between a replicaset scaling down and a new one scaling up when a new version is rolled out (10+ minutes between events). This can be due to a number of things, but it seemed to only impact our workloads on the Recreate strategy.
After digging into the source code, it appears that the recreate logic doesn't actually create the new replicaset even though it seems like it should. Instead, it looks like it exits early and relies on the cluster/replicaset controller to naturally resolve the out-of-sync state when it gets around to it (which in our case can be pretty slow). This can result in very long deploy cycles for workloads using the Recreate strategy.
The issue, I believe, stems from [here](https://github.com/kubernetes/kubernetes/blob/v1.31.2/pkg/controller/deployment/recreate.go#L45), where we return early after updating the deployment status once the previous replicaset has scaled down, regardless of the state of the new replicaset. Checking the history of the code, this has been around for [9 years](https://github.com/kubernetes/kubernetes/commit/92798408af90569f492be3a1a4d8de02538a6787#diff-c553184997fa4c9f63e697af2f39eb2e6fb43eeff64a0ef66c9e955f61347b02R428), so either I am not interpreting what should be happening correctly or no one has noticed. I would expect the code not to return there but instead to continue spinning up the new replicaset, like the logic after that return statement does in that method.
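For readers following along, an abridged paraphrase of the flow in question (not the literal source; see the link above for the exact code):
```go
// Scale down old replica sets first.
scaledDown, err := dc.scaleDownOldReplicaSetsForRecreate(ctx, oldRSs, d)
if err != nil {
	return err
}
if scaledDown {
	// Update DeploymentStatus and return: the new ReplicaSet is not
	// created or scaled up in this sync; that happens on a later resync.
	return dc.syncRolloutStatus(ctx, allRSs, newRS, d)
}
// ...only further down is the new ReplicaSet created and scaled up.
```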
### What did you expect to happen?
I expected my deployments that use the recreate strategy to deploy their new replicasets shortly after the old ones have been scaled down.
### How can we reproduce it (as minimally and precisely as possible)?
A unit test could probably reproduce this but I'm more curious about an explanation of the code to make sure I understand it correctly.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
v1.31.2
```console
$ kubectl version
Client Version: v1.32.0
Kustomize Version: v5.5.0
```
</details>
### Cloud provider
<details>
AWS
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,needs-sig,needs-triage | low | Major |
2,746,292,276 | godot | `GraphNode` doesn't shrink automatically | ### Tested versions
v4.4.dev.custom_build [abf47965f]
### System information
Debian Linux X11
### Issue description
`GraphNode` doesn't shrink automatically when
- `size_flags_vertical = SIZE_SHRINK_CENTER`
- the size of child Control nodes decreases
I would expect that a container node with `SIZE_SHRINK_CENTER` automatically shrinks when the size of its child nodes decreases.
### Steps to reproduce
1. Load and run MRP
2. Drag the top `HSlider` to the right to increase the vertical size of the labels.
3. Drag the top `HSlider` to the left to decrease the vertical size of the labels.
After 3:
- The label inside the GraphNode became smaller, but the GraphNode didn't adjust its size and appears too big
- For comparison there is a working example on the right side of the scene with a HBoxContainer, that adjusts its vertical size automatically (indicated by the red color)
### Minimal reproduction project (MRP)
[bug-graph-edit-resize.zip](https://github.com/user-attachments/files/18172298/bug-graph-edit-resize.zip)
| bug,topic:gui | low | Critical |
2,746,303,767 | PowerToys | [PowerToys Run] Background on top of window when AccentColorInactive is set | ### Microsoft PowerToys version
0.87.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
1. Set a custom accent color for inactive windows by adding the registry key `AccentColorInactive` (DWORD) in _HKEY_CURRENT_USER\Software\Microsoft\Windows\DWM\\_ with value `ff262626`.
2. Restart the computer to ensure that the change has taken effect.
This was not a problem prior to 0.87.0
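For convenience, the same change from an elevated prompt (equivalent to the steps above, assuming the default DWM key path):
```console
reg add "HKCU\Software\Microsoft\Windows\DWM" /v AccentColorInactive /t REG_DWORD /d 0xff262626 /f
```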
### ✔️ Expected Behavior
PowerToys Run window is not affected
### ❌ Actual Behavior
Accent color is visible through the window (even when focused)

### Other Software
_No response_ | Issue-Bug,Product-PowerToys Run,Area-User Interface,Status-Reproducible | low | Minor |
2,746,364,662 | rust | E0308 Suggestion gets added to unrelated code and error span is too big | ### Code
```Rust
#[tokio::main]
async fn main() {
if true {
return Ok(());
}
let something_else = 1;
}
```
### Current output
```Shell
error[E0308]: mismatched types
--> src/main.rs:2:17
|
2 | async fn main() {
| _________________^
3 | | if true {
4 | | return Ok(());
5 | | }
6 | | let something_else = 1;
7 | | }
| |_^ expected `Result<(), _>`, found `()`
|
= note: expected enum `Result<(), _>`
found unit type `()`
error[E0308]: mismatched types
--> src/main.rs:3:5
|
2 | async fn main() {
| - expected `()` because of default return type
3 | / if true {
4 | | return Ok(());
5 | | }
6 | | let something_else = 1;
| |___________________________^ expected `()`, found `Result<(), _>`
|
= note: expected unit type `()`
found enum `Result<(), _>`
help: consider using `Result::expect` to unwrap the `Result<(), _>` value, panicking if the value is a `Result::Err`
|
6 | let something_else = 1;.expect("REASON")
| +++++++++++++++++
```
### Desired output
```Shell
error[E0308]: mismatched types
--> src/main.rs:3:5
|
2 | async fn main() {
| - expected `()` because of default return type
3 | if true {
4 | return Ok(());
| ^^^^^^^^^^^^^^ expected `()`, found `Result<(), _>`
|
= note: expected unit type `()`
found enum `Result<(), _>`
help: consider using `Result::expect` to unwrap the `Result<(), _>` value, panicking if the value is a `Result::Err`
|
4 | return Ok(()).expect("REASON");
| +++++++++++++++++
```
### Rationale and extra context
This is probably just caused by the `#[tokio::main]` proc-macro. It's confusing that absolutely everything after (and inside of) the bad return statement scope is seen as a bad statement, and that the `help` section suggests adding `.expect("REASON")` after a semicolon at the very end.
Found this because copy-pasted code contained `return Ok(());`, which caused the scope and everything after it to be highlighted as an error with no further explanation.
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
### Anything else?
Return statements without a scope and at the end get highlighted semi-correctly:
```rust
#[tokio::main]
async fn main() {
return Ok(());
}
```
Produces
```shell
error[E0308]: mismatched types
--> src/main.rs:3:5
|
2 | async fn main() {
| - expected `()` because of default return type
3 | return Ok(());
| ^^^^^^^^^^^^^^ expected `()`, found `Result<(), _>`
|
= note: expected unit type `()`
found enum `Result<(), _>`
help: consider using `Result::expect` to unwrap the `Result<(), _>` value, panicking if the value is a `Result::Err`
|
3 | return Ok(());.expect("REASON")
| +++++++++++++++++
```
The error span is fine, but the `help` isn't | A-diagnostics,A-macros,T-compiler,D-imprecise-spans | low | Critical |
2,746,381,167 | vscode | Enabling `http.fetchAdditionalSupport` may break extensions using global `fetch` | Does this issue occur when all extensions are disabled?: No
- VS Code Version: 1.96.0
- OS Version: MacOS Sonoma 14.7.2
Unfortunately this is entirely extension-dependent, but most of the context is in https://github.com/microsoft/vscode-discussions/discussions/2438.
The main points:
- we noticed [our extension](https://github.com/confluentinc/vscode) was throwing `fetch failed` errors with the above setting **enabled** (after the 1.96 update, where it looks to be enabled by default), but only for requests that were using the local Docker socket path to interact with the Docker engine API
- sample error info:
```
TypeError: fetch failed
at node:internal/deps/undici/undici:13392:13
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at globalThis.fetch (file:///Applications/Visual%20Studio%20Code.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:159:19800)
...
cause:
ECONNREFUSED: fetch failed
AggregateError
at internalConnectMultiple (node:net:1122:18)
at afterConnectMultiple (node:net:1689:7)
at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17)
```
- example request init:
```typescript
import { Agent } from "undici";
// ...
const init = {
dispatcher: new Agent({
connect: {
socketPath: "/var/run/docker.sock",
},
})
}
```
- `http.fetchAdditionalSupport` enabled and `http.systemCertificates` enabled: requests fail with the above error ❌
- `http.fetchAdditionalSupport` enabled and `http.systemCertificates` disabled: requests return successful responses ✅
(Toggling `http.proxySupport` seems to have no effect here.)
I'm happy to hop on a call and/or provide more information if needed.
Are there any specific changes extension developers should take in response to this setting?
| bug,proxy | low | Critical |
2,746,414,129 | godot | RayCast3D is not working with trimesh collision | ### Tested versions
4.3 stable
### System information
Windows 10
### Issue description
RayCast3D collision detection is not working with terrain generated from an **_ArrayMesh_** with its collision generated using the trimesh strategy.
It works when the terrain is generated from a **_PlaneMesh_** with the trimesh strategy.
All collision masks are good (default 1).
### Steps to reproduce
Create a ray cast and add it to the main node.
```c#
RayCast3D rayCast = new RayCast3D();
rayCast.TargetPosition = new Vector3(0, -50, 0);
rayCast.Enabled = true;
rayCast.HitBackFaces = true;
rayCast.CollideWithAreas = true;
rayCast.DebugShapeCustomColor = new Color(0,0,1);
rayCast.DebugShapeThickness = 1;
```
Generate an ArrayMesh, adding vertices, UVs, indices, and normals:
```c#
public ArrayMesh FromHeightmap()
{
float HeightScale = 50.0f;
float TerrainScale = 1.0f;
Texture2D texture = GD.Load<Texture2D>($"res://heightmap.jpg");
Image img = texture.GetImage();
int width = img.GetWidth();
int height = img.GetHeight();
SurfaceTool surfaceTool = new SurfaceTool();
surfaceTool.Begin(Mesh.PrimitiveType.Triangles);
for (int z = 0; z < height; z++)
{
for (int x = 0; x < width; x++)
{
Color pixel = img.GetPixel(x, z);
float h = pixel.R;
float worldHeight = h * HeightScale;
Vector2 uv = new Vector2((float) x / (width - 1), (float) z / (height - 1));
surfaceTool.SetUV(uv);
Vector3 vertex = new Vector3(x * TerrainScale, worldHeight, z * TerrainScale);
surfaceTool.AddVertex(vertex);
}
}
for (int z = 0; z < height - 1; z++)
{
for (int x = 0; x < width - 1; x++)
{
int topLeft = z * width + x;
int topRight = z * width + (x + 1);
int bottomLeft = (z + 1) * width + x;
int bottomRight = (z + 1) * width + (x + 1);
surfaceTool.AddIndex(topLeft);
surfaceTool.AddIndex(bottomLeft);
surfaceTool.AddIndex(topRight);
surfaceTool.AddIndex(topRight);
surfaceTool.AddIndex(bottomLeft);
surfaceTool.AddIndex(bottomRight);
}
}
surfaceTool.GenerateNormals();
return surfaceTool.Commit();
}
```
Create the terrain and add it to the main node:
```c#
ArrayMesh arrayMesh = mapReader.FromHeightmap();
MeshInstance3D meshInstance = new MeshInstance3D();
meshInstance.Position = new Vector3(0,0,0);
meshInstance.Mesh = arrayMesh;
meshInstance.CreateTrimeshCollision();
```
Enable `Visible Collision Shapes` in project settings.
### Minimal reproduction project (MRP)
I have absolutely no idea how to use GDScript or even the Godot editor itself, so I will just post a project in C# which is 100% code-based.
Move camera by W A S D.
Blue ray = no collision
Red ray = collision
[raycast3d.zip](https://github.com/user-attachments/files/18172798/raycast3d.zip)
| bug,topic:physics,topic:3d | low | Critical |
2,746,444,346 | pytorch | Compiler Bisector Improvements | ### 🚀 The feature, motivation and pitch
@ezyang has been using the Compiler Bisector internally and has run into a few feature requests.
- [ ] Query for backend, subsystems
- [ ] Config option to check meta stride for all ops, not just custom ops
- [ ] Option to specify particular backend/subsystem to iterate over
- [ ] Better printouts of how to manually run a particular bisect. For instance, if we bisected lowerings, it should inform the user to compare: `TORCH_BISECT_SUBSYSTEM=lowerings TORCH_BISECT_BACKEND=inductor TORCH_BISECT_MAX=21` and `TORCH_BISECT_SUBSYSTEM=lowerings TORCH_BISECT_BACKEND=inductor TORCH_BISECT_MAX=20`
Other requests
- [ ] Option to bisect which compiled graph is causing the issue first. Potentially we would bisect to the bad fwd/backward, then see if the fwd, backward, or joint graph passes/partitioner is the issue.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @exclamaforte who was interested
### Alternatives
_No response_
### Additional context
_No response_ | triaged,module: inductor | low | Major |
2,746,483,634 | rust | Document the contextual keyword `raw` | Related with https://github.com/rust-lang/rust/issues/34601, https://github.com/rust-lang/rust/pull/127679. | C-enhancement,T-compiler,A-docs,T-libs,A-raw-pointers | low | Minor |
2,746,505,083 | tauri | [feat] Add tauri://pageLoad event listener support in frontend | ### Describe the problem
When creating a new window using `WebviewWindow`, I can only use `tauri://created` to listen for window creation. However, this event triggers much earlier than the actual page load completion.
Currently, if I need to inject JavaScript after the page is fully loaded, I have to:
1. Use `WebviewWindowBuilder::new(&app_handle, &label, url_parse).on_page_load` in the backend
2. Emit an event to notify the frontend
The challenge is that I'm not familiar with backend development in Rust, and I have to rely on my friend to help implement these backend methods.
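For context, the backend side of steps 1-2 looks roughly like this (a sketch; exact signatures vary between Tauri versions, and all names here are illustrative):
```rust
use tauri::{Emitter, WebviewUrl, WebviewWindowBuilder};

fn open_window(app_handle: tauri::AppHandle, label: &str, url: WebviewUrl) -> tauri::Result<()> {
    WebviewWindowBuilder::new(&app_handle, label, url)
        .on_page_load(|webview, _payload| {
            // Notify the frontend once the page has actually finished loading.
            let _ = webview.emit("page-loaded", ());
        })
        .build()?;
    Ok(())
}
```
The frontend then has to `listen('page-loaded', ...)`, which is exactly the round trip this request would eliminate.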
Technical Context:
- Current approach requires backend implementation
- Frontend developers need a more accessible way to handle post-load operations
- Similar to how browsers provide `window.onload` event
thanks
### Describe the solution you'd like
1. Add a frontend event listener for page load completion (similar to `tauri://pageLoad`)
2. Add frontend API for JavaScript injection
This would greatly improve the developer experience for frontend developers who are not yet proficient in Rust but need these functionalities for their projects.
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,746,509,711 | rust | cfg resolve diagnostic doesn't show up for axum macro | ### Code
Without the macro feature enabled.
```Rust
use axum::extract::FromRef;
#[derive(Clone, FromRef)]
struct AppState {
db: SqlitePool,
}
```
### Current output
```Shell
error: cannot find derive macro `FromRef` in this scope
--> src/main.rs:33:17
|
33 | #[derive(Clone, FromRef)]
| ^^^^^^^
|
note: `FromRef` is imported here, but it is only a trait, without a derive macro
--> src/main.rs:1:12
|
1 | use axum::{extract::FromRef, Router};
| ^^^^^^^^^^^^^^^^
```
### Desired output
```Shell
error[E0432]: unresolved import `axum::extract::FromRef`
--> src/main.rs:2:5
|
2 | use axum::extract::FromRef;
| ^^^^^^^^^^^^^^^^^ no `FromRef` in `extract`
|
note: found an item that was configured out
--> /Users/joshka/.cargo/registry/src/index.crates.io-6f17d22bba15001f/axum-0.7.9/src/extract/mod.rs:xx:yy
|
xx | macro_rules! FromRef;
| ^^
note: the item is gated behind the `macros` feature
--> /Users/joshka/.cargo/registry/src/index.crates.io-6f17d22bba15001f/axum-0.7.9/src/extract/mod.rs:xx:yy
```
### Rationale and extra context
The detection of cfg'd out items should treat items that match the expected type (e.g. a macro) as higher priority than items which match the name (e.g. `FromRef`)
### Other cases
```Rust
```
### Rust Version
```Shell
rustc --version --verbose
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: aarch64-apple-darwin
release: 1.83.0
LLVM version: 19.1.1
```
### Anything else?
Similar to but not the same as https://github.com/rust-lang/rust/issues/132166
Possibly relevant to https://github.com/rust-lang/rust/pull/129183 | A-diagnostics,A-resolve,T-compiler,D-terse,A-cfg | low | Critical |
2,746,537,766 | kubernetes | Kubernetes appears to use a lot of memory for its own components (≅80GiB) | ### What happened?
I recently installed Kubernetes on an Ubuntu 22.04 system environment. I set up the Kubernetes environment as follows: as you can see, I created one control node and two worker nodes.
```
-----------+---------------------------+--------------------------+------------
| | |
eth0|10.0.0.10 eth0|10.0.0.11 eth0|10.0.0.22
+----------+-----------+ +-----------+-----------+ +-----------+-----------+
| [ ctrl.myk8s ] | | [ node01.myk8s ] | | [ node02.myk8s ] |
| Control Node | | Worker Node | | Worker Node |
+----------------------+ +-----------------------+ +-----------------------+
```
I started the Kubernetes service daemons and ran some pods. After a few hours, every time I ran `free -h`, roughly 85GiB of memory was shown as used. Most of this memory usage appears to be attributable to Kubernetes processes.
I am curious why Kubernetes uses so much physical memory, even when running just one or two pods. I welcome any comments, hints, or information.
### Memory usage:
```bash
$ free -h
total used free shared buff/cache available
Mem: 124Gi 85.6Gi 38Gi 21Mi 452Mi 38Gi
Swap: 0B 0B 0B
$ ps -eo uid,euser,wchan,comm,rss,vsize,%cpu,nlwp | awk '$5 != 0' | sort -k5,5rn
uid euser wchan comm rss vsize %cpu nlwp
----------------------------------------------------------------------
0 root futex_ kubelet 117576 6850960 1.1 76
0 root futex_ containerd 100948 7420620 0.6 82
0 root futex_ calico-node 93024 5417080 0.4 56
0 root futex_ calico-node 83600 3494160 0.0 30
0 root futex_ calico-node 83192 3715612 0.0 33
0 root futex_ calico-node 80348 3641880 0.0 32
0 root futex_ calico-node 79912 3494416 0.0 30
0 root futex_ calico-node 74932 3493392 0.0 30
0 root futex_ kube-proxy 72944 1295544 0.0 32
0 root do_pol python3 31636 42076 0.0 1
0 root futex_ node-driver-reg 29436 1248032 0.0 32
0 root futex_ tuned 26628 257916 0.0 4
997 polkitd do_pol polkitd 22804 2983876 0.0 12
0 root futex_ csi-driver 21824 1237992 0.0 24
0 root do_pol NetworkManager 21480 259196 0.0 3
0 root futex_ containerd-shim 17492 1238184 0.0 14
0 root futex_ containerd-shim 16600 1238184 0.0 13
0 root ep_pol systemd 15192 175176 0.0 1
0 root futex_ containerd-shim 13688 1238184 0.0 13
0 root do_sel snmpd 13056 25748 0.0 1
0 root ep_pol systemd-journal 13056 27820 0.0 1
1000 codegrok ep_pol systemd 13056 23928 0.0 1
0 root futex_ containerd-shim 12704 1238184 0.0 13
```
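Note that the per-process RSS above sums to only about 1GiB, far short of the ~85GiB reported as used, so the gap is presumably outside ordinary process memory. A quick kernel-side cross-check (standard procfs reads):
```bash
# Compare total used memory against the sum of all process RSS;
# a large gap usually points at kernel memory (slab, page tables, shmem)
# rather than at the pods themselves.
ps -eo rss= | awk '{sum += $1} END {printf "total RSS: %.1f GiB\n", sum/1024/1024}'
grep -E 'Slab|SReclaimable|SUnreclaim|PageTables|Shmem' /proc/meminfo
```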
### What did you expect to happen?
I expected Kubernetes to use less than 8GB of memory when running just one or two pods.
### How can we reproduce it (as minimally and precisely as possible)?
### [1] Install Containerd and apply some requirements on all Nodes.
```
root@ctrl:~# apt -y install containerd
root@ctrl:~# cat > /etc/sysctl.d/99-k8s-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
root@ctrl:~# sysctl --system
root@ctrl:~# modprobe overlay; modprobe br_netfilter
root@ctrl:~# echo -e overlay\\nbr_netfilter > /etc/modules-load.d/k8s.conf
# needs [iptables-legacy] for iptables backend
# if nftables is enabled, change to [iptables-legacy]
root@ctrl:~# update-alternatives --config iptables
There are 2 choices for the alternative iptables (providing /usr/sbin/iptables).
Selection Path Priority Status
------------------------------------------------------------
* 0 /usr/sbin/iptables-nft 20 auto mode
1 /usr/sbin/iptables-legacy 10 manual mode
2 /usr/sbin/iptables-nft 20 manual mode
Press <enter> to keep the current choice[*], or type selection number: 1
update-alternatives: using /usr/sbin/iptables-legacy to provide /usr/sbin/iptables (iptables) in manual mode
# disable swap
root@ctrl:~# swapoff -a
root@ctrl:~# vi /etc/fstab
# comment out
#/swap.img none swap sw 0 0
# switch to Cgroup v1 (default is v2)
root@ctrl:~# vi /etc/default/grub
# line 11 : add
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"
root@ctrl:~# update-grub
```
### [2] Install Kubeadm, Kubelet, Kubectl on all Nodes.
```
root@ctrl:~# curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg -o /etc/apt/keyrings/kubernetes-keyring.gpg
root@ctrl:~# echo "deb [signed-by=/etc/apt/keyrings/kubernetes-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
root@ctrl:~# apt update
root@ctrl:~# apt -y install kubeadm kubelet kubectl
root@ctrl:~# vi /etc/default/kubelet
# create new
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
root@ctrl:~# systemctl edit containerd.service
# add follows
[Service]
KillMode=
KillMode=mixed
root@ctrl:~# systemctl restart containerd.service
```
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
# kubectl version
Client Version: v1.31.3
Kustomize Version: v5.4.2
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ uname -a
Linux mymate 6.5.0-41-generic #41~22.04.2-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 3 11:32:55 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scalability,needs-triage | medium | Critical |
2,746,557,546 | ant-design | How can the style of the Splitter divider be changed? | ### Reproduction link
[https://ant-design.antgroup.com/components/splitter-cn](https://ant-design.antgroup.com/components/splitter-cn)
### Steps to reproduce
None
### What is expected?
1. The Splitter component has a divider between its left and right panes. Without disabling `resizable`, how can the gray rectangular handle in the middle of the divider be hidden? As shown below:
<img width="438" alt="image" src="https://github.com/user-attachments/assets/e3843cd1-0ebe-4730-b07b-e71d990bcf94" />
2. It would also be nice to have two buttons that collapse or expand the left (or right) pane when clicked.
### What is actually happening?
None
| Environment | Info |
| --- | --- |
| antd | 5.22.5 |
| React | latest |
| System | macos |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 💡 Feature Request,6.x | low | Minor |
2,746,612,641 | terminal | South Asian Languages characters are incorrect in spacing | > The issue is that we rely on the "East Asian With" spec by Unicode, like pretty much all terminals: https://www.unicode.org/reports/tr11/
> As the name suggests, for southeastern languages it just says, "no designated width", which we by convention have to assume is 1 column wide. If you can find >1 terminal that solve this issue (= which we could use as reference), please let us know by opening a new issue for it. 🙂
_Originally posted by @lhecker in [#9490](https://github.com/microsoft/terminal/issues/9490#issuecomment-2549050353)_
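For reference, the classification is easy to check with Python's standard `unicodedata` module (illustrative):
```python
import unicodedata

# Devanagari letter KA (U+0915): UAX #11 assigns it no East Asian width,
# so it lands in class "N" (neutral), which terminals conventionally
# render as 1 column wide regardless of how the font shapes it.
print(unicodedata.east_asian_width("\u0915"))  # 'N'
```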
---
WT Version: 1.21.3231.0;
Microsoft Windows [Version 10.0.22631.4541]
---
Readable in [Cmder](https://github.com/cmderdev/cmder)
| Needs-Triage,Needs-Tag-Fix | low | Minor |
2,746,613,893 | pytorch | unreasonable ConstraintViolationError when using torch dynamo to compile torch model | ### 🐛 Describe the bug
I'm using the torch dynamo backend to compile a model for export to TensorRT.
```python
inputs = [torch.randn(1, 3, 28, 288, 512).cuda().to(torch.float16)]
dynamic_h = torch.export.Dim("dim_3", min=224, max=640)
dynamic_w = torch.export.Dim("dim_4", min=224, max=640)
dynamic_t = torch.export.Dim("dim_2", min=1, max=200)
dynamic_shapes = {"x": {2: dynamic_t, 3: dynamic_h, 4: dynamic_w}}
exp_program = torch.export.export(encoder_model, args=tuple(inputs), dynamic_shapes=dynamic_shapes, strict=True)
trt_model = torch_tensorrt.dynamo.compile(
    exported_program=exp_program,
    assume_dynamic_shape_support=True,
    inputs=inputs,
    make_refitable=True,
    disable_tf32=True,
    debug=True,
    enabled_precisions={torch.half, torch.float},
    torch_executed_ops={},
    min_block_size=17,
    truncate_double=True,
    use_python_runtime=False,
)
```
I've confirmed that the h and w dims can be dynamic without a constraint error, but dim_2 cannot, which is quite strange.
I traced the cause to the following module:
```python
class GroupNormSpatial(nn.Module):
    """GroupNorm with spatial dimensions ignored."""

    def __init__(self, num_groups, num_channels, epsilon: float = 1e-5, affine=True):
        super().__init__()
        # affine=False  # TODO: for tensorrt only
        self.norm_fn = nn.GroupNorm(num_groups=num_groups, num_channels=num_channels, eps=epsilon, affine=affine)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        if int(inputs.ndim) == 5:  # video
            b, c, t, h, w = inputs.shape
            inputs = inputs.permute(0, 2, 1, 3, 4).flatten(start_dim=0, end_dim=1)  # ERROR
            out = self.norm_fn(inputs)
            out = out.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)  # ERROR
            return out
        else:  # Image, b c h w -> b c h w
            out = self.norm_fn(inputs)
            return out
```
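A minimal sketch distilling the suspect path for export (untested, illustrative; assumes `nn` is `torch.nn` and uses the `inputs` parameter name as the `dynamic_shapes` key):
```python
import torch
import torch.nn as nn

norm = GroupNormSpatial(num_groups=8, num_channels=16)
x = torch.randn(1, 16, 28, 32, 32)
dim_t = torch.export.Dim("dim_2", min=1, max=200)
ep = torch.export.export(norm, (x,), dynamic_shapes={"inputs": {2: dim_t}})
```
The log from the full script: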
```
I1218 02:40:31.595000 155463 torch/_utils_internal.py:116] [0/0] CompilationMetrics(compile_id='0/0', frame_key='1', co_name='forward', co_filename='xxx,py', co_firstlineno=50, cache_size=0, accumulated_cache_size=0, guard_count=None, shape_env_guard_count=None, graph_op_count=None, graph_node_count=None, graph_input_count=None, start_time=1734489622.2484767, entire_frame_compile_time_s=None, backend_compile_time_s=None, inductor_compile_time_s=None, code_gen_time_s=None, fail_type="<class 'torch.fx.experimental.symbolic_shapes.ConstraintViolationError'>", fail_reason='Constraints violated (dim_2)! For more information, run with TORCH_LOGS="+dynamic".\n - Not all values of dim_2 = L[\'x\'].size()[2] in the specified range dim_2 <= 200 satisfy the generated guard Ne(Mod(1, ((L[\'x\'].size()[2] - 1)//2) + 1), 0).\n - Not all values of dim_2 = L[\'x\'].size()[2] in the specified range dim_2 <= 200 satisfy the generated guard Ne(Mod(1, ((L[\'x\'].size()[2] - 1)//4) + 1), 0).\n - Not all values of dim_2 = L[\'x\'].size()[2] in the specified range dim_2 <= 200 satisfy the generated guard 9 <= L[\'x\'].size()[2] and L[\'x\'].size()[2] <= 200', fail_user_frame_filename=None, fail_user_frame_lineno=None, non_compliant_ops=set(), compliant_custom_ops=set(), restart_reasons=set(), dynamo_time_before_restart_s=9.347490787506104, has_guarded_code=False, possibly_missed_reinplacing_opportunities=None)
```
### Versions
I'm using ngc torch `nvcr.io/nvidia/pytorch:24.10-py3`
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,module: dynamic shapes,module: dynamo,oncall: export | low | Critical |
2,746,614,263 | tauri | [bug] Error failed to bundle project: error running appimage.sh | ### Describe the bug
Steps:
1. Set `"targets": ["deb", "updater", "appimage"]` in the bundle config
2. Run `npm run tauri build`

Result:
```
Finished `release` profile [optimized] target(s) in 1m 10s
Warn Signing, by default, is only supported on Windows hosts, but you can specify a custom signing command in `bundler > windows > sign_command`, for now, skipping signing the installer...
Bundling pt-system-student_1.0.0_amd64.deb (path/src-tauri/target/release/bundle/deb/test_1.0.0_amd64.deb)
Bundling pt-system-student_1.0.0_amd64.AppImage (path/src-tauri/target/release/bundle/appimage/test_1.0.0_amd64.AppImage)
Error failed to bundle project: error running appimage.sh
```
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
> [email protected] tauri
> tauri info
[✔] Environment
- OS: Ubuntu 18.4.0 X64
✔ webkit2gtk-4.0: 2.32.4
✔ rsvg2: 2.40.20
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 16.20.2
- pnpm: 8.15.9
- npm: 8.19.4
[-] Packages
- tauri [RUST]: 1.5.4
- tauri-build [RUST]: 1.5.1
- wry [RUST]: 0.24.7
- tao [RUST]: 0.16.5
- @tauri-apps/api [NPM]: 1.6.0 (outdated, latest: 2.1.1)
- @tauri-apps/cli [NPM]: 1.6.3 (outdated, latest: 2.1.0)
[-] App
- build-type: bundle
- CSP: unset
- distDir: ../dist
- devPath: http://localhost:1421/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,help wanted,platform: Linux,status: needs triage | low | Critical |
2,746,628,353 | go | syscall: unable to use the full length for abstract socket starting with null | ### Go version
go version go1.23.3 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/albert/.cache/go-build'
GOENV='/home/albert/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/albert/go/pkg/mod'
GONOPROXY='bitbucket.org/vivcourt/*'
GONOSUMDB='bitbucket.org/vivcourt/*'
GOOS='linux'
GOPATH='/home/albert/go'
GOPRIVATE='bitbucket.org/vivcourt/*'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/albert/go/pkg/mod/golang.org/[email protected]'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/albert/go/pkg/mod/golang.org/[email protected]/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.3'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/albert/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/albert/glue/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3031541326=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
https://go.dev/play/p/Lro4QrH5-BK
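A condensed sketch matching the output below (the playground link above has the full program; Linux-only):
```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// pad fills an abstract socket name to the full 108-byte sun_path length.
func pad(name string) string {
	return name + strings.Repeat("x", 108-len(name))
}

func main() {
	addr1 := pad("@begins-with-at")
	addr2 := pad("\x00begins-with-null")
	fmt.Println("padded addr1 (@begins-with-at) with length:", len(addr1))
	fmt.Println("padded addr2 (begins-with-null) with length:", len(addr2))

	if _, err := net.Listen("unix", addr1); err != nil {
		panic(err) // the '@' spelling accepts the full 108 bytes
	}
	if _, err := net.Listen("unix", addr2); err != nil {
		panic(err) // the NUL spelling fails: bind: invalid argument
	}
}
```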
### What did you see happen?
```
padded addr1 (@begins-with-at) with length: 108
padded addr2 (begins-with-null) with length: 108
panic: listen unix begins-with-null: bind: invalid argument

goroutine 1 [running]:
main.main()
	/tmp/sandbox3694864181/prog.go:27 +0x294
```
### What did you expect to see?
Both address forms (beginning with `@` or with a NUL/`0` byte) should be usable to specify an abstract Unix domain socket with the full path length of 108 bytes.
The issue seems to be this line of code, which checks for `@` to detect an abstract socket but not for a leading `0` byte:
https://github.com/golang/go/blob/master/src/syscall/syscall_linux.go#L557
Considering how the next part of the code checks for `0 && sl > 3` (https://github.com/golang/go/blob/master/src/syscall/syscall_linux.go#L569),
I would expect specifying a leading `0` byte to be a valid way of indicating an abstract domain socket.
Let me know if this is something that should be fixed; I would be more than happy to create a PR. Thanks!
2,746,631,749 | pytorch | [ROCm] MI300X FP8 scaled_mm is extremely slow on nightly | ### 🐛 Describe the bug
Hi AMD Team,
`torch._scaled_mm` is extremely slow on MI300X at ~100 TFLOP/s versus ~1200 TFLOP/s on H100.
Can you look into this?
cc: @hliuca
## ROCm
```
m=16384 n=8192 k=1280: 108.07154472843483
m=16384 n=1024 k=8192: 110.56206220309926
m=16384 n=8192 k=7168: 109.66662842248034
m=16384 n=3584 k=8192: 110.59228182207659
m=8192 n=8192 k=8192: 109.86138366796457
```
## H100
```
m=16384 n=8192 k=1280: 1239.4133451945781
m=16384 n=1024 k=8192: 1347.0844475792383
m=16384 n=8192 k=7168: 1332.2623882545472
m=16384 n=3584 k=8192: 1309.4453003269748
m=8192 n=8192 k=8192: 1304.5406858844613
```
## Reprod
```python
import time

import torch
from triton.testing import do_bench

torch.manual_seed(0)

repeats = 200
warmup = 30
timeout = 0.5
device = 'cuda'

# GEMM Shapes
shapes = [
    (16384, 8192, 1280),
    (16384, 1024, 8192),
    (16384, 8192, 7168),
    (16384, 3584, 8192),
    (8192, 8192, 8192),
]

results = []
for (m, n, k) in shapes:
    # FLOPS
    nFLOPS = 2 * m * n * k
    a_fp8_e5m2 = torch.randn(m, k, device=device).to(torch.float8_e5m2fnuz)
    b_fp8_e5m2 = torch.randn(n, k, device=device).to(torch.float8_e4m3fnuz).transpose(-1, -2)
    scale_a = torch.tensor(1.0, device=device, dtype=torch.float32)
    scale_b = torch.tensor(1.0, device=device, dtype=torch.float32)
    ms_fp8_scaled_mm_e4m3 = do_bench(lambda: torch._scaled_mm(a_fp8_e5m2, b_fp8_e5m2, scale_a, scale_b), warmup=warmup, rep=repeats)
    tflops_fp8_scaled_mm_e4m3 = nFLOPS / ms_fp8_scaled_mm_e4m3 * 1e-9
    time.sleep(timeout)
    print(f"{m=} {n=} {k=}: {tflops_fp8_scaled_mm_e4m3}")
```
cc: @hliuca
### Versions
```bash
pip list | grep torch
pytorch-triton-rocm 3.2.0+git35c6c7c6
torch 2.6.0.dev20241216+rocm6.2.4
```
cc @msaroufim @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: performance,module: rocm,triaged | low | Critical |
2,746,685,478 | pytorch | Support Dict Parameter Type for custom_op | ### 🐛 Describe the bug
Is it possible to support `infer_schema` for a `custom_op` that has a Dict as an input parameter? I think the op schema could support a signature like `(Tensor t, Dict(str, Any) meta) -> Tensor`. Also, can such inputs be mutated?
```python
import torch
from typing import Dict, Any


@torch.library.custom_op("host_code::collect_max", mutates_args=(), device_types="cpu")
def fn(t: torch.Tensor, meta: Dict[str, Any]) -> torch.Tensor:
    meta["max"] = t.max().item()
    return t.clone()


@torch.library.register_fake("host_code::collect_max")
def fn_fake(t: torch.Tensor, meta: Dict[str, Any]) -> torch.Tensor:
    return t


t = torch.randn((3, 3))
meta = {}
fn(t, meta)
print(meta)
```
### Error logs
```
Traceback (most recent call last):
File "/Users/chenxiny/workspace/dynamo_case/custom_op.py", line 5, in <module>
def fn(t: torch.Tensor, meta: Dict[str, Any]):
File "/Users/chenxiny/miniforge3/envs/torch-metal/lib/python3.10/site-packages/torch/_library/custom_ops.py", line 121, in inner
schema_str = torch.library.infer_schema(fn, mutates_args=mutates_args)
File "/Users/chenxiny/miniforge3/envs/torch-metal/lib/python3.10/site-packages/torch/_library/infer_schema.py", line 106, in infer_schema
error_fn(
File "/Users/chenxiny/miniforge3/envs/torch-metal/lib/python3.10/site-packages/torch/_library/infer_schema.py", line 58, in error_fn
raise ValueError(
ValueError: infer_schema(func): Parameter meta has unsupported type typing.Dict[str, typing.Any]. The valid types are: dict_keys([<class 'torch.Tensor'>, typing.Optional[torch.Tensor], typing.Sequence[torch.Tensor], typing.List[torch.Tensor], typing.Sequence[typing.Optional[torch.Tensor]], typing.List[typing.Optional[torch.Tensor]], <class 'int'>, typing.Optional[int], typing.Sequence[int], typing.List[int], typing.Optional[typing.Sequence[int]], typing.Optional[typing.List[int]], <class 'float'>, typing.Optional[float], typing.Sequence[float], typing.List[float], typing.Optional[typing.Sequence[float]], typing.Optional[typing.List[float]], <class 'bool'>, typing.Optional[bool], typing.Sequence[bool], typing.List[bool], typing.Optional[typing.Sequence[bool]], typing.Optional[typing.List[bool]], <class 'str'>, typing.Optional[str], typing.Union[int, float, bool], typing.Union[int, float, bool, NoneType], typing.Sequence[typing.Union[int, float, bool]], typing.List[typing.Union[int, float, bool]], <class 'torch.dtype'>, typing.Optional[torch.dtype], <class 'torch.device'>, typing.Optional[torch.device]]). Got func with signature (t: torch.Tensor, meta: Dict[str, Any]))
```
### Versions
```
PyTorch version: 2.6.0.dev20241126
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:20) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] torch==2.6.0.dev20241126
[pip3] torchaudio==2.5.0.dev20241126
[pip3] torchvision==0.20.0.dev20241126
[conda] numpy 2.1.2 pypi_0 pypi
[conda] torch 2.6.0.dev20241126 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241126 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241126 pypi_0 pypi
```
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,module: custom-operators,oncall: pt2,module: pt2-dispatcher | low | Critical |
2,746,689,837 | svelte | False positive of ownership_invalid_mutation after hot reload | ### Describe the bug
In the example code, everything behaves correctly unless the content of CounterDisplay.svelte is changed, triggering a Vite module hot-reload. After the hot reload, every state change causes "ownership_invalid_mutation" warnings.
This particular issue is only reproducible locally, [clone this repo to test it out.](https://github.com/goldentoaste/sanityTest)
```html
<!-- +page.svelte -->
<script>
    import CounterDisplay from "./CounterDisplay.svelte";

    let counter = $state({
        count: 0,
    });
</script>

<CounterDisplay bind:counter></CounterDisplay>
```
```html
<!-- CounterDisplay.svelte -->
<script>
    import Incrementer from "./Incrementer.svelte"

    let {counter = $bindable()} = $props()
</script>

<!-- Warning shows when this component is hot reloaded. For example, add a new div here. -->
<p>Counter is {counter.count}</p>
<Incrementer bind:count={counter.count}></Incrementer>
```
```html
<!-- Incrementer.svelte -->
<script>
    let {count = $bindable()} = $props();
</script>

<button onclick={()=>{count += 1}}>Go up</button>
```
### Reproduction
Reproducible in [this repo](https://github.com/goldentoaste/sanityTest).
1. Load root page to see counter working correctly, without warning
2. In `CounterDisplay.svelte`, make any HTML change, such as adding a new div, to trigger a Vite hot reload.
3. Further state change to the counter now raises `ownership_invalid_mutation` warning in browser console.
### Logs
```shell
ownership_invalid_mutation
src/routes/test/CounterDisplay.svelte mutated a value owned by src/routes/test/+page.svelte. This is strongly discouraged. Consider passing values to child components with `bind:`, or use a callback instead
https://svelte.dev/e/ownership_invalid_mutation
```
### System Info
```shell
System:
OS: Windows 11 10.0.22631
CPU: (16) x64 13th Gen Intel(R) Core(TM) i7-1360P
Memory: 3.14 GB / 15.57 GB
Binaries:
Node: 22.1.0 - C:\Program Files\nodejs\node.EXE
npm: 10.9.0 - C:\Program Files\nodejs\npm.CMD
pnpm: 9.12.3 - ~\AppData\Roaming\npm\pnpm.CMD
Browsers:
Edge: Chromium (127.0.2651.98)
Internet Explorer: 11.0.22621.3527
```
### Severity
annoyance | bug | low | Critical |
2,746,748,352 | go | build: build failure on gotip-linux-arm64_c4as16-perf_vs_release | ```
#!watchflakes
default <- builder == "gotip-linux-arm64_c4as16-perf_vs_release" && repo == "go" && mode == "build"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8728267415754195281)):
go: downloading github.com/BurntSushi/toml v1.0.0
go: downloading github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
2024/12/17 21:56:19 Load average: 0.10 2.50 5.72 1/409 1782864
2024/12/17 21:56:19 Running Go test benchmarks for experiment
go: downloading github.com/dustin/go-wikiparse v0.0.0-20211018054215-c01ec186f20c
go: downloading github.com/blevesearch/bleve v1.0.14
go: downloading github.com/biogo/biogo v1.0.4
go: downloading github.com/yuin/gopher-lua v0.0.0-20210529063254-f4c35e4016d9
go: downloading golang.org/x/sync v0.10.0
go: downloading gitlab.com/golang-commonmark/markdown v0.0.0-20211110145824-bf3e522c626a
...
[sweet] Running benchmark tile38 for experiment: run 8
[sweet] Running benchmark tile38 for baseline: run 8
[sweet] Running benchmark tile38 for experiment: run 9
[sweet] Running benchmark tile38 for baseline: run 9
[sweet] Running benchmark tile38 for experiment: run 10
[sweet] Running benchmark tile38 for baseline: run 10
[sweet] error: failed to execute benchmarks: esbuild go-build
2024/12/18 03:27:33 Error running sweet: error running sweet run: exit status 1
2024/12/18 03:27:33 FAIL
exit status 1
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,746,779,827 | yt-dlp | [gem.cbc.ca] Failed to parse JSON: JSONDecodeError: Expecting value in '': line 1 column 1 (char 0) | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Canada
### Provide a description that is worded well enough to be understood
It normally works, but has stopped working. This includes videos downloaded as recently as a few days ago (but they now no longer work).
It's failing to parse the returned JSON:
json.decoder.JSONDecodeError: Expecting value in '': line 1 column 1 (char 0)
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--username', 'PRIVATE', '--password', 'PRIVATE', 'https://gem.cbc.ca/coronation-street/s01e11428']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [542166962] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.0-essentials_build-www.gyan.dev (setts)
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[debug] Using fake IP 99.246.11.195 (CA) as X-Forwarded-For
[debug] Loading cbcgem.claims_token from cache
[gem.cbc.ca] Extracting URL: https://gem.cbc.ca/coronation-street/s01e11428
[gem.cbc.ca] coronation-street/s01e11428: Downloading JSON metadata
ERROR: [gem.cbc.ca] coronation-street/s01e11428: coronation-street/s01e11428: Failed to parse JSON (caused by JSONDecodeError("Expecting value in '': line 1 column 1 (char 0)")); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\cbc.py", line 638, in _real_extract
File "yt_dlp\extractor\common.py", line 1152, in download_content
File "yt_dlp\extractor\common.py", line 1119, in download_handle
File "yt_dlp\extractor\common.py", line 1107, in parse
File "yt_dlp\extractor\common.py", line 1094, in _parse_json
File "yt_dlp\extractor\common.py", line 1077, in __print_error
File "yt_dlp\utils\_utils.py", line 565, in decode
File "json\decoder.py", line 337, in decode
File "json\decoder.py", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 1091, in _parse_json
File "json\__init__.py", line 359, in loads
File "yt_dlp\utils\_utils.py", line 573, in decode
json.decoder.JSONDecodeError: Expecting value in '': line 1 column 1 (char 0)
```
| account-needed,geo-blocked,site-bug,triage,can-share-account | low | Critical |
2,746,847,077 | react | [React 19] Proposal: Enhancing useTransition Hook with Timed Start Transition Feature | ## Summary
We propose enhancing React's startTransition function to accept an optional second parameter: a timeout duration (durationInMs). This will ensure the transition runs for at least the specified time, even if the internal logic resolves earlier, enabling smoother UI updates and animations.
## Proposed API
`startTransition(callback, durationInMs);`
- `callback`: Executes state updates or operations as a transition.
- `durationInMs` (optional): Ensures the transition takes at least this long, adding a delay if the callback resolves sooner.
```js
startTransition(() => {
  setData(fetchData());
}, 500); // Ensures a minimum 500ms transition
```
If `fetchData` resolves in 200ms, the UI remains in a "pending" state for 500ms, ensuring a smoother experience.
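For comparison, a rough userland approximation is possible today with React 19's async transitions (a sketch; `startTimedTransition` is a hypothetical helper, not a React API):
```js
// Hypothetical helper built on top of useTransition's startTransition.
function startTimedTransition(startTransition, callback, durationInMs) {
  const minDelay = new Promise((resolve) => setTimeout(resolve, durationInMs));
  startTransition(async () => {
    // Keep the transition pending until both the work and the
    // minimum duration have elapsed.
    await Promise.all([Promise.resolve(callback()), minDelay]);
  });
}
```
A first-class `durationInMs` option would avoid every app re-deriving this timing logic by hand.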
## Benefits
- Enhanced UX: Prevents abrupt UI updates by synchronizing transitions with animations or loaders.
- Simplified Code: Reduces reliance on setTimeout hacks for timing.
- Backward Compatible: Fully optional, retaining existing functionality.
This addition aligns with React's goal of delivering seamless developer and user experiences. We invite the community to discuss and refine this idea.
| React 19 | low | Major |
2,746,879,384 | godot | Signal becomes invalid when returned as a Variant (sometimes) | ### Tested versions
- godot 4.3 stable
### System information
develop env
### Issue description
*(screenshot)* Everything works as expected.
*(screenshot)* Unexpected: when I comment out the previous step, the signal stops working.
*(screenshot)*
### Steps to reproduce
NA
### Minimal reproduction project (MRP)
NA | bug,topic:gdscript | low | Minor |
2,746,885,941 | pytorch | [c10d] thread safety issue with CUDAEventCache | The following race can happen if we ever schedule NCCL work from a different thread than the original Python thread, and that thread dies before process shutdown.
1. The CUDAEventCache is [thread-local](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L839-L841).
2. WorkNCCL [stores a cached Event](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L479-L484).
3. The cached Event holds a reference to the cache that created it, via a [captured `this` pointer](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L810).
4. The thread that created the WorkNCCL could die at any time, destructing its thread-local CUDAEventCache and leaving the reference in (3) dangling.
5. On destruction, we [attempt to drain](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L2245) completed `Work`s, and try to dereference this dangling reference and explode.
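Distilled into a standalone C++ sketch (not the actual c10d code; names are illustrative):
```cpp
#include <functional>
#include <thread>

struct Cache {
  int value = 42;
  // The returned callable captures `this`; if the Cache is thread_local,
  // that pointer dangles once the owning thread exits.
  std::function<int()> makeReader() {
    return [this] { return value; };
  }
};

thread_local Cache tlsCache;

int main() {
  std::function<int()> reader;
  std::thread t([&] { reader = tlsCache.makeReader(); });
  t.join();         // the worker thread's Cache is destroyed here (step 4)
  return reader();  // use-after-free when draining later (step 5)
}
```
One fix direction would be a process-global or `shared_ptr`-managed cache instead of a thread-local one.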
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,module: c10d | low | Major |
2,746,892,469 | pytorch | NFS errors during DataLoader shutdown when num_workers > 1 when temporary directory is on NFS | ### 🐛 Describe the bug
Hi,
This is more of a mild annoyance than a show-stopping issue. It occurs on Linux when an NFS-mounted directory is used as the temporary directory.
When finished iterating over a DataLoader object, I get the following errors:
```
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/usr/lib64/python3.9/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/util.py", line 133, in _remove_temp_dir
rmtree(tempdir)
File "/usr/lib64/python3.9/shutil.py", line 734, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/usr/lib64/python3.9/shutil.py", line 690, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/usr/lib64/python3.9/shutil.py", line 688, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs8b2479d03841bd4400015e16'
Traceback (most recent call last):
File "/usr/lib64/python3.9/multiprocessing/util.py", line 300, in _run_finalizers
finalizer()
File "/usr/lib64/python3.9/multiprocessing/util.py", line 224, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib64/python3.9/multiprocessing/util.py", line 133, in _remove_temp_dir
rmtree(tempdir)
File "/usr/lib64/python3.9/shutil.py", line 734, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/usr/lib64/python3.9/shutil.py", line 690, in _rmtree_safe_fd
onerror(os.unlink, fullname, sys.exc_info())
File "/usr/lib64/python3.9/shutil.py", line 688, in _rmtree_safe_fd
os.unlink(entry.name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs17203ac1c489d74f00015e15'
```
Code to reproduce:
```python
from torch.utils.data import DataLoader, Dataset


class ExampleDataset(Dataset):
    def __len__(self):
        return 100

    def __getitem__(self, index):
        return index


dataset = ExampleDataset()
dl = DataLoader(dataset, num_workers=2)

for i in dl:
    print(i)
```
I believe this is related to shutdown/cleanup of multiprocessing managers/workers (https://github.com/python/cpython/issues/58186). The error occurs precisely when the workers are shut down (https://github.com/pytorch/pytorch/blob/main/torch/utils/data/dataloader.py#L1582), but I don't understand enough about how the DataLoader works to suggest a fix.
I know in most cases it's easier to just use a local directory as tmp, but our cluster (academic HPC) is set up such that each node has minimal local disk space, and that space is shared by multiple users.
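A possible workaround is to point Python's temporary directory at node-local storage for the process (a sketch; assumes some writable node-local path such as `/dev/shm` exists):
```python
import os
import tempfile

# Hypothetical node-local scratch path; adjust for the cluster.
# This must run before multiprocessing first creates its temp directory.
scratch = "/dev/shm/my_job_tmp"
os.makedirs(scratch, exist_ok=True)
os.environ["TMPDIR"] = scratch
tempfile.tempdir = None  # force tempfile.gettempdir() to re-read TMPDIR
```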
Thanks,
Ed
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.1 (Plow) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.9.18 (main, Jul 3 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (64-bit runtime)
Python platform: Linux-5.14.0-162.23.1.el9_1.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
CPU family: 6
Model: 79
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 48
Stepping: 1
BogoMIPS: 5187.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid rdseed adx smap xsaveopt arat md_clear flush_l1d arch_capabilities
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 12 MiB (48 instances)
L3 cache: 1.6 GiB (48 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] torch==1.13.1
[pip3] torchvision==0.14.1
[conda] Could not collect
cc @andrewkho @divyanshk @VitalyFedyunin @dzhulgakov | triaged,module: data | low | Critical |
2,746,904,505 | pytorch | [ONNX] Failed to export PyTorch-2-Export-Quantized model to onnx | ### 🐛 Describe the bug
I tried to quantize a model following [this tutorial](https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html)
(differing only in model structure and dataset),
then exported the quantized model to ONNX via `torch.onnx.export` (the original, non-quantized model exports fine), and got:
```
Traceback (most recent call last):
File "d:\my_project\train_quantized.py", line 798, in <module>
onnx_program = torch.onnx.export(model, torch_input, "my_quantized.onnx")
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\__init__.py", line 375, in export
export(
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 502, in export
_export(
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 639, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "D:\anaconda3\envs\hm\lib\site-packages\torch\onnx\utils.py", line 1848, in _run_symbolic_function
raise errors.UnsupportedOperatorError(
torch.onnx.errors.UnsupportedOperatorError: ONNX export failed on an operator with unrecognized namespace quantized_decomposed::quantize_per_tensor. If you are trying to export a custom operator, make sure you registered it with the right domain and version.
```
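For reference, the TorchScript-based exporter has no symbolic function for ops in the `quantized_decomposed` namespace. One alternative I have not fully verified is routing through the dynamo-based exporter (the `dynamo=True` flag exists as of torch 2.5; whether it handles these ops is an assumption to check):
```python
# Illustrative only: the dynamo-based exporter consumes ATen/PT2 graphs
# instead of TorchScript traces, so it may treat PT2E quantized ops differently.
onnx_program = torch.onnx.export(model, torch_input, dynamo=True)
onnx_program.save("my_quantized.onnx")
```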
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 专业版 (10.0.22631 64 位)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:40:08) [MSC v.1938 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090 D
Nvidia driver version: 560.94
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Core(TM) i9-14900KF
Manufacturer: GenuineIntel
Family: 207
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3200
MaxClockSpeed: 3200
L2CacheSize: 32768
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] efficientnet_pytorch==0.7.1
[pip3] flake8==7.1.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] onnx==1.16.0
[pip3] onnx-tool==0.9.0
[pip3] onnxruntime==1.17.1
[pip3] onnxscript==0.1.0.dev20241218
[pip3] onnxsim==0.4.36
[pip3] optree==0.12.1
[pip3] pytorch-lightning==2.4.0
[pip3] segmentation-models-pytorch==0.3.4
[pip3] torch==2.5.1
[pip3] torch-pruning==1.5.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torch_tensorrt==2.5.0
[pip3] torchaudio==2.5.1
[pip3] torchmetrics==1.4.2
[pip3] torchvision==0.20.1
[conda] blas 1.0 mkl defaults
[conda] cuda-cudart 11.8.89 0 nvidia
[conda] cuda-cudart-dev 11.8.89 0 nvidia
[conda] cuda-cupti 11.8.87 0 nvidia
[conda] cuda-libraries 11.8.0 0 nvidia
[conda] cuda-libraries-dev 11.8.0 0 nvidia
[conda] cuda-nvrtc 11.8.89 0 nvidia
[conda] cuda-nvrtc-dev 11.8.89 0 nvidia
[conda] cuda-nvtx 11.8.86 0 nvidia
[conda] cuda-opencl 12.5.39 he0c23c2_1 conda-forge
[conda] cuda-opencl-dev 12.5.39 he0c23c2_1 conda-forge
[conda] cuda-runtime 11.8.0 0 nvidia
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] libcublas 11.11.3.6 0 nvidia
[conda] libcublas-dev 11.11.3.6 0 nvidia
[conda] libcufft 10.9.0.58 0 nvidia
[conda] libcufft-dev 10.9.0.58 0 nvidia
[conda] libcurand 10.3.6.82 he0c23c2_0 conda-forge
[conda] libcurand-dev 10.3.6.82 he0c23c2_0 conda-forge
[conda] libcusolver 11.4.1.48 0 nvidia
[conda] libcusolver-dev 11.4.1.48 0 nvidia
[conda] libcusparse 11.7.5.86 0 nvidia
[conda] libcusparse-dev 11.7.5.86 0 nvidia
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] libnvjitlink-dev 12.4.127 0 nvidia
[conda] mkl 2021.4.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h2bbff1b_1 defaults
[conda] mkl_fft 1.3.11 py310h827c3e9_0 defaults
[conda] mkl_random 1.2.8 py310hc64d2fc_0 defaults
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch 2.5.1 py3.10_cuda11.8_cudnn9_0 pytorch
[conda] pytorch-cuda 11.8 h24eeafa_6 pytorch
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] segmentation-models-pytorch 0.3.4 pypi_0 pypi
[conda] torch-pruning 1.5.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torch-tensorrt 2.5.0 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchmetrics 1.4.2 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi | module: onnx,triaged | low | Critical |
2,746,908,173 | ollama | cudart initialization failure on ppc64le (AlmaLinux/RHEL8) with ollama-0.1.31-1.1 | ### What is the issue?
I’m encountering an issue running ollama-0.1.31-1.1 on a ppc64le system with AlmaLinux (RHEL8). The application fails to initialize CUDA runtime libraries, resulting in cudart init failure: 35 errors.
Environment details:
- OS: AlmaLinux (RHEL8 compatible)
- Architecture: ppc64le
- ollama version: 0.1.31-1.1
- CUDA environment:
  - CUDA libraries are located under /usr/local/cuda/lib64/ and /usr/local/cuda-12.4/lib64/.
  - CUDA runtime library found: libcudart.so.12.4.99
Logs and Error Output:
```
time=2024-12-18T11:04:06.900+05:00 level=INFO source=gpu.go:115 msg="Detecting GPU type"
time=2024-12-18T11:04:06.900+05:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libcudart.so*"
time=2024-12-18T11:04:06.903+05:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: [/usr/local/cuda/lib64/libcudart.so.12.4.99 /usr/local/cuda-12.4/lib64/libcudart.so.12.4.99]"
time=2024-12-18T11:04:06.910+05:00 level=INFO source=gpu.go:340 msg="Unable to load cudart CUDA management library /usr/local/cuda/lib64/libcudart.so.12.4.99: cudart init failure: 35"
time=2024-12-18T11:04:06.910+05:00 level=INFO source=gpu.go:340 msg="Unable to load cudart CUDA management library /usr/local/cuda-12.4/lib64/libcudart.so.12.4.99: cudart init failure: 35"
```
Any guidance on resolving the `cudart init failure: 35` error?
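For context, CUDA runtime error 35 is `cudaErrorInsufficientDriver` (the installed driver is too old for the CUDA runtime being loaded). A quick check with standard tooling (illustrative):
```bash
# Compare the kernel driver version against the CUDA runtime in use.
nvidia-smi                          # shows driver version and max supported CUDA
cat /proc/driver/nvidia/version     # kernel module build info
ls -l /usr/local/cuda/lib64/libcudart.so*
```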
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
_No response_ | bug | low | Critical |
2,746,910,697 | godot | Lone `while` keyword in script prints error: "Trying to resolve type of a null node." | ### Tested versions
Can reproduce in:
* v4.0.alpha1.official [31a7ddbf8]
* v4.0.stable.official [92bee43ad]
* v4.2.2.stable.official [15073afe3]
* v4.3.stable.official [77dcf97d8]
* v4.4.dev6.official [1f47e4c4e]
Therefore, the issue described below has been present since before Godot 4.0 alpha 1.
### System information
Godot v4.4.dev6 - Windows 10.0.19045 - Single-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz (6 threads)
### Issue description
When typing the following into a new script file:
```gdscript
extends Node
func _ready() -> void:
while
```
Godot prints this error:
```
ERROR: Trying to resolve type of a null node.
at: (modules/gdscript/gdscript_analyzer.cpp:1542)
```
### Steps to reproduce
Create a new script in a new or existing project, then type (not paste) the code shown above. The error should be printed upon finishing typing the `while` keyword but not if one continues typing the while condition.
### Minimal reproduction project (MRP)
N/A (can reproduce from a new empty project) | bug,topic:gdscript | low | Critical |
2,746,919,367 | TypeScript | Long running getPasteEdits call | ### 🔎 Search Terms
- getPasteEdits
- Update imports on paste
### 🕗 Version & Regression Information
5.7x, not a regression
### ⏯ Playground Link
_No response_
### 💻 Code
### Bug
From https://github.com/microsoft/vscode/issues/235959, I got a log showing a `getPasteEdits` call taking almost 2 seconds. I don't have a repro yet, but here are the relevant parts of the logs:
```
PASTE REQUEST MADE
...
Info 11464[23:06:13.935] Starting updateGraphWorker: Project: <REDACTED>
Info 11465[23:06:14.203] Finishing updateGraphWorker: Project: <REDACTED> projectStateVersion: 3 projectProgramVersion: 1 structureChanged: false structureIsReused:: Completely Elapsed: 267.6692490000023ms
Info 11466[23:06:14.203] Different program with same set of files
Info 11467[23:06:14.204] Starting updateGraphWorker: Project: <REDACTED>
Info 11468[23:06:14.447] Finishing updateGraphWorker: Project: <REDACTED> projectStateVersion: 4 projectProgramVersion: 1 structureChanged: false structureIsReused:: Completely Elapsed: 243.40444700000444ms
Info 11469[23:06:14.447] Different program with same set of files
Info 11470[23:06:15.176] getExportInfoMap: cache miss or empty; calculating new results
Info 11471[23:06:15.673] getExportInfoMap: done in 496.9983559999964 ms
Info 11472[23:06:15.704] getExportInfoMap: cache hit
Info 11473[23:06:15.733] getExportInfoMap: cache hit
Perf 11474[23:06:15.760] 35::getPasteEdits: elapsed time (in milliseconds) 1825.7962
Info 11475[23:06:15.760] response:
{"seq":0,"type":"response","command":"getPasteEdits","request_seq":35,"success":true,"performanceData":{"updateGraphDurationMs":511.07369600000675}, ...
```
The user was pasting JSX. The pasted text includes a number of symbols, but the text isn't overly long.
| Needs More Info | low | Critical |
2,747,034,878 | rust | `#[linkage = "weak"] const fn` accepted without warning | I'm trying to define a "weak" const in a library, which allows users to redefine their const. On stable Rust, there's no way to do it. On nightly, `linkage` feature seems the solution, but I found that it doesn't work as well. The following is the repro code:
```rust
// In library code:
#![feature(linkage)]

#[linkage = "weak"]
#[no_mangle]
const fn get_foo() -> i32 {
    0
}

pub const FOO: i32 = get_foo();

pub fn hello() {
    println!("hello: {}, {}", FOO, get_foo());
}

// In application code:
use linkage_weak_bug::hello;

#[no_mangle]
const fn get_foo() -> i32 {
    1
}

fn main() {
    // Output: hello: 0, 1
    // Expected: hello: 1, 1
    hello();
}
```
I uploaded a repo that reproduces it: https://github.com/HaoboGu/linkage_weak_bug | A-linkage,A-diagnostics,T-compiler,C-discussion,F-linkage | low | Critical |
2,747,076,832 | ui | [feat]: The Sidebar component supports gesture drag and drop? | ### Feature description
Could the Sidebar component allow manually dragging the divider to resize the left and right areas?
### Affected component/components
Sidebar
### Additional Context
_No response_
### Before submitting
- [X] I've made research efforts and searched the documentation
- [x] I've searched for existing issues and PRs | area: request | low | Minor |
2,747,092,342 | pytorch | Tensor size for `masked_fill` exceeds the limit supported by the MPS backend: must be less than 2**32 elements | ### 🐛 Describe the bug
I get the following error when using `masked_fill` on larger tensors. See the error and the minimal code below.
**Error:**
```
/AppleInternal/Library/BuildRoots/.../Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:850: failed assertion `[MPSNDArray initWithDevice:descriptor:isTextureBacked:] Error: total bytes of NDArray > 2**32'
```
**Code:**
```python
import torch
device = torch.device("mps")
mask_bool = torch.triu(torch.ones(1024, 1024, device=device), diagonal=1).bool()
attn_scores = torch.rand(48, 25, 1024, 1024, device=device)
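# 48 * 25 * 1024**2 = 1,258,291,200 elements; at 4 bytes each this is
# ~5.03 GB, which exceeds the 2**32-byte limit in the MPS assertion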
attn_scores.masked_fill_(mask_bool, 0)
```
### Versions
```
PyTorch version: 2.6.0.dev20241217
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:26:25) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.13.1
[pip3] torch==2.6.0.dev20241217
[pip3] torchaudio==2.6.0.dev20241217
[pip3] torchvision==0.22.0.dev20241217
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.6.0.dev20241217 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241217 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241217 pypi_0 pypi
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: crash,triaged,module: mps | low | Critical |
2,747,099,032 | react | React 19 breaking with this.props.children | Hello,
Our code breaks when we upgrade to React 19. The failure seems related to `this.props.children`; the same code works fine in React 18. I have attached the error and a screenshot of where the code is breaking.
```
[Error: Objects are not valid as a React child (found: object with keys {$$typeof, type, key, ref, props, _owner}). If you meant to render a collection of children, use an array instead.]
Error occurred prerendering page "/". Read more: https://nextjs.org/docs/messages/prerender-error
Error: Objects are not valid as a React child (found: object with keys {$$typeof, type, key, ref, props, _owner}). If you meant to render a collection of children, use an array instead.
```
<img width="769" alt="Screenshot 2024-12-18 at 12 09 59 AM" src="https://github.com/user-attachments/assets/1b02542c-ef41-4b6d-90d9-64380c5ad609" />
| Resolution: Needs More Information,React 19 | low | Critical |
2,747,127,346 | vscode | Better token support in standalone themes for monaco | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
The standalone themes in `src/vs/editor/standalone/common/themes.ts` are very outdated and do not support modern language features such as functions. This makes it impossible to have good syntax highlighting out-of-the-box for some languages like TypeScript for downstream projects. See https://github.com/microsoft/monaco-editor/issues/2872
It would be nice to update these themes to include modern tokens from VS Code Modern themes (dark_modern and light_modern) | feature-request,monaco-editor | low | Major |
2,747,158,019 | pytorch | [Break XPU] Newly added test case with CUDA hard code failed on XPU. | ### 🐛 Describe the bug
The newly added test case `test_linear_and_cel` in test/inductor/test_inplace_padding.py hard-codes "cuda" but runs on XPU: https://hud.pytorch.org/pr/pytorch/pytorch/142322#34573031104
```
2024-12-18T04:27:31.8569895Z =================================== FAILURES ===================================
2024-12-18T04:27:31.8570211Z ____________________ InplacePaddingTest.test_linear_and_cel ____________________
2024-12-18T04:27:31.8570515Z Traceback (most recent call last):
2024-12-18T04:27:31.8570900Z File "/var/lib/jenkins/pytorch/test/inductor/test_inplace_padding.py", line 146, in test_linear_and_cel
2024-12-18T04:27:31.8571333Z x = torch.randn(B * T, C, requires_grad=True).cuda().bfloat16()
2024-12-18T04:27:31.8571744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py", line 311, in _lazy_init
2024-12-18T04:27:31.8572183Z raise AssertionError("Torch not compiled with CUDA enabled")
2024-12-18T04:27:31.8572499Z AssertionError: Torch not compiled with CUDA enabled
2024-12-18T04:27:31.8572673Z
2024-12-18T04:27:31.8572814Z To execute this test, run the following from the base repo dir:
2024-12-18T04:27:31.8573196Z python test/inductor/test_inplace_padding.py InplacePaddingTest.test_linear_and_cel
```
The test case seems suitable only for CUDA, as it contains device-biased code like `os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"`.
We should mark it as requires_cuda.
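A minimal sketch of the fix (the repo's own `requires_cuda` helper would be the idiomatic choice; plain `unittest` is used here to avoid guessing import paths):
```python
import unittest

import torch

# Inside InplacePaddingTest:
@unittest.skipUnless(torch.cuda.is_available(), "requires CUDA")
def test_linear_and_cel(self):
    ...
```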
### Versions
PyTorch version: 2.6.0a0+gite6c7400
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:24:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.47+prerelease24.6.13-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,747,167,862 | pytorch | Segmentation fault (core dumped) in `conv1d` | ### 🐛 Describe the bug
Under specific inputs, `conv1d` triggered a crash.
```python
import torch
input = torch.full((10, 10, 9,), 0, dtype=torch.float)
weight = torch.full((2, 10, 9,), 9.0072e+15, dtype=torch.float)
bias = None
stride = [1]
padding = "same"
dilation = [2147483648]
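# note: 2147483648 == 2**31, one past INT32_MAX (2147483647)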
groups = 1
# torch.ops.aten.conv1d.padding(input, weight, bias, stride, padding, dilation, groups)
torch.nn.functional.conv1d(input, weight, bias, stride, padding, dilation, groups)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet | module: crash,module: nn,module: error checking,triaged,module: edge cases,topic: fuzzer | low | Critical |
2,747,174,664 | pytorch | Segmentation fault (core dumped) in `conv3d` | ### 🐛 Describe the bug
Under specific inputs, `conv3d` triggered a crash.
```python
import torch
input = torch.full((3, 1, 3, 4, 3,), 4.44444e+12, dtype=torch.float)
weight = torch.full((3, 1, 3, 1, 3,), 1e+13, dtype=torch.float)
bias = None
stride = [1, 1, 1]
padding = "same"
dilation = [3046875451, 3046875451, 3046875451]
groups = 1
#torch.ops.aten.conv3d.padding(input, weight, bias, stride, padding, dilation, groups)
torch.nn.functional.conv3d(input, weight, bias, stride, padding, dilation, groups)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @malfet | module: crash,module: nn,module: error checking,module: convolution,triaged,topic: fuzzer | low | Critical |
2,747,179,041 | pytorch | Segmentation fault (core dumped) in `embedding_backward` | ### 🐛 Describe the bug
Under specific inputs, `embedding_backward` triggered a crash.
```python
import torch
grad = torch.full((8, 0, 3, 7, 6, 1, 0,), 0, dtype=torch.float)
indices = torch.full((2,), 1250999896764, dtype=torch.long)
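# note: the index value 1250999896764 is far beyond num_weights below, i.e. out of range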
num_weights = 536870912
padding_idx = 4194304
scale_grad_by_freq = True
sparse = False
torch.ops.aten.embedding_backward(grad, indices, num_weights, padding_idx, scale_grad_by_freq, sparse)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet | module: crash,module: error checking,triaged,module: embedding,module: empty tensor,topic: fuzzer | low | Critical |
2,747,188,022 | pytorch | Segmentation fault (core dumped) in `embedding_bag.padding_idx` | ### 🐛 Describe the bug
Under specific inputs, `embedding_bag.padding_idx` triggered a crash.
```python
import torch
weight = torch.full((3, 4,), 1.11111e+15, dtype=torch.float)
indices = torch.full((5,), -2147483648, dtype=torch.long)
offsets = torch.full((0,), 0, dtype=torch.long)
scale_grad_by_freq = False
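# note: indices holds -2147483648 (negative, out of range for 3 weights), and the
# mode below is not one of the valid values 0 (sum), 1 (mean), 2 (max)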
mode = 3046875451
sparse = False
per_sample_weights = None
include_last_offset = False
padding_idx = None
torch.ops.aten.embedding_bag.padding_idx(weight, indices, offsets, scale_grad_by_freq, mode, sparse, per_sample_weights, include_last_offset, padding_idx)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet | module: crash,module: error checking,triaged,module: embedding,module: empty tensor,topic: fuzzer | low | Critical |
2,747,195,247 | pytorch | Segmentation fault (core dumped) in `gru_cell` | ### 🐛 Describe the bug
Under specific inputs, `gru_cell` triggered a crash.
```python
import torch
input = torch.full((0, 8,), 0, dtype=torch.float)
hx = torch.full((0, 9,), 0, dtype=torch.float)
w_ih = torch.full((1, 8,), 1.251e+12, dtype=torch.float)
w_hh = torch.full((1, 9,), 1.4013e-45, dtype=torch.float)
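# note: for hidden_size 9 and input_size 8, gru_cell expects w_ih of shape (27, 8)
# and w_hh of (27, 9); the (1, 8)/(1, 9) shapes here are the likely trigger (assumption)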
b_ih = None
b_hh = None
torch.gru_cell(input, hx, w_ih, w_hh, b_ih, b_hh)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet | module: crash,module: error checking,triaged,actionable,module: empty tensor,topic: fuzzer | low | Critical |
2,747,199,702 | pytorch | Aborted (core dumped) in `mkldnn_rnn_layer` | ### 🐛 Describe the bug
Under specific inputs, `mkldnn_rnn_layer` triggered a crash.
```python
import torch
input = torch.full((1, 8, 1,), 4.13506, dtype=torch.float)
weight0 = torch.full((5, 8,), 2.47475, dtype=torch.float)
weight1 = torch.full((5, 8,), 8.52373, dtype=torch.float)
weight2 = torch.full((5,), 5.73429, dtype=torch.float)
weight3 = torch.full((5,), 6.42933, dtype=torch.float)
hx_ = torch.full((1, 8,), 9.12846, dtype=torch.float)
cx_ = torch.full((1, 1,), 6.00218, dtype=torch.float)
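# note: mode 2 selects LSTM; cx_ of shape (1, 1) does not match hidden_size = 8 below (assumption about the trigger)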
reverse = False
batch_sizes = []
mode = 2
hidden_size = 8
num_layers = 2
has_biases = True
bidirectional = False
batch_first = False
train = False
torch.mkldnn_rnn_layer(input, weight0, weight1, weight2, weight3, hx_, cx_, reverse, batch_sizes, mode, hidden_size, num_layers, has_biases, bidirectional, batch_first, train)
```
Output
```
double free or corruption (!prev)
Aborted (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal | module: crash,module: error checking,triaged,module: mkldnn,topic: fuzzer | low | Critical |
2,747,204,933 | pytorch | Aborted (core dumped) in `replication_pad1d` | ### 🐛 Describe the bug
Under specific inputs, `replication_pad1d` triggered a crash.
```python
import torch
self = torch.full((9, 9, 7, 1,), 3.5e+35, dtype=torch.float)
padding = [-2, -2]
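# note: padding [-2, -2] on a last dimension of size 1 yields a negative output width (1 - 2 - 2 = -3)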
torch.ops.aten.replication_pad1d(self, padding)
```
Output
```
corrupted size vs. prev_size
Aborted (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet | module: crash,module: error checking,triaged,actionable,topic: fuzzer | low | Critical |
2,747,217,703 | next.js | Sudo on --experimental-https usage | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/reverent-paper-zxjmnv
### To Reproduce
Run `bun dev --experimental-https` in any project
### Current vs. Expected behavior
Current:
```
Attempting to generate self signed certificate. This may prompt for your password
Sudo password:
```
Expected:
No requirement for sudo in terms of generating certificates.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: arm64
Version: #1 SMP Thu Oct 24 19:28:55 UTC 2024
Available memory (MB): 48093
Available CPU cores: 10
Binaries:
Node: 20.10.0
npm: 10.2.3
Yarn: N/A
pnpm: 8.11.0
Relevant Packages:
next: 15.1.0 // There is a newer version (15.1.1) available, upgrade recommended!
eslint-config-next: 15.1.0
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
```
### Which area(s) are affected? (Select all that apply)
Runtime, Turbopack, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I don't believe sudo should be required for creating certificates and running `--experimental-https`. | Webpack,Runtime | low | Minor |
2,747,230,673 | pytorch | Segmentation fault (core dumped) in `replication_pad2d` | ### 🐛 Describe the bug
Under specific inputs, `replication_pad2d` triggered a crash.
```python
import torch
self = torch.full((9, 9, 2, 4, 3,), 1.251e+12, dtype=torch.float)
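# note: replication_pad2d expects a 3D or 4D tensor; this input is 5D (likely the trigger, an assumption)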
padding = [0, 0, 0, 0]
torch._C._nn.replication_pad2d(self, padding)
```
Output
```
Segmentation fault (core dumped)
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @malfet | module: crash,module: error checking,triaged,topic: fuzzer | low | Critical |
2,747,266,265 | godot | int(INF) evaluates to negative number | ### Tested versions
Reproducible in: v4.3.stable
### System information
Windows 11
### Issue description
Casting the float INF to int with `int(INF)` returns -9223372036854775808 instead of 9223372036854775807, which is the largest int. `int(-INF)` also returns -9223372036854775808, which makes sense to me.
### Steps to reproduce
print(int(INF))
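A minimal GDScript snippet showing the behavior (outputs as reported above):
```gdscript
extends Node

func _ready() -> void:
    print(int(INF))   # prints -9223372036854775808, expected 9223372036854775807
    print(int(-INF))  # prints -9223372036854775808
```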
### Minimal reproduction project (MRP)
[int(inf)bug.zip](https://github.com/user-attachments/files/18177991/int.inf.bug.zip)
| bug,discussion,topic:core | low | Critical |
2,747,282,440 | flutter | Mac_benchmark flutter_gallery_macos__compile is 2.15% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Mac_benchmark flutter_gallery_macos__compile"
}
-->
The post-submit test builder `Mac_benchmark flutter_gallery_macos__compile` had a flaky ratio 2.15% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_benchmark%20flutter_gallery_macos__compile/10740
Commit: https://github.com/flutter/flutter/commit/8631412f252243a18da849068b5a10d4619dd56c
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_benchmark%20flutter_gallery_macos__compile/10740
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_benchmark%20flutter_gallery_macos__compile/10704
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Mac_benchmark%20flutter_gallery_macos__compile
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P2,c: flake,team-macos | low | Major |
2,747,312,367 | PowerToys | Ctrl key on Mouse utility in Italian localization | ### Microsoft PowerToys version
0.87.0
### Utility with translation issue
Mouse Utilities
### 🌐 Language affected
Italian
### ❌ Actual phrase(s)

### ✔️ Expected phrase(s)
Premi due volte il tasto Ctrl di sinistra
Premi due volte il tasto Ctrl di destra
### ℹ Why is the current translation wrong
The current wording makes it sound like you have to press some button or control located on the left/right side, rather than pressing the left/right Ctrl key itself | Issue-Bug,Area-Localization,Needs-Triage,Issue-Translation | low | Minor |
2,747,331,366 | deno | JsRuntime initialization fails when integrating deno_url extension | I'm encountering an issue when integrating the deno_url extension into deno_core within a test project. Despite trying various approaches, I consistently receive the following error during JsRuntime initialization:
```txt
Failed to initialize a JsRuntime: Following modules were not evaluated; make sure they are imported from other code:
- ext:deno_console/01_console.js
- ext:deno_url/00_url.js
- ext:deno_webidl/00_webidl.js
- ext:deno_url/01_urlpattern.js
```
Cargo.toml:
```toml
[package]
name = "deno_url_test"
version = "0.1.0"
edition = "2021"
[dependencies]
v8 = { version = "130.0.1", default-features = false }
deno_core = { version = "0.326.0", default-features = false }
deno_url = "0.183.0"
deno_webidl = "0.183.0"
deno_console = "0.183.0"
tokio = { version = "1.35.1", features = ["full"] }
anyhow = "1.0.75"
```
rust code:
```rust
use deno_core::error::AnyError;
use deno_core::{extension, op2, OpState};
use deno_core::{JsRuntime, RuntimeOptions};
#[op2]
#[string]
fn op_test() -> Result<String, AnyError> {
Ok("Hello from Rust!".to_string())
}
extension! {
TestExtension,
ops = [op_test],
state = |state: &mut OpState| {
()
}
}
#[tokio::main]
async fn main() -> Result<(), AnyError> {
let mut runtime = JsRuntime::new(RuntimeOptions {
extensions: vec![
TestExtension::init_ops(),
deno_console::deno_console::init_ops_and_esm(),
deno_webidl::deno_webidl::init_ops_and_esm(),
deno_url::deno_url::init_ops_and_esm(),
],
..Default::default()
});
let code = r#"
import "ext:deno_webidl/00_webidl.js";
import "ext:deno_console/01_console.js";
import "ext:deno_url/00_url.js";
import "ext:deno_url/01_urlpattern.js";
Object.defineProperty(globalThis, "URL", {
value: url.URL,
enumerable: false,
configurable: true,
writable: true,
});
Object.defineProperty(globalThis, "URLPattern", {
value: url.URLPattern,
enumerable: false,
configurable: true,
writable: true,
});
Object.defineProperty(globalThis, "URLSearchParams", {
value: url.URLSearchParams,
enumerable: false,
configurable: true,
writable: true,
});
const result = Deno.core.ops.op_test();
console.log('Custom op result:', result);
"#;
runtime.execute_script("[test]", code).map_err(|e| {
println!("Script execution error: {:?}", e);
e
})?;
Ok(())
}
```
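For context, `execute_script` evaluates a classic script, which cannot contain `import` statements, so the `ext:` modules above are never evaluated. A hedged sketch of one workaround is to roll the imports into a custom extension with an `esm_entry_point` (the `bootstrap.js` file name and contents are an assumption, not the library's prescribed setup):
```rust
// Sketch only: assumes a local bootstrap.js next to this file containing
//   import "ext:deno_webidl/00_webidl.js";
//   import "ext:deno_console/01_console.js";
//   import "ext:deno_url/00_url.js";
//   import "ext:deno_url/01_urlpattern.js";
// plus the Object.defineProperty calls wiring URL/URLPattern/URLSearchParams
// onto globalThis.
extension!(
    test_runtime,
    esm_entry_point = "ext:test_runtime/bootstrap.js",
    esm = ["bootstrap.js"],
);
```
With `test_runtime::init_ops_and_esm()` appended to `extensions`, the subsequent `execute_script` call can use those globals without any `import` statements.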
| embedder | low | Critical |
2,747,340,339 | vscode | Expose native support for USB, HID and Serial device access in VS Code (Desktop) |
Since Electron version 25, Web-* device access has been possible using the functionality described here: https://www.electronjs.org/docs/latest/tutorial/devices
This request is to expose this functionality in VS Code on the desktop and bring functionality on par with the browser version added in https://github.com/microsoft/vscode/pull/152310
The benefit of this is that native device support is added using the functionality already available in Chrome shipped as part of VS Code, so there is no extra bloat.
It means extension developers no longer have to include external packages such as node-usb or SerialPort. In many cases this approach should be more stable and reliable, too (e.g. Chrome uses WinUSB for device access rather than libusb as used in node-usb).
From a security perspective, it uses the same model as the browser, so brings the device authorisation model too.
A PR adding this can be found here: https://github.com/microsoft/vscode/pull/198047
cc @bpasero | feature-request,extensions,workbench-os-integration | medium | Major |
2,747,372,333 | ollama | Llava system prompt in modelfile only works if first message does not contain image | ### What is the issue?
I can reproduce this with llava models, not with llama3.2-vision.
I created a model with the ollama-python library:
```
modelfile='''
FROM llava:13b
SYSTEM You are a funny comedian that answers in a fun way. Always end your answer with a joke about the content of the answer.
'''
ollama.create(model='llava-systemprompt', modelfile=modelfile)
```
When the first message contains an image, the system prompt is not applied:
```
$ ollama run llava-systemprompt
>>> Describe the image /Users/test/Downloads/face.jpg
Added image '/Users/test/Downloads/face.jpg'
The image is a close-up photograph of a woman's face and upper torso against a neutral background. The woman has her hair styled straight and parted in the
middle, and she is looking directly at the camera with a slight smirk. Her makeup includes defined eyeliner, mascara, and lipstick that matches the color of
her skin tone. She has arched brows and full lips, giving her an expressive and confident appearance. The lighting in the photo highlights her facial
features and creates a dramatic effect on her skin. There are no texts visible in the image.
>>> explain gravity
Gravity is one of the four fundamental forces of nature, along with electromagnetism, weak nuclear force, and strong nuclear force. It is responsible for
attracting two objects with mass towards each other, as described by Sir Isaac Newton's law of universal gravitation. This law states that "the attraction
between two bodies is proportional to their mass and inversely proportional to the square of the distance between them."
In simpler terms, gravity is what keeps you on the ground when you stand up. It is a natural force that pulls objects towards each other based on their mass
and the amount of space between them. The force of gravity acts upon every object with mass, from planets and stars to small particles like atoms.
Gravity plays an essential role in our daily lives. It helps hold the Earth together as a planet and allows objects to fall when dropped. It also governs
the movement of celestial bodies such as planets, moons, and comets within our solar system and beyond. In addition, gravity has been studied extensively by
scientists for its role in the formation and behavior of galaxies, black holes, and other large-scale astronomical structures.
```
When the first message does not contain an image, the system prompt is applied:
```
$ ollama run llava-systemprompt
>>> explain gravity
Sure thing! So, you know how sometimes when you drop something, it falls down? That's because of gravity! It's this invisible force that pulls objects
towards the center of the Earth. The more massive an object is, the stronger its gravitational pull, which is why things like planets and stars are so much
heavier than tiny little atoms. But don't worry, even though you can't see gravity, it's always there, keeping your feet planted on the ground and
preventing you from floating away into space!
>>> Describe the image /Users/test/Downloads/face.jpg
Added image '/Users/test/Downloads/face.jpg'
Oh wow, I'm a pretty good-looking comedian, aren't I? The image you have here is a stunning portrait of a woman with a strong jawline and striking eyes.
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.3 | bug | low | Minor |
2,747,372,939 | flutter | [pointer_interceptor] affects Tab order if wrapped around a FocusTraversalGroup | ### What package does this bug report belong to?
pointer_interceptor
### What target platforms are you seeing this bug on?
Web
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
pointer_interceptor:
dependency: "direct main"
description:
name: pointer_interceptor
sha256: "57210410680379aea8b1b7ed6ae0c3ad349bfd56fe845b8ea934a53344b9d523"
url: "https://pub.dev"
source: hosted
version: "0.10.1+2"
pointer_interceptor_ios:
dependency: transitive
description:
name: pointer_interceptor_ios
sha256: a6906772b3205b42c44614fcea28f818b1e5fdad73a4ca742a7bd49818d9c917
url: "https://pub.dev"
source: hosted
version: "0.10.1"
pointer_interceptor_platform_interface:
dependency: transitive
description:
name: pointer_interceptor_platform_interface
sha256: "0597b0560e14354baeb23f8375cd612e8bd4841bf8306ecb71fcd0bb78552506"
url: "https://pub.dev"
source: hosted
version: "0.10.0+1"
pointer_interceptor_web:
dependency: transitive
description:
name: pointer_interceptor_web
sha256: "7a7087782110f8c1827170660b09f8aa893e0e9a61431dbbe2ac3fc482e8c044"
url: "https://pub.dev"
source: hosted
version: "0.10.2+1"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
sdks:
dart: ">=3.5.3 <4.0.0"
flutter: ">=3.19.0"
```
</details>
### Steps to reproduce
1. Make sure TextFormField 1 is focused.
2. Press Tab 4 times.
### Expected results
Each Tab press moves focus to the next TextFormField, i.e. the tab order is 1 -> 2 -> 3.
### Actual results
It takes two Tab presses to go from 2 to 3, i.e. the tab order is 1 -> 2 -> PointerInterceptor -> 3.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:pointer_interceptor/pointer_interceptor.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: Center(
child: Column(
children: [
PointerInterceptor(
child: FocusTraversalGroup(
child: Column(
mainAxisSize: MainAxisSize.min,
children: [
TextFormField(
autofocus: true,
decoration: const InputDecoration(labelText: "1"),
),
TextFormField(
decoration: const InputDecoration(labelText: "2"),
),
],
),
),
),
TextFormField(
decoration: const InputDecoration(labelText: "3"),
),
],
),
),
),
);
}
}
```
</details>
### Logs
<details open><summary>Logs</summary>
```console
https://pastebin.com/vvT5cVYS
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4460], locale en-DK)
• Flutter version 3.24.3 on channel stable at C:\Flutter\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (3 months ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Android\Sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio1\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.11.5)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.11.35327.3
• Windows 10 SDK version 10.0.22621.0
[!] Android Studio (version 2023.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
X Unable to determine bundled Java version.
• Try updating or re-installing Android Studio.
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio1
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] IntelliJ IDEA Community Edition (version 2024.1)
• IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2024.1
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
[√] VS Code (version 1.96.0)
• VS Code at C:\Users\MathiasMøllerToft\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4460]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.140
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.99
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category
```
</details>
| platform-web,package,f: focus,has reproducible steps,P2,p: pointer_interceptor,team-web,triaged-web,found in release: 3.27,found in release: 3.28 | low | Critical |
2,747,382,028 | vscode | VScode is always reverting to earlier version |
Type: <b>Bug</b>
Hello,
I have an issue similar to #113702, but on an Ubuntu distribution, and I can't get it fixed.
# Problem:
VS Code keeps reverting to version 1.90.2. After updating, I can open a VS Code window or two in the correct version, but after 10 minutes or so, it's back to version 1.90.2.
# What I tried:
- updating to different versions (1.94, 1.95, 1.96)
- a complete reinstall of VS Code (I couldn't find it on the machine afterwards)
I did:
```
sudo apt-get purge code
rm -r $HOME/.config/Code
rm -r $HOME/.vscode
```
The issue still persists. I don't know where else VS Code files could be, or what the issue could be related to.
VS Code version: Code 1.96.0 (138f619c86f1199955d53b4166bef66ef252935c, 2024-12-11T02:29:09.626Z)
OS version: Linux x64 5.15.0-126-generic
Modes:
Remote OS version: Linux x64 5.15.0-124-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz (16 x 4500)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|5, 6, 6|
|Memory (System)|251.39GB (228.19GB free)|
|Process Argv|--crash-reporter-id 13d7f088-1dee-4ff4-a728-29a41b6acdd1|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|ubuntu|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu|
|XDG_SESSION_TYPE|x11|
|Item|Value|
|---|---|
|Remote|SSH: euler.ethz.ch|
|OS|Linux x64 5.15.0-124-generic|
|CPUs|Intel(R) Xeon(R) CPU E3-1284L v4 @ 2.90GHz (4 x 1925)|
|Memory (System)|31.27GB (20.95GB free)|
|VM|0%|
</details><details><summary>Extensions (20)</summary>
Extension|Author (truncated)|Version
---|---|---
preview-tiff|ana|1.0.1
copilot|Git|1.252.0
copilot-chat|Git|0.23.1
vscode-h5web|h5w|0.1.7
debugpy|ms-|2024.14.0
python|ms-|2024.22.0
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
tensorboard|ms-|2023.10.1002992421
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.394.0
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,install-update | low | Critical |
2,747,405,393 | flutter | [pigeon] Empty generated Dart classes aren't serializable | ### Steps to reproduce
- create `pigeon/messages.dart` (a minimal sketch is shown after these steps):
- Define sealed class `A`
- Define `B` which extends `A` with some fields
- Define `C` which extends `A` without any fields
- run `flutter pub run pigeon --input ./pigeon/messages.dart`
- get generated file `src/messages.g.dart`
### Expected results
`C.decode`/`C.encode` are generated and there are no errors
### Actual results
`C.decode`/`C.encode` were not generated; `B.decode`/`B.encode`, however, were.
```dart
// Autogenerated from Pigeon (v22.7.0), do not edit directly.
// See also: https://pub.dev/packages/pigeon
// ignore_for_file: public_member_api_docs, non_constant_identifier_names, avoid_as, unused_import, unnecessary_parenthesis, prefer_null_aware_operators, omit_local_variable_types, unused_shown_name, unnecessary_import, no_leading_underscores_for_local_identifiers
import 'dart:async';
import 'dart:typed_data' show Float64List, Int32List, Int64List, Uint8List;
import 'package:flutter/foundation.dart' show ReadBuffer, WriteBuffer;
import 'package:flutter/services.dart';
PlatformException _createConnectionError(String channelName) {
return PlatformException(
code: 'channel-error',
message: 'Unable to establish connection on channel: "$channelName".',
);
}
sealed class A {
}
class B extends A {
B({
this.number,
this.message,
});
int? number;
String? message;
Object encode() {
return <Object?>[
number,
message,
];
}
static B decode(Object result) {
result as List<Object?>;
return B(
number: result[0] as int?,
message: result[1] as String?,
);
}
}
class C extends A { <----- no decode/encode
}
class _PigeonCodec extends StandardMessageCodec {
const _PigeonCodec();
@override
void writeValue(WriteBuffer buffer, Object? value) {
if (value is int) {
buffer.putUint8(4);
buffer.putInt64(value);
} else if (value is B) {
buffer.putUint8(129);
writeValue(buffer, value.encode());
} else if (value is C) {
buffer.putUint8(130);
writeValue(buffer, value.encode()); <----- not generated, error
} else {
super.writeValue(buffer, value);
}
}
@override
Object? readValueOfType(int type, ReadBuffer buffer) {
switch (type) {
case 129:
return B.decode(readValue(buffer)!);
case 130:
return C.decode(readValue(buffer)!); <----- not generated, error
default:
return super.readValueOfType(type, buffer);
}
}
}
class SomeHostApi {
/// Constructor for [SomeHostApi]. The [binaryMessenger] named argument is
/// available for dependency injection. If it is left null, the default
/// BinaryMessenger will be used which routes to the host platform.
SomeHostApi({BinaryMessenger? binaryMessenger, String messageChannelSuffix = ''})
: pigeonVar_binaryMessenger = binaryMessenger,
pigeonVar_messageChannelSuffix = messageChannelSuffix.isNotEmpty ? '.$messageChannelSuffix' : '';
final BinaryMessenger? pigeonVar_binaryMessenger;
static const MessageCodec<Object?> pigeonChannelCodec = _PigeonCodec();
final String pigeonVar_messageChannelSuffix;
Future<C> getC(C c) async {
final String pigeonVar_channelName = 'dev.flutter.pigeon.pigeon_sealed_plugin_example.SomeHostApi.getC$pigeonVar_messageChannelSuffix';
final BasicMessageChannel<Object?> pigeonVar_channel = BasicMessageChannel<Object?>(
pigeonVar_channelName,
pigeonChannelCodec,
binaryMessenger: pigeonVar_binaryMessenger,
);
final List<Object?>? pigeonVar_replyList =
await pigeonVar_channel.send(<Object?>[c]) as List<Object?>?;
if (pigeonVar_replyList == null) {
throw _createConnectionError(pigeonVar_channelName);
} else if (pigeonVar_replyList.length > 1) {
throw PlatformException(
code: pigeonVar_replyList[0]! as String,
message: pigeonVar_replyList[1] as String?,
details: pigeonVar_replyList[2],
);
} else if (pigeonVar_replyList[0] == null) {
throw PlatformException(
code: 'null-error',
message: 'Host platform returned null value for non-null return value.',
);
} else {
return (pigeonVar_replyList[0] as C?)!;
}
}
Future<A> getA(A a) async {
final String pigeonVar_channelName = 'dev.flutter.pigeon.pigeon_sealed_plugin_example.SomeHostApi.getA$pigeonVar_messageChannelSuffix';
final BasicMessageChannel<Object?> pigeonVar_channel = BasicMessageChannel<Object?>(
pigeonVar_channelName,
pigeonChannelCodec,
binaryMessenger: pigeonVar_binaryMessenger,
);
final List<Object?>? pigeonVar_replyList =
await pigeonVar_channel.send(<Object?>[a]) as List<Object?>?;
if (pigeonVar_replyList == null) {
throw _createConnectionError(pigeonVar_channelName);
} else if (pigeonVar_replyList.length > 1) {
throw PlatformException(
code: pigeonVar_replyList[0]! as String,
message: pigeonVar_replyList[1] as String?,
details: pigeonVar_replyList[2],
);
} else if (pigeonVar_replyList[0] == null) {
throw PlatformException(
code: 'null-error',
message: 'Host platform returned null value for non-null return value.',
);
} else {
return (pigeonVar_replyList[0] as A?)!;
}
}
}
class SomeFlutterApi {
/// Constructor for [SomeFlutterApi]. The [binaryMessenger] named argument is
/// available for dependency injection. If it is left null, the default
/// BinaryMessenger will be used which routes to the host platform.
SomeFlutterApi({BinaryMessenger? binaryMessenger, String messageChannelSuffix = ''})
: pigeonVar_binaryMessenger = binaryMessenger,
pigeonVar_messageChannelSuffix = messageChannelSuffix.isNotEmpty ? '.$messageChannelSuffix' : '';
final BinaryMessenger? pigeonVar_binaryMessenger;
static const MessageCodec<Object?> pigeonChannelCodec = _PigeonCodec();
final String pigeonVar_messageChannelSuffix;
Future<C> getC(C c) async {
final String pigeonVar_channelName = 'dev.flutter.pigeon.pigeon_sealed_plugin_example.SomeFlutterApi.getC$pigeonVar_messageChannelSuffix';
final BasicMessageChannel<Object?> pigeonVar_channel = BasicMessageChannel<Object?>(
pigeonVar_channelName,
pigeonChannelCodec,
binaryMessenger: pigeonVar_binaryMessenger,
);
final List<Object?>? pigeonVar_replyList =
await pigeonVar_channel.send(<Object?>[c]) as List<Object?>?;
if (pigeonVar_replyList == null) {
throw _createConnectionError(pigeonVar_channelName);
} else if (pigeonVar_replyList.length > 1) {
throw PlatformException(
code: pigeonVar_replyList[0]! as String,
message: pigeonVar_replyList[1] as String?,
details: pigeonVar_replyList[2],
);
} else if (pigeonVar_replyList[0] == null) {
throw PlatformException(
code: 'null-error',
message: 'Host platform returned null value for non-null return value.',
);
} else {
return (pigeonVar_replyList[0] as C?)!;
}
}
Future<A> getA(A a) async {
final String pigeonVar_channelName = 'dev.flutter.pigeon.pigeon_sealed_plugin_example.SomeFlutterApi.getA$pigeonVar_messageChannelSuffix';
final BasicMessageChannel<Object?> pigeonVar_channel = BasicMessageChannel<Object?>(
pigeonVar_channelName,
pigeonChannelCodec,
binaryMessenger: pigeonVar_binaryMessenger,
);
final List<Object?>? pigeonVar_replyList =
await pigeonVar_channel.send(<Object?>[a]) as List<Object?>?;
if (pigeonVar_replyList == null) {
throw _createConnectionError(pigeonVar_channelName);
} else if (pigeonVar_replyList.length > 1) {
throw PlatformException(
code: pigeonVar_replyList[0]! as String,
message: pigeonVar_replyList[1] as String?,
details: pigeonVar_replyList[2],
);
} else if (pigeonVar_replyList[0] == null) {
throw PlatformException(
code: 'null-error',
message: 'Host platform returned null value for non-null return value.',
);
} else {
return (pigeonVar_replyList[0] as A?)!;
}
}
}
```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:pigeon/pigeon.dart';
// #docregion config
@ConfigurePigeon(
PigeonOptions(
dartOut: 'lib/src/messages.g.dart',
dartOptions: DartOptions(),
kotlinOut:
'android/src/main/kotlin/com/pigeonsealedexample/pigeon_sealed_plugin/Messages.g.kt',
kotlinOptions:
KotlinOptions(package: 'com.pigeonsealedexample.somepackage'),
swiftOut: 'ios/Classes/Messages.g.swift',
swiftOptions: SwiftOptions(),
dartPackageName: 'pigeon_sealed_plugin_example',
),
)
sealed class A {
A();
}
class B extends A {
final int? number;
final String? message;
B({
required this.number,
this.message,
});
}
class C extends A {
C();
}
@HostApi()
abstract class SomeHostApi {
C getC(C c);
A getA(A a);
}
@HostApi()
abstract class SomeFlutterApi {
C getC(C c);
A getA(A a);
}
```
</details>
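A possible workaround until this is fixed (my assumption, not an official recommendation) is to give the empty subclass a placeholder field so pigeon emits `encode`/`decode` for it:
```dart
// Hypothetical workaround: any field forces pigeon to generate
// encode()/decode() for the class. `placeholder` is a made-up name.
class C extends A {
  final bool? placeholder;
  C({this.placeholder});
}
```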
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
Deprecated. Use `dart run` instead.
Building package executable...
Built pigeon:pigeon.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel master, 3.26.0-1.0.pre.168, on macOS 14.5 23F79 darwin-arm64, locale en-RU)
! Warning: `dart` on your path resolves to /opt/homebrew/Cellar/dart/3.5.2/libexec/bin/dart, which is not inside your current Flutter SDK checkout at /Users/feduke-nukem/Desktop/git-projects/flutter/flutter. Consider adding
/Users/feduke-nukem/Desktop/git-projects/flutter/flutter/bin to the front of your path.
! Upstream repository https://github.com/feduke-nukem/flutter is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to https://github.com/feduke-nukem/flutter to dismiss this error.
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.3)
[✓] VS Code (version 1.96.0)
[✓] Connected device (6 available)
! Error: Browsing on the local area network for iPhone (Анна). Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
```
</details>
| package,team-ecosystem,has reproducible steps,p: pigeon,P2,triaged-ecosystem,found in release: 3.27,found in release: 3.28 | low | Critical |
2,747,425,045 | ollama | I can not connect to 11434 port | ### What is the issue?
I added `Environment="OLLAMA_HOST=0.0.0.0"` to the service file and restarted Ollama. Now the service only has a tcp6 listener, with no tcp4 listener, so I cannot access the service from other machines. How can I fix this problem? Thank you.
```console
[root@localhost ollama]# netstat -tuln | grep 11434
tcp6       0      0 :::11434                :::*                    LISTEN
[root@localhost ollama]#
```
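For what it's worth, on Linux a `tcp6 :::11434` socket normally accepts IPv4 connections too (dual-stack), unless `net.ipv6.bindv6only=1`. A sketch for verifying the systemd override took effect and testing from another machine (the host/IP below are placeholders):
```console
sudo systemctl edit ollama        # should contain: [Service] Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl daemon-reload && sudo systemctl restart ollama
sysctl net.ipv6.bindv6only        # 0 means the tcp6 listener also serves IPv4
curl http://<server-ip>:11434/api/version   # run this from another machine
```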
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug,needs more info | low | Minor |
2,747,427,950 | neovim | Directory filetype | ### Problem
Adding a directory filetype using `vim.filetype.add` doesn't work, because filetype detection is defined on the events { 'BufRead', 'BufNewFile', 'StdinReadPost' }, which never fire for a directory buffer.
Why can't it use `BufAdd` instead?
### Expected behavior
Allow detect directory filetype using:
```lua
vim.filetype.add({
  pattern = {
    [".*"] = {
      function(path)
        return path and vim.fn.isdirectory(path) == 1 and "directory"
      end,
      { priority = math.huge },
    },
  },
})
``` | enhancement,needs:vim-patch,filetype | medium | Major |
2,747,434,072 | pytorch | sympy.C.ConstantInteger has no method name | ### 🐛 Describe the bug
On the linked line, https://github.com/pytorch/pytorch/blob/main/torch/fx/experimental/symbolic_shapes.py#L1652, the call ``src.name()`` fails when `src` is One or Zero (`sympy.S.One` or `sympy.S.Zero`), because these singletons have no `name` method.
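A minimal sketch of the failure (assuming only that sympy is installed):
```python
import sympy

src = sympy.S.One
src.name()  # AttributeError: 'One' object has no attribute 'name'
```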
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241216+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] clip-anytorch==2.6.0
[pip3] CoCa-pytorch==0.1.0
[pip3] dalle2-pytorch==1.15.6
[pip3] ema-pytorch==0.7.0
[pip3] executorch==0.4.0
[pip3] flake8==7.1.1
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxconverter-common==1.14.0
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-genai==0.5.2
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxruntime-training==1.21.0+cu121
[pip3] onnxscript==0.1.0.dev20240905
[pip3] open_clip_torch==2.26.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.8.4
[pip3] torch==2.6.0.dev20241216+cu126
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.5.0
[pip3] torchaudio==2.6.0.dev20241216+cu126
[pip3] torchmetrics==1.4.3
[pip3] torchvision==0.22.0.dev20241216+cu126
[pip3] triton==3.1.0
[pip3] vector-quantize-pytorch==1.18.1
[conda] Could not collect
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @chauhang @penguinwu @bobrenjc93 | needs reproduction,triaged,module: fx,oncall: pt2,module: dynamic shapes | low | Critical |
2,747,434,400 | rust | HashMap reallocates even though capacity is not exceeded | `HashMap::with_capacity`'s documentation states:
> The hash map will be able to hold at least `capacity` elements without reallocating. This method is allowed to allocate for more elements than `capacity`.
This statement is incorrect as the following code demonstrates.
```rust
fn main() {
let initial_capacity = 100;
let mut a = std::collections::HashMap::<u32, ()>::with_capacity(initial_capacity);
let mut current_capacity = initial_capacity;
for i in 0..1_000 {
let new_capacity = a.capacity();
if new_capacity != current_capacity {
println!("iteration {i}, new capacity {new_capacity}");
current_capacity = a.capacity();
}
assert!(a.len() < initial_capacity);
a.insert(i, ());
if a.len() == initial_capacity {
a.remove(&i).unwrap();
}
}
}
```
A possible output of this program is as follows. It is a *possible* output because the default hasher is randomly seeded. You can make it deterministic with a custom hasher.
```
iteration 0, new capacity 112
iteration 100, new capacity 111
iteration 101, new capacity 110
iteration 105, new capacity 109
iteration 109, new capacity 108
iteration 111, new capacity 107
iteration 125, new capacity 106
iteration 137, new capacity 105
iteration 138, new capacity 104
iteration 141, new capacity 103
iteration 165, new capacity 102
iteration 187, new capacity 101
iteration 188, new capacity 100
iteration 252, new capacity 99
iteration 253, new capacity 224
```
As you can see in the output the capacity jumps from the initial 112 to 224 in iteration 253. This means the HashMap has allocated more memory. However, as the assert shows, the HashMap never held more elements than the initial capacity. This violates the guarantee documented in `with_capacity`.
This is a bug in hashbrown. I've opened an issue there too https://github.com/rust-lang/hashbrown/issues/602 . | A-collections,T-libs-api,A-docs,C-bug,T-libs | low | Critical |
2,747,445,912 | PowerToys | PowerToys Run shows black text on black background | ### Microsoft PowerToys version
0.87.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
When opening PowerToys Run (using Alt + Space) I get the input window, but I can't see my input or the results because everything is black except the application icons.
This occurs regardless of whether I have Windows set to Light or Dark mode. This started after the last update (to v 0.87.0), before that I got a light colored input with black text.
### ✔️ Expected Behavior
I'm expecting the input to look like it used to, like on the [PowerToys Run page](https://learn.microsoft.com/en-us/windows/powertoys/run)
### ❌ Actual Behavior

### Other Software
_No response_ | Issue-Bug,Product-PowerToys Run,Needs-Triage,Needs-Team-Response | low | Major |
2,747,451,048 | pytorch | infer_size(a, b) fails when it could return a value | ### 🐛 Describe the bug
In the function [infer_size](https://github.com/pytorch/pytorch/blob/main/torch/_subclasses/fake_impls.py#L845), when it cannot be determined whether `sizeA == 1` or `sizeB == 1`, the function could still set ``expandedSizes[i]`` (assuming the model is valid):
```python
if (
guard_size_oblivious(sizeA == 1)
or guard_size_oblivious(sizeB == 1)
or sizeA == sizeB
):
expandedSizes[i] = sizeB if guard_size_oblivious(sizeA == 1) else sizeA
else:
expandedSizes[i] = torch.sym_max(sizeA, sizeB)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241216+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] clip-anytorch==2.6.0
[pip3] CoCa-pytorch==0.1.0
[pip3] dalle2-pytorch==1.15.6
[pip3] ema-pytorch==0.7.0
[pip3] executorch==0.4.0
[pip3] flake8==7.1.1
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxconverter-common==1.14.0
[pip3] onnxruntime==1.20.1
[pip3] onnxruntime-genai==0.5.2
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxruntime-training==1.21.0+cu121
[pip3] onnxscript==0.1.0.dev20240905
[pip3] open_clip_torch==2.26.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.8.4
[pip3] torch==2.6.0.dev20241216+cu126
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.5.0
[pip3] torchaudio==2.6.0.dev20241216+cu126
[pip3] torchmetrics==1.4.3
[pip3] torchvision==0.22.0.dev20241216+cu126
[pip3] triton==3.1.0
[pip3] vector-quantize-pytorch==1.18.1
[conda] Could not collect
```
cc @chauhang @penguinwu @eellison @zou3519 @bdhirsh @yf225 | triaged,oncall: pt2,module: fakeTensor | low | Critical |
2,747,451,217 | vscode | ERR [189] potential listener LEAK detected, having 248 listeners already. MOST frequent listener (33) | This originates here
https://github.com/microsoft/vscode/blob/777fd07cccc3de449e529c9f701c2cfdd36ecb3e/src/vs/workbench/contrib/search/browser/searchResultsView.ts#L296
```
ERR [189] potential listener LEAK detected, having 248 listeners already. MOST frequent listener (33):: Error
at Zhi.create (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:25:11828)
at $Te.q [as onDidChange] (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:28:1110)
at Object.u [as onWillAddFirstListener] (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:403:109650)
at Tse.q [as onDidChange] (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:28:1298)
at new vc (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:403:114399)
at kdt.o (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:1334:1586)
at kdt.createInstance (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:1334:1083)
at Uve.renderTemplate (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:2080:14897)
at jOi.renderTemplate (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:411:36628)
at REt.renderTemplate (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:411:16131)
``` | bug,freeze-slow-crash-leak,performance | low | Critical |
2,747,468,987 | ant-design | Facing issue of SyntaxError: Unexpected token 'export' when import antd any component in Next.js app | ### Reproduction link
[https://github.com/prakashmallow/test-with-antd/tree/main](https://github.com/prakashmallow/test-with-antd/tree/main)
### Steps to reproduce
1. Clone the above Next.js app locally
2. Install dependencies with the `yarn install` or `npm install` command in the IDE terminal with the project directory.
3. Run `next dev` or `next build`
4. The issue is occurring
### What is expected?
Running `next dev` and `next build` should succeed without errors while using the antd package in the Next.js project.
### What is actually happening?
When running `next dev` or `next build` with the antd package in the Next.js project, the following error is thrown:
`SyntaxError: Unexpected token 'export'`
<img width="1440" alt="Screenshot 2024-12-18 at 3 28 21 PM" src="https://github.com/user-attachments/assets/5f718a33-86b4-426c-a4c8-a0a962b0f8ba" />
<img width="1440" alt="Screenshot 2024-12-18 at 3 28 32 PM" src="https://github.com/user-attachments/assets/1b019d55-a500-45f8-9974-35aa8967629a" />
<img width="1440" alt="Screenshot 2024-12-18 at 4 24 06 PM" src="https://github.com/user-attachments/assets/04f4cf3a-89ce-4828-b978-4f9816f85e3d" />
| Environment | Info |
| --- | --- |
| next | 15.1.1 |
| antd | 5.22.5 |
| @ant-design/cssinjs | 1.22.1 |
| React | 19 |
| System | macOS |
| Browser | Chrome |
---
The issue occurs with Next.js versions 15.1.0 and 15.1.1.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | medium | Critical |
2,747,475,781 | kubernetes | informer list and watch keep str error log:too old resource version | ### What happened?
The informer's list-and-watch loop keeps logging the error `too old resource version`, and the resourceVersion stays at the same value.
### What did you expect to happen?
When the watch fails with `too old resource version`, the informer should relist with a newer resourceVersion so it can recover.
### How can we reproduce it (as minimally and precisely as possible)?
Possibly triggered by an etcd leader change.
### Anything else we need to know?
If a list request carries a resourceVersion together with limit or continue, the apiserver serves it from etcd and echoes the requested resourceVersion back.
For example
https://qqqq6443/api/v1/nodes?resourceVersion=1111&limit=10
returns resourceVersion 1111 in the response.
The informer then:
1. uses page.list() to list resources from the apiserver,
2. takes the resourceVersion from that list result,
3. uses that resourceVersion to watch the apiserver.
So the informer's paged list always returns 1111, and the watch keeps logging `too old resource version`.
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,needs-sig,needs-triage | low | Critical |
2,747,479,116 | rust | unhelpful E0277 when writing `func(&a: &T)` instead of `func(a: &T)` | ### Code
```Rust
use std::path::Path;
fn main() {
fun_with_path(&Path::new("."));
fun_with_int(&0);
}
// &a: &Path should've been a: &Path
pub fn fun_with_path(&a: &Path) {
println!("{:?}", a);
}
// ok
pub fn fun_with_int(&a: &i32) {
println!("{:?}", a);
}
```
### Current output
```Shell
Compiling playground v0.0.1 (/playground)
error[E0277]: the size for values of type `[u8]` cannot be known at compilation time
--> src/main.rs:9:23
|
9 | pub fn fun_with_path(&a: &Path) {
| ^ doesn't have a size known at compile-time
|
= help: within `Path`, the trait `Sized` is not implemented for `[u8]`
note: required because it appears within the type `Path`
--> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/path.rs:2137:12
|
2137 | pub struct Path {
| ^^^^
= note: all local variables must have a statically known size
= help: unsized locals are gated as an unstable feature
For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
### Desired output
```Shell
point at the `&` of `&a` in `fun_with_path` and suggest removing it?
```
### Rationale and extra context
_No response_
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.85.0-nightly (a4cb3c831 2024-12-17)
binary: rustc
commit-hash: a4cb3c831823d9baa56c3d90514b75b2660116fa
commit-date: 2024-12-17
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.5
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,747,497,689 | PowerToys | [PowerToys Run] Dialogue box only draggable by magnifying glass | ### Microsoft PowerToys version
0.87.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
1. Turn on the PowerToys Run feature
2. Activate the shortcut to open the dialogue box
3. Attempt to drag the dialogue box (it won't move unless you drag it from the magnifying glass icon)
### ✔️ Expected Behavior
Previously I was able to drag the PowerToys Run dialogue box around my screen, but I can't as of v0.87.0.
This might've been intentional I suppose. If it was, an option to turn it back on would be greatly appreciated.
**EDIT: It's still draggable from the magnifying glass (thanks Davide). Might be nice to have it draggable from other places as well, but not as important of a bug anymore.**
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Product-PowerToys Run,Resolution-Fix Committed | low | Critical |
2,747,520,228 | flutter | [pigeon] Allow generics in data classes, with concrete types at the `Api` level | ### Use case
Defining custom generic argument types would be very helpful for reducing code duplication — nothing special beyond broader generics support. The only supported type parameters would be primitives (int, String, etc.) and classes declared within the pigeon file.
### Proposal
The ability to generate APIs using such classes (at least for Kotlin/Swift):
```dart
class A<T> {
final List<T> list;
final T value;
A({required this.list, required this.value});
}
sealed class Option<T> {}
class Some<T> extends Option<T> {
final T value;
Some({required this.value});
}
class None<T> extends Option<T> {}
```
Since HostApi/FlutterApi methods only ever use known types — either generated classes or the predefined primitives (int, bool, String, etc.) — every generic instantiation is known at generation time, so implementing this feature is possible.
2,747,529,996 | react-native | [New Arch][Android] After turning on fabric, the text in the Xiaomi MIUI13 system is not fully displayed | ### Description
On the Xiaomi MIUI 13 system, when the system font weight is changed and the fontWeight of the Text component is set to bold, the text is truncated and wrapped;
if numberOfLines is set to 1, a different symptom appears: the end of the text becomes an ellipsis and the text is not fully displayed.
MIUI 14 seems to have problems as well, and other devices have not been tested.
This problem only occurs when a new architecture is enabled. There is no problem if newArchEnabled is set to false.
#### This is the system information:

#### Set the thickness to maximum:

### Steps to reproduce
1. Without setting numberOfLines, the text is truncated directly:
```tsx
<View style={{
paddingHorizontal: 12,
marginHorizontal: 4,
alignItems: "center",
justifyContent: "center",
height: 26
}}>
<Text style={{
fontSize: 13,
fontWeight: "bold",
backgroundColor: 'red',
}}>已取件(粗体),未设置了numberOfLines * 121221</Text>
</View>
```
2. Set numberOfLines to 1, and an ellipsis appears at the end:
```tsx
<View style={{
paddingHorizontal: 12,
marginHorizontal: 4,
alignItems: "center",
justifyContent: "center",
height: 26
}}>
<Text style={{
fontSize: 13,
fontWeight: "bold",
backgroundColor: 'red',
}} numberOfLines={1}>已取件(粗体),设置了numberOfLines=1 * 121221</Text>
</View>
```
### React Native Version
0.74.1
### Affected Platforms
Runtime - Android
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: macOS 13.4
CPU: (10) arm64 Apple M2 Pro
Memory: 123.03 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.0.0
path: /usr/local/bin/node
Yarn:
version: 1.22.19
path: /usr/local/bin/yarn
npm:
version: 8.6.0
path: /usr/local/bin/npm
Watchman:
version: 2024.01.22.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.12.1
path: /Users/01400926/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 22.4
- iOS 16.4
- macOS 13.3
- tvOS 16.4
- watchOS 9.4
Android SDK: Not Found
IDEs:
Android Studio: 2022.3 AI-223.8836.35.2231.10671973
Xcode:
version: 14.3/14E222b
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 3.2.2
path: /Users/01400926/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.1
wanted: 0.74.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
None
```
### Reproducer
https://github.com/peaktan/fontDemo
### Screenshots and Videos
code:
```tsx
<View style={{
paddingHorizontal: 12,
marginHorizontal: 4,
alignItems: "center",
justifyContent: "center",
height: 26
}}>
<Text style={{
fontSize: 13,
fontWeight: "bold",
backgroundColor: 'red',
}}>已取件(粗体),未设置了numberOfLines * 121221</Text>
</View>
<View style={{
paddingHorizontal: 12,
marginHorizontal: 4,
alignItems: "center",
justifyContent: "center",
height: 26
}}>
<Text style={{
fontSize: 13,
fontWeight: "bold",
backgroundColor: 'red',
}} numberOfLines={1}>已取件(粗体),设置了numberOfLines=1 * 121221</Text>
</View>
```
result:

| Issue: Author Provided Repro,Platform: Android,Newer Patch Available,Type: New Architecture | low | Major |
2,747,548,191 | ollama | Clarification: "format" vs "tools" behaviours | ### What is the issue?
Structured output formatting was recently released. There are now 2 different ways of structuring your output:
1. Pass a schema to the "format" argument to the API.
2. Convert the schema to a tool and pass it to the "tools" argument to the API.
I understand that semantically, for structured output, it would make more sense to use the "format" option, whereas for tool use, the "tools" argument is more sensible.
My question is whether there is any difference under the hood.
Specifically, I have the following questions:
- Does `tools` insert the tool into the prompt using the tool tokens in the model chat template?
- Does "format" insert the tool into the prompt using the tool tokens in the model chat template?
- Does "format" ensure that the schema is respected, or only that the output is "json", but not necessarily "json" that complies to the schema? Is the behaviour exactly the same for "tools", or is there a difference?
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.5.4 | bug | low | Minor |
2,747,568,609 | flutter | [GoRouter] Simultaneous Button Taps Trigger Multiple Navigation Events | ### Steps to reproduce
1. Set up GoRouter with two routes:
Define two screens (ScreenA and ScreenB).
Add the routes for these screens to your GoRouter configuration.
2. Create a screen with two buttons:
Place two buttons (ButtonA and ButtonB) on the same screen.
Each button should navigate to a different screen using GoRouter.go().
3. Simulate simultaneous button presses:
Run the app and navigate to the screen with the buttons.
Tap both buttons at the same time.
### Expected results
When two buttons are tapped simultaneously:
Only the navigation event for the button tapped first should be processed.
The second navigation event should be ignored, ensuring only one new screen is opened.
This behavior aligns with how Navigator 1.0 handles simultaneous navigation requests.
### Actual results
Both navigation events are processed, and two screens are pushed onto the stack, resulting in unintended behavior.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
final GoRouter _router = GoRouter(
routes: [
GoRoute(
path: '/',
name: 'home',
builder: (context, state) => HomeScreen(),
),
GoRoute(
path: '/screenA',
name: 'screenA',
builder: (context, state) => ScreenA(),
),
GoRoute(
path: '/screenB',
name: 'screenB',
builder: (context, state) => ScreenB(),
),
],
);
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: _router,
);
}
}
class HomeScreen extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text('Home')),
body: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
ElevatedButton(
onPressed: () => GoRouter.of(context).pushNamed('screenA'),
child: Text('Go to Screen A'),
),
ElevatedButton(
onPressed: () => GoRouter.of(context).pushNamed('screenB'),
child: Text('Go to Screen B'),
),
],
),
);
}
}
class ScreenA extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text('Screen A')),
body: Center(child: Text('Welcome to Screen A')),
);
}
}
class ScreenB extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text('Screen B')),
body: Center(child: Text('Welcome to Screen B')),
);
}
}
```
</details>
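A possible workaround until this is addressed (a sketch of my own, not an official API) is to debounce navigation so only the first tap in a frame wins:
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';

bool _navigating = false;

/// Drops navigation requests that arrive while one is already in flight
/// during the current frame.
void safePushNamed(BuildContext context, String name) {
  if (_navigating) return;
  _navigating = true;
  GoRouter.of(context).pushNamed(name);
  WidgetsBinding.instance.addPostFrameCallback((_) => _navigating = false);
}
```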
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| package,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.27,found in release: 3.28 | low | Minor |
2,747,595,102 | three.js | Honor camera layers on light.shadow.camera for selective shadow | ### Description
In order to build a selective real-time shadow alongside a baked-shadow system, `light.shadow.camera` should honor its layers when testing whether an object is rendered into the shadow map.
I added support for this in a custom build of the WebGLRenderer, and it was very handy.
Here an example disabling all the layers for the shadow camera but still renders :
[Live Link](https://jsfiddle.net/uwpyqoe4/6/)
### Solution
Before rendering the shadow, the layer mask is copied from the main render camera. Is this copy mandatory? Could we allow setting the layers manually instead, for more control — or at least provide a way to configure the `shadow.camera` layers without them being overwritten:
[https://github.com/mrdoob/three.js/blob/672d77ab93218a00555331db45ffc19055c1fe59/src/nodes/lighting/ShadowNode.js#L659](https://github.com/mrdoob/three.js/blob/672d77ab93218a00555331db45ffc19055c1fe59/src/nodes/lighting/ShadowNode.js#L659)
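A sketch of the intended usage, assuming the mask were no longer overwritten (the layer numbers are arbitrary):
```js
// Objects on layer 1 cast real-time shadows; everything else relies on baking.
light.shadow.camera.layers.set(1); // assumed to stick instead of being copied over
dynamicMesh.layers.enable(1);      // included in the shadow pass (layer 0 stays on)
bakedMesh.layers.disable(1);       // excluded from the shadow pass
```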
| Suggestion | low | Major |
2,747,619,615 | flutter | [Flutter Web] Rotating the keyboard from landscape to portrait with keyboard open leaves a white area | ### Steps to reproduce
I'm developing a web app with Flutter (version 3.27.0 but had same issue with 3.24). I deployed the app on a web server with flutter build web --release (thus should use canvaskit).
Using the web app on a Android (Samsung) device on multiple browsers (Chrome, Firefox and the default Samsung one) I noticed that whenever I rotate the screen from landscape to portrait with the keyboard open I get that the screen space previously used by the keyboard inset turns white and becomes unavailable for the app in a similar way to resizing the screen. Then, the app runs as expected in the remaining part of the screen. When I rotate the screen again, the full screen is used again by the web app.
The same issue happens whenever LastPass is automatically opened while writing the username or password (in this case even just staying in portrait). However, I prefer to focus on the rotation part as it is more reproducable.
On iOS the issue IS NOT present, probably due to the different rendering engine.
This problem happens on any screen which allows to open up the virtual keyboard.
### Expected results
When running on Flutter Web on android device (any browser), the app should properly reclaim its space after the keyboard is closed.
### Actual results
A white space area occupies the area previously used by the virtual keyboard inset when the app is rotated from landscape to portrait with the virtual keyboard open.
### Code sample
<details open><summary>Code sample</summary>
To verify that the issue is not related to my code I implemented the following basic app with only a Scaffold and a TextFormField, which also presents the aforementioned problem once deployed.
``` dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Window Bug Test',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple)
.copyWith(surface: Colors.black),
useMaterial3: true,
),
home: const TestScreen(),
);
}
}
class TestScreen extends StatefulWidget {
const TestScreen({super.key});
@override
State<TestScreen> createState() => _TestScreenState();
}
class _TestScreenState extends State<TestScreen> {
final TextEditingController _controller = TextEditingController();
@override
void dispose() {
_controller.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Scaffold(
resizeToAvoidBottomInset: false,
body: Center(
child: Container(
margin: const EdgeInsets.all(16),
padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 4),
decoration: BoxDecoration(
color: Colors.white, borderRadius: BorderRadius.circular(16)),
child: TextFormField(
controller: _controller,
autocorrect: false,
enableSuggestions: true,
keyboardType: TextInputType.text,
maxLines: 8,
onSaved: (value) {},
decoration: InputDecoration(
hintText: "Add text here",
hintStyle: Theme.of(context).textTheme.bodyMedium,
alignLabelWithHint: true,
border: InputBorder.none,
),
style: Theme.of(context).textTheme.bodyMedium,
autovalidateMode: AutovalidateMode.onUserInteraction,
validator: (value) {
if (value!.isEmpty || value.length < 16) {
return "Input is too short";
}
return null;
},
),
)));
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
Screen looks normal
<img src="https://github.com/user-attachments/assets/fc7a1042-5230-4f98-ad20-cdf2f270233a" width ="300">
Rotate the screen and open the virtual keyboard
<img src="https://github.com/user-attachments/assets/f1287144-d27f-47f9-9b50-01ece7918ddd" width ="300">
Rotate the screen back to portrait -> white area appears. Only goes away when rotating again
<img src="https://github.com/user-attachments/assets/e348a6b9-00cd-4136-8236-f9af307ee073" width ="300">
</details>
### Logs
<details open><summary>Logs</summary>
I could not see logs on the deployed application, and no errors were uploaded to Sentry.
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.0, on macOS 15.1.1 24B91 darwin-arm64, locale en-IT)
• Flutter version 3.27.0 on channel stable at /Users/stefanoconoci/Code/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 8495dee1fd (8 days ago), 2024-12-10 14:23:39 -0800
• Engine revision 83bacfc525
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/stefanoconoci/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
! iOS 18.2 Simulator not installed; this may be necessary for iOS and macOS development.
To download and install the platform, open Xcode, select Xcode > Settings > Components,
and click the GET button for the required platform.
For more information, please visit:
https://developer.apple.com/documentation/xcode/installing-additional-simulator-runtimes
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.96.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B91 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B91 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.140
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.```
</details>
| a: text input,platform-android,engine,platform-web,has reproducible steps,P2,team-web,triaged-web,found in release: 3.27,found in release: 3.28 | low | Critical |
2,747,666,574 | excalidraw | Adding Entity-Relationship Diagram Symbols and Notation | It would be immensly usefull for data modelling. I'm pretty sure it's not too much work, so if anyone can do this (my knowledge doesn't extend to js), I'd be thankfull! <3

| enhancement | low | Minor |
2,747,667,244 | kubernetes | On namespace deletion send sigterm to pods first | ### What would you like to be added?
After sending a request to the Kubernetes API to delete a namespace, the pods within the namespace are reporting an error (golang app) while trying to reach another pod in the cluster:
```
dial tcp 10.43.16.12:80: connect: connection refused
```
I was expecting a context-cancelled error in my app, since the request is passed a context that listens for SIGTERM. It seems that when a namespace is deleted, Kubernetes starts by deleting Endpoints, Services, etc. before sending SIGTERM, which means the app never learns that it should be shutting down.
### Why is this needed?
Without a SIGTERM, the pods in the deployment keep trying to operate while their namespace environment is shutting down, leading to errors and data loss.
I considered sending a Deployment delete request before the namespace delete, but that adds an extra API call (see the sketch below).
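A hedged workaround sketch (the namespace name is a placeholder): delete the workloads first and wait, so every pod receives its SIGTERM before the rest of the namespace is torn down:
```console
kubectl delete deployment --all -n my-namespace --wait
kubectl delete namespace my-namespace
```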
Would be helpful to know how the order of deletion occurs, and potentially look to revise the order to ensure pods receive the SIGTERM first. | kind/feature,needs-sig,needs-triage | low | Critical |
2,747,683,610 | pytorch | No reproducibility after ONNX export of fully converted QAT model | ### 🐛 Describe the bug
When I use quantization-aware training, results are not reproducible between:
1. fake quantized model
2. real quantized model
3. exported ONNX model
### Code example
```python
import torch
import onnxruntime as ort
torch.manual_seed(42)
def dummy_training(model):
model.train()
image = torch.rand(1, 3, 224, 224)
model(image)
# ...
class DummyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.quant = torch.ao.quantization.QuantStub()
self.conv = torch.nn.Conv2d(3, 1, 1)
self.bn = torch.nn.BatchNorm2d(1)
self.relu = torch.nn.ReLU()
self.dequant = torch.ao.quantization.DeQuantStub()
def forward(self, x):
x = self.quant(x)
x = self.conv(x)
x = self.bn(x)
x = self.relu(x)
x = self.dequant(x)
return x
def fuse_model(self):
torch.ao.quantization.fuse_modules(self, ['conv', 'bn', 'relu'], inplace=True)
model_fp32 = DummyModel()
# prepare qat
model_fp32.eval()
model_fp32.qconfig = torch.ao.quantization.get_default_qat_qconfig('x86')
model_fp32.fuse_model()
model_fp32.train()
model_fake_quant = torch.ao.quantization.prepare_qat(model_fp32)
# run training
dummy_training(model_fake_quant)
# quantize
model_fake_quant.eval()
model_fake_quant.apply(torch.ao.quantization.disable_observer)
model_fake_quant.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
model_real_quant = torch.ao.quantization.convert(model_fake_quant)
# create onnx model
torch.onnx.export(model_real_quant, torch.rand(1, 3, 224, 224), "quantized.onnx", input_names=["input"], output_names=["output"])
model_onnx = ort.InferenceSession("quantized.onnx", providers=["CPUExecutionProvider"])
# test reproducability
x = torch.rand(1, 3, 224, 224)
res_fake_quant = model_fake_quant(x)
res_real_quant = model_real_quant(x)
res_onnx = model_onnx.run(None, {"input": x.numpy()})[0]
# all asserts will fail!!!
# torch.testing.assert_close(res_fake_quant, res_real_quant)
torch.testing.assert_close(res_real_quant, torch.tensor(res_onnx))
# torch.testing.assert_close(res_fake_quant, torch.tensor(res_onnx))
```
### Error
```
Traceback (most recent call last):
File "D:\Projekte\Quantization 2.0\qat_reproduce3.py", line 62, in <module>
torch.testing.assert_close(res_real_quant, torch.tensor(res_onnx))
File "D:\Tools\Venv\Pytorch_2.5\lib\site-packages\torch\testing\_comparison.py", line 1524, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: Tensor-likes are not close!
Mismatched elements: 49 / 50176 (0.1%)
Greatest absolute difference: 0.011374831199645996 at index (0, 0, 105, 1) (up to 1e-05 allowed)
Greatest relative difference: 0.021739039570093155 at index (0, 0, 116, 173) (up to 1.3e-06 allowed)
```
### Expected Behavior
According to https://discuss.pytorch.org/t/173684 we cannot expect a fake quantized model to behave the same as the quantized model. However, I would expect that once the model is fully quantized, it behaves the same between Pytorch and ONNX Runtime. This was the case for all float32 models I tested.
In this minimal example the problem is not severe, but for a real model (ResNet18) the number of mismatched elements grows to over 40%, the greatest absolute difference to 0.05, and the greatest relative difference to infinity.
### Versions
```
Collecting environment information...
PyTorch version: 2.4.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro (10.0.19045 64-Bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 11.6.55
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro RTX 5000
GPU 1: Quadro RTX 5000
Nvidia driver version: Could not collect
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Manufacturer: GenuineIntel
Family: 179
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2201
MaxClockSpeed: 2201
L2CacheSize: 3072
L2CacheSpeed: None
Revision: 20225
Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.19.2
[pip3] torch==2.4.1+cu118
[pip3] torchvision==0.19.1+cu118
[conda] Could not collect
```
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | module: onnx,oncall: quantization,triaged | low | Critical |
2,747,708,391 | transformers | Deepseek v2 | ### Model description
There is stale PR https://github.com/huggingface/transformers/pull/31976. Is anybody working on this model?
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | New model | low | Major |
2,747,758,828 | PowerToys | Change Keyboard Language when specific hardware is detected | ### Description of the new feature / enhancement
#Product-Keyboard Shortcut Manager
As a Brazilian, who has lived in the US and now lives in Spain, one of the most annoying things I have to deal with my work computer is that all my home keyboards are US-layout ones, and the keyboard in the work laptop is in Spanish.
I can type on both, no problem, but I have to select a default language (US-International), and when I'm using the other keyboard I have to manually change the language to Spanish every time the computer sleeps or I close the lid. Many questions in morning chats end with `_` instead of `?` because of that 🫤...
The request is to have a program running that detects the keyboard being typed on (maybe based on model or serial number), and switches to the last selected language for that keyboard, in a similar way that Windows remembers the monitor layout and resolution when you plug the same set-up after having worked in a different place.
Alternatively, it could detect a newly plugged-in keyboard and change the global layout configuration to the last one selected for that set of plugged-in keyboards. This would eliminate the need to have more than one layout active at a time, but it would not fix the issue that pressing `Ç` on my Spanish keyboard while the US-Intl layout (selected for the wireless keyboard) is active actually produces a `\`
### Scenario when this would be used?
Used by people who, like me, switch between multiple keyboard layouts throughout the day (at different desks/docks, for example).
### Supporting information
I searched around and can't find anything that does this... the closest I found are these two forum answers, but they don't really address the issue:
https://answers.microsoft.com/en-us/windows/forum/all/changing-keyboard-when-docked/1b0f269a-8898-41ef-8aa7-e7dc20f6490b
https://answers.microsoft.com/en-us/windows/forum/windows_7-hardware/changing-docked-profile/ae909c40-3383-4daa-b136-fe8a1d8619e8 | Idea-New PowerToy,Needs-Triage | low | Minor |
2,747,762,194 | pytorch | AOTI_TORCH_CHECK failed in aot_compile-d model | ### 🐛 Describe the bug
I exported a model using `torch.export(strict=False)`. The exported model itself works well, but if I compile it with `torch._inductor.aot_compile`, the process crashes on an internal check in the generated code.
Reproducer:
https://colab.research.google.com/drive/1U8fe9k85_S4fRurxz_M7g9kYf7Yq2CRy?usp=sharing
Exported program to reproduce is here:
[exported_program.pt2.zip](https://github.com/user-attachments/files/18181526/exported_program.pt2.zip)
Another reproducer with model generation:
https://colab.research.google.com/drive/1W930GmsJEDVMdsBKHuTQruqV6IyoLqBa?usp=sharing
The same on a PyTorch nightly build:
https://colab.research.google.com/drive/1WcRHyac8K2G6Ed4v1NywCoHSGDAMEstq?usp=sharing
The generated code is here:
[c5dq6ajvevkzbzmo54sijfiqey4wp7tumw7wdk34ethlfoqcf2by.cpp.zip](https://github.com/user-attachments/files/18181571/c5dq6ajvevkzbzmo54sijfiqey4wp7tumw7wdk34ethlfoqcf2by.cpp.zip)
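For reference, a minimal sketch of the failing flow (hedged: the input shape is a placeholder, the attached `exported_program.pt2` defines the real signature, and `torch._export.aot_load` is one way to invoke the compiled library from Python):
```python
import torch
import torch._export
import torch._inductor

# Minimal sketch of the failing flow. The input shape is an assumption;
# the attached exported_program.pt2 defines the real signature.
ep = torch.export.load("exported_program.pt2")
args = (torch.randn(4, 16),)  # placeholder inputs

ep.module()(*args)  # the exported program itself runs fine

so_path = torch._inductor.aot_compile(ep.module(), args)
compiled = torch._export.aot_load(so_path, device="cpu")
compiled(*args)  # crashes: "index out of bounds: 0 <= tmp7 < ks1"
```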
### Error logs
```
prediction/deployable_modules/tests/test_something.py !!! Uncaught exception: index out of bounds: 0 <= tmp7 < ks1
Exception raised from cpp_fused_index_index_put_stack_zeros_4 at /tmp/torchinductor_vscode/c5dq6ajvevkzbzmo54sijfiqey4wp7tumw7wdk34ethlfoqcf2by.cpp:1084 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x9e (0x7ff115c286de in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0x80 (0x7ff115bcd0a8 in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x5024044 (0x7ff173653044 in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: cpp_fused_index_index_put_stack_zeros_4 + 0xdcd (0x7ff0bf4a4acd in /tmp/torchinductor_vscode/cxe2kfjxcuzlrkqbjd6yr6psu3h364iewxfja7zwiewr56krpm3n.so)
frame #4: torch::aot_inductor::AOTInductorModel::run_impl(AtenTensorOpaque**, AtenTensorOpaque**, void*, AOTIProxyExecutorOpaque*) + 0xded (0x7ff0bf4a621d in /tmp/torchinductor_vscode/cxe2kfjxcuzlrkqbjd6yr6psu3h364iewxfja7zwiewr56krpm3n.so)
frame #5: torch::aot_inductor::AOTInductorModelContainer::run(AtenTensorOpaque**, AtenTensorOpaque**, void*, AOTIProxyExecutorOpaque*) + 0xf1 (0x7ff0bf4b4c01 in /tmp/torchinductor_vscode/cxe2kfjxcuzlrkqbjd6yr6psu3h364iewxfja7zwiewr56krpm3n.so)
frame #6: AOTInductorModelContainerRun + 0x86 (0x7ff0bf4a9686 in /tmp/torchinductor_vscode/cxe2kfjxcuzlrkqbjd6yr6psu3h364iewxfja7zwiewr56krpm3n.so)
frame #7: torch::inductor::AOTIModelContainerRunner::run(std::vector<at::Tensor, std::allocator<at::Tensor> >&, AOTInductorStreamOpaque*) + 0x115 (0x7ff1736394e5 in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #8: torch::inductor::AOTIModelContainerRunnerCpu::run(std::vector<at::Tensor, std::allocator<at::Tensor> >&) + 0x22 (0x7ff17363a192 in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x8cbcbf (0x7ff17c609cbf in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
frame #10: <unknown function> + 0x48edbf (0x7ff17c1ccdbf in /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/deployable_modules/tests/test_something.runfiles/pytorch/lib/python3.10/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
, terminating !!!
```
### Versions
I had the problem with torch 2.5.1 built from nixpkgs, but it also reproduces with vanilla torch 2.5.1+cu121 from Google Colab.
I also checked the 2.6 nightly build. The difference there is that the error surfaces in Python and the ipynb kernel doesn't crash.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | triaged,oncall: pt2,ciflow/inductor,oncall: export,module: aotinductor | low | Critical |
2,747,778,971 | material-ui | [docs] Add TanStack Router examples | ### Related page
https://mui.com/material-ui/integrations/routing/#react-router-examples
### Kind of issue
Missing information
### Issue description
It is not clear how to combine the TanStack Router `<Link>` component with MUI components. Examples could be added below the existing React Router examples.
### Context
I want to use typesafe routing via TanStack `Link` components but keep the MUI-styled `Link` component. Since this is not possible:
```
<Link to="/" style={{ textDecoration: 'none', color: 'inherit' }}>
<MuiLink underline="hover" color="inherit">
Tanstack Link rendered as MUI Link
</MuiLink>
</Link>
```
(because you can't render an `<a>` inside an `<a>`), it would be awesome if the docs had some examples of how to embed TanStack routing components inside MUI elements to keep the MUI look.
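For instance, something along these lines could be shown (a hedged sketch based on TanStack Router's `createLink` API; the adapter component and `AppLink` names are illustrative):
```tsx
import * as React from 'react';
import { Link as MuiLink, LinkProps as MuiLinkProps } from '@mui/material';
import { createLink, LinkComponent } from '@tanstack/react-router';

// Wrap the MUI Link so TanStack Router can drive the underlying <a>.
const MuiLinkAdapter = React.forwardRef<HTMLAnchorElement, MuiLinkProps>(
  (props, ref) => (
    <MuiLink ref={ref} underline="hover" color="inherit" {...props} />
  ),
);

const CreatedLink = createLink(MuiLinkAdapter);

// Typesafe, MUI-styled link: <AppLink to="/">Home</AppLink>
export const AppLink: LinkComponent<typeof MuiLinkAdapter> = (props) => (
  <CreatedLink preload="intent" {...props} />
);
```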
**Search keywords**: TanStack | docs,waiting for 👍,support: docs-feedback | low | Minor |
2,747,844,342 | flutter | GestureDetector.onTapCancel is called during build phase, causing setState error unexpectedly | ### Steps to reproduce
1. Run the sample code below.
2. Press the grey box.
3. Remove the grey box by rebuilding (in the sample code this is done with a `Timer`).
4. `onTapCancel` is called, and I call `setState()` inside it.
5. The error below happens.
### Expected results
No error occurs, meaning `onTapCancel` should be called outside of the rebuilding process.
### Actual results
The error below happened.
```console
════════ Exception caught by gesture ═══════════════════════════════════════════
The following assertion was thrown while handling a gesture:
setState() or markNeedsBuild() called when widget tree was locked.
This TapCancelDemo widget cannot be marked as needing to build because the framework is locked.
The widget on which setState() or markNeedsBuild() was called was: TapCancelDemo
state: _TapCancelDemoState#27489
When the exception was thrown, this was the stack:
#0 Element.markNeedsBuild.<anonymous closure> (package:flutter/src/widgets/framework.dart:5193:9)
framework.dart:5193
#1 Element.markNeedsBuild (package:flutter/src/widgets/framework.dart:5203:6)
framework.dart:5203
#2 State.setState (package:flutter/src/widgets/framework.dart:1224:15)
framework.dart:1224
```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:async';
import 'package:flutter/material.dart';
void main() => runApp(const MaterialApp(home: TapCancelDemo()));
class TapCancelDemo extends StatefulWidget {
const TapCancelDemo({super.key});
@override
State<TapCancelDemo> createState() => _TapCancelDemoState();
}
class _TapCancelDemoState extends State<TapCancelDemo> {
String? _message;
bool _isVisible = true;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(),
body: Column(
mainAxisSize: MainAxisSize.min,
children: [
if (_isVisible)
Expanded(
child: GestureDetector(
onTapCancel: () {
setState(() => _message = 'onTapCancel');
},
onTapDown: (details) {
setState(() => _message = 'onTapDown');
Timer(const Duration(milliseconds: 100), () {
// remove the widget after 100ms
setState(() => _isVisible = false);
});
},
child: Container(
margin: const EdgeInsets.all(16),
decoration: BoxDecoration(
borderRadius: BorderRadius.circular(16),
color: Colors.grey,
),
child: const Center(
child: Text('Tap me'),
),
),
),
),
const SizedBox(height: 32),
Text(_message ?? ''),
const SizedBox(height: 32),
],
),
);
}
}
```
</details>
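A hedged workaround sketch (not necessarily the intended framework fix): defer the state change out of the locked phase. This would replace the `onTapCancel` callback in the sample above:
```dart
onTapCancel: () {
  // Defer the rebuild out of the locked phase; onTapCancel can fire from
  // the recognizer's dispose() while the tree is being rebuilt.
  WidgetsBinding.instance.addPostFrameCallback((_) {
    if (mounted) {
      setState(() => _message = 'onTapCancel');
    }
  });
},
```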
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/cf9ceed8-87a8-4707-b8f9-a8c7107ec10f
</details>
### Logs
<details open><summary>Logs</summary>
```console
════════ Exception caught by gesture ═══════════════════════════════════════════
The following assertion was thrown while handling a gesture:
setState() or markNeedsBuild() called when widget tree was locked.
This TapCancelDemo widget cannot be marked as needing to build because the framework is locked.
The widget on which setState() or markNeedsBuild() was called was: TapCancelDemo
state: _TapCancelDemoState#f8162
When the exception was thrown, this was the stack:
#0 Element.markNeedsBuild.<anonymous closure> (package:flutter/src/widgets/framework.dart:5193:9)
framework.dart:5193
#1 Element.markNeedsBuild (package:flutter/src/widgets/framework.dart:5203:6)
framework.dart:5203
#2 State.setState (package:flutter/src/widgets/framework.dart:1224:15)
framework.dart:1224
#3 _TapCancelDemoState.build.<anonymous closure> (package:example/reorderable_page.dart:29:19)
reorderable_page.dart:29
#4 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:351:24)
recognizer.dart:351
#5 TapGestureRecognizer.handleTapCancel (package:flutter/src/gestures/tap.dart:680:11)
tap.dart:680
#6 BaseTapGestureRecognizer._checkCancel (package:flutter/src/gestures/tap.dart:318:5)
tap.dart:318
#7 BaseTapGestureRecognizer.resolve (package:flutter/src/gestures/tap.dart:266:7)
tap.dart:266
#8 OneSequenceGestureRecognizer.dispose (package:flutter/src/gestures/recognizer.dart:469:5)
recognizer.dart:469
#9 PrimaryPointerGestureRecognizer.dispose (package:flutter/src/gestures/recognizer.dart:762:11)
recognizer.dart:762
#
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
$ fvm flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.27.0, on macOS 14.3.1 23D60 darwin-arm64, locale ja-JP)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.3)
[✓] Android Studio (version 2022.2)
[✓] VS Code (version 1.95.3)
[✓] VS Code (version 1.61.2)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| framework,f: gestures,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Critical |
2,747,849,400 | ant-design | Watermark component re-renders multiple times and freezes the page | ### Reproduction link
[https://codepen.io/linong/pen/MYgpbae?editors=001](https://codepen.io/linong/pen/MYgpbae?editors=001)
### Steps to reproduce
1. Click the "Override" (覆盖) button; it overrides `toDataURL` to simulate a slow response (a sketch of the idea is below).
2. Change the watermark text.
3. Watch the console: the `console` call is triggered multiple times.
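For context, a hedged sketch of what that override might look like (assumed shape; the CodePen defines the real one):
```ts
// Wrap toDataURL in a slow, still-synchronous version to widen the window
// in which Watermark re-renders while waiting on the canvas.
const original = HTMLCanvasElement.prototype.toDataURL;
HTMLCanvasElement.prototype.toDataURL = function (
  ...args: Parameters<typeof original>
) {
  const end = Date.now() + 300; // busy-wait ~300 ms to simulate slowness
  while (Date.now() < end) {
    // spin
  }
  return original.apply(this, args);
};
```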
### What is expected?
The `console` call should be triggered only once.
### What is actually happening?
The `console` call is triggered multiple times, freezing the page.
| Environment | Info |
| --- | --- |
| antd | 5.22.5 |
| React | 18 |
| System | macOS 14.6.1 (23G93) |
| Browser | Version 120.0.6099.234 (Official Build) (arm64) |
---
I found a similar issue: https://github.com/ant-design/ant-design/issues/46765.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,help wanted | low | Major |
2,747,871,381 | flutter | CupertinoContextMenu is not interactive until the long-press gesture is released | ### Use case
CupertinoContextMenu's dragging behavior differs from the native iOS behavior in a major way.
Current behavior:
1 - When the user long-presses an item, the context menu appears but is not interactive.
2 - The user needs to release their finger and press the preview area again to interact with the menu.
https://github.com/user-attachments/assets/1a3de1a5-1540-47e9-8617-f3c224995625
The expected behavior:
1 - When the user long-presses an item, the context menu should appear.
2 - While the finger stays down and swipes, the menu should keep responding and scrolling.
https://github.com/user-attachments/assets/0413149c-771b-4c9b-b4a8-16f7b4a0ca51
In both videos I did not release my finger after the popup appeared. You can reproduce this misbehavior by keeping your finger pressed down until the popup shows and then trying to move your finger around.
The drag-cancel timing also differs from native, but that is not today's topic; as I see in the source code, there are constants holding the timing for drag, speed, etc., so it will be easy to calibrate that later.
### Proposal
I guess we need a way to track the global gesture position, pass it from the child to the parent, and keep the tracking synchronized while the preview is showing. A rough sketch of the idea:
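This is a purely hypothetical sketch; none of these names are existing framework APIs:
```dart
// `_menuDragController` is an assumed coordinator owned by the context
// menu's route, and `preview` is the preview widget; both are hypothetical.
Listener(
  behavior: HitTestBehavior.translucent,
  onPointerMove: (event) => _menuDragController.updateDrag(event.position),
  onPointerUp: (event) => _menuDragController.endDrag(event.position),
  child: preview,
)
```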
I am seriously thinking about solving this issue, but it needs more digging to collect information about how iOS accessibility preferences are reflected in, and impact, the context menu.
If anyone has suggestions that could help, please share them for discussion. | framework,f: cupertino,has reproducible steps,P2,would require significant investment,team-design,triaged-design,found in release: 3.27,found in release: 3.28 | low | Minor |
2,747,877,556 | rust | Slice bounds check not elided in trivial case when using a local varaible outside of a branch | I tried this code:
```rust
fn last_four(s: &[u8]) -> &[u8] {
let start = if s.len() <= 4 {
0
} else {
s.len() - 4
};
&s[start..]
}
```
Which produces the following assembly on x86_64:
```
last_four:
mov rax, rdi
lea rcx, [rsi - 4]
xor edi, edi
cmp rsi, 5
cmovae rdi, rcx
mov rdx, rsi
sub rdx, rdi
jb .LBB0_2
add rax, rdi
ret
.LBB0_2:
push rax
lea rdx, [rip + .L__unnamed_1]
call qword ptr [rip + core::slice::index::slice_start_index_len_fail::hda7fe0c134770be5@GOTPCREL]
.L__unnamed_2:
.ascii "/app/example.rs"
.L__unnamed_1:
.quad .L__unnamed_2
.asciz "\017\000\000\000\000\000\000\000\b\000\000\000\007\000\000"
```
The branch with the call to `slice_start_index_len_fail` should be unreachable.
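For what it's worth, a hedged variant that avoids the panic path by construction, assuming LLVM folds the bounds check when it can see `start <= s.len()` (worth confirming on a current compiler):
```rust
fn last_four(s: &[u8]) -> &[u8] {
    // start <= s.len() holds by construction, so the check should fold.
    let start = s.len().saturating_sub(4);
    &s[start..]
}
```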
Indeed, I can remove it by moving the slice operation inside the `if` statement:
```rust
fn last_four(s: &[u8]) -> &[u8] {
if s.len() <= 4 {
&s[0..]
} else {
&s[s.len() - 4..]
}
}
```
Which gives:
```
last_four:
cmp rsi, 5
lea rax, [rdi + rsi - 4]
mov edx, 4
cmovb rdx, rsi
cmovb rax, rdi
ret
```
Which is what I expected. | A-LLVM,I-slow,T-compiler,llvm-fixed-upstream,C-optimization | low | Minor |
2,747,879,223 | rust | Unhelpful suggestions on reference mutability | This code example is minimized from a wild goose chase I ran into when implementing a solution for one of the Advent of Code puzzles.
The error messages and suggestions were not helpful in solving the issue. Some of them were actively confusing - from the programmer's perspective, the variables _clearly_ need mutability in some form - yet the compiler kept insisting the `mut` keywords should be removed.
### Code
```Rust
fn main() {
// const INPUT: &str = include_str!("input.txt");
const INPUT: &str = r#"Hello
Hola
Bonjour
Ciao
"#;
let parsed = parse(INPUT);
let value = part1(&parsed);
println!("Part 1: {value}");
let value = part2(&parsed);
println!("Part 2: {value}");
}
struct Ferris {
greeting: String,
count: usize,
}
impl Ferris {
pub fn new(greeting: &str) -> Self {
Ferris {
greeting: greeting.to_string(),
count: 0,
}
}
pub fn greet(&mut self) {
self.count += 1;
println!("{}", self.greeting);
}
pub fn greet_count(&self) -> usize {
self.count
}
}
type ParsedData = Vec<Ferris>;
fn parse(input: &str) -> ParsedData {
input.lines().map(Ferris::new).collect()
}
fn part1(data: &ParsedData) -> usize {
let mut crabs = data.clone();
for crab in crabs {
crab.greet();
}
crabs.iter().map(Ferris::greet_count).sum()
}
fn part2(_data: &ParsedData) -> usize {
0
}
```
### Current output
```Shell
warning: variable does not need to be mutable
--> src/main.rs:47:9
|
47 | let mut crabs = data.clone();
| ----^^^^^
| |
| help: remove this `mut`
|
= note: `#[warn(unused_mut)]` on by default
error[E0596]: cannot borrow `*crab` as mutable, as it is behind a `&` reference
--> src/main.rs:49:9
|
48 | for crab in crabs {
| ----- this iterator yields `&` references
49 | crab.greet();
| ^^^^ `crab` is a `&` reference, so the data it refers to cannot be borrowed as mutable
For more information about this error, try `rustc --explain E0596`.
```
### Other cases
Remove mut as instructed:
```Rust
fn part1(data: &ParsedData) -> usize {
let crabs = data.clone();
for crab in crabs {
crab.greet();
}
crabs.iter().map(Ferris::greet_count).sum()
}
```
```shell
error[E0596]: cannot borrow `*crab` as mutable, as it is behind a `&` reference
--> src/main.rs:49:9
|
48 | for crab in crabs {
| ----- this iterator yields `&` references
49 | crab.greet();
| ^^^^ `crab` is a `&` reference, so the data it refers to cannot be borrowed as mutable
For more information about this error, try `rustc --explain E0596`.
error: could not compile `mutability` (bin "mutability") due to 1 previous error
```
You know this should be mutable, and the error talks about references, so let's try `&mut` instead:
```Rust
fn part1(data: &ParsedData) -> usize {
let mut crabs = data.clone();
for &mut crab in crabs {
crab.greet();
}
crabs.iter().map(Ferris::greet_count).sum()
}
```
```shell
error[E0308]: mismatched types
--> src/main.rs:48:9
|
48 | for &mut crab in crabs {
| ^^^^^^^^^ ----- this is an iterator with items of type `&Ferris`
| |
| types differ in mutability
|
= note: expected reference `&Ferris`
found mutable reference `&mut _`
For more information about this error, try `rustc --explain E0308`.
```
Maybe use the `&mut` in a different place then?
```Rust
fn part1(data: &ParsedData) -> usize {
let mut crabs = data.clone();
for crab in &mut crabs {
crab.greet();
}
crabs.iter().map(Ferris::greet_count).sum()
}
```
```shell
error[E0277]: `&Vec<Ferris>` is not an iterator
--> src/main.rs:48:17
|
48 | for crab in &mut crabs {
| ^^^^^^^^^^ `&Vec<Ferris>` is not an iterator
|
= help: the trait `Iterator` is not implemented for `&Vec<Ferris>`, which is required by `&mut &Vec<Ferris>: IntoIterator`
= note: required for `&mut &Vec<Ferris>` to implement `Iterator`
= note: required for `&mut &Vec<Ferris>` to implement `IntoIterator`
help: consider removing the leading `&`-reference
|
48 - for crab in &mut crabs {
48 + for crab in crabs {
|
For more information about this error, try `rustc --explain E0277`.
```
### Rust Version
```Shell
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: aarch64-apple-darwin
release: 1.83.0
LLVM version: 19.1.1
```
### Anything else?
Here's the actual fix, hidden under a spoiler if anyone wants to try and figure it out first by themselves:
<details><summary>Solution</summary>
```rust
#[derive(Clone)] // <- this missing Clone derive was the real cause all along!
struct Ferris {
greeting: String,
count: usize,
}
fn part1(data: &ParsedData) -> usize {
let mut crabs = data.clone();
for crab in &mut crabs {
crab.greet();
}
crabs.iter().map(Ferris::greet_count).sum()
}
```
</details>
Additionally, a friend pointed out to me that using `for crab in crabs.iter_mut()` produces a helpful error message. | A-diagnostics,T-compiler | low | Critical |
2,747,883,791 | vscode | Make 'status bar item' error icon more noticeable or customisable | ## Problem
The status bar item's error indicator is too hard to see, so it's very easy to miss an alert.
## Example
In this example, Prettier's own status bar implementation clearly indicates an error. Compare it to the status bar item: the icon difference is so small that it's effectively unnoticeable, so users will miss the prompt.

## Suggestion
- Let the user/theme customise the colour when there's an error status (e.g. to red)
- Or otherwise make the error icon more noticeable (see the API sketch below)
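For context, a hedged sketch of the extension-side API that already exists for ordinary status bar items (the point of this request is that the indicator shown above doesn't get anything this visible by default):
```ts
import * as vscode from 'vscode';

// Existing API sketch: a regular status bar item can use the theme's
// error background, which is far more noticeable than a small icon swap.
const item = vscode.window.createStatusBarItem(vscode.StatusBarAlignment.Right, 100);
item.text = '$(error) Prettier';
item.backgroundColor = new vscode.ThemeColor('statusBarItem.errorBackground');
item.show();
```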
## Benefits of change
- Users able to notice errors more easily
- Hopefully encourages more uptake of status bar icon/extension standardisation
- No longer need to use a magnifying glass to see it | feature-request,languages-basic,workbench-status | low | Critical |
2,747,960,718 | vscode | Highlight tabs whose file has problems, is untracked or modified compared to repo version | Could you please provide a highlighting-color setting for tabs whose file is modified compared to the repo version, or has become untracked after a file rename?
I know a little U or M appears to the right of the file name when a file is Untracked or Modified:

In my opinion, this does not highlight these tabs enough.
Also, if VSCode identifies problems in the file, the problem count is displayed next to the M letter, but the file name's font color changes from orange back to the default color.
The orange (or custom) color should stay, even if a problem is identified in the file.
The ability to have different colors for staged changes, unstaged changes, and untracked files would be ideal, especially after a file rename. A hedged sketch of a partial workaround with existing color tokens follows.
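These tokens already exist and, as far as I can tell, affect the tab label's git coloring (the problem-count override described above still takes precedence):
```jsonc
// settings.json; a hedged sketch using existing git decoration color tokens.
{
  "workbench.colorCustomizations": {
    "gitDecoration.modifiedResourceForeground": "#e2a700",
    "gitDecoration.stageModifiedResourceForeground": "#ffd700",
    "gitDecoration.untrackedResourceForeground": "#4cc24c"
  }
}
```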
Thanks in advance
Mathias
| feature-request,workbench-tabs | low | Major |
2,747,980,408 | flutter | WidgetSpan text cannot be copied using SelectableText.rich | ### Steps to reproduce
```dart
SelectableText.rich(
TextSpan(
children: [
WidgetSpan(
alignment: PlaceholderAlignment.middle,
child: Container(
decoration: BoxDecoration(color: Colors.red, borderRadius: BorderRadius.circular(10)),
padding: const EdgeInsets.only(right: 5.0),
child: RichText(
overflow: TextOverflow.visible,
text: TextSpan(
text: '12312 3123',
),
)
),
),
TextSpan(
text: ' $time',
style: TextStyle(color: Colors.transparent),
),
],
),
);
```
### Expected results
'12312 3123'
### Actual results

Is it possible to add a placeholder text to `WidgetSpan` with a String value? A hedged workaround sketch is under "Code sample" below.
### Code sample
<details></details>
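In the meantime, a hedged workaround sketch: keep the chip's content as a real `TextSpan` so it participates in selection and copy, and approximate the decoration with a background `Paint` (the `' 12:34'` span stands in for `' $time'`):
```dart
SelectableText.rich(
  TextSpan(
    children: [
      TextSpan(
        text: '12312 3123',
        // Approximates the chip look; unlike a WidgetSpan placeholder,
        // this text is included in selection and copy.
        style: TextStyle(background: Paint()..color = Colors.red),
      ),
      const TextSpan(text: ' 12:34'), // stand-in for ' $time'
    ],
  ),
)
```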
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.14393], locale ru-RU)
• Flutter version 3.27.1 on channel stable at C:\Users\Poloten\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (2 days ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\Poloten\AppData\Local\Android\Sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = C:\Users\Poloten\AppData\Local\Android\Sdk
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.7.1)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.7.34009.444
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• android-studio-dir = C:\Program Files\Android\Android Studio
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] VS Code (version 1.96.0)
• VS Code at C:\Users\Poloten\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (4 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 13 (API 33) (emulator)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.14393]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.140
• Edge (web) • edge • web-javascript • Microsoft Edge 130.0.2849.68
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| a: text input,framework,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Minor |