id (int64, 393k–2.82B) | repo (stringclasses, 68 values) | title (stringlengths, 1–936) | body (stringlengths, 0–256k, ⌀) | labels (stringlengths, 2–508) | priority (stringclasses, 3 values) | severity (stringclasses, 3 values) |
---|---|---|---|---|---|---|
2,731,256,830 | vscode | javascript output with </script> can close scripts | ### Applies To
- [x] Notebooks (.ipynb files)
- [ ] Interactive Window and/or Cell Scripts (.py files with #%% markers)
### What happened?
When the JavaScript output contains a `</script>` closing tag, it appears to be able to close the `<script>` element used for running the JavaScript, so the remainder of the JavaScript, along with some of the internal code used to run/render the output, ends up outside the script as plain HTML.


This happens both when running the cell for the first time and when viewing a notebook/cell that has already been run.
Running the cell multiple times also causes the output to be emitted multiple times.
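A minimal way to reproduce this (a sketch assuming the standard IPython display machinery; the original report's payload is not shown) is JavaScript output that contains the literal closing tag:
```python
from IPython.display import Javascript, display

# The literal "</script>" inside the emitted JavaScript source is what
# appears to prematurely close the renderer's <script> element.
display(Javascript('const tag = "</script>"; console.log(tag);'))
```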

### VS Code Version
Version: 1.95.3 (Universal) Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813 Date: 2024-11-13T14:50:04.152Z (3 wks ago) Electron: 32.2.1 ElectronBuildId: 10427718 Chromium: 128.0.6613.186 Node.js: 20.18.0 V8: 12.8.374.38-electron.0 OS: Darwin arm64 24.1.0
### Jupyter Extension Version
v2024.10.0
### Jupyter logs
```shell
Visual Studio Code (1.95.3, undefined, desktop)
Jupyter Extension Version: 2024.10.0.
Python Extension Version: 2024.20.0.
Pylance Extension Version: 2024.12.1.
Platform: darwin (arm64).
Temp Storage folder ~/Library/Application Support/Code/User/globalStorage/ms-toolsai.jupyter/version-2024.10.0
No workspace folder opened.
22:32:11.797 [info] Starting Kernel (Python Path: ~/opt/anaconda3/bin/python, Conda, 3.9.13) for 'Untitled-1.ipynb' (disableUI=true)
22:32:14.185 [info] Process Execution: ~/opt/anaconda3/bin/python -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
22:32:14.186 [info] Process Execution: ~/opt/anaconda3/bin/python -m ipykernel_launcher --f=/Users/~/Library/Jupyter/runtime/kernel-v30ee5b0816dd5f6d40229dbe1be4eff4731a31bec.json
> cwd: /
22:32:14.214 [info] Process Execution: ~/opt/anaconda3/bin/python -m pip list
22:32:14.862 [info] Kernel successfully started
22:32:14.864 [info] Process Execution: ~/opt/anaconda3/bin/python /Users/~/.vscode/extensions/ms-toolsai.jupyter-2024.10.0-darwin-arm64/pythonFiles/printJupyterDataDir.py
```
### Coding Language and Runtime Version
Javascript
### Language Extension Version (if applicable)
_No response_
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
Local | bug,notebook-output | low | Minor |
2,731,261,170 | neovim | build from source on Windows using CLion | ### Problem
I find it very difficult to follow the build instructions.
I have CLion and Visual Studio 2022 installed and the options mentioned in the linked document do not correspond to what I see.
[BUILD.md](https://github.com/neovim/neovim/blob/master/BUILD.md)
### Expected behavior
It would be very helpful to develop one or more PowerShell scripts that clone all necessary repositories and run the build process automatically, and to review the provided instructions for Visual Studio and CLion.
| build,documentation,platform:windows | low | Major |
2,731,274,422 | rust | E0599: incorrect import suggestion for block-scoped trait | ### Code
```Rust
fn main() {
    {
        trait Hello {
            fn hello(&self) {
                println!("hello world");
            }
        }
        impl<T> Hello for T {}
    }
    ().hello();
}
```
### Current output
```Shell
error[E0599]: no method named `hello` found for unit type `()` in the current scope
--> src/main.rs:11:8
|
4 | fn hello(&self) {
| ----- the method is available for `()` here
...
11 | ().hello();
| ^^^^^ method not found in `()`
|
= help: items from traits can only be used if the trait is in scope
help: trait `Hello` which provides `hello` is implemented but not in scope; perhaps you want to import it
|
1 + use crate::main::Hello;
|
```
### Desired output
```Shell
error[E0599]: no method named `hello` found for unit type `()` in the current scope
--> src/main.rs:11:8
|
4 | fn hello(&self) {
| ----- the method is available for `()` here
...
11 | ().hello();
| ^^^^^ method not found in `()`
|
= help: items from traits can only be used if the trait is in scope
help: trait `Hello` which provides `hello` is implemented in a block and cannot be used here
```
### Rationale and extra context
There is no way to import a block-scoped trait, so it should not be suggested for import.
### Other cases
```Rust
```
### Rust Version
```Shell
> rustc --version --verbose
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,731,308,373 | pytorch | We don't have test coverage for aarch64 (ARM64) bfloat16 feature (__ARM_FEATURE_BF16) | ### 🐛 Describe the bug
As demonstrated by #142370 and #142501, we have no builds that would catch compilation failures under `__ARM_FEATURE_BF16`, let alone run tests in this configuration.
### Versions
N/A
cc @seemethere @malfet @pytorch/pytorch-dev-infra | module: build,module: ci,triaged | medium | Critical |
2,731,328,390 | opencv | Is Qt Test necessary? It's not substantively used and it's a requirement for CVV on windows. | ### Describe the feature and motivation
this has been making me crazy:
https://github.com/opencv/opencv/issues/23826
It's apparently only been an issue for me and two others over the last two years, and it involves intricacies of cmake and cuda that are beyond my capabilities.
But it seems entirely caused by QTest, and I'm pretty sure the only reference to QTest in OpenCV is this one function in Window_QT.cpp in HighGUI:
```cpp
//Need more test here !
void CvWindow::keyPressEvent(QKeyEvent *evnt)
{
    int key = evnt->key();
    const Qt::Key qtkey = static_cast<Qt::Key>(key);

    if ( isTranslatableKey( qtkey ) )
        key = static_cast<int>( QTest::keyToAscii( qtkey ) );
    else
        key = evnt->nativeVirtualKey(); //same codes as returned by GTK-based backend

    //control plus (Z, +, -, up, down, left, right) are used for zoom/panning functions
    if (evnt->modifiers() != Qt::ControlModifier)
    {
        mutexKey.lock();
        last_key = key;
        mutexKey.unlock();
        key_pressed.wakeAll();
        //evnt->accept();
    }

    QWidget::keyPressEvent(evnt);
}
```
```cpp
static bool isTranslatableKey(Qt::Key key)
{
    // https://github.com/opencv/opencv/issues/21899
    // https://doc.qt.io/qt-5/qt.html#Key-enum
    // https://doc.qt.io/qt-6/qt.html#Key-enum
    // https://github.com/qt/qtbase/blob/dev/src/testlib/qasciikey.cpp
    bool ret = false;
    switch ( key )
    {
        // Special keys
        case Qt::Key_Escape:
        case Qt::Key_Tab:
        case Qt::Key_Backtab:
        case Qt::Key_Backspace:
        case Qt::Key_Enter:
        case Qt::Key_Return:
            ret = true;
            break;
        // latin-1 keys.
        default:
            ret = (
                ( ( Qt::Key_Space <= key ) && ( key <= Qt::Key_AsciiTilde ) ) // 0x20--0x7e
                ||
                ( ( Qt::Key_nobreakspace <= key ) && ( key <= Qt::Key_ssharp ) ) // 0x0a0--0x0de
                ||
                ( key == Qt::Key_division ) // 0x0f7
                ||
                ( key == Qt::Key_ydiaeresis ) // 0x0ff
            );
            break;
    }
    return ret;
}
```
If isTranslatableKey returns true, it calls QTest::keyToAscii:
```cpp
char QTest::keyToAscii(Qt::Key key)
{
switch (key) {
case Qt::Key_Backspace: return 0x8; //BS
case Qt::Key_Tab: return 0x09; // HT
case Qt::Key_Backtab: return 0x0b; // VT
case Qt::Key_Enter:
case Qt::Key_Return: return 0x0d; // CR
case Qt::Key_Escape: return 0x1b; // ESC
case Qt::Key_Space: return 0x20; // 7 bit printable ASCII
case Qt::Key_Exclam: return 0x21;
case Qt::Key_QuoteDbl: return 0x22;
case Qt::Key_NumberSign: return 0x23;
case Qt::Key_Dollar: return 0x24;
case Qt::Key_Percent: return 0x25;
case Qt::Key_Ampersand: return 0x26;
case Qt::Key_Apostrophe: return 0x27;
case Qt::Key_ParenLeft: return 0x28;
case Qt::Key_ParenRight: return 0x29;
case Qt::Key_Asterisk: return 0x2a;
case Qt::Key_Plus: return 0x2b;
case Qt::Key_Comma: return 0x2c;
case Qt::Key_Minus: return 0x2d;
case Qt::Key_Period: return 0x2e;
case Qt::Key_Slash: return 0x2f;
case Qt::Key_0: return 0x30;
case Qt::Key_1: return 0x31;
case Qt::Key_2: return 0x32;
case Qt::Key_3: return 0x33;
case Qt::Key_4: return 0x34;
case Qt::Key_5: return 0x35;
case Qt::Key_6: return 0x36;
case Qt::Key_7: return 0x37;
case Qt::Key_8: return 0x38;
case Qt::Key_9: return 0x39;
case Qt::Key_Colon: return 0x3a;
case Qt::Key_Semicolon: return 0x3b;
case Qt::Key_Less: return 0x3c;
case Qt::Key_Equal: return 0x3d;
case Qt::Key_Greater: return 0x3e;
case Qt::Key_Question: return 0x3f;
case Qt::Key_At: return 0x40;
case Qt::Key_A: return 0x61; // 0x41 == 'A', 0x61 == 'a'
case Qt::Key_B: return 0x62;
case Qt::Key_C: return 0x63;
case Qt::Key_D: return 0x64;
case Qt::Key_E: return 0x65;
case Qt::Key_F: return 0x66;
case Qt::Key_G: return 0x67;
case Qt::Key_H: return 0x68;
case Qt::Key_I: return 0x69;
case Qt::Key_J: return 0x6a;
case Qt::Key_K: return 0x6b;
case Qt::Key_L: return 0x6c;
case Qt::Key_M: return 0x6d;
case Qt::Key_N: return 0x6e;
case Qt::Key_O: return 0x6f;
case Qt::Key_P: return 0x70;
case Qt::Key_Q: return 0x71;
case Qt::Key_R: return 0x72;
case Qt::Key_S: return 0x73;
case Qt::Key_T: return 0x74;
case Qt::Key_U: return 0x75;
case Qt::Key_V: return 0x76;
case Qt::Key_W: return 0x77;
case Qt::Key_X: return 0x78;
case Qt::Key_Y: return 0x79;
case Qt::Key_Z: return 0x7a;
case Qt::Key_BracketLeft: return 0x5b;
case Qt::Key_Backslash: return 0x5c;
case Qt::Key_BracketRight: return 0x5d;
case Qt::Key_AsciiCircum: return 0x5e;
case Qt::Key_Underscore: return 0x5f;
case Qt::Key_QuoteLeft: return 0x60;
case Qt::Key_BraceLeft: return 0x7b;
case Qt::Key_Bar: return 0x7c;
case Qt::Key_BraceRight: return 0x7d;
case Qt::Key_AsciiTilde: return 0x7e;
case Qt::Key_Delete: return 0;
case Qt::Key_Insert: return 0; // = 0x1006,
case Qt::Key_Pause: return 0; // = 0x1008,
case Qt::Key_Print: return 0; // = 0x1009,
case Qt::Key_SysReq: return 0; // = 0x100a,
case Qt::Key_Clear: return 0; // = 0x100b,
case Qt::Key_Home: return 0; // = 0x1010, // cursor movement
case Qt::Key_End: return 0; // = 0x1011,
case Qt::Key_Left: return 0; // = 0x1012,
case Qt::Key_Up: return 0; // = 0x1013,
case Qt::Key_Right: return 0; // = 0x1014,
case Qt::Key_Down: return 0; // = 0x1015,
case Qt::Key_PageUp: return 0; // = 0x1016,
case Qt::Key_PageDown: return 0; // = 0x1017,
case Qt::Key_Shift: return 0; // = 0x1020, // modifiers
case Qt::Key_Control: return 0; // = 0x1021,
case Qt::Key_Meta: return 0; // = 0x1022,
case Qt::Key_Alt: return 0; // = 0x1023,
case Qt::Key_CapsLock: return 0; // = 0x1024,
case Qt::Key_NumLock: return 0; // = 0x1025,
case Qt::Key_ScrollLock: return 0; // = 0x1026,
case Qt::Key_F1: return 0; // = 0x1030, // function keys
case Qt::Key_F2: return 0; // = 0x1031,
case Qt::Key_F3: return 0; // = 0x1032,
case Qt::Key_F4: return 0; // = 0x1033,
case Qt::Key_F5: return 0; // = 0x1034,
case Qt::Key_F6: return 0; // = 0x1035,
case Qt::Key_F7: return 0; // = 0x1036,
case Qt::Key_F8: return 0; // = 0x1037,
case Qt::Key_F9: return 0; // = 0x1038,
case Qt::Key_F10: return 0; // = 0x1039,
case Qt::Key_F11: return 0; // = 0x103a,
case Qt::Key_F12: return 0; // = 0x103b,
case Qt::Key_F13: return 0; // = 0x103c,
case Qt::Key_F14: return 0; // = 0x103d,
case Qt::Key_F15: return 0; // = 0x103e,
case Qt::Key_F16: return 0; // = 0x103f,
case Qt::Key_F17: return 0; // = 0x1040,
case Qt::Key_F18: return 0; // = 0x1041,
case Qt::Key_F19: return 0; // = 0x1042,
case Qt::Key_F20: return 0; // = 0x1043,
case Qt::Key_F21: return 0; // = 0x1044,
case Qt::Key_F22: return 0; // = 0x1045,
case Qt::Key_F23: return 0; // = 0x1046,
case Qt::Key_F24: return 0; // = 0x1047,
case Qt::Key_F25: return 0; // = 0x1048, // F25 .. F35 only on X11
case Qt::Key_F26: return 0; // = 0x1049,
case Qt::Key_F27: return 0; // = 0x104a,
case Qt::Key_F28: return 0; // = 0x104b,
case Qt::Key_F29: return 0; // = 0x104c,
case Qt::Key_F30: return 0; // = 0x104d,
case Qt::Key_F31: return 0; // = 0x104e,
case Qt::Key_F32: return 0; // = 0x104f,
case Qt::Key_F33: return 0; // = 0x1050,
case Qt::Key_F34: return 0; // = 0x1051,
case Qt::Key_F35: return 0; // = 0x1052,
case Qt::Key_Super_L: return 0; // = 0x1053, // extra keys
case Qt::Key_Super_R: return 0; // = 0x1054,
case Qt::Key_Menu: return 0; // = 0x1055,
case Qt::Key_Hyper_L: return 0; // = 0x1056,
case Qt::Key_Hyper_R: return 0; // = 0x1057,
case Qt::Key_Help: return 0; // = 0x1058,
case Qt::Key_Direction_L: return 0; // = 0x1059,
case Qt::Key_Direction_R: return 0; // = 0x1060,
// Latin 1 codes adapted from X: keysymdef.h,v 1.21 94/08/28 16:17:06
case Qt::Key_nobreakspace: return char(0xa0);
case Qt::Key_exclamdown: return char(0xa1);
case Qt::Key_cent: return char(0xa2);
case Qt::Key_sterling: return char(0xa3);
case Qt::Key_currency: return char(0xa4);
case Qt::Key_yen: return char(0xa5);
case Qt::Key_brokenbar: return char(0xa6);
case Qt::Key_section: return char(0xa7);
case Qt::Key_diaeresis: return char(0xa8);
case Qt::Key_copyright: return char(0xa9);
case Qt::Key_ordfeminine: return char(0xaa);
case Qt::Key_guillemotleft: return char(0xab); // left angle quotation mar
case Qt::Key_notsign: return char(0xac);
case Qt::Key_hyphen: return char(0xad);
case Qt::Key_registered: return char(0xae);
case Qt::Key_macron: return char(0xaf);
case Qt::Key_degree: return char(0xb0);
case Qt::Key_plusminus: return char(0xb1);
case Qt::Key_twosuperior: return char(0xb2);
case Qt::Key_threesuperior: return char(0xb3);
case Qt::Key_acute: return char(0xb4);
case Qt::Key_micro: return char(0xb5);
case Qt::Key_paragraph: return char(0xb6);
case Qt::Key_periodcentered: return char(0xb7);
case Qt::Key_cedilla: return char(0xb8);
case Qt::Key_onesuperior: return char(0xb9);
case Qt::Key_masculine: return char(0xba);
case Qt::Key_guillemotright: return char(0xbb); // right angle quotation mar
case Qt::Key_onequarter: return char(0xbc);
case Qt::Key_onehalf: return char(0xbd);
case Qt::Key_threequarters: return char(0xbe);
case Qt::Key_questiondown: return char(0xbf);
case Qt::Key_Agrave: return char(0xc0);
case Qt::Key_Aacute: return char(0xc1);
case Qt::Key_Acircumflex: return char(0xc2);
case Qt::Key_Atilde: return char(0xc3);
case Qt::Key_Adiaeresis: return char(0xc4);
case Qt::Key_Aring: return char(0xe5);
case Qt::Key_AE: return char(0xe6);
case Qt::Key_Ccedilla: return char(0xc7);
case Qt::Key_Egrave: return char(0xc8);
case Qt::Key_Eacute: return char(0xc9);
case Qt::Key_Ecircumflex: return char(0xca);
case Qt::Key_Ediaeresis: return char(0xcb);
case Qt::Key_Igrave: return char(0xcc);
case Qt::Key_Iacute: return char(0xcd);
case Qt::Key_Icircumflex: return char(0xce);
case Qt::Key_Idiaeresis: return char(0xcf);
case Qt::Key_ETH: return char(0xd0);
case Qt::Key_Ntilde: return char(0xd1);
case Qt::Key_Ograve: return char(0xd2);
case Qt::Key_Oacute: return char(0xd3);
case Qt::Key_Ocircumflex: return char(0xd4);
case Qt::Key_Otilde: return char(0xd5);
case Qt::Key_Odiaeresis: return char(0xd6);
case Qt::Key_multiply: return char(0xd7);
case Qt::Key_Ooblique: return char(0xf8);
case Qt::Key_Ugrave: return char(0xd9);
case Qt::Key_Uacute: return char(0xda);
case Qt::Key_Ucircumflex: return char(0xdb);
case Qt::Key_Udiaeresis: return char(0xdc);
case Qt::Key_Yacute: return char(0xdd);
case Qt::Key_THORN: return char(0xde);
case Qt::Key_ssharp: return char(0xdf);
case Qt::Key_division: return char(0xf7);
case Qt::Key_ydiaeresis: return char(0xff);
// multimedia/internet keys - ignored by default - see QKeyEvent c'tor
case Qt::Key_Back : return 0; // = 0x1061,
case Qt::Key_Forward : return 0; // = 0x1062,
case Qt::Key_Stop : return 0; // = 0x1063,
case Qt::Key_Refresh : return 0; // = 0x1064,
case Qt::Key_VolumeDown: return 0; // = 0x1070,
case Qt::Key_VolumeMute : return 0; // = 0x1071,
case Qt::Key_VolumeUp: return 0; // = 0x1072,
case Qt::Key_BassBoost: return 0; // = 0x1073,
case Qt::Key_BassUp: return 0; // = 0x1074,
case Qt::Key_BassDown: return 0; // = 0x1075,
case Qt::Key_TrebleUp: return 0; // = 0x1076,
case Qt::Key_TrebleDown: return 0; // = 0x1077,
case Qt::Key_MediaPlay : return 0; // = 0x1080,
case Qt::Key_MediaStop : return 0; // = 0x1081,
case Qt::Key_MediaPrevious : return 0; // = 0x1082,
case Qt::Key_MediaNext : return 0; // = 0x1083,
case Qt::Key_MediaRecord: return 0; // = 0x1084,
case Qt::Key_HomePage : return 0; // = 0x1090,
case Qt::Key_Favorites : return 0; // = 0x1091,
case Qt::Key_Search : return 0; // = 0x1092,
case Qt::Key_Standby: return 0; // = 0x1093,
case Qt::Key_OpenUrl: return 0; // = 0x1094,
case Qt::Key_LaunchMail : return 0; // = 0x10a0,
case Qt::Key_LaunchMedia: return 0; // = 0x10a1,
case Qt::Key_Launch0 : return 0; // = 0x10a2,
case Qt::Key_Launch1 : return 0; // = 0x10a3,
case Qt::Key_Launch2 : return 0; // = 0x10a4,
case Qt::Key_Launch3 : return 0; // = 0x10a5,
case Qt::Key_Launch4 : return 0; // = 0x10a6,
case Qt::Key_Launch5 : return 0; // = 0x10a7,
case Qt::Key_Launch6 : return 0; // = 0x10a8,
case Qt::Key_Launch7 : return 0; // = 0x10a9,
case Qt::Key_Launch8 : return 0; // = 0x10aa,
case Qt::Key_Launch9 : return 0; // = 0x10ab,
case Qt::Key_LaunchA : return 0; // = 0x10ac,
case Qt::Key_LaunchB : return 0; // = 0x10ad,
case Qt::Key_LaunchC : return 0; // = 0x10ae,
case Qt::Key_LaunchD : return 0; // = 0x10af,
case Qt::Key_LaunchE : return 0; // = 0x10b0,
case Qt::Key_LaunchF : return 0; // = 0x10b1,
// Keypad navigation keys
case Qt::Key_Select : return 0; // = 0x01010000
case Qt::Key_Yes : return 0; // = 0x01010001
case Qt::Key_No : return 0; // = 0x01010002
default: QTEST_ASSERT(false); return 0;
}
}
```
Currently, if you don't have QTest, then you lose Qt, and on Windows you lose CVV.
What about just including that implementation in the calling function, or as a separate static function?
| feature,category: build/install | low | Critical |
2,731,333,511 | pytorch | DISABLED test_run_decompositions_preserve_handle (__main__.TestNumericDebugger) | Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_run_decompositions_preserve_handle&suite=TestNumericDebugger&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/34205786469).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_run_decompositions_preserve_handle`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr @malfet @albanD | oncall: quantization,triaged,module: flaky-tests,module: macos,skipped | low | Critical |
2,731,343,430 | PowerToys | [Peek][PreviewPane] Support for .resx and .resw files | Allow users to quickly preview resource files (.resx and .resw) in Peek and Explorer PreviewPane. Both are XML files and can be easily previewed in the Monaco editor. | Idea-Enhancement,Help Wanted,Resolution-Fix Committed,Product-File Explorer,Product-Peek | low | Major |
2,731,349,097 | flutter | [Android] Modify sensitive content support implementation to use JNIgen | The initial implementation of sensitive content support on Android (to address https://github.com/flutter/flutter/issues/150218) will be implemented with method channels. In the future, the plan is to convert the implementation to use JNIgen to avoid a possible race condition between marking a view as sensitive and displaying the view before it is marked by Flutter. This issue is to track that work. | platform-android,framework,P2,team-android,triaged-android | low | Minor |
2,731,362,544 | flutter | [Android] Un-mark sensitive content when it is not visible on screen | The initial implementation of sensitive content support on Android (to address https://github.com/flutter/flutter/issues/150218) will not include logic that detects whether a sensitive content widget is still visible, meaning a `View` may remain obscured when there is no longer sensitive content on screen. As a follow-up to that initial implementation, there should be work to ensure that the screen is not obscured when no sensitive content widgets are on screen, limiting over-obscuring, since marking any one widget as sensitive obscures the entire `FlutterView`.
To implement this, the following use cases must be considered:
- [ ] Scrolling: https://docs.flutter.dev/ui/widgets/scrolling
- [ ] Navigation: https://docs.flutter.dev/ui/navigation
- [ ] The [`Visibility`](https://api.flutter.dev/flutter/widgets/Visibility-class.html) widget | platform-android,framework,P2,team-android,triaged-android | low | Minor |
2,731,396,319 | pytorch | [v.2.6.0] Release Tracker | We cut a [release branch](https://github.com/pytorch/pytorch/tree/release/2.6) for the 2.6.0 release.
Our plan from this point is roughly:
* Phase 1 (until 1/13/25): work on finalizing the release branch
* Phase 2 (after 1/13/25): perform extended integration/stability/performance testing based on Release Candidate builds.
This issue is for tracking cherry-picks to the release branch.
## Cherry-Pick Criteria
**Phase 1 (until 1/13/25):**
Only low-risk changes may be cherry-picked from main:
1. Fixes to regressions against the most recent minor release (e.g. 2.5.x for this release; see [module: regression issue list](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22module%3A+regression%22+))
2. Critical fixes for: [silent correctness](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+correctness+%28silent%29%22), [backwards compatibility](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+bc-breaking%22+), [crashes](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+crash%22+), [deadlocks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+deadlock%22+), (large) [memory leaks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+memory+usage%22+)
3. Critical fixes to new features introduced in the most recent minor release (e.g. 2.5.x for this release)
4. Test/CI fixes
5. Documentation improvements
6. Compilation fixes or ifdefs required for different versions of the compilers or third-party libraries
7. Release branch specific changes (e.g. change version identifiers)
Any other change requires special dispensation from the release managers (currently @atalman, @malfet, @kit1980). If this applies to your change please write "Special Dispensation" in the "Criteria Category:" template below and explain.
**Phase 2 (after 1/13/25):**
Note that changes here require us to rebuild a Release Candidate and restart extended testing (likely delaying the release). Therefore, the only accepted changes are **Release-blocking** critical fixes for: [silent correctness](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+correctness+%28silent%29%22), [backwards compatibility](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+bc-breaking%22+), [crashes](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+crash%22+), [deadlocks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+deadlock%22+), (large) [memory leaks](https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3Aopen+label%3A%22topic%3A+memory+usage%22+)
Changes will likely require a discussion with the larger release team over VC or Slack.
## Cherry-Pick Process
1. Ensure your PR has landed in master. This does not apply for release-branch specific changes (see Phase 1 criteria).
2. Create (but do not land) a PR against the [release branch](https://github.com/pytorch/pytorch/tree/release/2.6).
<details>
```bash
# Find the hash of the commit you want to cherry pick
# (for example, abcdef12345)
git log
git fetch origin release/2.6
git checkout release/2.6
git cherry-pick -x abcdef12345
# Submit a PR based against 'release/2.6' either:
# via the GitHub UI
git push my-fork
# via the GitHub CLI
gh pr create --base release/2.6
```
You can also use the `@pytorchbot cherry-pick` command to cherry-pick your PR. To do this, just add a comment in your merged PR. For example:
```
@pytorchbot cherry-pick --onto release/2.6 -c docs
```
(`-c docs` is the category of your changes; adjust accordingly.)
For more information, see [pytorchbot cherry-pick docs](https://github.com/pytorch/pytorch/wiki/Bot-commands#cherry-pick).
</details>
3. Make a request below with the following format:
```
Link to landed trunk PR (if applicable):
*
Link to release branch PR:
*
Criteria Category:
*
```
4. Someone from the release team will reply with approved / denied or ask for more information.
5. If approved, someone from the release team will merge your PR once the tests pass. **Do not land the release branch PR yourself.**
**NOTE: Our normal tools (ghstack / ghimport, etc.) do not work on the release branch.**
The HUD link with branch CI status is provided here:
[HUD](https://hud.pytorch.org/hud/pytorch/pytorch/release%2F2.6)
### Versions
2.6.0
cc @seemethere @malfet @pytorch/pytorch-dev-infra | module: ci,triaged,release tracker | high | Critical |
2,731,398,460 | rust | Tracking Issue for `byte_search` | <!--
Thank you for creating a tracking issue!
Tracking issues are for tracking a feature from implementation to stabilization.
Make sure to include the relevant RFC for the feature if it has one.
If the new feature is small, it may be fine to skip the RFC process. In that
case, you can use `issue = "none"` in your initial implementation PR. The
reviewer will ask you to open a tracking issue if they agree your feature can be
added without an RFC.
-->
Feature gate: `#![feature(byte_search)]`
acp: https://github.com/rust-lang/libs-team/issues/499
This is a tracking issue for byte searching: searching for a particular pattern of bytes in a `&[u8]`, and potentially trimming, splitting or otherwise processing the input based on the matches of the pattern.
### Public API
The API mirrors the one for strings at [std::str::pattern::Pattern](https://doc.rust-lang.org/std/str/pattern/trait.Pattern.html).
```rust
pub trait BytePattern: Sized {
/// Associated searcher for this pattern
type Searcher<'a>: Searcher<'a>;
// etc.
}
```
and then adds a number of inherent methods for `&[u8]`. The exact set has not yet been decided, but will likely include (modulo slight naming changes):
- `contains_bytes`
- `find_bytes`
- `rfind_bytes`
- `split_bytes`
- `rsplit_bytes`
- `split_bytes_once`
- `rsplit_bytes_once`
- `splitn_bytes`
- `rsplitn_bytes`
- `replace_bytes`
- `replacen_bytes`
- `starts_with_bytes`
- `ends_with_bytes`
- `matches_bytes`
- `rmatches_bytes`
- `match_indices_bytes`
- `rmatch_indices_bytes`
- `trim_bytes_start_matches`
- `trim_bytes_end_matches`
- `strip_bytes_prefix`
- `strip_bytes_suffix`
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [ ] ACP: https://github.com/rust-lang/libs-team/issues/499#issuecomment-2532563916, https://github.com/rust-lang/libs-team/issues/311
- [ ] Implementation: #...
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,731,402,353 | pytorch | torch.linalg.solve fails on CPU with multiple threads and batch dimension | Running this code
```python
import torch
torch.set_num_threads(2)
n = 151
xtx = torch.eye(n).unsqueeze(0).repeat(2,1,1).contiguous()
xty = torch.ones(2,n)
x = torch.linalg.solve(xtx, xty)
```
gives this error
```
Intel MKL ERROR: Parameter 6 was incorrect on entry to SLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to SLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to SLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to SLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to SLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to SLASWP.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[12], line 6
4 xtx = torch.eye(n).unsqueeze(0).repeat(2,1,1).contiguous()
5 xty = torch.ones(2,n)
----> 6 x = torch.linalg.solve(xtx, xty)
RuntimeError: Pivots given to lu_solve must all be greater or equal to 1. Did you properly pass the result of lu_factor?
```
Smaller matrices, single-threaded execution, or dropping the batch dimension makes the problem go away.
The identity matrix should not cause `linalg.solve` to fail.
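Given that observation, a possible interim workaround (a sketch, not a fix) is to force single-threaded execution before solving:
```python
import torch

torch.set_num_threads(1)  # observed to avoid the MKL SLASWP failure

n = 151
xtx = torch.eye(n).unsqueeze(0).repeat(2, 1, 1).contiguous()
xty = torch.ones(2, n)
x = torch.linalg.solve(xtx, xty)  # succeeds when run single-threaded
```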
### Versions
This happens in 2.5.0 and 2.5.1. Does not happen in 1.12.1. I didn't try other versions.
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | triaged,module: mkl,module: linear algebra | low | Critical |
2,731,406,188 | terminal | [Group Policy] Process based enforcing single window mode | Is it possible to add a policy that enforces single-window mode based on the process that is executing the PowerShell or CMD script?
Use case: We have a backup tool that uses multiple PowerShell scripts/commands to do its work. It would be great if I could centrally configure WT to open the commands called from this specific process in single/independent-window mode.
**Implementation details:**
A list of calling process names/paths that should execute their scripts/commands in single window mode.
Example:
- Calling process: `MyApp.exe`
- Command: `powershell.exe -File MyScript.ps1`
- Policy entry: MyApp.exe | Issue-Feature,Product-Terminal,Needs-Tag-Fix,Area-GroupPolicy | low | Minor |
2,731,445,476 | vscode | "Clear All Output" is greyed out and doesn't clear execution count unless output is available | ### Applies To
- [X] Notebooks (.ipynb files)
- [ ] Interactive Window and/or Cell Scripts (.py files with #%% markers)
### What happened?
Create a new cell in an *.ipynb.
Run an import statement; the execution count increments.
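For example, a cell containing only the following acquires an execution count without producing any output:
```python
import os  # increments the cell's execution count; prints nothing
```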
**Expected:** this execution count could be cleared with the "Clear All Outputs" button; it cannot.

Add a print statement to the cell and run it.
The "Clear All Outputs" is now available and can be used to clear the execution count as well as outputs.

"Clear All Outputs" should be available to clear execution counts even if none of the cells have printed outputs.
### VS Code Version
1.86.2
### Jupyter Extension Version
2024.1.1
### Jupyter logs
_No response_
### Coding Language and Runtime Version
Python v 3.10.10
### Language Extension Version (if applicable)
_No response_
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
None | feature-request,notebook-commands | medium | Major |
2,731,456,884 | kubernetes | [FG:InPlacePodVerticalScaling] Inconsistent handling of memory limit decrease | /kind bug
When shrinking the pod-level memory limits (sum of container limits iff all containers have limits), the Kubelet checks the current pod memory usage, and doesn't apply the new limits if the new limits < current usage. However, the Kubelet doesn't place the same restriction on containers, and we don't require container runtimes to make the same check.
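To make the asymmetry concrete, here is a sketch of the pod-level check described above (illustrative Python pseudologic, not actual Kubelet code):
```python
def can_decrease_pod_memory_limit(new_pod_limit: int, current_pod_usage: int) -> bool:
    """Apply a pod-level memory-limit decrease only if the new limit
    still covers current usage; otherwise the resize stays pending."""
    return new_pod_limit >= current_pod_usage

# No equivalent per-container check is required, so a container's limit can
# drop below its own usage while the pod-level sum still passes.
```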
In practice, this means that for a single-container pod, decreasing the memory limit below usage will leave the resize indefinitely in progress, but for a multi-container pod, if some containers have sufficient free memory, you can shrink the memory limit below usage for one container in the pod, resulting in that container being OOM-killed.
What is the desired behavior here?
https://docs.google.com/document/d/1cEFLXKwNOSNLAkzyhoJUgkBW0OiX-9bXB_aJV7OAypw/edit?tab=t.0 provides more background information and several options for how this should be handled.
/sig node
/priority important-soon | kind/bug,priority/important-soon,sig/node,triage/accepted | low | Critical |
2,731,460,356 | react | [Compiler Bug]: Compiler fails to memoize hooks with no hook calls | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [X] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://github.com/billyjanitsch/react-compiler-hook-detection-bug
### Repro steps
Given the following three custom hooks:
```tsx
import {useCallback, useDebugValue} from 'react'
function useFoo() {
return () => 'foo'
}
function useBar() {
useDebugValue('bar')
return () => 'bar'
}
function useBaz() {
return useCallback(() => 'baz', [])
}
```
I'd expect the compiler to memoize all of them, but it only memoizes `useBar` and `useBaz`:
```tsx
import { useCallback, useDebugValue } from "react";
function useFoo() {
return () => "foo";
}
function useBar() {
useDebugValue("bar");
return _temp;
}
function _temp() {
return "bar";
}
function useBaz() {
return _temp2;
}
function _temp2() {
return "baz";
}
```
I'm guessing that it's because the compiler's hook detection logic looks for at least one hook call in the function body. I understand that it generally doesn't make sense to write a custom hook that doesn't use any other hooks, but the exception is when the custom hook would have used only `useMemo` and/or `useCallback`, such as `useBaz`. I expect the compiler to let me remove those hooks without losing memoization.
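A sketch of that guessed heuristic (illustrative Python, not the compiler's actual code):
```python
import re

def looks_like_hook(name: str, body: str) -> bool:
    """Guess: a use*-named function is treated as a hook only if its body
    calls at least one other use* function (so useFoo above is skipped)."""
    return name.startswith("use") and re.search(r"\buse[A-Z]\w*\s*\(", body) is not None
```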
This doesn't reproduce in the playground. I'm not sure why.
### How often does this bug happen?
Every time
### What version of React are you using?
19.0.0
### What version of React Compiler are you using?
19.0.0-beta-37ed2a7-20241206 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,731,465,844 | tauri | window.print in android | ### Describe the bug
The `window.print` function (JavaScript) does not work on Android: you can call it, but unlike on desktop, nothing happens.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[⚠] Environment
- OS: NixOS 25.5.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.46.4
✔ rsvg2: 2.58.3
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
⚠ rustup: not installed!
If you have rust installed some other way, we recommend uninstalling it
then use rustup instead. Visit https://rustup.rs/
⚠ Rust toolchain: couldn't be detected!
Maybe you don't have rustup installed? if so, Visit https://rustup.rs/
- node: 20.18.0
- pnpm: 9.14.2
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1 (outdated, latest: 2.2.0)
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,731,483,367 | tauri | [bug] Remote API Access from localhost does not inject window.__TAURI__ | ### Describe the bug
I am trying to access the Tauri API in my browser via `http://localhost:1420/`. This is intended for the release, not just development. I use the following snippet to check if the API is available.
```tsx
// App.tsx
useEffect(() => {
console.log("__TAURI__" in window ? "Tauri" : "Web");
}, []);
```
For the Tauri window, this logs `Tauri` as expected. For the browser window however, this logs `Web`.
I followed [the capabilities docs](https://v2.tauri.app/security/capabilities/#remote-api-access) to set this up.
```jsonc
// capabilities/remote-access.json
{
"$schema": "../gen/schemas/desktop-schema.json",
"identifier": "remote-access-capability",
"windows": [
"main"
],
"remote": {
"urls": [
"http://localhost/*"
]
},
"platforms": [
"windows"
],
"permissions": [
"core:default"
]
}
```
```jsonc
// tauri.conf.json
...
"app": {
...
"security": {
"csp": null,
"capabilities": [
"remote-access-capability"
]
},
"withGlobalTauri": true
}
```
Is this a mistake on my end?
### Reproduction
Here is a minimal reproduction example, starting from create-tauri-app.
https://github.com/bischoff-m/tauri-remote-api-issue/commit/aa0a1eefcbd49c89c099420ae979383ca03c2484
### Expected behavior
I expected the `window.__TAURI__` object to be accessible via `http://localhost/`.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
✔ WebView2: 131.0.2903.86
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.18.0
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-shell 🦀: 2.2.0
- @tauri-apps/plugin-shell : 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,731,523,044 | vscode | Allow for subtraction in Settings editor search queries | I realized while looking at https://github.com/microsoft/vscode/issues/201456 that allowing `-query` to subtract results could be pretty cool. | feature-request,settings-editor | low | Minor |
2,731,554,618 | PowerToys | Checkbox ticks in Markdown files have bad contrast | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Peek
### Steps to reproduce
Create a Markdown file using the following code and use Peek to preview its content:
````markdown
- [ ] Unchecked
- [x] Checked
````
### ✔️ Expected Behavior
Mockups:

or

### ❌ Actual Behavior

### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Peek | low | Minor |
2,731,565,713 | rust | Tracking issue for release notes of #132975: De-duplicate and improve definition of core::ffi::c_char |
This issue tracks the release notes text for #132975.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Compatibility Notes
- [More closely match clang behavior for definition of core::ffi::c_char](https://github.com/rust-lang/rust/pull/132975)
On many targets, this has changed the definition of c_char, resulting in potential compilation issues.
The new definition is more likely to be accurate for the corresponding C definition on the relevant target.
The libc crate matches this change as of its 0.2.169 release.
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @arichardson, @BurntSushi, @tgross35 -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,T-libs,relnotes-tracking-issue | low | Minor |
2,731,571,056 | pytorch | Add [low, high] interval for torch.distributions.beta | ### 🚀 The feature, motivation and pitch
I think it would be nice to extend torch.distributions.Beta with two additional parameters, "low" and "high", similar to the ones in torch.distributions.Uniform, to allow scaling the beta distribution to a custom interval [low, high] instead of being limited to [0, 1].
### Alternatives
It is possible to obtain a similar result using an AffineTransform, `dist = TransformedDistribution(Beta(concentration1=alpha, concentration0=beta), [AffineTransform(loc=min_val, scale=max_val - min_val)])`, but I feel like it would be more intuitive to be able to specify the interval in the constructor, as is done for the Uniform distribution.
In addition, the resulting TransformedDistribution does not implement all the methods and properties of the original Beta distribution (mean, stddev, mode, etc.).
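For reference, a runnable version of that workaround might look like the following sketch (`low`, `high`, and the concentrations are arbitrary example values), with the mean recovered manually from the base distribution:
```python
import torch
from torch.distributions import Beta, TransformedDistribution
from torch.distributions.transforms import AffineTransform

low, high = -1.0, 3.0
base = Beta(torch.tensor(2.0), torch.tensor(5.0))
dist = TransformedDistribution(base, [AffineTransform(loc=low, scale=high - low)])

samples = dist.sample((4,))            # samples land in [low, high]
mean = low + (high - low) * base.mean  # must be derived by hand; dist.mean is not implemented
```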
### Additional context
_No response_
cc @fritzo @neerajprad @alicanb @nikitaved | module: distributions,triaged | low | Minor |
2,731,574,183 | langchain | Bedrock: Unknown parameter in toolConfig.tools[0].toolSpec: "strict", must be one of: name, description, inputSchema | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I created a StructuredPrompt with LangSmith and now am trying to use it with Promptim.
### Error Message and Stack Trace (if applicable)
```
Error running target function: Parameter validation failed:
Unknown parameter in toolConfig.tools[0].toolSpec: "strict", must be one of: name, description, inputSchema
Traceback (most recent call last):
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langsmith/evaluation/_arunner.py", line 1046, in
_aforward
await fn(
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langsmith/run_helpers.py", line 522, in
async_wrapper
raise e
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langsmith/run_helpers.py", line 508, in
async_wrapper
function_result = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/promptimizer/src/promptim/trainer.py", line 900, in predict
return await task.system_safe(prompt, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/promptimizer/src/promptim/trainer.py", line 358, in prompt_system
return await prompt_wrapper._postlude.ainvoke(prompt.invoke(inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3066, in
ainvoke
input = await asyncio.create_task(part(), context=context) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5366, in
ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py",
line 307, in ainvoke
llm_result = await self.agenerate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py",
line 796, in agenerate_prompt
return await self.agenerate(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py",
line 756, in agenerate
raise exceptions[0]
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py",
line 924, in _agenerate_with_cache
result = await self._agenerate(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py",
line 964, in _agenerate
return await run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 588, in
run_in_executor
return await asyncio.get_running_loop().run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/mambaforge/envs/py311/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 579, in
wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/langchain_aws/chat_models/bedrock_converse.py",
line 501, in _generate
response = self.client.converse(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/botocore/client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/botocore/client.py", line 980, in _make_api_call
request_dict = self._convert_to_request_dict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/botocore/client.py", line 1047, in
_convert_to_request_dict
request_dict = self._serializer.serialize_to_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/austinmw/Desktop/test_promptim/.venv/lib/python3.11/site-packages/botocore/validate.py", line 381, in
serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in toolConfig.tools[0].toolSpec: "strict", must be one of: name, description, inputSchema
```
### Description
See above
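A possible client-side workaround, sketched below, is to strip the OpenAI-specific `strict` field from the tool specs before they reach the boto3 `converse` call. The helper is hypothetical and assumes the plain-dict payload shape implied by the error message:
```python
def strip_strict(tool_config: dict) -> dict:
    """Drop the 'strict' key that Bedrock rejects from each toolSpec
    (hypothetical helper; payload shape taken from the error message)."""
    for tool in tool_config.get("tools", []):
        tool.get("toolSpec", {}).pop("strict", None)
    return tool_config
```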
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Wed Jul 31 20:49:39 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6000
> Python Version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:49:36) [Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.3.24
> langchain: 0.3.11
> langsmith: 0.2.2
> langchain_anthropic: 0.3.0
> langchain_aws: 0.2.9
> langchain_openai: 0.2.12
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.10
> anthropic: 0.40.0
> async-timeout: Installed. No version info available.
> boto3: 1.35.78
> defusedxml: 0.7.1
> httpx: 0.28.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.57.2
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.3
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,731,623,085 | ui | [bug]: running npx shadcn@latest init the modifications to tailwind.config.ts generate an error | ### Describe the bug
When running `npx shadcn@latest init`, the modifications made to `tailwind.config.ts` generate an error: the generated `backgroundImage: { video: 'url('../images/video/video.png')' }` is quoted incorrectly, because the inner single quotes terminate the string. It should be `backgroundImage: { video: 'url("../images/video/video.png")' }` instead.
### Affected component/components
none
### How to reproduce
Create a Next.js app, then run
`npx shadcn@latest init`

### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
:3000/:1
GET http://localhost:3000/ 500 (Internal Server Error)
index.tsx:935 Uncaught ModuleBuildError: Module build failed (from ./node_modules/next/dist/build/webpack/loaders/postcss-loader/src/index.js):
SyntaxError: Unexpected token, expected "," (311:44)
at unexpected (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\util.js:99:15)
at expect (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\util.js:86:5)
at parseObj (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:759:20)
at parseExprAtom (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:553:7)
at parseExprSubscripts (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:276:20)
at parseMaybeUnary (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:257:20)
at parseExprOps (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:185:20)
at parseMaybeConditional (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:157:20)
at baseParseMaybeAssign (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:142:20)
at tsParseMaybeAssignWithoutJSX (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\plugins\typescript.js:1580:45)
at tsParseMaybeAssign (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\plugins\typescript.js:1548:12)
at parseMaybeAssign (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:121:43)
at parseObjectProperty (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:851:7)
at parseObjPropValue (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:892:5)
at parseObj (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:808:5)
at parseExprAtom (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:553:7)
at parseExprSubscripts (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:276:20)
at parseMaybeUnary (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:257:20)
at parseExprOps (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:185:20)
at parseMaybeConditional (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:157:20)
at baseParseMaybeAssign (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:142:20)
at tsParseMaybeAssignWithoutJSX (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\plugins\typescript.js:1580:45)
at tsParseMaybeAssign (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\plugins\typescript.js:1548:12)
at parseMaybeAssign (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:121:43)
at parseObjectProperty (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:851:7)
at parseObjPropValue (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:892:5)
at parseObj (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:808:5)
at parseExprAtom (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:553:7)
at parseExprSubscripts (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:276:20)
at parseMaybeUnary (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:257:20)
at parseExprOps (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:185:20)
at parseMaybeConditional (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:157:20)
at baseParseMaybeAssign (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:142:20)
at tsParseMaybeAssignWithoutJSX (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\plugins\typescript.js:1580:45)
at tsParseMaybeAssign (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\plugins\typescript.js:1548:12)
at parseMaybeAssign (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:121:43)
at parseObjectProperty (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:851:7)
at parseObjPropValue (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\parser\traverser\expression.js:892:5)
at parseObj (F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\sucrase\dist\pars
getServerError @ nodeStackFrames.ts:30
eval @ index.tsx:935
setTimeout
hydrate @ index.tsx:922
await in hydrate
pageBootrap @ page-bootstrap.ts:22
eval @ next-dev.ts:21
Promise.then
eval @ next-dev.ts:20
./node_modules/next/dist/client/next-dev.js @ main.js:820
options.factory @ webpack.js:647
__webpack_require__ @ webpack.js:37
__webpack_exec__ @ main.js:1964
(anonymous) @ main.js:1965
webpackJsonpCallback @ webpack.js:1195
(anonymous) @ main.js:9
websocket.ts:27 [HMR] connected
hydration-error-info.ts:72 ./node_modules/flatpickr/dist/flatpickr.min.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[14].oneOf[12].use[2]!./node_modules/next/dist/build/webpack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[14].oneOf[12].use[3]!./node_modules/flatpickr/dist/flatpickr.min.css
SyntaxError: Unexpected token, expected "," (311:44)
overrideMethod @ hook.js:608
console.error @ hydration-error-info.ts:72
window.console.error @ setup-hydration-warning.ts:21
handleErrors @ hot-reloader-client.ts:199
processMessage @ hot-reloader-client.ts:295
eval @ hot-reloader-client.ts:82
handleMessage @ websocket.ts:34
hydration-error-info.ts:72 ./node_modules/jsvectormap/dist/jsvectormap.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[14].oneOf[12].use[2]!./node_modules/next/dist/build/webpack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[14].oneOf[12].use[3]!./node_modules/jsvectormap/dist/jsvectormap.css
SyntaxError: Unexpected token, expected "," (311:44)
overrideMethod @ hook.js:608
console.error @ hydration-error-info.ts:72
window.console.error @ setup-hydration-warning.ts:21
handleErrors @ hot-reloader-client.ts:199
processMessage @ hot-reloader-client.ts:295
eval @ hot-reloader-client.ts:82
handleMessage @ websocket.ts:34
hydration-error-info.ts:72 ./src/css/satoshi.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[14].oneOf[12].use[2]!./node_modules/next/dist/build/webpack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[14].oneOf[12].use[3]!./src/css/satoshi.css
SyntaxError: Unexpected token, expected "," (311:44)
overrideMethod @ hook.js:608
console.error @ hydration-error-info.ts:72
window.console.error @ setup-hydration-warning.ts:21
handleErrors @ hot-reloader-client.ts:199
processMessage @ hot-reloader-client.ts:295
eval @ hot-reloader-client.ts:82
handleMessage @ websocket.ts:34
hydration-error-info.ts:72 ./src/css/style.css.webpack[javascript/auto]!=!./node_modules/next/dist/build/webpack/loaders/css-loader/src/index.js??ruleSet[1].rules[14].oneOf[12].use[2]!./node_modules/next/dist/build/webpack/loaders/postcss-loader/src/index.js??ruleSet[1].rules[14].oneOf[12].use[3]!./src/css/style.css
SyntaxError: Unexpected token, expected "," (311:44)
overrideMethod @ hook.js:608
console.error @ hydration-error-info.ts:72
window.console.error @ setup-hydration-warning.ts:21
handleErrors @ hot-reloader-client.ts:199
processMessage @ hot-reloader-client.ts:295
eval @ hot-reloader-client.ts:82
handleMessage @ websocket.ts:34
hydration-error-info.ts:72 ./node_modules/flatpickr/dist/flatpickr.min.css
SyntaxError: Unexpected token, expected "," (311:44)
-- inner error --
SyntaxError: Unexpected token, expected "," (311:44)
Generated code for F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\next\dist\build\webpack\loaders\css-loader\src\index.js??ruleSet[1].rules[14].oneOf[12].use[2]!F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\next\dist\build\webpack\loaders\postcss-loader\src\index.js??ruleSet[1].rules[14].oneOf[12].use[3]!F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\flatpickr\dist\flatpickr.min.css
overrideMethod @ hook.js:608
console.error @ hydration-error-info.ts:72
window.console.error @ setup-hydration-warning.ts:21
handleErrors @ hot-reloader-client.ts:199
processMessage @ hot-reloader-client.ts:295
eval @ hot-reloader-client.ts:82
handleMessage @ websocket.ts:34
hydration-error-info.ts:72 ./node_modules/jsvectormap/dist/jsvectormap.css
SyntaxError: Unexpected token, expected "," (311:44)
-- inner error --
SyntaxError: Unexpected token, expected "," (311:44)
Generated code for F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\next\dist\build\webpack\loaders\css-loader\src\index.js??ruleSet[1].rules[14].oneOf[12].use[2]!F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\next\dist\build\webpack\loaders\postcss-loader\src\index.js??ruleSet[1].rules[14].oneOf[12].use[3]!F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\jsvectormap\dist\jsvectormap.css
overrideMethod @ hook.js:608
console.error @ hydration-error-info.ts:72
window.console.error @ setup-hydration-warning.ts:21
handleErrors @ hot-reloader-client.ts:199
processMessage @ hot-reloader-client.ts:295
eval @ hot-reloader-client.ts:82
handleMessage @ websocket.ts:34
hydration-error-info.ts:72 ./src/css/satoshi.css
SyntaxError: Unexpected token, expected "," (311:44)
-- inner error --
SyntaxError: Unexpected token, expected "," (311:44)
Generated code for F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\next\dist\build\webpack\loaders\css-loader\src\index.js??ruleSet[1].rules[14].oneOf[12].use[2]!F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\next\dist\build\webpack\loaders\postcss-loader\src\index.js??ruleSet[1].rules[14].oneOf[12].use[3]!F:\TRABAJO\GAMO\aas\gamo-dashboard\src\css\satoshi.css
overrideMethod @ hook.js:608
console.error @ hydration-error-info.ts:72
window.console.error @ setup-hydration-warning.ts:21
handleErrors @ hot-reloader-client.ts:199
processMessage @ hot-reloader-client.ts:295
eval @ hot-reloader-client.ts:82
handleMessage @ websocket.ts:34
hydration-error-info.ts:72 ./src/css/style.css
SyntaxError: Unexpected token, expected "," (311:44)
-- inner error --
SyntaxError: Unexpected token, expected "," (311:44)
Generated code for F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\next\dist\build\webpack\loaders\css-loader\src\index.js??ruleSet[1].rules[14].oneOf[12].use[2]!F:\TRABAJO\GAMO\aas\gamo-dashboard\node_modules\next\dist\build\webpack\loaders\postcss-loader\src\index.js??ruleSet[1].rules[14].oneOf[12].use[3]!F:\TRABAJO\GAMO\aas\gamo-dashboard\src\css\style.css
overrideMethod @ hook.js:608
console.error @ hydration-error-info.ts:72
window.console.error @ setup-hydration-warning.ts:21
handleErrors @ hot-reloader-client.ts:199
processMessage @ hot-reloader-client.ts:295
eval @ hot-reloader-client.ts:82
handleMessage @ websocket.ts:34
```
### System Info
```bash
windows 11
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,731,684,551 | godot | GLTFDocument.append_from_scene executed on a non-UI thread takes far longer than on the UI thread | ### Tested versions
Reproducible in Godot v4.0.4.stable
### System information
Mac & Android & iOS - Godot v4.0.4.stable
### Issue description
Try executing GLTFDocument.append_from_file on the UI thread and on a non-UI thread respectively; the time cost on the non-UI thread is much higher. For example, in the minimal reproduction project, appending on the UI thread takes 3436 ms,
but appending on another thread takes 117287 ms.
Hope to see your reply as soon as possible! Thanks so much!
### Steps to reproduce
Run the MRP's Main.tscn to check the result.
### Minimal reproduction project (MRP)
[MinimalProject.zip](https://github.com/user-attachments/files/18088789/MinimalProject.zip)
| bug,needs testing,topic:import,performance | low | Minor |
2,731,692,485 | pytorch | Segmentation fault in `replication_pad3d_backward` | ### 🐛 Describe the bug
Under specific inputs, `replication_pad3d_backward` triggered a crash.
```python
import torch
grad_output = torch.full((2, 0, 6, 8, 10,), 1, dtype=torch.float)
self = torch.full((2, 2, 4, 4, 4,), 1, dtype=torch.float)
padding = [3, 3, 2, 2, 1, 1]
torch.ops.aten.replication_pad3d_backward(grad_output, self, padding)
```
Output
```
Segmentation fault (core dumped)
```
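For reference, the crash lines up with the inconsistent channel dimension: replication padding never changes batch or channel sizes, so for `self` of shape `(2, 2, 4, 4, 4)` and this padding the only valid `grad_output` shape is `(2, 2, 6, 8, 10)`, yet `(2, 0, 6, 8, 10)` is accepted and dereferenced. A minimal user-side guard is sketched below (an illustration of the missing shape check, not the fix PyTorch would ship; the wrapper name is made up):

```python
import torch

def checked_replication_pad3d_backward(grad_output, self_, padding):
    # Hypothetical guard: the expected grad_output shape equals the forward
    # output shape -- batch/channel unchanged, each spatial dim grown by its
    # two pads (padding order: W_left, W_right, H_top, H_bottom, D_front, D_back).
    expected = (
        self_.size(0),
        self_.size(1),
        self_.size(2) + padding[4] + padding[5],
        self_.size(3) + padding[2] + padding[3],
        self_.size(4) + padding[0] + padding[1],
    )
    if tuple(grad_output.shape) != expected:
        raise ValueError(
            f"grad_output shape {tuple(grad_output.shape)} != expected {expected}")
    return torch.ops.aten.replication_pad3d_backward(grad_output, self_, padding)
```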
ASAN Output
```
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
=================================================================
==3988389==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7f7e67cfca59 bp 0x7ffc041e53f0 sp 0x7ffc041e5180 T0)
==3988389==The signal is caused by a READ memory access.
==3988389==Hint: address points to the zero page.
#0 0x7f7e67cfca59 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/PaddingKernel.cpp:380
#1 0x7f7e67d5d9ee in operator() /mnt/pytorch-2.5.0/aten/src/ATen/Parallel-inl.h:36
#2 0x7f7e67da7015 in __invoke_impl<void, at::parallel_for<at::native::(anonymous namespace)::cpu_padding_backward<float, at::native::(anonymous namespace)::ReplicationPad>(const at::Tensor&, const at::Tensor&, at::native::(anonymous namespace)::PaddingParams&)::<lambda(int64_t, int64_t)> >(int64_t, int64_t, int64_t, const at::native::(anonymous namespace)::cpu_padding_backward<float, at::native::(anonymous namespace)::ReplicationPad>(const at::Tensor&, const at::Tensor&, at::native::(anonymous namespace)::PaddingParams&)::<lambda(int64_t, int64_t)>&)::<lambda(int64_t, int64_t)>&, long int, long int> /usr/include/c++/11/bits/invoke.h:61
#3 0x7f7e67d98bcd in __invoke_r<void, at::parallel_for<at::native::(anonymous namespace)::cpu_padding_backward<float, at::native::(anonymous namespace)::ReplicationPad>(const at::Tensor&, const at::Tensor&, at::native::(anonymous namespace)::PaddingParams&)::<lambda(int64_t, int64_t)> >(int64_t, int64_t, int64_t, const at::native::(anonymous namespace)::cpu_padding_backward<float, at::native::(anonymous namespace)::ReplicationPad>(const at::Tensor&, const at::Tensor&, at::native::(anonymous namespace)::PaddingParams&)::<lambda(int64_t, int64_t)>&)::<lambda(int64_t, int64_t)>&, long int, long int> /usr/include/c++/11/bits/invoke.h:111
#4 0x7f7e67d842c3 in _M_invoke /usr/include/c++/11/bits/std_function.h:290
#5 0x7f7e5332ab04 in std::function<void (long, long)>::operator()(long, long) const /usr/include/c++/11/bits/std_function.h:590
#6 0x7f7e533152f0 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/ParallelNative.cpp:166
#7 0x7f7e533195d3 in __invoke_impl<void, at::internal::invoke_parallel(int64_t, int64_t, int64_t, const std::function<void(long int, long int)>&)::<lambda(int, size_t)>&, int, long unsigned int> /usr/include/c++/11/bits/invoke.h:61
#8 0x7f7e533190e1 in __invoke_r<void, at::internal::invoke_parallel(int64_t, int64_t, int64_t, const std::function<void(long int, long int)>&)::<lambda(int, size_t)>&, int, long unsigned int> /usr/include/c++/11/bits/invoke.h:111
#9 0x7f7e533189a5 in _M_invoke /usr/include/c++/11/bits/std_function.h:290
#10 0x7f7e5332a5f8 in std::function<void (int, unsigned long)>::operator()(int, unsigned long) const /usr/include/c++/11/bits/std_function.h:590
#11 0x7f7e53314e94 in _run_with_pool /mnt/pytorch-2.5.0/aten/src/ATen/ParallelNative.cpp:96
#12 0x7f7e53315a60 in at::internal::invoke_parallel(long, long, long, std::function<void (long, long)> const&) /mnt/pytorch-2.5.0/aten/src/ATen/ParallelNative.cpp:181
#13 0x7f7e67d5dcc2 in parallel_for<at::native::(anonymous namespace)::cpu_padding_backward<float, at::native::(anonymous namespace)::ReplicationPad>(const at::Tensor&, const at::Tensor&, at::native::(anonymous namespace)::PaddingParams&)::<lambda(int64_t, int64_t)> > /mnt/pytorch-2.5.0/aten/src/ATen/Parallel-inl.h:33
#14 0x7f7e67cfddf2 in cpu_padding_backward<float, at::native::(anonymous namespace)::ReplicationPad> /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/PaddingKernel.cpp:368
#15 0x7f7e67c59d59 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/PaddingKernel.cpp:693
#16 0x7f7e67c5a400 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/PaddingKernel.cpp:693
#17 0x7f7e67c5bb6d in replication_pad3d_backward_kernel_impl /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/PaddingKernel.cpp:693
#18 0x7f7e54a25908 in void at::native::DispatchStub<void (*)(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>), at::native::replication_pad3d_backward_kernel_DECLARE_DISPATCH_type>::operator()<at::Tensor&, at::Tensor const&, c10::ArrayRef<long>&>(c10::DeviceType, at::Tensor&, at::Tensor const&, c10::ArrayRef<long>&) /mnt/pytorch-2.5.0/aten/src/ATen/native/DispatchStub.h:233
#19 0x7f7e54a22508 in replication_pad3d_backward_out_cpu_template /mnt/pytorch-2.5.0/aten/src/ATen/native/ReplicationPadding.cpp:266
#20 0x7f7e54a236ea in at::native::replication_pad3d_backward_cpu(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>) /mnt/pytorch-2.5.0/aten/src/ATen/native/ReplicationPadding.cpp:347
#21 0x7f7e5870646d in wrapper_CPU__replication_pad3d_backward /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCPU.cpp:23994
#22 0x7f7e58b11792 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#23 0x7f7e58b11792 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#24 0x7f7e5789b37a in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#25 0x7f7e575a9b57 in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt> >(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:93
#26 0x7f7e575a9b57 in at::Tensor c10::Dispatcher::redispatch<at::Tensor, at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt> >(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>)> const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:714
#27 0x7f7e573c1b5f in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#28 0x7f7e573c1b5f in at::_ops::replication_pad3d_backward::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_2.cpp:12562
#29 0x7f7e6076051e in at::redispatch::replication_pad3d_backward_symint(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>) /mnt/pytorch-2.5.0/build/aten/src/ATen/RedispatchFunctions.h:14732
#30 0x7f7e605286a4 in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_2.cpp:15816
#31 0x7f7e6052912b in replication_pad3d_backward /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_2.cpp:15817
#32 0x7f7e606c27e5 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#33 0x7f7e606c27e5 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:485
#34 0x7f7e60730252 in call_functor_with_args_from_stack_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor(c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, c10::ArrayRef<c10::SymInt>), torch::autograd::VariableType::(anonymous namespace)::replication_pad3d_backward>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, c10::ArrayRef<c10::SymInt> > >, false, 0, 1, 2, const at::Tensor&, const at::Tensor&, c10::ArrayRef<c10::SymInt> > /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:506
#35 0x7f7e606f3a19 in call_functor_with_args_from_stack<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor(c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, c10::ArrayRef<c10::SymInt>), torch::autograd::VariableType::(anonymous namespace)::replication_pad3d_backward>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, c10::ArrayRef<c10::SymInt> > >, false> /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:518
#36 0x7f7e606c299a in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:584
#37 0x7f7e9b7b0bec in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
#38 0x7f7e9b7b36aa in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:46
#39 0x7f7e9cf4d0f1 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:747
#40 0x7f7e53199912 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:461
#41 0x7f7e63635610 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:465
#42 0x7f7e654e5a16 in operator() /mnt/pytorch-2.5.0/torch/csrc/jit/runtime/register_c10_ops.cpp:13
#43 0x7f7e654e89b0 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/include/c++/11/bits/invoke.h:61
#44 0x7f7e654e878f in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/include/c++/11/bits/invoke.h:111
#45 0x7f7e654e8448 in _M_invoke /usr/include/c++/11/bits/std_function.h:290
#46 0x7f7e9c3b1020 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/include/c++/11/bits/std_function.h:590
#47 0x7f7e9c3a2416 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /mnt/pytorch-2.5.0/aten/src/ATen/core/stack.h:42
#48 0x7f7e9c394b45 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args const&, pybind11::kwargs const&, std::optional<c10::DispatchKey>) /mnt/pytorch-2.5.0/torch/csrc/jit/python/pybind_utils.cpp:813
#49 0x7f7e9c3964e0 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args const&, pybind11::kwargs const&, bool, std::optional<c10::DispatchKey>) /mnt/pytorch-2.5.0/torch/csrc/jit/python/pybind_utils.cpp:892
#50 0x7f7e9bebb6ba in operator() /mnt/pytorch-2.5.0/torch/csrc/jit/python/init.cpp:1771
#51 0x7f7e9bfd7e7e in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&)>::<lambda(const pybind11::args&, const pybind11::kwargs&)>&, 0, 1, pybind11::detail::void_type> /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/cast.h:1631
#52 0x7f7e9bfa5944 in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&)>::<lambda(const pybind11::args&, const pybind11::kwargs&)>&> /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/cast.h:1600
#53 0x7f7e9bf239b4 in operator() /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:296
#54 0x7f7e9bf23bc1 in _FUN /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:267
#55 0x7f7e9aa46c89 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:987
#56 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
#57 0x6153f4 in _PyObject_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:361
#58 0x54df40 in PyObject_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:373
#59 0x54df40 in PyCFunction_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:381
#60 0x54df40 in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:1355
#61 0x606f14 in _PyEval_EvalFrame /usr/local/src/conda/python-3.13.0/Include/internal/pycore_ceval.h:119
#62 0x606f14 in _PyEval_Vector /usr/local/src/conda/python-3.13.0/Python/ceval.c:1806
#63 0x606f14 in _PyFunction_Vectorcall /usr/local/src/conda/python-3.13.0/Objects/call.c:413
#64 0x606f14 in _PyObject_VectorcallDictTstate /usr/local/src/conda/python-3.13.0/Objects/call.c:135
#65 0x65dc10 in _PyObject_Call_Prepend /usr/local/src/conda/python-3.13.0/Objects/call.c:504
#66 0x65dc10 in slot_tp_call /usr/local/src/conda/python-3.13.0/Objects/typeobject.c:9537
#67 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
#68 0x549ece in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:813
#69 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
#70 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
#71 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
#72 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
#73 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
#74 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
#75 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
#76 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
#77 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
#78 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
#79 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
#80 0x7f7ea3b45d8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#81 0x7f7ea3b45e3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#82 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /mnt/pytorch-2.5.0/aten/src/ATen/native/cpu/PaddingKernel.cpp:380 in operator()
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: crash,module: nn,triaged,actionable,module: edge cases | low | Critical |
2,731,694,067 | pytorch | Segmentation fault in `replication_pad2d_backward` | ### 🐛 Describe the bug
Under specific inputs, `replication_pad2d_backward` triggered a crash.
```python
import torch
grad_output = torch.full((2, 0, 6, 8,), 1, dtype=torch.float)
self = torch.full((2, 2, 4, 4,), 1, dtype=torch.float)
padding = [2, 2, 1, 1]
torch.ops.aten.replication_pad2d_backward(grad_output, self, padding)
```
Output
```
Segmentation fault (core dumped)
```
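Same root cause as the 3-D report: the channel dimension of `grad_output` (0) disagrees with `self` (2). Running the padding forward through the public API shows the only shape the backward should accept (a quick diagnostic sketch, not a fix):

```python
import torch
import torch.nn.functional as F

self_ = torch.full((2, 2, 4, 4), 1.0)
padding = [2, 2, 1, 1]  # (left, right, top, bottom)

# The forward replication pad defines the shape grad_output must match:
out = F.pad(self_, padding, mode="replicate")
print(out.shape)  # torch.Size([2, 2, 6, 8]) -- channels stay 2, never 0
```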
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: crash,module: nn,triaged,actionable,module: edge cases | low | Critical |
2,731,701,635 | pytorch | Floating point exception (core dumped) in `convolution_backward` | ### 🐛 Describe the bug
Under specific inputs, `convolution_backward` triggered a crash.
```python
import torch
grad_output = torch.full((5, 4, 3,), 0.5, dtype=torch.double)
input = torch.full((2,5,8,), 0.5, dtype=torch.double)
weight = torch.full((2,5,8,), 1, dtype=torch.double)
bias_sizes = [1]
stride = [65534]
padding = [1]
dilation = [536870912]
transposed = True
output_padding = [1879048192]
groups = 0
output_mask = [0,1,1]
torch.ops.aten.convolution_backward(grad_output, input, weight, bias_sizes, stride, padding, dilation, transposed, output_padding, groups, output_mask)
```
Output
```
Floating point exception (core dumped)
```
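The ASAN trace below lands in `check_shape_forward` (Convolution.cpp:678), whose per-group channel checks divide by `groups`, so `groups = 0` appears to hit an integer division by zero before any shape validation can raise. A hedged user-side guard (a sketch only; the wrapper name is invented):

```python
import torch

def checked_convolution_backward(grad_output, input, weight, bias_sizes,
                                 stride, padding, dilation, transposed,
                                 output_padding, groups, output_mask):
    # The native shape check divides channel counts by `groups`, so a
    # non-positive group count must be rejected before reaching C++.
    if groups < 1:
        raise ValueError(f"groups must be a positive integer, got {groups}")
    return torch.ops.aten.convolution_backward(
        grad_output, input, weight, bias_sizes, stride, padding, dilation,
        transposed, output_padding, groups, output_mask)
```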
ASAN Output
```
=================================================================
==4097879==ERROR: AddressSanitizer: FPE on unknown address 0x7f245990af66 (pc 0x7f245990af66 bp 0x7ffcce6b6ed0 sp 0x7ffcce6b66b0 T0)
#0 0x7f245990af66 in check_shape_forward<long int> /mnt/pytorch-2.5.0/aten/src/ATen/native/Convolution.cpp:678
#1 0x7f245990c8dc in check_shape_backward<long int> /mnt/pytorch-2.5.0/aten/src/ATen/native/Convolution.cpp:742
#2 0x7f2459900bef in at::native::convolution_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, std::array<bool, 3ul>) /mnt/pytorch-2.5.0/aten/src/ATen/native/Convolution.cpp:2020
#3 0x7f245e584d2b in wrapper_CompositeExplicitAutograd__convolution_backward /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCompositeExplicitAutograd.cpp:1763
#4 0x7f245e8e0cee in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#5 0x7f245e8e0cee in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#6 0x7f245dc45bfa in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::callUnboxedKernelFunction<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul> >(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>&&, c10::ArrayRef<c10::SymInt>&&, c10::ArrayRef<c10::SymInt>&&, c10::ArrayRef<c10::SymInt>&&, bool&&, c10::ArrayRef<c10::SymInt>&&, c10::SymInt&&, std::array<bool, 3ul>&&) /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:53
#7 0x7f245dad3adb in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::KernelFunction::call<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul> >(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:93
#8 0x7f245dad3adb in std::tuple<at::Tensor, at::Tensor, at::Tensor> c10::Dispatcher::redispatch<std::tuple<at::Tensor, at::Tensor, at::Tensor>, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul> >(c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>)> const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:714
#9 0x7f245d81bc54 in c10::TypedOperatorHandle<std::tuple<at::Tensor, at::Tensor, at::Tensor> (at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#10 0x7f245d81bc54 in at::_ops::convolution_backward::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_4.cpp:1824
#11 0x7f24669a1251 in at::redispatch::convolution_backward_symint(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3ul>) /mnt/pytorch-2.5.0/build/aten/src/ATen/RedispatchFunctions.h:1907
#12 0x7f24666e36aa in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_4.cpp:7715
#13 0x7f24666e501c in convolution_backward /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_4.cpp:7716
#14 0x7f24668f42f7 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#15 0x7f24668f42f7 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:485
#16 0x7f2466971891 in call_functor_with_args_from_stack_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor, at::Tensor>(c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3>), torch::autograd::VariableType::(anonymous namespace)::convolution_backward>, std::tuple<at::Tensor, at::Tensor, at::Tensor>, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3> > >, false, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3> > /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:506
#17 0x7f2466942a7d in call_functor_with_args_from_stack<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<std::tuple<at::Tensor, at::Tensor, at::Tensor>(c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3>), torch::autograd::VariableType::(anonymous namespace)::convolution_backward>, std::tuple<at::Tensor, at::Tensor, at::Tensor>, c10::guts::typelist::typelist<c10::DispatchKeySet, const at::Tensor&, const at::Tensor&, const at::Tensor&, c10::OptionalArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, bool, c10::ArrayRef<c10::SymInt>, c10::SymInt, std::array<bool, 3> > >, false> /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:518
#18 0x7f24668f4607 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:584
#19 0x7f24a0fd5bec in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/BoxedKernel_impl.h:41
#20 0x7f24a0fd86aa in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/KernelFunction_impl.h:46
#21 0x7f24a27720f1 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:747
#22 0x7f24589be912 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:461
#23 0x7f2468e5a610 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:465
#24 0x7f246ad0aa16 in operator() /mnt/pytorch-2.5.0/torch/csrc/jit/runtime/register_c10_ops.cpp:13
#25 0x7f246ad0d9b0 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/include/c++/11/bits/invoke.h:61
#26 0x7f246ad0d78f in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::IValue> >&> /usr/include/c++/11/bits/invoke.h:111
#27 0x7f246ad0d448 in _M_invoke /usr/include/c++/11/bits/std_function.h:290
#28 0x7f24a1bd6020 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/include/c++/11/bits/std_function.h:590
#29 0x7f24a1bc7416 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /mnt/pytorch-2.5.0/aten/src/ATen/core/stack.h:42
#30 0x7f24a1bb9b45 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args const&, pybind11::kwargs const&, std::optional<c10::DispatchKey>) /mnt/pytorch-2.5.0/torch/csrc/jit/python/pybind_utils.cpp:813
#31 0x7f24a1bbb4e0 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args const&, pybind11::kwargs const&, bool, std::optional<c10::DispatchKey>) /mnt/pytorch-2.5.0/torch/csrc/jit/python/pybind_utils.cpp:892
#32 0x7f24a16e06ba in operator() /mnt/pytorch-2.5.0/torch/csrc/jit/python/init.cpp:1771
#33 0x7f24a17fce7e in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&)>::<lambda(const pybind11::args&, const pybind11::kwargs&)>&, 0, 1, pybind11::detail::void_type> /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/cast.h:1631
#34 0x7f24a17ca944 in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&)>::<lambda(const pybind11::args&, const pybind11::kwargs&)>&> /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/cast.h:1600
#35 0x7f24a17489b4 in operator() /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:296
#36 0x7f24a1748bc1 in _FUN /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:267
#37 0x7f24a026bc89 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /mnt/pytorch-2.5.0/third_party/pybind11/include/pybind11/pybind11.h:987
#38 0x56abf2 in cfunction_call /usr/local/src/conda/python-3.13.0/Objects/methodobject.c:540
#39 0x6153f4 in _PyObject_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:361
#40 0x54df40 in PyObject_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:373
#41 0x54df40 in PyCFunction_Call /usr/local/src/conda/python-3.13.0/Objects/call.c:381
#42 0x54df40 in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:1355
#43 0x606f14 in _PyEval_EvalFrame /usr/local/src/conda/python-3.13.0/Include/internal/pycore_ceval.h:119
#44 0x606f14 in _PyEval_Vector /usr/local/src/conda/python-3.13.0/Python/ceval.c:1806
#45 0x606f14 in _PyFunction_Vectorcall /usr/local/src/conda/python-3.13.0/Objects/call.c:413
#46 0x606f14 in _PyObject_VectorcallDictTstate /usr/local/src/conda/python-3.13.0/Objects/call.c:135
#47 0x65dc9c in _PyObject_Call_Prepend /usr/local/src/conda/python-3.13.0/Objects/call.c:504
#48 0x65dc9c in slot_tp_call /usr/local/src/conda/python-3.13.0/Objects/typeobject.c:9537
#49 0x5341f3 in _PyObject_MakeTpCall /usr/local/src/conda/python-3.13.0/Objects/call.c:242
#50 0x549ece in _PyEval_EvalFrameDefault /usr/local/src/conda/python-3.13.0/Python/generated_cases.c.h:813
#51 0x60902d in PyEval_EvalCode /usr/local/src/conda/python-3.13.0/Python/ceval.c:596
#52 0x62eedc in run_eval_code_obj /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1323
#53 0x629d9c in run_mod /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1408
#54 0x64888f in pyrun_file /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:1241
#55 0x6473fa in _PyRun_SimpleFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:490
#56 0x64711a in _PyRun_AnyFileObject /usr/local/src/conda/python-3.13.0/Python/pythonrun.c:77
#57 0x640b66 in pymain_run_file_obj /usr/local/src/conda/python-3.13.0/Modules/main.c:409
#58 0x640b66 in pymain_run_file /usr/local/src/conda/python-3.13.0/Modules/main.c:428
#59 0x640b66 in pymain_run_python /usr/local/src/conda/python-3.13.0/Modules/main.c:696
#60 0x640b66 in Py_RunMain /usr/local/src/conda/python-3.13.0/Modules/main.c:775
#61 0x5f9508 in Py_BytesMain /usr/local/src/conda/python-3.13.0/Modules/main.c:829
#62 0x7f24a936ad8f (/lib/x86_64-linux-gnu/libc.so.6+0x29d8f)
#63 0x7f24a936ae3f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x29e3f)
#64 0x5f885c (/mnt/anaconda3/envs/pytorch-2.3-asan/bin/python3.13+0x5f885c)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: FPE /mnt/pytorch-2.5.0/aten/src/ATen/native/Convolution.cpp:678 in check_shape_forward<long int>
==4097879==ABORTING
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi | module: crash,module: convolution,triaged,module: edge cases | low | Critical |
2,731,708,822 | pytorch | [MPS] Incorrect output from convolution ops with large dimensions | ### 🐛 Describe the bug
#140726 attempted to reintroduce support for convolutions with `out_channels` > 2**16. While this appeared to work for `Conv1d`, it introduced a regression in `Conv2d` (see https://github.com/pytorch/pytorch/issues/142515#issuecomment-2533495363). It remains unclear whether `Conv3d` is affected.
The issue results in a silent correctness bug. A guard was previously added in #129484 to prevent this (lifted in #140726 for macOS >= 15.1 which caused the regression), and we should reinstate the guard for the affected operations.
#### Minimal repro
```python
import torch
import torch.nn.functional as F
weight_cpu = torch.randn(10, 10, 1, 65536, device="cpu")
weight_mps = weight_cpu.detach().clone().to("mps")
x_cpu = torch.randn(1, 10, 1, 65536, device="cpu")
x_mps = x_cpu.detach().clone().to("mps")
y_cpu = F.conv2d(x_cpu, weight_cpu)
y_mps = F.conv2d(x_mps, weight_mps)
print(y_cpu)
print(y_mps)
```
outputs
```
tensor([[[[-1725.4941]],
[[ 497.0602]],
[[ -475.0262]],
[[-1376.2498]],
[[-1118.9232]],
[[ 566.6099]],
[[ -195.5112]],
[[-1461.4688]],
[[ -119.6234]],
[[ -679.2219]]]])
tensor([[[[0.]],
[[0.]],
[[0.]],
[[0.]],
[[0.]],
[[0.]],
[[0.]],
[[0.]],
[[0.]],
[[0.]]]], device='mps:0')
```
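A possible user-side workaround until the guard is back is to route oversized convolutions through the CPU. The `2**16` cutoff below is an assumption carried over from the original guard in #129484; per the TODO list, the exact broken thresholds per op are still being investigated:

```python
import torch
import torch.nn.functional as F

def conv2d_mps_safe(x, weight, bias=None, **kwargs):
    # Assumed threshold from the lifted #129484 guard; adjust once the
    # investigation pins down which dimensions actually break.
    too_big = weight.shape[0] >= 2**16 or max(x.shape[2:]) >= 2**16
    if x.device.type == "mps" and too_big:
        out = F.conv2d(x.cpu(), weight.cpu(),
                       bias.cpu() if bias is not None else None, **kwargs)
        return out.to(x.device)
    return F.conv2d(x, weight, bias, **kwargs)
```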
### TODO
- [ ] Investigate which ops are currently broken.
- [ ] Add additional tests with large dimensions for all operations on the affected code path.
- [ ] Reinstate the guard for broken operations on all macOS versions.
- [ ] Ensure the fix is included in the v2.6.0 release.
Sorry for the breakage!
xref #129207 #134416 #140722 huggingface/parler-tts#148 haoheliu/versatile_audio_super_resolution#70
### Versions
PyTorch version: 2.6.0a0+gite1196df
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.31.1
Libc version: N/A
Python version: 3.12.8 | packaged by conda-forge | (main, Dec 5 2024, 14:19:53) [Clang 18.1.8 ] (64-bit runtime)
Python platform: macOS-15.1.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.2
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gite1196df
[pip3] torchaudio==2.5.0a0+ba696ea
[pip3] torchvision==0.20.0a0+e9a3213
[conda] numpy 1.26.2 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+gite1196df dev_0 <develop>
[conda] torchaudio 2.5.0a0+ba696ea dev_0 <develop>
[conda] torchvision 0.20.0a0+e9a3213 dev_0 <develop>
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | high priority,triaged,module: regression,module: correctness (silent),module: mps | low | Critical |
2,731,720,304 | pytorch | Segmentation fault (core dumped) in `embedding_dense_backward` | ### 🐛 Describe the bug
Under specific inputs, `embedding_dense_backward` triggered a crash.
```python
import torch
grad_output = torch.full((0, 0, 11, 0, 0,), -1.5e+300, dtype=torch.double)
indices = torch.full((1, 1, 1, 1, 1, 1, 1, 1, 1, 1,), -5.36871e+08, dtype=torch.long)
num_weights = 0
padding_idx = 0
scale_grad_by_freq = True
torch.ops.aten.embedding_dense_backward(grad_output, indices, num_weights, padding_idx, scale_grad_by_freq)
```
Output
```
Segmentation fault (core dumped)
```
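The ASAN trace points at `Embedding.cpp:137` inside `embedding_dense_backward_cpu`, and it reports an out-of-bounds WRITE: the kernel addresses `grad_weight` rows directly by the values in `indices`, and here the indices are large negative numbers while `num_weights` is 0. A hedged bounds check one can apply before the raw op call (a sketch; the wrapper name is made up):

```python
import torch

def checked_embedding_dense_backward(grad_output, indices, num_weights,
                                     padding_idx, scale_grad_by_freq):
    # Hypothetical guard: every index must address a valid grad_weight row,
    # i.e. lie in [0, num_weights).
    if indices.numel() > 0:
        lo, hi = indices.min().item(), indices.max().item()
        if lo < 0 or hi >= num_weights:
            raise IndexError(
                f"indices must lie in [0, {num_weights}), got [{lo}, {hi}]")
    return torch.ops.aten.embedding_dense_backward(
        grad_output, indices, num_weights, padding_idx, scale_grad_by_freq)
```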
ASAN Output
```
=================================================================
==21192==ERROR: AddressSanitizer: SEGV on unknown address 0x601f00c86fd0 (pc 0x7fef19b609dc bp 0x7ffc04597580 sp 0x7ffc04597370 T0)
==21192==The signal is caused by a WRITE memory access.
#0 0x7fef19b609dc in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/Embedding.cpp:137
#1 0x7fef19b61c2b in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/Embedding.cpp:137
#2 0x7fef19b6307a in at::native::embedding_dense_backward_cpu(at::Tensor const&, at::Tensor const&, long, long, bool) /mnt/pytorch-2.5.0/aten/src/ATen/native/Embedding.cpp:137
#3 0x7fef1dec679a in wrapper_CPU__embedding_dense_backward /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCPU.cpp:5211
#4 0x7fef1e313f59 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#5 0x7fef1e313f59 in call /mnt/pytorch-2.5.0/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:468
#6 0x7fef1c025c9c in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&, at::Tensor const&, c10::SymInt, c10::SymInt, bool>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::SymInt&&, c10::SymInt&&, bool&&) (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x142e4c9c)
#7 0x7fef1bcbe0ed in at::Tensor c10::Dispatcher::redispatch<at::Tensor, at::Tensor const&, at::Tensor const&, c10::SymInt, c10::SymInt, bool>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, at::Tensor const&, c10::SymInt, c10::SymInt, bool)> const&, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::SymInt, c10::SymInt, bool) const (/mnt/pytorch-2.5.0/torch/lib/libtorch_cpu.so+0x13f7d0ed)
#8 0x7fef1b9585c7 in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&, at::Tensor const&, c10::SymInt, c10::SymInt, bool)>::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::SymInt, c10::SymInt, bool) const /mnt/pytorch-2.5.0/aten/src/ATen/core/dispatch/Dispatcher.h:536
#9 0x7fef1b9585c7 in at::_ops::embedding_dense_backward::redispatch(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::SymInt, c10::SymInt, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/Operators_0.cpp:2866
#10 0x7fef255fdb3f in at::redispatch::embedding_dense_backward_symint(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, c10::SymInt, c10::SymInt, bool) /mnt/pytorch-2.5.0/build/aten/src/ATen/RedispatchFunctions.h:2767
#11 0x7fef25387a1a in operator() /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_0.cpp:9035
#12 0x7fef25388537 in embedding_dense_backward /mnt/pytorch-2.5.0/torch/csrc/autograd/generated/VariableType_0.cpp:9036
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi | module: crash,triaged,module: embedding,module: edge cases | low | Critical |
2,731,726,167 | rust | E0599: confusing error and unhelpful suggestion on unsatisfied associated type bound in trait impl | ### Code
```Rust
use std::iter::Sum;
trait SumIter<T> {
fn sum_iter(self) -> T;
}
impl<A> SumIter<A::Item> for A
where
A: IntoIterator<Item: Sum>,
{
fn sum_iter(self) -> A::Item {
self.into_iter().sum::<A::Item>()
}
}
fn sum_vec<T>(a: Vec<T>) -> T {
a.sum_iter()
}
```
### Current output
```Shell
error[E0599]: no method named `sum_iter` found for struct `Vec<T>` in the current scope
--> src/lib.rs:16:7
|
16 | a.sum_iter()
| ^^^^^^^^ `Vec<T>` is not an iterator
|
help: call `.into_iter()` first
|
16 | a.into_iter().sum_iter()
| ++++++++++++
```
### Desired output
```Shell
error[E0599]: the method `sum_iter` exists for struct `Vec<T>`, but its trait bounds were not satisfied
--> src/lib.rs:18:7
|
18 | a.sum_iter()
| ^^^^^^^^
[...]
| doesn't satisfy `Vec<T>: SumIter<Vec<T>>` or `Vec<T>: IntoIterator<Item: Sum>` because `T: Sum` is not satisfied
|
7 | impl<A> SumIter<A> for A
| ^ ---------- -
| |
| unsatisfied trait bound introduced here
8 | where
9 | A: IntoIterator<Item: Sum>,
| ^^^ required by this bound
```
### Rationale and extra context
The actual issue is that `T` doesn't implement `Sum`, but the `Sum` trait is not mentioned anywhere in the error output. The claim that `Vec<T>` is not iterable is misleading and confusing, and the suggestion to add `.into_iter()` is unhelpful.
Actually following the suggestion just results in the same error again:
```
error[E0599]: no method named `sum_iter` found for struct `std::vec::IntoIter<T>` in the current scope
--> src/lib.rs:17:19
|
17 | a.into_iter().sum_iter()
| ^^^^^^^^ `std::vec::IntoIter<T>` is not an iterator
|
help: call `.into_iter()` first
|
17 | a.into_iter().into_iter().sum_iter()
| ++++++++++++
```
### Other cases
```Rust
// introducing an additional type parameter T
// for the item type and specifying the bound on T
// instead of IntoIterator::Item results in a different
// error, but the error still doesn't mention Sum
use std::iter::Sum;
trait SumIter<T> {
fn sum_iter(self) -> T;
}
impl<A, T> SumIter<A::Item> for A
where
A: IntoIterator<Item = T>,
T: Sum,
{
fn sum_iter(self) -> A::Item {
self.into_iter().sum::<A::Item>()
}
}
fn sum_vec<T>(a: Vec<T>) -> T {
a.sum_iter()
}
// ============================
error[E0599]: the method `sum_iter` exists for struct `Vec<T>`, but its trait bounds were not satisfied
--> src/lib.rs:18:7
|
18 | a.sum_iter()
| ^^^^^^^^
|
note: the following trait bounds were not satisfied:
`<[T] as IntoIterator>::Item = _`
`[T]: IntoIterator`
`[T]: Sized`
--> src/lib.rs:7:6
|
7 | impl<A, T> SumIter<A::Item> for A
| ^ ---------------- -
| |
| unsatisfied trait bound introduced here
8 | where
9 | A: IntoIterator<Item = T>,
| ^^^^^^^^^^^^^^^^^^^^^^
| | |
| | unsatisfied trait bound introduced here
| unsatisfied trait bound introduced here
= note: the following trait bounds were not satisfied:
`<[T] as IntoIterator>::Item = _`
which is required by `[T]: SumIter<_>`
= help: items from traits can only be used if the trait is implemented and in scope
note: `SumIter` defines an item `sum_iter`, perhaps you need to implement it
--> src/lib.rs:3:1
|
3 | trait SumIter<T> {
| ^^^^^^^^^^^^^^^^
help: consider relaxing the type parameter's implicit `Sized` bound
|
7 | impl<A: ?Sized, T> SumIter<A::Item> for A
| ++++++++
```
### Rust Version
```Shell
> rustc --version --verbose
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
### Anything else?
The same scenario but using bounds on a function instead of a trait impl results in the correct error message and a helpful suggestion:
```rs
fn sum_iter<I>(i: I) -> I::Item
where
I: IntoIterator<Item: Sum>,
{
i.into_iter().sum::<I::Item>()
}
fn sum_vec<T>(a: Vec<T>) -> T {
sum_iter(a)
}
```
```
error[E0277]: a value of type `T` cannot be made by summing an iterator over elements of type `T`
--> src/lib.rs:18:14
|
18 | sum_iter(a)
| -------- ^ value of type `T` cannot be made by summing a `std::iter::Iterator<Item=T>`
| |
| required by a bound introduced by this call
|
note: required by a bound in `sum_iter`
--> src/lib.rs:23:27
|
21 | fn sum_iter<I>(i: I) -> I::Item
| -------- required by a bound in this function
22 | where
23 | I: IntoIterator<Item: Sum>,
| ^^^ required by this bound in `sum_iter`
help: consider restricting type parameter `T`
|
17 | fn sum_vec<T: std::iter::Sum>(a: Vec<T>) -> T {
| ++++++++++++++++
``` | A-diagnostics,T-compiler | low | Critical |
2,731,726,354 | pytorch | Segmentation fault (core dumped) in `fractional_max_pool3d_backward` | ### 🐛 Describe the bug
Under specific inputs, `fractional_max_pool3d_backward` triggered a crash.
```python
import torch
grad_output = torch.full((2, 3, 2, 2, 2,), 0.5, dtype=torch.double)
self = torch.full((2, 3, 2, 2, 2,), 0.5, dtype=torch.double)
kernel_size = [2, 2, 2]
output_size = [2, 2, 2]
indices = torch.full((0, 0, 11, 0, 0, 7, 0, 13, 11, 1, 13, 2, 10, 0, 1, 0, 12, 0, 0, 0,), 1.251e+12, dtype=torch.long)
torch.ops.aten.fractional_max_pool3d_backward(grad_output, self, kernel_size, output_size, indices)
```
Output
```
Segmentation fault (core dumped)
```
ASAN Output
```
==60872==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x7ff2885abb80 bp 0x7ffe8857d920 sp 0x7ffe8857d7f0 T0)
==60872==The signal is caused by a READ memory access.
==60872==Hint: address points to the zero page.
#0 0x7ff2885abb80 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/FractionalMaxPool3d.cpp:283
#1 0x7ff2885b2087 in parallel_for<at::native::(anonymous namespace)::fractional_max_pool3d_backward_out_single_batch_frame<double>(double*, double const*, const int64_t*, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t)::<lambda(int64_t, int64_t)> > /mnt/pytorch-2.5.0/aten/src/ATen/Parallel-inl.h:29
#2 0x7ff2885ac327 in fractional_max_pool3d_backward_out_single_batch_frame<double> /mnt/pytorch-2.5.0/aten/src/ATen/native/FractionalMaxPool3d.cpp:270
#3 0x7ff28859e70c in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/FractionalMaxPool3d.cpp:313
#4 0x7ff2885ac4f2 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/Parallel-inl.h:36
#5 0x7ff2885b789f in __invoke_impl<void, at::parallel_for<at::native::(anonymous namespace)::fractional_max_pool3d_backward_out_frame<double>(double*, double const*, const int64_t*, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t)::<lambda(int64_t, int64_t)> >(int64_t, int64_t, int64_t, const at::native::(anonymous namespace)::fractional_max_pool3d_backward_out_frame<double>(double*, double const*, const int64_t*, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t)::<lambda(int64_t, int64_t)>&)::<lambda(int64_t, int64_t)>&, long int, long int> /usr/include/c++/11/bits/invoke.h:61
#6 0x7ff2885b5d1b in __invoke_r<void, at::parallel_for<at::native::(anonymous namespace)::fractional_max_pool3d_backward_out_frame<double>(double*, double const*, const int64_t*, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t)::<lambda(int64_t, int64_t)> >(int64_t, int64_t, int64_t, const at::native::(anonymous namespace)::fractional_max_pool3d_backward_out_frame<double>(double*, double const*, const int64_t*, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t)::<lambda(int64_t, int64_t)>&)::<lambda(int64_t, int64_t)>&, long int, long int> /usr/include/c++/11/bits/invoke.h:111
#7 0x7ff2885b4355 in _M_invoke /usr/include/c++/11/bits/std_function.h:290
#8 0x7ff287532b04 in std::function<void (long, long)>::operator()(long, long) const /usr/include/c++/11/bits/std_function.h:590
#9 0x7ff28751d2f0 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/ParallelNative.cpp:166
#10 0x7ff2875215d3 in __invoke_impl<void, at::internal::invoke_parallel(int64_t, int64_t, int64_t, const std::function<void(long int, long int)>&)::<lambda(int, size_t)>&, int, long unsigned int> /usr/include/c++/11/bits/invoke.h:61
#11 0x7ff2875210e1 in __invoke_r<void, at::internal::invoke_parallel(int64_t, int64_t, int64_t, const std::function<void(long int, long int)>&)::<lambda(int, size_t)>&, int, long unsigned int> /usr/include/c++/11/bits/invoke.h:111
#12 0x7ff2875209a5 in _M_invoke /usr/include/c++/11/bits/std_function.h:290
#13 0x7ff2875325f8 in std::function<void (int, unsigned long)>::operator()(int, unsigned long) const /usr/include/c++/11/bits/std_function.h:590
#14 0x7ff28751ce94 in _run_with_pool /mnt/pytorch-2.5.0/aten/src/ATen/ParallelNative.cpp:96
#15 0x7ff28751da60 in at::internal::invoke_parallel(long, long, long, std::function<void (long, long)> const&) /mnt/pytorch-2.5.0/aten/src/ATen/ParallelNative.cpp:181
#16 0x7ff2885ac7c6 in parallel_for<at::native::(anonymous namespace)::fractional_max_pool3d_backward_out_frame<double>(double*, double const*, const int64_t*, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t, int64_t)::<lambda(int64_t, int64_t)> > /mnt/pytorch-2.5.0/aten/src/ATen/Parallel-inl.h:33
#17 0x7ff28859ebe5 in fractional_max_pool3d_backward_out_frame<double> /mnt/pytorch-2.5.0/aten/src/ATen/native/FractionalMaxPool3d.cpp:311
#18 0x7ff28859626e in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/FractionalMaxPool3d.cpp:375
#19 0x7ff2885973b7 in operator() /mnt/pytorch-2.5.0/aten/src/ATen/native/FractionalMaxPool3d.cpp:375
#20 0x7ff288598e04 in fractional_max_pool3d_backward_out_cpu_template /mnt/pytorch-2.5.0/aten/src/ATen/native/FractionalMaxPool3d.cpp:375
#21 0x7ff2885994b6 in at::native::fractional_max_pool3d_backward_cpu(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, at::Tensor const&) /mnt/pytorch-2.5.0/aten/src/ATen/native/FractionalMaxPool3d.cpp:418
#22 0x7ff28c90133b in wrapper_CPU__fractional_max_pool3d_backward /mnt/pytorch-2.5.0/build/aten/src/ATen/RegisterCPU.cpp:23111
```
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @mikaylagawarecki | module: crash,triaged,module: pooling,module: edge cases | low | Critical |
2,731,751,836 | pytorch | [Inductor] [CPU] `InstanceNorm2d` outputs different value compared with eager on CPU | ### 🐛 Describe the bug
It's a `CPU Inductor` issue. When **H** and **W** are big enough (in my situation, they are both set to **1024**), the error is very obvious.
`CUDA Inductor` passes the check.
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super().__init__()
self.inn = nn.InstanceNorm2d(num_features=3)
def forward(self, x):
x = self.inn(x)
return x
model = Model()
x = torch.randn(1, 3, 1024, 1024) # As `H` and `W` increase, the error will be amplified
inputs = [x]
c_model = torch.compile(model)
output = model(*inputs)
c_output = c_model(*inputs)
print(torch.allclose(output, c_output, 1.3e-6, 1e-5)) # I set a less strict value
print(torch.max(torch.abs(output - c_output)))
```
### Error logs
on CPU
```
False
tensor(1.8597e-05)
```
on CUDA
```
True
tensor(4.7684e-07, device='cuda:0')
```
### Versions
torch version: 2.6.0.dev20241205+cu124
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241205+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2500.000
BogoMIPS: 5000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241205+cu124
[pip3] torchaudio==2.5.0.dev20241205+cu124
[pip3] torchvision==0.20.0.dev20241205+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241205+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241205+cu124 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241205+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | high priority,triaged,module: correctness (silent),oncall: pt2,module: inductor,oncall: cpu inductor | low | Critical |
2,731,758,418 | pytorch | Document spectral norm non-support on non-diagonalizable inputs. | ### 📚 The doc issue
Document spectral norm non-support on non-diagonalizable inputs.
The spectral norm implementation(s) rely upon a power iteration which doesn't guarantee a correct result for non-diagonalizable inputs:
- https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html
- https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.spectral_norm.html#torch.nn.utils.parametrizations.spectral_norm
Example:
```
import torch
m = torch.nn.Linear(2, 2)
# Example input A is non-diagonalizable, with repeated dominant eigenvalue 3.
# So expected spectral norm is sigma = 3, and returned result should be A / 3.
A = torch.tensor([[3., 3.], [0., 3.]])
with torch.no_grad():
m.weight = torch.nn.Parameter(A)
torch.nn.utils.parametrizations.spectral_norm(m)
expected = A / 3
actual = m.weight.data
if not torch.allclose(actual, expected):
# Spectral norm algorithm returning A / sigma computes sigma = 4.8541 instead of sigma = 3.
print(f"expected: {expected} \ngot: {actual}")
```
### Suggest a potential alternative/fix
Spectral norm non-support on non-diagonalizable inputs should be included in the docs (perhaps as a 'warning').
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: docs,module: nn,triaged,module: nn.utils.parametrize | low | Minor |
2,731,824,496 | godot | Random Editor Crash | ### Tested versions
- Reproducible in 4.3.stable, 4.4.dev1, 4.4.dev3 and 4.4.dev6
- Not Reproducible in 4.2.2.stable
### System information
Windows 10 - Godot 4.3.stable - Vulkan - NVIDIA RTX 3060 Laptop - 11th Gen Intel Core i7-11800H @ 2.3GHz (16 Threads)
### Issue description
So the editor would sometimes crash at random, and it's a little annoying. I tried opening the console version, but I do not see any errors pop out when the editor becomes unresponsive.
### Steps to reproduce
From my experience, these are the things that trigger this issue:
- Changing the value in the color picker using the color wheel
- Editing text by only highlighting certain parts of text using mouse
- Popup window when right-clicking in file manager, node properties, etc.
On very rare occasions, it can also crash when:
- The animation player in the editor finishes playing.
- Typing in any textbox (create nodes, create scripts, directory search, etc.)
If you want to try and reproduce the bug, try doing any of the actions mentioned above in rapid succession.
### Minimal reproduction project (MRP)
You can reproduce this in any blank project, to be honest.
[new-game-project-dev.zip](https://github.com/user-attachments/files/18089743/new-game-project-dev.zip)
| bug,needs testing,crash,regression,topic:gui | low | Critical |
2,731,834,793 | storybook | [Bug]: react-docgen-typescript not working after v8.4.6 | ### Describe the bug
After v8.4.6, using `react-docgen-typescript` as the argTypes inferrer fails to pick up arguments and JSDoc comments entirely.
### Reproduction link
https://github.com/fertolg/storybook-8_4_6-react-docgen-typescript-bug
### Reproduction steps
1. Clone the repo
2. Run `pnpm storybook`
3. Notice how the default example stories are missing type inference (the `size` property is missing, for example) and the JSDoc comments are not rendered in the documentation either.
### System
```bash
Storybook Environment Info:
System:
OS: macOS 14.7.1
CPU: (11) arm64 Apple M3 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.9.0 - ~/.nvm/versions/node/v20.9.0/bin/node
npm: 10.1.0 - ~/.nvm/versions/node/v20.9.0/bin/npm
pnpm: 9.10.0 - /opt/homebrew/bin/pnpm <----- active
Browsers:
Chrome: 131.0.6778.109
Safari: 18.1.1
npmPackages:
@storybook/addon-essentials: 8.4.6 => 8.4.6
@storybook/addon-interactions: 8.4.6 => 8.4.6
@storybook/addon-onboarding: 8.4.6 => 8.4.6
@storybook/blocks: 8.4.6 => 8.4.6
@storybook/react: 8.4.6 => 8.4.6
@storybook/react-vite: 8.4.6 => 8.4.6
@storybook/test: 8.4.6 => 8.4.6
eslint-plugin-storybook: ^0.11.1 => 0.11.1
storybook: 8.4.6 => 8.4.6
```
### Additional context
You don't have to clone the repo I provided; you can also:
1. Initialize a brand new project in an empty directory with `pnpm dlx storybook@latest init`
2. Choose React + Vite
3. After initialization, update `.storybook/main.ts` to use `react-docgen-typescript`.
4. Run storybook and see the result. | bug,react,has workaround,builder-vite,sev:S2,docgen | low | Critical |
2,731,854,685 | next.js | Problem with apollographql using turbopack - Unexpected error processing request: TypeError: module.require is not a function | ### Link to the code that reproduces this issue
https://github.com/dincerpece/nextjs15-1_turbo_apollographql_error
### To Reproduce
When I start it with turbopack and open the page, an error occurs.
next dev --port 65000 --turbo
### Current vs. Expected behavior
When I run "next dev --port 65000" without turbopack, the response comes as it should. However, when I add the --turbo flag and start it, an error occurs!
Error: Response not successful: Received status code 500
at resolveErrorDev (http://localhost:65000/_next/static/chunks/node_modules_next_dist_compiled_107ce8._.js:3662:65)
at processFullStringRow (http://localhost:65000/_next/static/chunks/node_modules_next_dist_compiled_107ce8._.js:3824:23)
at processFullBinaryRow (http://localhost:65000/_next/static/chunks/node_modules_next_dist_compiled_107ce8._.js:3812:9)
at progress (http://localhost:65000/_next/static/chunks/node_modules_next_dist_compiled_107ce8._.js:3932:102)


And the warning I mentioned earlier keeps popping up during build!
https://github.com/vercel/next.js/issues/72525

When I run next dev --port 65000 without turbopack, the response comes as it should and works without any problems.


### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 32607
Available CPU cores: 16
Binaries:
Node: 23.4.0
npm: 10.9.2
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.1.0 // Latest available version is detected (15.1.0).
eslint-config-next: 15.0.3
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
I am running it locally. I want to use my application with turbopack and get fast responses. A powerful framework like apollographql needs to be supported by turbopack. @trevor-scheer, @martinnabhan | bug,Turbopack | low | Critical |
2,731,959,693 | pytorch | Deprecate `torch.squeeze()` with unspecified `dim` | ### 🚀 The feature, motivation and pitch
Deprecate unspecified `dim` in calls to `torch.squeeze()`, and/or issue a single runtime warning.
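For illustration, a minimal user-side sketch of the requested behavior; the wrapper name and warning text are hypothetical, not an existing PyTorch API:
```python
import warnings

import torch

def squeeze_checked(t: torch.Tensor, dim=None) -> torch.Tensor:
    """Hypothetical shim: warn when `dim` is omitted, since the result's
    rank then depends on runtime sizes (e.g. on whether the batch size is 1)."""
    if dim is None:
        warnings.warn(
            "squeeze() without an explicit dim removes every size-1 dimension; "
            "pass dim= to avoid shape-dependent behavior.",
            UserWarning,
            stacklevel=2,
        )
        return t.squeeze()
    return t.squeeze(dim)
```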
#### Rationale
`squeeze()` without a `dim` argument is the cause of many subtle bugs. Example:
```python
x = images[:batch_size]
B, C, H, W = x.shape
# Incorrect attempt to remove "grayscale" channel dimension.
gray = x.squeeze()
```
This causes hard-to-debug issues if:
- `B == 1` (batch size is 1)
- `H == 1` (height is 1)
- `W == 1` (width is 1)
The above should have been written:
```python
gray = x.squeeze(1)
```
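A quick demonstration of the shape-dependent behavior (sizes chosen for the example):
```python
import torch

x = torch.randn(1, 1, 28, 28)  # B=1, C=1 grayscale batch
print(x.squeeze().shape)       # torch.Size([28, 28]) -- batch dim silently lost
print(x.squeeze(1).shape)      # torch.Size([1, 28, 28]) -- only the channel dim removed
```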
#### Examples in the wild
Many users of this library are not "experts" at code, and frequently produce and publish programs containing this bug.
Roughly 486000 bugs here:
https://github.com/search?q=language%3APython+squeeze%28%29&type=code
I challenge the reader to find an example which is *not* a (potential) bug!
[YOLOX (10k stars)](https://github.com/Megvii-BaseDetection/YOLOX/blob/d872c71bf63e1906ef7b7bb5a9d7a529c7a59e6a/yolox/utils/boxes.py#L49):
```python
conf_mask = (image_pred[:, 4] * class_conf.squeeze() >= conf_thre).squeeze()
```
The correct code would have been:
```python
conf_mask = image_pred[:, 4] * class_conf.squeeze(1) >= conf_thre
```
The above bug actually inspired the creation of this proposal. See also: https://github.com/numpy/numpy/issues/27968
---
P.S. On a similar note, `torch.eye(3).squeeze(0)` should also issue a runtime warning. NumPy correctly raises a full-fledged error in this case.
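For comparison, a minimal repro of the divergence between the two libraries:
```python
import numpy as np
import torch

print(torch.eye(3).squeeze(0).shape)  # torch.Size([3, 3]) -- silent no-op
np.eye(3).squeeze(0)  # ValueError: cannot select an axis to squeeze out
                      # which has size not equal to one
```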
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD | triaged,module: viewing and reshaping,module: python frontend | low | Critical |
2,732,012,171 | transformers | Improve tensor parallel memory usage | ### Feature request
Thanks to #34184, we can use TP for Llama with only a one-line change. However, the current implementation loads the whole model onto each rank's GPU before applying TP, significantly increasing the memory footprint.
### Motivation
We can load the model on CPU first and apply TP afterwards. I tested this with Llama 3.1 8B on 2 GPUs: memory usage is reduced from 60 GB to less than 20 GB. Below is my test script:
```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.distributed import device_mesh
model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
rank = int(os.environ["RANK"])
device = torch.device(f"cuda:{rank}")
torch.cuda.set_device(device)
torch.distributed.init_process_group("nccl")
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map='cpu',
)
num_gpus = torch.cuda.device_count()
tp_mesh = device_mesh.init_device_mesh("cuda", (num_gpus,), mesh_dim_names=("tp",))
model.tensor_parallel(tp_mesh)
model.to(device) # needed for weights and buffers that are not included by the TP plan
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Can I help"
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
outputs = model(inputs)
print(tokenizer.decode(outputs.logits.squeeze()[-1].argmax()))
```
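For reference, the script expects a distributed launcher to set `RANK`, e.g. `torchrun --nproc_per_node=2 tp_repro.py` (the file name is just a placeholder; any launcher that sets up the environment for `init_process_group` should work).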
### Your contribution
We can set `device_map` to `cpu` in `PreTrainedModel.from_pretrained` if `tp_plan` is not `None`, and apply TP at the end.
Happy to discuss and work on a PR for this.
CC @kwen2501 @ArthurZucker | Feature request | low | Minor |
2,732,042,985 | vscode | Add Cell toolbar remains sticky in blank notebook | ### Applies To
- [x] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
1. Create a new blank notebook (with no cells at all)
2. Open that blank notebook
3. Notice that the Add Cell toolbar is available without hovering (expected would be that this is only available on hover)
4. Add a few cells
5. Notice that the Add Cell toolbar at the top of the notebook remains sticky and is still available without hovering
Video recording: https://github.com/user-attachments/assets/64fb76f6-f3c6-4aae-8b61-7b5fcce23e89
### VS Code Version
Version: 1.96.0-insider (user setup) Commit: 54d1a4d6f395e73204ce0b5999439b267aec3fef Date: 2024-10-31T10:29:28.944Z Electron: 32.2.1 ElectronBuildId: 10427718 Chromium: 128.0.6613.186 Node.js: 20.18.0 V8: 12.8.374.38-electron.0 OS: Windows_NT x64 10.0.22621
### Jupyter Extension Version
Name: Jupyter Id: ms-toolsai.jupyter Description: Jupyter notebook support, interactive programming and computing that supports Intellisense, debugging and more. Version: 2024.11.2024103101 Publisher: Microsoft VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter
### Jupyter logs
11:31:49.448 [info] Starting Kernel (Python Path: ~\projects\housing\Scripts\python.exe, Venv, 3.10.11) for '~\projects\housing\test.ipynb' (disableUI=true)
11:31:51.133 [info] Process Execution: ~\projects\housing\Scripts\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
11:31:51.152 [info] Process Execution: ~\projects\housing\Scripts\python.exe -m ipykernel_launcher --f=c:\Users\~\AppData\Roaming\jupyter\runtime\kernel-v3d91a32716d7e0e029c4de6545e6dd8e72d90bf73.json
> cwd: ~\projects\housing
11:31:51.204 [info] Process Execution: ~\projects\housing\Scripts\python.exe -m pip list
11:31:56.103 [info] Kernel successfully started
### Coding Language and Runtime Version
_No response_
### Language Extension Version (if applicable)
_No response_
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
None | bug,notebook-celltoolbar | low | Critical |
2,732,047,318 | vscode | Re-opening a notebook results in failure to render output with the right renderer | ### Applies To
- [x] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
[tqdm.zip](https://github.com/user-attachments/files/17686619/tqdm.zip)
Issue #16188 can be reproduced with the file attached above:

### VS Code Version
Version: 1.95.2 (system setup) Commit: e8653663e8840adaf45af01eab5c627a5af81807 Date: 2024-11-07T11:07:22.054Z Electron: 32.2.1 ElectronBuildId: 10427718 Chromium: 128.0.6613.186 Node.js: 20.18.0 V8: 12.8.374.38-electron.0 OS: Windows_NT x64 10.0.22000
### Jupyter Extension Version
v2024.10.0
### Jupyter logs
```shell
```
### Coding Language and Runtime Version
_No response_
### Language Extension Version (if applicable)
_No response_
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
None | bug,notebook-output | low | Critical |
2,732,075,450 | terminal | "unauthorized: unauthorized: AuthenticateToken authentication failed" error with GitHub Copilot | ### Windows Terminal version
1.23.3401.0
### Windows build number
10.0.26338.5000
### Other Software
_No response_
### Steps to reproduce
On my build:
1. Launched Terminal Canary
2. Opened the chat pane, which had GitHub Copilot as the active provider, and typed a query to GitHub (I was using Ubuntu)
### Expected Behavior
I was expecting GitHub Copilot to give me a response
### Actual Behavior
The chat bubble with the model's response gave me back an error reading, "unauthorized: unauthorized: AuthenticateToken authentication failed".
I haven't tried it yet, but I'm assuming I can work around this by clearing the token and trying again.
One thing to note is that around the same time I was messing with clearing out the token on a Win10 device, so I'm not sure if this is potentially caused by roaming + merge weirdness w/ the data in Credential Manager. That could be a total red herring, but mentioning here for completeness. | Issue-Bug,Product-Terminal,Needs-Tag-Fix,Area-Chat | low | Critical |
2,732,078,435 | react-native | Unable to build iOS and Android with RN version 0.76.5 | ### Description
While upgrading the RN version from 0.73.6 to 0.76.5, we are facing a few issues; screenshots are attached.
On iOS, we cannot install pods; CocoaPods fails with "Unable to find a specification for RCT-Folly".
1. iOS
<img width="1456" alt="Screenshot 2024-12-11 at 12 38 24 PM" src="https://github.com/user-attachments/assets/18c878ff-b476-4e94-a5f5-92544f382241">
On Android, the Gradle build fails.
2. Android
<img width="1156" alt="Screenshot 2024-12-11 at 12 40 41 PM" src="https://github.com/user-attachments/assets/715eb76a-4f24-4d50-b6a1-3ddb15b65940">
<img width="1007" alt="Screenshot 2024-12-11 at 12 43 02 PM" src="https://github.com/user-attachments/assets/5b83036f-02d6-4e8f-bd43-fcb538565047">
### Steps to reproduce
For Android
yarn android
-------------------
For iOS
yarn ios
### React Native Version
0.76.5
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
CPU: (10) arm64 Apple M2 Pro
Memory: 363.48 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.16.0
path: ~/.nvm/versions/node/v18.16.0/bin/node
Yarn:
version: 1.22.22
path: ~/.nvm/versions/node/v18.16.0/bin/yarn
npm:
version: 9.5.1
path: ~/.nvm/versions/node/v18.16.0/bin/npm
Watchman:
version: 2024.09.09.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.14.3
path: /Users/I557816/.gem/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2022.2 AI-222.4459.24.2221.10121639
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.7
path: /usr/bin/javac
Ruby:
version: 3.1.1
path: /Users/I557816/.asdf/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.73.4
wanted: 0.73.4
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
> Task :sap_react-native-ausweisapp2-wrapper:copyDebugJniLibsProjectOnly UP-TO-DATE
> Task :app:validateSigningDebug UP-TO-DATE
> Task :app:writeDebugAppMetadata UP-TO-DATE
> Task :app:writeDebugSigningConfigVersions UP-TO-DATE
FAILURE: Build completed with 5 failures.
1: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':react-native-svg:compileDebugJavaWithJavac'.
> Compilation failed; see the compiler error output for details.
* Try:
> Run with --info option to get more log output.
> Run with --scan to get full insights.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':react-native-svg:compileDebugJavaWithJavac'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:130)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:293)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:128)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:116)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:42)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:331)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:318)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:314)
at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:85)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:314)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:303)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:459)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:376)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:48)
Caused by: org.gradle.api.internal.tasks.compile.CompilationFailedException: Compilation failed; see the compiler error output for details.
at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:79)
at org.gradle.api.internal.tasks.compile.JdkJavaCompiler.execute(JdkJavaCompiler.java:46)
at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.delegateAndHandleErrors(NormalizingJavaCompiler.java:98)
at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:52)
at org.gradle.api.internal.tasks.compile.NormalizingJavaCompiler.execute(NormalizingJavaCompiler.java:38)
at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:52)
at org.gradle.api.internal.tasks.compile.AnnotationProcessorDiscoveringCompiler.execute(AnnotationProcessorDiscoveringCompiler.java:38)
at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:46)
at org.gradle.api.internal.tasks.compile.ModuleApplicationNameWritingCompiler.execute(ModuleApplicationNameWritingCompiler.java:36)
at org.gradle.jvm.toolchain.internal.DefaultToolchainJavaCompiler.execute(DefaultToolchainJavaCompiler.java:57)
at org.gradle.api.tasks.compile.JavaCompile.lambda$createToolchainCompiler$3(JavaCompile.java:204)
at org.gradle.api.internal.tasks.compile.CleaningJavaCompiler.execute(CleaningJavaCompiler.java:53)
at org.gradle.api.internal.tasks.compile.incremental.IncrementalCompilerFactory.lambda$createRebuildAllCompiler$0(IncrementalCompilerFactory.java:52)
at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:70)
at org.gradle.api.internal.tasks.compile.incremental.SelectiveCompiler.execute(SelectiveCompiler.java:44)
at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:66)
at org.gradle.api.internal.tasks.compile.incremental.IncrementalResultStoringCompiler.execute(IncrementalResultStoringCompiler.java:52)
at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$1.call(CompileJavaBuildOperationReportingCompiler.java:64)
at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler$1.call(CompileJavaBuildOperationReportingCompiler.java:48)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.api.internal.tasks.compile.CompileJavaBuildOperationReportingCompiler.execute(CompileJavaBuildOperationReportingCompiler.java:48)
at org.gradle.api.tasks.compile.JavaCompile.performCompilation(JavaCompile.java:222)
at org.gradle.api.tasks.compile.JavaCompile.performIncrementalCompilation(JavaCompile.java:163)
at org.gradle.api.tasks.compile.JavaCompile.compile(JavaCompile.java:148)
at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(Unknown Source)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:125)
at org.gradle.api.internal.project.taskfactory.IncrementalTaskAction.doExecute(IncrementalTaskAction.java:45)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:51)
at org.gradle.api.internal.project.taskfactory.IncrementalTaskAction.execute(IncrementalTaskAction.java:26)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:29)
at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:244)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:229)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:212)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:195)
at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:162)
at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:105)
at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:59)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:56)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:56)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:42)
at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:75)
at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:50)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:28)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:67)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:37)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:61)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:26)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:69)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:46)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:40)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:29)
at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:189)
at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:75)
at org.gradle.internal.Either$Right.fold(Either.java:175)
at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:62)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:73)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:48)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:46)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:35)
at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:75)
at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:53)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:53)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:35)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:49)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:27)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:71)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:39)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:65)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:36)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:107)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:56)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:64)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:43)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.executeWithNonEmptySources(AbstractSkipEmptyWorkStep.java:125)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:61)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:36)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:36)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:23)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:75)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:41)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.lambda$execute$0(AssignMutableWorkspaceStep.java:35)
at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:289)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:31)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:22)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:40)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:23)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.lambda$execute$2(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:39)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:46)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:34)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:48)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:35)
at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:61)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:127)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:116)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:42)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:331)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:318)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:314)
at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:85)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:314)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:303)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:459)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:376)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:48)
```
### Reproducer
https://github.com/IAMDeveloperChetanSharma/KulturpassAppIssueReproducer
### Screenshots and Videos
<img width="1456" alt="Screenshot 2024-12-11 at 12 38 24 PM" src="https://github.com/user-attachments/assets/7b7842aa-e146-47ed-85a2-15368746e334">
<img width="1156" alt="Screenshot 2024-12-11 at 12 40 41 PM" src="https://github.com/user-attachments/assets/c9a5686d-4b1e-44c8-9e2b-6b49e0de13af">
<img width="1007" alt="Screenshot 2024-12-11 at 12 43 02 PM" src="https://github.com/user-attachments/assets/c40a475a-743f-4fc7-b994-e92de7eba282">
| Platform: iOS,Platform: Android,Needs: Author Feedback | low | Critical |
2,732,174,796 | stable-diffusion-webui | [Security Issue]: Open Redirect Vulnerability in Stable Diffusion WebUI via Gradio (CVE-2024-4940) | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [x] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
An Open Redirect vulnerability was discovered in Stable Diffusion WebUI due to improper validation of the **`file` parameter in `Gradio.`** This vulnerability, tracked as _CVE-2024-4940_, affects `Gradio versions 4.36.1 and below`. It allows attackers to redirect users to attacker-controlled websites by crafting malicious URLs.
CVE-2024-4940 details: https://nvd.nist.gov/vuln/detail/CVE-2024-4940
The issue arises due to improper handling of user-supplied input in URL processing. When a malicious URL is supplied, the application redirects users to an unintended external location without validation.
### Steps to reproduce the problem
1. Launch Stable Diffusion WebUI in a local environment (e.g., `http://127.0.0.1:7860`).

2. Use a crafted URL to supply an external URL to the `file` parameter:
```
http://127.0.0.1:7860/file=https://google.com
```

3. Observe that the browser redirects the user to `https://google.com` without proper validation of the input.

### What should have happened?
Stable-Diffusion WebUI currently utilizes `Gradio version 3.41.2`, which is outdated and vulnerable to known security issues, including CVE-2024-4940. To address these vulnerabilities, it is recommended to update Gradio to version **4.37.1 or later**.
1. **Upgrade Gradio**
- Update Gradio to version 4.37.1 or later, where this vulnerability has been addressed.
```bash
pip install "gradio>=4.37.1"
```
2. **Add Input Validation**
- Enhance input validation in the `create_prompts` function to check if the `file` parameter contains a valid file path. Reject or sanitize external URLs.
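As an illustration of item 2, here is a minimal sketch of such a check. `is_safe_file_param` is a hypothetical helper, not existing WebUI code:
```python
from urllib.parse import urlparse

def is_safe_file_param(value: str) -> bool:
    """Reject anything that parses as an absolute or protocol-relative URL."""
    parsed = urlparse(value)
    return not (parsed.scheme or parsed.netloc or value.startswith("//"))

assert is_safe_file_param("outputs/txt2img/grid-0001.png")
assert not is_safe_file_param("https://google.com")
assert not is_safe_file_param("//google.com")
```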
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
### Vulnerable Code Location
- **File:** `/modules/ui_toprow.py`
- **Function:** `create_prompts`
```python
def create_prompts(self):
with gr.Column(elem_id=f"{self.id_part}_prompt_container", elem_classes=["prompt-container-compact"] if self.is_compact else [], scale=6):
with gr.Row(elem_id=f"{self.id_part}_prompt_row", elem_classes=["prompt-row"]):
self.prompt = gr.Textbox(label="Prompt", elem_id=f"{self.id_part}_prompt", show_label=False, lines=3, placeholder="Prompt\n(Press Ctrl+Enter to generate, Alt+Enter to skip, Esc to interrupt)", elem_classes=["prompt"])
self.prompt_img = gr.File(label="", elem_id=f"{self.id_part}_prompt_image", file_count="single", type="binary", visible=False)
```
The `create_prompts` function processes user input for text or file-based prompts. However, the `gr.File()` component does not properly validate inputs, allowing URLs to be treated as file paths. This leads to unintended redirections.
### Console logs
```Shell
- Application launched at: http://127.0.0.1:7860
- Received crafted request: `/file=https://google.com`
- Redirecting user to: `https://google.com`
```
### Additional information
_No response_ | bug-report | low | Critical |
2,732,215,607 | ui | [feat]: Simple Rich Text Editor | ### Feature description
Wouldn't it be great if shadcn had its own implementation of a rich text editor component, like Tiptap or Editor.js,
that works easily with Next.js and Tailwind?
### Affected component/components
Textarea
### Additional Context
With image support too
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,732,255,030 | next.js | Middleware matcher does not catch root path if base path is set | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/boring-cookies-go8s7s
### To Reproduce
1. Start application
2. Open root url (`/test`) - nothing will happen (but it should redirect)
### Current vs. Expected behavior
Commonly used matcher:
```
export const config = {
matcher: [
'/((?!api|_next/static|_next/image|favicon.ico|sitemap.xml|robots.txt).*)'
]
}
```
does not match the root path if a base path is set in the Next.js config.
If I add a standalone `/`, it matches:
```
export const config = {
matcher: [
'/',
'/((?!api|_next/static|_next/image|favicon.ico|sitemap.xml|robots.txt).*)'
]
}
```
If I remove the base path from the config, the first example matches the root path.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Debian 6.1.115-1 (2024-11-01)
Available memory (MB): 31945
Available CPU cores: 8
Binaries:
Node: 21.7.1
npm: 10.8.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.4 // There is a newer version (15.1.0) available, upgrade recommended!
eslint-config-next: 15.0.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.6.3
Next.js Config:
output: standalone
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Other (Deployed)
### Additional context
_No response_ | bug,Middleware | low | Minor |
2,732,274,547 | tensorflow | Division by zero error at random places if GPU is used | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.18
### Custom code
Yes
### OS platform and distribution
Linux, RedHatEnterprise 8.6
### Mobile device
_No response_
### Python version
N/A
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
12.3
### GPU model and memory
Quadro RTX 6000
### Current behavior?
(I do not use Python and found no nightly build for C API)
I have a simple program that builds graphs through the c_api and executes sessions. It works perfectly as long as it is run only on the CPU. If the GPU is involved, the program generates a Div0 error at random places. The very same program, run twice within one minute on the same hardware, etc., behaves differently.
The error is somewhere deep in TF/CUDA, because in the error dump I see 15 stack frames outside my code (which is what I can still debug).
I tried it on a different partition of the HPC, but both behave the same way. The card is a Quadro RTX 6000, driver 565.57.01, CUDA 12.7 as seen in nvidia-smi, but 12.3 is available as libraries.
I tried many different settings, etc., but cannot identify any root cause. I am not even sure if the bug is in the TF binary or in one of the CUDA libraries (or elsewhere).
### Standalone code to reproduce the issue
```shell
I use a Pascal program, called examples, available here: https://github.com/zsoltszakaly/tensorflowforpascal.
It is compiled on the HPC with fpc -MObjFPC -Sh -Fl../tensorflow/lib examples.pas.
The tensorflow/lib directory has
Jan 1 2000 libtensorflow_framework.so -> libtensorflow_framework.so.2
Dec 10 11:04 libtensorflow_framework.so.2 -> libtensorflow_framework.so.2.18.0
Jan 1 2000 libtensorflow_framework.so.2.18.0
Jan 1 2000 libtensorflow.so -> libtensorflow.so.2
Dec 10 11:00 libtensorflow.so.2 -> libtensorflow.so.2.18.0
Jan 1 2000 libtensorflow.so.2.18.0
```
### Relevant log output
```shell
Every time a session is run, it correctly prints this:
I0000 00:00:1733834127.549506 1676833 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 38484 MB memory: -> device: 0, name: NVIDIA A100-SXM4-40GB, pci bus id: 0000:41:00.0, compute capability: 8.0
But in the last one (again, it is random when!):
An unhandled exception occurred at $00007F223F261EA1:
EDivByZero: Division by zero
$00007F223F261EA1
$00007F223F1E2E72
$00007F223F1E1D5D
$00007F223F1E4183
$00007F223F1E4AD2
$00007F223F1E6DAE
$00007F223F1D842B
$00007F223F1D4241
$00007F223EA97604
$00007F223EA9608F
$00007F223EA921E6
$00007F223EA9067A
$00007F223EA90231
$00007F222E718923
$00007F222E72561B
$00000000004626AA
$0000000000462931
```
| stat:awaiting tensorflower,type:build/install,subtype: ubuntu/linux,TF 2.18 | low | Critical |
2,732,291,155 | storybook | [Bug]: [addon-docs] MDX stories not working at all | ### Describe the bug
I tried the code example in [code/addons/docs/vue3/README.md#mdx](https://github.com/storybookjs/storybook/blob/next/code/addons/docs/vue3/README.md#mdx) but it does not work at all.
I tried both `8.4.7` and `8.5.0-alpha.20`; both versions fail. Also, without creating the `mdx` file (which avoids the error), the `Docs` tab is not shown at all.
Following error is thrown:
```
Unable to index ./src/stories/Test.stories.mdx:
WARN Error: Invariant failed: No matching indexer found for /home/projects/llqiwrnnox.github/src/stories/Test.stories.mdx
WARN at invariant (/home/projects/llqiwrnnox.github/node_modules/@storybook/core/dist/core-server/index.cjs:34558:11)
WARN at Si.extractStories (/home/projects/llqiwrnnox.github/node_modules/@storybook/core/dist/core-server/index.cjs:34758:5)
WARN at eval (/home/projects/llqiwrnnox.github/node_modules/@storybook/core/dist/core-server/index.cjs:34703:53)
WARN at eval (/home/projects/llqiwrnnox.github/node_modules/@storybook/core/dist/core-server/index.cjs:34679:30)
WARN at eval (/home/projects/llqiwrnnox.github/node_modules/@storybook/core/dist/core-server/index.cjs:34676:26)
WARN at Si.updateExtracted (/home/projects/llqiwrnnox.github/node_modules/@storybook/core/dist/core-server/index.cjs:34668:23)
WARN at Si.ensureExtracted (/home/projects/llqiwrnnox.github/node_modules/@storybook/core/dist/core-server/index.cjs:34702:16)
WARN at Si.initialize (/home/projects/llqiwrnnox.github/node_modules/@storybook/core/dist/core-server/index.cjs:34663:16)
```
### Reproduction link
https://stackblitz.com/edit/github-tyj2nvxh?file=src%2Fstories%2FTest.stories.mdx
### Reproduction steps
- add the `addon-docs` addon
- create an mdx file as in the example
### System
```bash
Storybook Environment Info:
System:
OS: Linux 5.15 Ubuntu 24.04.1 LTS 24.04.1 LTS (Noble Numbat)
CPU: (8) x64 Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
Shell: 5.9 - /usr/bin/zsh
Binaries:
Node: 22.12.0 - ~/.local/share/mise/installs/node/22.12.0/bin/node
npm: 10.9.0 - ~/.local/share/mise/installs/node/22.12.0/bin/npm
pnpm: 9.15.0 - ~/.local/share/mise/installs/node/22.12.0/bin/pnpm <----- active
npmPackages:
@storybook/addon-docs: ^8.4.7 => 8.4.7
@storybook/addon-essentials: ^8.4.7 => 8.4.7
@storybook/addon-interactions: ^8.4.7 => 8.4.7
@storybook/addon-onboarding: ^8.4.7 => 8.4.7
@storybook/blocks: ^8.4.7 => 8.4.7
@storybook/test: ^8.4.7 => 8.4.7
@storybook/vue3: ^8.4.7 => 8.4.7
@storybook/vue3-vite: ^8.4.7 => 8.4.7
storybook: ^8.4.7 => 8.4.7
```
### Additional context
_No response_ | bug,documentation,mdx | low | Critical |
2,732,294,724 | flutter | Mac_x64 build_tests_3_4 is 2.02% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Mac_x64 build_tests_3_4"
}
-->
The post-submit test builder `Mac_x64 build_tests_3_4` had a flaky ratio 2.02% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_3_4/4624
Commit: https://github.com/flutter/flutter/commit/d311b481cff0d9230529dc2af32302bfc3f8d4bc
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_3_4/4624
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_3_4/4564
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Mac_x64%20build_tests_3_4
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P2,c: flake,team-tool | low | Major |
2,732,297,618 | vscode | 3 monitor setup, settings dropdown not readable on fullscreen | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version:
- OS Version: Windows 11 24H2
Steps to Reproduce:
Open VS Code in full screen on a 3-monitor setup and click the gear icon in the top right.
On a 3-monitor setup, clicking the top-right gear icon opens the menu to the right of the icon, and only about a quarter of the dropdown is visible.
The menu does not appear on the right screen; it is cropped at the edge of the middle screen.

(The image shows the top-right part of the middle screen; the dark area with stars is the top left of the right monitor's background.)
This is the windowed view:

(As you can see, an app behind it is set to full screen; you can see its close button. The VS Code window spans from the middle screen to the right screen, and the dropdown opens in the right place, unlike in full-screen mode.)
2,732,307,906 | vscode | Report issue fails with HTTP 401 | # Behaviour
After running the "Python: Report Issue..." command and filling everything out, the "Create on GitHub" button appears to do nothing.
The extension is making a request using `Authorization: Bearer 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` and gets a HTTP 401 "Bad credentials" response from the GitHub API.
## Steps to reproduce:
1. Follow https://github.com/microsoft/vscode-python/wiki/Reporting-a-bug#the-extension-loads or https://github.com/microsoft/vscode-python/wiki/Reporting-a-bug#the-extension-wont-even-start
2. Press "Create on GitHub"
3. Toggle Developer Tools and check the console logs and network requests
# Diagnostic data
No relevant output for `Python` in the `Output` panel | bug,issue-reporter | low | Critical |
2,732,350,359 | flutter | Build many nested lists of slivers lazily | ### Use case
The use case is a view that displays chat messages and their history, like popular messaging apps do. All messages of one day should have a sticky date header. There can be many messages for each day and also many days in the chat history (multiple years, e.g.). The list should still scroll smoothly, and all messages and days should be built lazily.
This is not possible with the stock Flutter widgets available today. There are workarounds, but they have severe disadvantages:
**Workaround 1:**
Use a `CustomScrollView` with a fixed list of `SliverMainAxisGroup` (one for a day).
Messages for each day build lazily because of `SliverList`; however, the days don't build lazily and thus slow down rebuilds, as can be seen in the screenshots below.
Executing `setState` multiple times:
With 10 days:
<img width="1561" alt="Screenshot 2024-12-11 at 09 55 31" src="https://github.com/user-attachments/assets/c27ca7f4-244b-4e23-87de-6690074f8d5a">
With 2000 days (5 - 6 years of chat history):
<img width="1558" alt="Screenshot 2024-12-11 at 09 57 05" src="https://github.com/user-attachments/assets/31be25a2-3a37-4ff0-84aa-71f0c911d749">
<details>
<summary>Example</summary>
```dart
import 'package:flutter/material.dart';
import 'package:sliver_tools/sliver_tools.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key});
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
final _data = List<(int, List<String>)>.generate(
2000,
(index) => (index, List.generate(50, (index) => index.toString())),
);
@override
Widget build(BuildContext context) {
return Scaffold(
body: CustomScrollView(
slivers: _data.map((it) {
return SliverMainAxisGroup(
slivers: [
SliverPinnedHeader(
child: Container(
color: Colors.blue,
child: Text(it.$1.toString()),
),
),
SliverList.list(
children: it.$2.map((it) {
return Text(it.toString());
}).toList(),
)
],
);
}).toList(),
),
floatingActionButton: GestureDetector(
onTap: () => setState(() {}),
child: Container(color: Colors.red, height: 100, width: 100),
),
);
}
}
```
</details>
**Workaround 2:**
Have a `ListView` and use a package like https://pub.dev/packages/sticky_headers where the content of the sticky header is a `Column` containing all the messages of the day.
Days build lazily; however, all messages of a day are always built, even when the day has many messages.
### Proposal
Solutions I can think of:
- Nest `SliverList` in `SliverList`
- Implement `builder` pattern for `CustomScrollView`
Both solutions seem to be impossible with the current Sliver implementation, according to https://github.com/Kavantix/sliver_tools/issues/44#issuecomment-1096994549
There are some related issues. Most of them are closed without being solved:
- https://github.com/flutter/flutter/issues/97107
- https://github.com/flutter/flutter/issues/114299
- https://github.com/flutter/flutter/issues/31606 | framework,f: scrolling,would be a good package,c: proposal,P3,team-framework,triaged-framework | low | Major |
2,732,366,568 | transformers | logged loss is not correct with gradient accumulation | ### System Info
transformer v4.46.3
### Who can help?
@muellerzr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Currently, the logged loss is not divided by the number of gradient accumulation steps, so it is larger than expected:
https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2521-L2536
```
with context():
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
if (
args.logging_nan_inf_filter
and not is_torch_xla_available()
and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
):
# if loss is nan or inf simply add the average of previous logged losses
tr_loss = tr_loss + tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)
else:
if tr_loss.device != tr_loss_step.device:
raise ValueError(
f"Calculated loss must be on the original device: {tr_loss.device} but device in use is {tr_loss_step.device}"
)
tr_loss = tr_loss + tr_loss_step
```
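To make the inflation concrete, here is a toy illustration in plain Python. It assumes `training_step` returns the raw micro-batch loss, not one already divided by the accumulation steps:
```python
ga_steps = 4
micro_losses = [2.0, 2.0, 2.0, 2.0]   # loss of each micro-batch in one optimizer step

tr_loss = sum(micro_losses)           # what the loop above accumulates: 8.0
steps_since_last_log = 1              # global_step counts optimizer steps, not micro-steps
print(tr_loss / steps_since_last_log)             # logged today: 8.0
print(tr_loss / ga_steps / steps_since_last_log)  # with the fix: 2.0, the true mean loss
```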
### Expected behavior
## How to fix
```
diff --git a/trainer.py b/trainer.py
index 1b9b80f..043c6c9 100755
--- a/trainer.py
+++ b/trainer.py
@@ -2546,7 +2546,7 @@ class Trainer:
self.state.global_step += 1
self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epoch
self.control = self.callback_handler.on_step_end(args, self.state, self.control)
- self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
+ self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, num_batches)
else:
self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
@@ -2571,7 +2571,7 @@ class Trainer:
self.control.should_training_stop = True
self.control = self.callback_handler.on_epoch_end(args, self.state, self.control)
- self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
+ self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, self.args.gradient_accumulation_steps)
if DebugOption.TPU_METRICS_DEBUG in self.args.debug:
if is_torch_xla_available():
@@ -2976,7 +2976,7 @@ class Trainer:
) from exc
return metrics
- def _maybe_log_save_evaluate(self, tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval):
+ def _maybe_log_save_evaluate(self, tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, ga_steps):
if self.control.should_log and self.state.global_step > self._globalstep_last_logged:
if is_torch_xla_available():
xm.mark_step()
@@ -2990,7 +2990,7 @@ class Trainer:
# reset tr_loss to zero
tr_loss -= tr_loss
- logs["loss"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4)
+ logs["loss"] = round(tr_loss_scalar/ ga_steps / (self.state.global_step - self._globalstep_last_logged), 4)
if grad_norm is not None:
logs["grad_norm"] = grad_norm.detach().item() if isinstance(grad_norm, torch.Tensor) else grad_norm
``` | bug | low | Critical |
2,732,372,396 | opencv | Unstable CRF from CalibrateDebevec | ### System Information
OpenCV python version: 4.9.0.80
Operating system: Windows 11
Python version: 3.11.9
### Detailed description
The function of CalibrateDebevec is to _approximate_ the Camera Response Function (CRF). The CRF describes a property of the camera and should apply to that camera in different situations.
- Therefore the CRF should approximate the same polynomial if calibrated on different image sets.
- With more images or more sample points, the CRF should describe the same polynomial with higher accuracy.
**Bug:** If the CRF is calibrated multiple times with different parameters or image sets, the resulting CRF is different.
The CRF is different when:
- Different amount of sample points (argument parameter defaults to 70)
- Images are resized (tested with 1/2, 1/4, 1/8)
- Different amount of exposures/images
- Different scene illumination
The effect is strongest with a higher number of sample points or images.
The image below shows CRF lines of 6 image sets, the black lines are created with 3 images, the coloured lines with 33 images, 500 sample points.

Overview of my images. The number in the filename is a percentage of the base exposure time.

### Steps to reproduce
Capture a set of images with different exposure times but no movement or changes in scene illumination.
This reproduction shows the issue with the number of sample points; however, it also shows with the number of images or the resolution.
```
import cv2
import numpy as np

crf_list = []
# images = list of images of the same static scene at different exposures
# exposure_times = np.asarray([...], dtype=np.float32)  # exposure times in milliseconds
for sample_points in [70, 200, 400, 700]:
    calib_debevec = cv2.createCalibrateDebevec(samples=sample_points)
    crf = calib_debevec.process(src=images, times=exposure_times)
    crf_list.append(crf)
# plot and compare entries of crf_list
```
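To compare the entries of `crf_list` (the last step above), something like the following can be used. This is a sketch that assumes the `crf_list` from the snippet above and that `process` returns the usual 256×1×3 response array:
```python
import matplotlib.pyplot as plt
import numpy as np

xs = np.arange(256)
for crf, n in zip(crf_list, [70, 200, 400, 700]):
    # Channel 1 = green; plot the log response, as CRFs are usually compared.
    plt.plot(xs, np.log(crf[:, 0, 1]), label=f"{n} sample points")
plt.xlabel("pixel value")
plt.ylabel("log response (green channel)")
plt.legend()
plt.show()
```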
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: calib3d | low | Critical |
2,732,384,551 | next.js | `next dev --turbo` get 404 for `.next/static/chunks/` because of specified `deploymentId` in `next.config.ts` | ### Link to the code that reproduces this issue
https://github.com/Subrequest/nextjs-bug-report
### To Reproduce
1. Start the dev server
2. Go to `http://localhost:3000`
3. See the page not loading stuff (like CSS)
4. Go to `next.config.ts`, comment out `deploymentId`
5. Restart and server and go back to localhost
6. Page is loading fine
You can try the same without `--turbo`; it will work with and without `deploymentId` being set.
### Current vs. Expected behavior
Setting `deploymentId` in `next.config.ts` works with Webpack. But as long as you're using `--turbo`, it leads to 404s.
```ts
const nextConfig: NextConfig = {
deploymentId: Date.now().toString(),
};
```
Since "[_Next.js will automatically mitigate most instances of version skew and automatically reload the application to retrieve new assets when detected._](https://nextjs.org/docs/app/building-your-application/deploying)", I'm not sure it's really necessary to use `deploymentId` for self-hosting. I'll get rid of that.
Also, `deploymentId` is not mentioned in [`next.config.ts`'s doc](https://nextjs.org/docs/app/api-reference/config/next-config-js). It is only mentioned [here](https://nextjs.org/docs/app/building-your-application/deploying#version-skew). Maybe something to add?
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Nov 14 18:15:21 PST 2024; root:xnu-11215.41.3~13/RELEASE_ARM64_T6041
Available memory (MB): 49152
Available CPU cores: 12
Binaries:
Node: 23.3.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.14.4
Relevant Packages:
next: 15.1.0 // Latest available version is detected (15.1.0).
eslint-config-next: 15.1.0
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Turbopack,linear: turbopack | low | Critical |
2,732,452,669 | angular | Support application-level providers in server rendering utilities |
### **Description**
The current SSR rendering utilities, [`renderModule`](https://github.com/angular/angular/blob/main/packages/platform-server/src/utils.ts#L267) and [`renderApplication`](https://github.com/angular/angular/blob/f3729cec87ac3b9c2798be604dd2425535f8fb66/packages/platform-server/src/utils.ts#L302), allow adding **only platform-level providers**, not application-level providers.
Due to this limitation:
1. It is not possible to reuse the same platform instance across multiple requests; instead, we destroy and recreate it for each request.
2. Application-specific providers cannot be effectively scoped, which can lead to unintended behavior for request-specific tokens.
### **Proposed solution**
By enabling **application-level dependency injection tokens**, we can better distinguish application-specific providers from platform-level ones.
For example, tokens such as the following could be scoped truly to the **application level** instead of being treated as platform-level providers:
- [`REQUEST`](https://github.com/angular/angular/blob/f3729cec87ac3b9c2798be604dd2425535f8fb66/packages/core/src/application/platform_tokens.ts#L30)
- [`INITIAL_CONFIG`](https://github.com/angular/angular/blob/f3729cec87ac3b9c2798be604dd2425535f8fb66/packages/platform-server/src/utils.ts#L57)
This change would be beneficial in order to:
- Improve DI scoping for SSR environments.
- Allow cleaner separation of **request-specific** and **platform-specific** providers.
- Prevent issues where shared platform instances inadvertently mix providers across multiple requests.
#### **Related Files**
- [`renderModule`](https://github.com/angular/angular/blob/main/packages/platform-server/src/utils.ts#L267)
- [`renderApplication`](https://github.com/angular/angular/blob/f3729cec87ac3b9c2798be604dd2425535f8fb66/packages/platform-server/src/utils.ts#L302)
### **Use Cases**
1. Using the `REQUEST` token for handling request-specific logic.
2. Defining `INITIAL_CONFIG` at an application level to avoid platform-level pollution.
3. Ensuring clean instantiation of providers for every incoming request in SSR.
| area: server | low | Major |
2,732,581,923 | kubernetes | Allow Services to select only the latest backend Pods | ### What would you like to be added?
Add a new field `spec.selectorLatest` to the `kind: Service` resource to enable automatic selection of the latest pods, without the need for a sophisticated controller to make updates.
So, alongside the pod selector, there would be an extra `selectorLatest: true` field, like:
```yaml
spec:
selectorLatest: true
selector:
service_name: foo
```
This will ensure that the load balancer contains ONLY the latest version in the case of a rolling update.
This means that, depending on the rollingUpdate strategy and race conditions, it might have just one instance or more, but that should be enough to serve the new version.
### Why is this needed?
## Why this is needed
In many dynamic environments, managing Service selectors with a controller can be cumbersome. When a Deployment creates a new ReplicaSet, the Service's `spec.selector` must be updated rapidly if changes occur. Automating this process via a `selectorLatest: true` field could simplify operations and reduce errors (race conditions).
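For contrast, this is roughly what the controller bookkeeping looks like today with the official Python client. It is only a sketch; the label names are hypothetical:
```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

def point_service_at(namespace: str, service: str, version: str) -> None:
    """Re-point the Service at the pods of the newest ReplicaSet."""
    patch = {"spec": {"selector": {"service_name": "foo", "version": version}}}
    core.patch_namespaced_service(service, namespace, patch)

# A controller must watch ReplicaSets and call this on every rollout,
# racing the Deployment controller; `selectorLatest: true` would make
# this bookkeeping unnecessary.
```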
## Additional Context
This feature would reduce the operational overhead for users running applications in Kubernetes,
particularly for CI/CD pipelines and dynamic workloads.
## Example
Having two `kind: service` exposed publicly
* one traditional, serving dynamic content (serving both old and new versions while deploying); heavy traffic
* one via a caching CDN with the new feature, serving uncached static files only (different domain); low traffic
ASCII art: 🥲
```
www.example.com [ dynamic content ] -> [kind: Service traditional] -> same_app
static.example.com [CDN - cache statics] -> [kind: Service with latest version true] -> same_app
```
The dynamic content could be served in parallel with the static content from the same app, while using a rolling update strategy.
## Benefits
* Simple, unified deployment; no need for sophisticated A/B deployment
* Simple CI/CD; no need to warm up caches upfront (it can be triggered during/after deployment, or not at all)
* No race conditions in serving static files: the first request for a new static file can always be served by the first instance in the Service and then cached in the CDN, so there is no heavy load on the first instance thanks to the cache being warmed upfront
2,732,587,800 | flutter | Mac_x64 build_tests_4_4 is 2.11% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Mac_x64 build_tests_4_4"
}
-->
The post-submit test builder `Mac_x64 build_tests_4_4` had a flaky ratio 2.11% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_4_4/4670
Commit: https://github.com/flutter/flutter/commit/0147b1f0eca0eda19ff9ec6fc29fcd2881e50b2d
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_4_4/4670
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_4_4/4655
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Mac_x64%20build_tests_4_4
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P2,c: flake,team-tool | low | Major |
2,732,633,620 | vscode | Visualisation of Tests breaks | # Behaviour
Whenever some of the `pytest` parametrised tests fail, the old output is not cleaned up before the updated run is shown, and new text is printed on top of the old

## Steps to reproduce:
1. Set up a parametrised pytest (a minimal example is sketched below)
2. Run the tests with a failure
3. Fix the failure
4. Run the tests again
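For reference, a minimal parametrised test of the shape involved; the test name and parameters here are made up for illustration, not taken from my project:
```python
import pytest

@pytest.mark.parametrize("value", [0, 1, 2])
def test_pipeline(value):
    # Make this assertion fail, run, fix it, and run again to see the
    # new output drawn on top of the old one in the test results view.
    assert value < 3
```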
# Diagnostic data
Output for Python Test Log
<details>
<summary>Output for <code>Python</code> in the <code>Output</code> panel (<code>View</code>→<code>Output</code>, change the drop-down the upper-right of the <code>Output</code> panel to <code>Python</code>)
</summary>
<p>
```
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]
backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]
backend/tests/test_pipeline.py::test_pipeline[inexpected]
8 tests collected in 0.39s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.Running pytest with args: ['-p', 'vscode_pytest', '--quiet', '--tb=short', '--rootdir=/Users/ilyakochik/Developer/refining-company/sql-refining', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[inexpected]']
.....F.. [100%]
=================================== FAILURES ===================================
___________________ test_pipeline[src.code.ingest_content.1] ___________________
backend/tests/test_pipeline.py:194: in test_pipeline
assert correct[name] == captured[name], f"Snapshots '{name}' are different"
E AssertionError: Snapshots 'src.code.ingest_content.1' are different
E assert Strings are not equal:
E ---
E +++
E @@ -6,394 +5,0 @@
E - { "Query(0-accounts.sql:26:1)": {
E - "expressions": [
E - { "Expression(0-accounts.sql:28:5) = Expression(Binary_expression(Column(?.?.revenue), Column(?.?.contract_duration_days)))": {
E - "columns": [...
E
E ...Full output truncated (390 lines hidden), use '-vv' to show
=========================== short test summary info ============================
FAILED backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]
1 failed, 7 passed in 0.35s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.
==================================== ERRORS ====================================
_______________ ERROR collecting backend/tests/test_pipeline.py ________________
backend/tests/test_pipeline.py:190: in <module>
@pytest.mark.parametrize("name,captured,correct", get_test_params())
backend/tests/test_pipeline.py:181: in get_test_params
captured = capture_snapshots(config)
backend/tests/test_pipeline.py:150: in capture_snapshots
server.main(**init["server:main"])
backend/src/server.py:99: in main
session.load_codebase(codebase_path)
backend/src/workspace.py:22: in load_codebase
self.queries_codebase = code.from_dir(self.path_codebase)
backend/src/code.py:111: in from_dir
tree = ingest_content(tree, str(file.relative_to(dir)), file.read_text())
backend/tests/test_pipeline.py:124: in wrapper
result = target(*args, **kwargs)
backend/src/code.py:127: in ingest_content
queries=tree.queries() + updated_queries,
E TypeError: 'list' object is not callable
------------------------------- Captured stderr --------------------------------
Loading codebase from /Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/inputs/codebase
=========================== short test summary info ============================
ERROR backend/tests/test_pipeline.py - TypeError: 'list' object is not callable
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
no tests collected, 1 error in 0.43s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]
backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]
backend/tests/test_pipeline.py::test_pipeline[inexpected]
8 tests collected in 0.32s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.Running pytest with args: ['-p', 'vscode_pytest', '--quiet', '--tb=short', '--rootdir=/Users/ilyakochik/Developer/refining-company/sql-refining', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]']
. [100%]
1 passed in 0.31s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.Running pytest with args: ['-p', 'vscode_pytest', '--quiet', '--tb=short', '--rootdir=/Users/ilyakochik/Developer/refining-company/sql-refining', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[inexpected]']
........ [100%]
8 passed in 0.31s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]
backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]
backend/tests/test_pipeline.py::test_pipeline[inexpected]
8 tests collected in 0.33s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.Running pytest with args: ['-p', 'vscode_pytest', '--quiet', '--tb=short', '--rootdir=/Users/ilyakochik/Developer/refining-company/sql-refining', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[inexpected]']
........ [100%]
8 passed in 0.34s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]
backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]
backend/tests/test_pipeline.py::test_pipeline[inexpected]
8 tests collected in 0.37s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.Running pytest with args: ['-p', 'vscode_pytest', '--quiet', '--tb=short', '--rootdir=/Users/ilyakochik/Developer/refining-company/sql-refining', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[inexpected]']
F...FF.F [100%]
=================================== FAILURES ===================================
___________________ test_pipeline[src.code.ingest_content.2] ___________________
backend/tests/test_pipeline.py:193: in test_pipeline
assert name in captured, f"Snapshot '{name}' was not captured"
E AssertionError: Snapshot 'src.code.ingest_content.2' was not captured
E assert 'src.code.ingest_content.2' in {'src.code.ingest.0': '{\n "files": { "0-accounts.sql": "<Tree>" },\n "queries": [\n { "Query(0-accounts.sql:26:1...s.industry" ] } ],\n "alias": "industry_tech" } } ],\n "reliability": 1,\n "similarity": 0.95 } ]', ...}
___________________ test_pipeline[src.code.ingest_content.0] ___________________
backend/tests/test_pipeline.py:193: in test_pipeline
assert name in captured, f"Snapshot '{name}' was not captured"
E AssertionError: Snapshot 'src.code.ingest_content.0' was not captured
E assert 'src.code.ingest_content.0' in {'src.code.ingest.0': '{\n "files": { "0-accounts.sql": "<Tree>" },\n "queries": [\n { "Query(0-accounts.sql:26:1...s.industry" ] } ],\n "alias": "industry_tech" } } ],\n "reliability": 1,\n "similarity": 0.95 } ]', ...}
___________________ test_pipeline[src.code.ingest_content.1] ___________________
backend/tests/test_pipeline.py:193: in test_pipeline
assert name in captured, f"Snapshot '{name}' was not captured"
E AssertionError: Snapshot 'src.code.ingest_content.1' was not captured
E assert 'src.code.ingest_content.1' in {'src.code.ingest.0': '{\n "files": { "0-accounts.sql": "<Tree>" },\n "queries": [\n { "Query(0-accounts.sql:26:1...s.industry" ] } ],\n "alias": "industry_tech" } } ],\n "reliability": 1,\n "similarity": 0.95 } ]', ...}
__________________________ test_pipeline[inexpected] ___________________________
backend/tests/test_pipeline.py:197: in test_pipeline
assert not extra_snapshots, f"Unexpected snapshots captured: {extra_snapshots}"
E AssertionError: Unexpected snapshots captured: {'src.code.ingest.1', 'src.code.ingest.0', 'src.code.ingest.2'}
E assert not {'src.code.ingest.0', 'src.code.ingest.1', 'src.code.ingest.2'}
=========================== short test summary info ============================
FAILED backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]
FAILED backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]
FAILED backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]
FAILED backend/tests/test_pipeline.py::test_pipeline[inexpected] - AssertionE...
4 failed, 4 passed in 0.34s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]
backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]
backend/tests/test_pipeline.py::test_pipeline[inexpected]
8 tests collected in 0.31s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.Running pytest with args: ['-p', 'vscode_pytest', '--quiet', '--tb=short', '--rootdir=/Users/ilyakochik/Developer/refining-company/sql-refining', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[inexpected]']
no tests ran in 0.31s
ERROR: not found: /Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.2]
(no match in any of [<Module test_pipeline.py>])
ERROR: not found: /Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.0]
(no match in any of [<Module test_pipeline.py>])
ERROR: not found: /Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest_content.1]
(no match in any of [<Module test_pipeline.py>])
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.0]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.1]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]
backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.2]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]
backend/tests/test_pipeline.py::test_pipeline[inexpected]
8 tests collected in 0.31s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.Running pytest with args: ['-p', 'vscode_pytest', '--quiet', '--tb=short', '--rootdir=/Users/ilyakochik/Developer/refining-company/sql-refining', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[inexpected]']
........ [100%]
8 passed in 0.31s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.0]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.1]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]
backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.2]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]
backend/tests/test_pipeline.py::test_pipeline[inexpected]
8 tests collected in 0.32s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.0]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.1]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]
backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]
backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.2]
backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]
backend/tests/test_pipeline.py::test_pipeline[inexpected]
8 tests collected in 0.31s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.Running pytest with args: ['-p', 'vscode_pytest', '--quiet', '--tb=short', '--rootdir=/Users/ilyakochik/Developer/refining-company/sql-refining', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.logic.compare.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.1]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.code.ingest.2]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[src.sql.parse.0]', '/Users/ilyakochik/Developer/refining-company/sql-refining/backend/tests/test_pipeline.py::test_pipeline[inexpected]']
........ [100%]
8 passed in 0.32s
Starting now, all test run output will be sent to the Test Result panel, while test discovery output will be sent to the "Python" output channel instead of the "Python Test Log" channel. The "Python Test Log" channel will be deprecated within the next month. See https://github.com/microsoft/vscode-python/wiki/New-Method-for-Output-Handling-in-Python-Testing for details.
```
</p>
</details>
| bug,testing | low | Critical |
2,732,649,252 | svelte | function bindings error with bind:group | ### Describe the bug
Function bindings do not work with `bind:group`, whether on radio buttons or on components; a minimal sketch of the failing pattern is below.
### Reproduction
[repl](https://svelte.dev/playground/hello-world?version=5.10.1#H4sIAAAAAAAACp2Qy2rEMAxFf0WIQjKQZvaZJNBd_6Huwkk0weDKxlamLSH_XpxHmUULpSvZ5-pxpRlZvxFW-EzWOnh3wQ6Q02CEhhMWeDWWIlYvM8qnT3kJYHFUPXlfxhtZSazTkX7ivWMhlogV1rEPxgtYzWOjUKLCVoliJZYExuAmDw08RNFCeaa9t5SdFKeE3nEUuOpeDwQNzIkpGUny0_FREkimwFujywaXYouRJDd3qccwczlImuAsldaN-aqe9g4pLBfF9Xmz366OWHFt2E8C6TSNwqAH4xTCTduJmt09pIs02ao9rl0zMMO32hkeqhU387ZbOZIU-55lJFng3Cqure7IwtWFo7SFNUJ9XqX2D446zZr1r5YO-T-e9toWtsedKyxQ6EOwkjDR8rp8AYlWfMNyAgAA)
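For reference, a minimal sketch of the pattern that fails (hypothetical names; this assumes Svelte 5's `bind:property={get, set}` function-binding syntax, which already works for e.g. `bind:value`):

```svelte
<script>
	let selected = $state('a');
</script>

{#each ['a', 'b', 'c'] as option}
	<!-- a function binding here errors instead of wiring up the get/set pair -->
	<input
		type="radio"
		value={option}
		bind:group={() => selected, (v) => (selected = v)}
	/>
{/each}
```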
### Logs
_No response_
### System Info
```shell
svelte repl
```
### Severity
annoyance | feature request | low | Critical |
2,732,687,491 | vscode | Highlighted line in terminal moving with resizing the window |
Type: <b>Bug</b>
When I highlight a line printed in the terminal and then resize the terminal window, the selection moves up and down as the window is resized, so a different line ends up selected.
VS Code version: Code 1.94.2 (Universal) (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Pro (8 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|3, 3, 3|
|Memory (System)|16.00GB (0.05GB free)|
|Process Argv|--crash-reporter-id 4f2899d4-95f0-49d7-9a3b-f8229a52caf2|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (21)</summary>
Extension|Author (truncated)|Version
---|---|---
csdevkit|ms-|1.14.14
csharp|ms-|2.55.29
vscode-dotnet-runtime|ms-|2.2.3
debugpy|ms-|2024.12.0
python|ms-|2024.20.0
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.9.1
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
fabric8-analytics|red|0.9.5
java|red|1.37.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
945dj816:31013170
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl1:31139838
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | bug,confirmed,terminal-rendering | low | Critical |
2,732,704,593 | godot | _input: No InputEventMouse after click and drag when mouse is outside game window in the first window focus | ### Tested versions
Reproducible in: v4.4.dev6.official [1f47e4c4e], v4.3.stable.official [77dcf97d8]
### System information
Godot v4.4.dev6 - Windows 10.0.19045 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 31.0.15.3713) - AMD Ryzen 5 3600 6-Core Processor (12 threads)
### Issue description
Any mouse interaction outside the game window will correctly not fire any `InputEventMouse`. But if the user clicks and drags something inside the window, an `InputEventMouse` is always fired, even when the mouse leaves the window, which is to be expected.
https://github.com/user-attachments/assets/b690596e-b2de-4e1b-8f53-249ef093cd54
Code in this example:
```gdscript
var pressed: bool

func _input(event: InputEvent) -> void:
	if event is InputEventMouseButton:
		if event.button_index == MOUSE_BUTTON_LEFT:
			pressed = event.pressed
			print(pressed)
	if pressed:
		$Icon.global_position = get_global_mouse_position()
```
However, if the window is not focused _before_ the click and drag, no `InputEventMouse` is fired when the mouse leaves the window.
This can lead to an unexpected situation when a user clicks inside and releases outside the window: since no `InputEventMouseButton` is fired outside the window, `pressed` is never updated.
https://github.com/user-attachments/assets/de45b373-b411-46c0-8f4e-c8f971094df5
As a workaround, you can do your mouse logic inside `_process` instead of `_input`.
```gdscript
func _process(_delta: float) -> void:
	if Input.is_mouse_button_pressed(MOUSE_BUTTON_LEFT):
		$Icon.global_position = get_global_mouse_position()
```
### Steps to reproduce
- Add this snippet into any script:
```gdscript
func _input(event: InputEvent) -> void:
	print(event)
```
- Run the project and observe that if the window was not focused beforehand, clicking and dragging does not fire any `InputEventMouse` once the mouse is outside the window.
### Minimal reproduction project (MRP)
[MouseEventOutsideWindowBug.zip](https://github.com/user-attachments/files/18094418/MouseEventOutsideWindowBug.zip)
| bug,topic:porting,topic:input,topic:gui | low | Critical |
2,732,727,172 | godot | Web builds with Scenes containing a SubViewportContainer lead to "WebGL context lost" errors on iOS Devices | ### Tested versions
- Reproducible in 4.3 stable and 4.4-dev*
- Tested on iPhones and iPads running iOS Version 17 and 18
- Behaves the same on all Browsers
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3060 (NVIDIA; 32.0.15.6094) - 13th Gen Intel(R) Core(TM) i7-13700K (24 Threads)
### Issue description
Browsers on iOS devices stop with a generic "WebGL context lost" error when running a Web build that contains a SubViewportContainer.
### Steps to reproduce
- open attached MRP or create a new Project containing a Scene with a SubViewportContainer
- build to Web
- open in any browser on iOS
### Minimal reproduction project (MRP)
[ios-wasm-bug.zip](https://github.com/user-attachments/files/18094598/ios-wasm-bug.zip)
| bug,platform:web,platform:ios,topic:rendering,topic:porting,needs testing | low | Critical |
2,732,756,142 | vscode | Switching code editor tabs slow in workspace with many tests (~150,000) | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.95.3
- OS Version: Windows 11 23H2
```
Version: 1.95.3 (system setup)
Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813
Date: 2024-11-13T14:50:04.152Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.22631
```
Steps to Reproduce:
1. Open a vscode workspace with many tests (in my case ~150,000)
2. Wait for the language extension to discover all the tests in the project. In my case this is done by C# Dev Kit, though I doubt this matters.
3. switching between code editor tabs takes a full second for vscode to become responsive again.
I profiled the extension host while it was hanging, which showed practically 0% CPU usage.
Next, I profiled the main window. See [Trace-20241211T121347.json](https://github.com/user-attachments/files/18094526/Trace-20241211T121347.json)
The bulk of time is spent in `encodeURIComponentFast` which is triggered by [`testingDecorations.ts`](https://github.com/microsoft/vscode/blob/4fb4fc90310a408a4f5c6216372e634634ae8ea0/src/vs/workbench/contrib/testing/browser/testingDecorations.ts#L517-L526):

Notes:
- This happens on all editors in the window, even on file types not associated with the tests. E.g. switching between Markdown files is slow too. Though, switching to e.g. the "Settings" tabs doesn't have this problem, presumably because it isn't a code editor.
- The delay is the same regardless of whether the open file contains any tests or not.
- I have the Test Gutter disabled, so AFAIK there should be no need to compute this information in the first place.
- Observation as someone not familiar with this codebase at all: I notice the [code in question](https://github.com/microsoft/vscode/blob/4fb4fc90310a408a4f5c6216372e634634ae8ea0/src/vs/workbench/contrib/testing/browser/testingDecorations.ts#L517-L526) already "spawns" an `async` function to prevent blocking. However, looking at the profile, all `await` points end up being scheduled on the microtask queue of the original `mousedown` event handler, so the UI rendering is blocked anyway. | bug,author-verification-requested,testing | low | Critical |
2,732,762,585 | godot | Control.PivotOffset assignment is destructive with existing Rotation or Scale | ### Tested versions
4.3.1-rc [ff9bc0422], custom build (`scons module_mono_enabled=yes debug_symbols=yes ccflags="-fvar-tracking -fvar-tracking-assignments"`)
### System information
Godot v4.3.1.rc.mono (ff9bc0422) - Debian GNU/Linux trixie/sid trixie - X11 - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 4060 Laptop GPU - 13th Gen Intel(R) Core(TM) i7-13650HX (20 Threads)
### Issue description
Assigning to a `Control.PivotOffset` will change the `Control`'s transform immediately if a non-default `Scale` or `Rotation` is in effect, rather than updating the other properties to keep the `Transform` constant while `PivotOffset` changes.
The small attached project lets one interactively compare what currently happens, versus what I expected to happen.
Controls:
- Scrollwheel scales the Godot logo
- Shift+Scrollwheel rotates the Godot logo
- Drag pans it around
Uncheck `TestingContainer`'s `"Use workaround"` property in the Editor to see the current behavior.
This is the implementation of the workaround for what I believe is more developer-friendly behaviour, which just updates the `Control`'s position to preserve the current transform, in a way I tried to make forwards-compatible:
```cs
private static void ChangePivotOffset(Control control, Vector2 newPivot) {
/*
* Scale and Rotation will correctly apply themselves relative to the
* pivot offset, so they can be freely set without altering other
* properties.
*
* If there is no Scale or Rotation set, then PivotOffset can be set
* non-destructively; the net transform will not change, and the
* Control will not move.
*
* However, if Scale or Rotation is set, then changing the PivotOffset
* will move the Control, as Scale, Rotation, and Position values are
* kept constant, and Scale and Rotation are applied relative to the
* PivotOffset, leading to different results.
*
* This function attempts to change the PivotOffset without moving the
* Control, or otherwise causing any change in the overall Transform,
* by adjusting the Position of the Control to compensate for the
* change in the PivotOffset.
*/
GD.Print("Control position before:", control.Position);
Vector2 originPre = control.GetTransform().Origin;
control.PivotOffset = newPivot;
Vector2 originPost = control.GetTransform().Origin;
GD.Print("Origin change:", originPre, " -> ", originPost, " net: ", originPost - originPre);
GD.Print("Control position after:", control.Position);
control.Position -= originPost - originPre;
GD.Print("Control position after correction:", control.Position);
}
```
I'm happy to implement this in C++, if someone can show me where to look. There's currently a `Control::_edit_set_pivot` method which does some adjustment to `Position` and might be equivalent, but a breakpoint on it never seems to be hit, only `Control::set_pivot_offset`. I could also try implementing this as a new `Control` method, if the existing behaviour is likely to be relied on?
Possibly related: #52383
### Steps to reproduce
- Create a `Control`, and assign a non-default `Scale` and/or `Rotation`
- Assign a different value to PivotOffset
- `Position`, `Scale`, and `Rotation` are unchanged, but given that `Scale` and `Rotation` take effect from the new `PivotOffset`, the Control's transform has changed.
### Minimal reproduction project (MRP)
[test-pivot.zip](https://github.com/user-attachments/files/18094821/test-pivot.zip)
| documentation,topic:gui | low | Critical |
2,732,767,061 | react | Bug: Updating state during render when using `useSyncExternalStore` throws `Cannot update a component (Component) while rendering a different component (Component)` | React version: 19.0.0
## Steps To Reproduce
1. Open the Console before you work with the page
2. Click the "increment" button 5 times
3. You will see the error in the Console
Link to code example:
https://codesandbox.io/p/sandbox/polished-wind-d9dyvj
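A minimal sketch of the pattern involved (hypothetical store code, not the sandbox's exact source): the component reads an external store via `useSyncExternalStore` and writes to it during render, mirroring the setState-during-render pattern that `useState` allows.

```jsx
import { useSyncExternalStore } from 'react';

// Tiny hand-rolled store; the real hook wraps localStorage similarly.
let value = 0;
const listeners = new Set();
const store = {
  subscribe: (cb) => { listeners.add(cb); return () => listeners.delete(cb); },
  getSnapshot: () => value,
  set: (next) => { value = next; listeners.forEach((cb) => cb()); },
};

function Component({ count }) {
  const stored = useSyncExternalStore(store.subscribe, store.getSnapshot);
  if (count > stored) {
    store.set(count); // store update during render -> triggers the error
  }
  return <p>{stored}</p>;
}
```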
## The current behavior
When updating the store it throws `Cannot update a component (Component) while rendering a different component (Component)`.
## The expected behavior
It should work the same way `useState` works when calling `setState` during render.
## Notes
This is coming from a user of the `use-local-storage-state` hook, which has around 500k monthly downloads — https://github.com/astoilkov/use-local-storage-state/issues/77. | Status: Unconfirmed | medium | Critical |
2,732,792,582 | flutter | iOS Impeller crash : impeller::RenderPassMTL::Draw() EXC_BAD_ACCESS (KERN_INVALID_ADDRESS) | ### Steps to reproduce
I have a new crash in my live app due to Impeller on iOS.
I have never encountered it myself in debug or release mode; it comes from a live user.
Here is the full [trace](https://github.com/user-attachments/files/18095018/stack.txt)
### Expected results
The app should not crash
### Actual results
The app is crashing
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
Crashed: io.flutter.1.raster
0 Metal 0x1bc4 MTLResourceListAddResource + 64
1 AGXMetalG14 0x6a0c28 <redacted> + 144
2 AGXMetalG14 0x36cb9c <redacted> + 248
3 AGXMetalG14 0x376470 <redacted> + 420
4 Flutter 0x49b6cc impeller::RenderPassMTL::Draw() + 343 (render_pass_mtl.mm:343)
5 Flutter 0x161b04 impeller::SolidColorContents::Render(impeller::ContentContext const&, impeller::Entity const&, impeller::RenderPass&) const + 72 (status.h:72)
6 Flutter 0x13eefc std::_fl::__function::__func<impeller::Contents::RenderToSnapshot(impeller::ContentContext const&, impeller::Entity const&, std::_fl::optional<impeller::TRect<float>>, std::_fl::optional<impeller::SamplerDescriptor> const&, bool, int, std::_fl::basic_string<char, std::_fl::char_traits<char>, std::_fl::allocator<char>> const&) const::$_0, std::_fl::allocator<impeller::Contents::RenderToSnapshot(impeller::ContentContext const&, impeller::Entity const&, std::_fl::optional<impeller::TRect<float>>, std::_fl::optional<impeller::SamplerDescriptor> const&, bool, int, std::_fl::basic_string<char, std::_fl::char_traits<char>, std::_fl::allocator<char>> const&) const::$_0>, bool (impeller::ContentContext const&, impeller::RenderPass&)>::operator()(impeller::ContentContext const&, impeller::RenderPass&) + 105 (contents.cc:105)
7 Flutter 0x13dea4 impeller::ContentContext::MakeSubpass(std::_fl::basic_string_view<char, std::_fl::char_traits<char>>, impeller::RenderTarget const&, std::_fl::shared_ptr<impeller::CommandBuffer> const&, std::_fl::function<bool (impeller::ContentContext const&, impeller::RenderPass&)> const&) const + 522 (content_context.cc:522)
8 Flutter 0x13dd54 impeller::ContentContext::MakeSubpass(std::_fl::basic_string_view<char, std::_fl::char_traits<char>>, impeller::TSize<long long>, std::_fl::shared_ptr<impeller::CommandBuffer> const&, std::_fl::function<bool (impeller::ContentContext const&, impeller::RenderPass&)> const&, bool, bool, int) const + 21 (render_target.cc:21)
9 Flutter 0x13e844 impeller::Contents::RenderToSnapshot(impeller::ContentContext const&, impeller::Entity const&, std::_fl::optional<impeller::TRect<float>>, std::_fl::optional<impeller::SamplerDescriptor> const&, bool, int, std::_fl::basic_string<char, std::_fl::char_traits<char>, std::_fl::allocator<char>> const&) const + 470 (function.h:470)
10 Flutter 0x151cbc impeller::ContentsFilterInput::GetSnapshot(std::_fl::basic_string<char, std::_fl::char_traits<char>, std::_fl::allocator<char>> const&, impeller::ContentContext const&, impeller::Entity const&, std::_fl::optional<impeller::TRect<float>>, int) const + 33 (contents_filter_input.cc:33)
11 Flutter 0x14d41c impeller::GaussianBlurFilterContents::RenderFilter(std::_fl::vector<std::_fl::shared_ptr<impeller::FilterInput>, std::_fl::allocator<std::_fl::shared_ptr<impeller::FilterInput>>> const&, impeller::ContentContext const&, impeller::Entity const&, impeller::Matrix const&, impeller::TRect<float> const&, std::_fl::optional<impeller::TRect<float>> const&) const + 1498 (string:1498)
12 Flutter 0x14c834 impeller::FilterContents::GetEntity(impeller::ContentContext const&, impeller::Entity const&, std::_fl::optional<impeller::TRect<float>> const&) const + 238 (filter_contents.cc:238)
13 Flutter 0x14c310 impeller::FilterContents::Render(impeller::ContentContext const&, impeller::Entity const&, impeller::RenderPass&) const + 344 (optional:344)
14 Flutter 0x170054 impeller::EntityPass::RenderElement(impeller::Entity&, unsigned long, impeller::InlinePassContext&, int, impeller::ContentContext&, impeller::EntityPassClipStack&, impeller::TPoint<float>) const + 832 (entity_pass.cc:832)
15 Flutter 0x16f120 impeller::EntityPass::OnRender(impeller::ContentContext&, impeller::TSize<long long>, impeller::EntityPassTarget&, impeller::TPoint<float>, impeller::TPoint<float>, unsigned int, impeller::EntityPassClipStack&, unsigned long, std::_fl::shared_ptr<impeller::Contents>, std::_fl::optional<impeller::InlinePassContext::RenderPassResult> const&) const + 984 (entity_pass.cc:984)
16 Flutter 0x111d34 impeller::AiksContext::Render(impeller::Picture const&, impeller::RenderTarget&, bool) + 501 (entity_pass.cc:501)
17 Flutter 0x17f454 impeller::Renderer::Render(std::_fl::unique_ptr<impeller::Surface, std::_fl::default_delete<impeller::Surface>>, std::_fl::function<bool (impeller::RenderTarget&)> const&) const + 46 (renderer.cc:46)
18 Flutter 0x5f6ca8 std::_fl::__function::__func<fml::internal::CopyableLambda<flutter::GPUSurfaceMetalImpeller::AcquireFrameFromCAMetalLayer(SkISize const&)::$_0>, std::_fl::allocator<fml::internal::CopyableLambda<flutter::GPUSurfaceMetalImpeller::AcquireFrameFromCAMetalLayer(SkISize const&)::$_0>>, bool (flutter::SurfaceFrame&, flutter::DlCanvas*)>::operator()(flutter::SurfaceFrame&, flutter::DlCanvas*&&) + 191 (gpu_surface_metal_impeller.mm:191)
19 Flutter 0x4f432c flutter::SurfaceFrame::Submit() + 56 (surface_frame.cc:56)
20 Flutter 0x5aa3b4 flutter::Rasterizer::DrawToSurfacesUnsafe(flutter::FrameTimingsRecorder&, std::_fl::vector<std::_fl::unique_ptr<flutter::LayerTreeTask, std::_fl::default_delete<flutter::LayerTreeTask>>, std::_fl::allocator<std::_fl::unique_ptr<flutter::LayerTreeTask, std::_fl::default_delete<flutter::LayerTreeTask>>>>) + 803 (rasterizer.cc:803)
21 Flutter 0x5aa8a0 std::_fl::__function::__func<flutter::Rasterizer::DrawToSurfaces(flutter::FrameTimingsRecorder&, std::_fl::vector<std::_fl::unique_ptr<flutter::LayerTreeTask, std::_fl::default_delete<flutter::LayerTreeTask>>, std::_fl::allocator<std::_fl::unique_ptr<flutter::LayerTreeTask, std::_fl::default_delete<flutter::LayerTreeTask>>>>)::$_1, std::_fl::allocator<flutter::Rasterizer::DrawToSurfaces(flutter::FrameTimingsRecorder&, std::_fl::vector<std::_fl::unique_ptr<flutter::LayerTreeTask, std::_fl::default_delete<flutter::LayerTreeTask>>, std::_fl::allocator<std::_fl::unique_ptr<flutter::LayerTreeTask, std::_fl::default_delete<flutter::LayerTreeTask>>>>)::$_1>, void ()>::operator()() + 597 (rasterizer.cc:597)
22 Flutter 0x802a8 fml::SyncSwitch::Execute(fml::SyncSwitch::Handlers const&) const + 29 (shared_mutex.h:29)
23 Flutter 0x5a9ae4 flutter::Rasterizer::DrawToSurfaces(flutter::FrameTimingsRecorder&, std::_fl::vector<std::_fl::unique_ptr<flutter::LayerTreeTask, std::_fl::default_delete<flutter::LayerTreeTask>>, std::_fl::allocator<std::_fl::unique_ptr<flutter::LayerTreeTask, std::_fl::default_delete<flutter::LayerTreeTask>>>>) + 470 (function.h:470)
24 Flutter 0x5ab720 std::_fl::__function::__func<flutter::Rasterizer::Draw(std::_fl::shared_ptr<flutter::Pipeline<flutter::FrameItem>> const&)::$_0, std::_fl::allocator<flutter::Rasterizer::Draw(std::_fl::shared_ptr<flutter::Pipeline<flutter::FrameItem>> const&)::$_0>, void (std::_fl::unique_ptr<flutter::FrameItem, std::_fl::default_delete<flutter::FrameItem>>)>::operator()(std::_fl::unique_ptr<flutter::FrameItem, std::_fl::default_delete<flutter::FrameItem>>&&) + 423 (vector:423)
25 Flutter 0x5aaf3c flutter::Rasterizer::Draw(std::_fl::shared_ptr<flutter::Pipeline<flutter::FrameItem>> const&) + 259 (unique_ptr.h:259)
26 Flutter 0x5c27c0 std::_fl::__function::__func<fml::internal::CopyableLambda<flutter::Shell::OnAnimatorDraw(std::_fl::shared_ptr<flutter::Pipeline<flutter::FrameItem>>)::$_0>, std::_fl::allocator<fml::internal::CopyableLambda<flutter::Shell::OnAnimatorDraw(std::_fl::shared_ptr<flutter::Pipeline<flutter::FrameItem>>)::$_0>>, void ()>::operator()() + 1290 (shell.cc:1290)
27 Flutter 0x7e858 fml::MessageLoopImpl::FlushTasks(fml::FlushType) + 128 (message_loop_impl.cc:128)
28 Flutter 0x81e5c fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) + 86 (message_loop_darwin.mm:86)
29 CoreFoundation 0xb4894 __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ + 32
30 CoreFoundation 0xb4538 __CFRunLoopDoTimer + 1012
31 CoreFoundation 0xb408c __CFRunLoopDoTimers + 288
32 CoreFoundation 0x533b4 __CFRunLoopRun + 1856
33 CoreFoundation 0x52830 CFRunLoopRunSpecific + 588
34 Flutter 0x81f48 fml::MessageLoopDarwin::Run() + 52 (message_loop_darwin.mm:52)
35 Flutter 0x81b90 std::_fl::__function::__func<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0, std::_fl::allocator<fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0>, void ()>::operator()() + 94 (message_loop_impl.cc:94)
36 Flutter 0x81820 fml::ThreadHandle::ThreadHandle(std::_fl::function<void ()>&&)::$_0::__invoke(void*) + 470 (function.h:470)
37 libsystem_pthread.dylib 0x637c _pthread_start + 136
38 libsystem_pthread.dylib 0x1494 thread_start + 8
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.5, on macOS 15.1.1 24B91 darwin-x64, locale fr-FR)
• Flutter version 3.24.5 on channel stable at /Users/foxtom/Desktop/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (4 weeks ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/foxtom/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915915-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915915-b509.11)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (4 available)
• Now You See Me (mobile) • 00008020-001204401E78002E • ios • iOS 18.1.1 22B91
• macOS (desktop) • macos • darwin-x64 • macOS 15.1.1 24B91 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.110
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,platform-ios,a: production,P2,needs repro info,e: impeller,team-engine,triaged-engine | low | Critical |
2,732,834,969 | PowerToys | MWB: Delete Device | ### Description of the new feature / enhancement
Can you please add a feature to delete a device from MWB?
### Scenario when this would be used?
I used to have three computers. I removed one. I should be able to remove that device from MWB as well, since connecting to that device is no longer needed.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Product-Mouse Without Borders | low | Minor |
2,732,881,941 | next.js | Cache handler ERR_MODULE_NOT_FOUND when using ESM modules | ### Link to the code that reproduces this issue
https://github.com/hdodov/test-nextjs/tree/cache-handler-not-working
### To Reproduce
1. Clone the repo https://github.com/hdodov/test-nextjs/tree/cache-handler-not-working
2. `npm i`
3. `npm run dev`
4. You'll see the error
### Current vs. Expected behavior
I have `"type": "module"` in my `package.json`, so I can't add the cache handler like this, as [the docs suggest](https://nextjs.org/docs/app/api-reference/config/next-config-js/incrementalCacheHandlerPath):
```js
module.exports = {
cacheHandler: require.resolve('./cache-handler.js'),
cacheMaxMemorySize: 0, // disable default in-memory caching
}
```
Instead, I need to add it like this:
```diff
module.exports = {
- cacheHandler: require.resolve('./cache-handler.js'),
+ cacheHandler: import.meta.resolve('./cache-handler.js'),
cacheMaxMemorySize: 0, // disable default in-memory caching
}
```
However, when I do so, I get the following error on `npm run dev`:
```
$ npm run dev
> [email protected] dev
> next dev
▲ Next.js 15.1.0
- Local: http://localhost:3000
- Network: http://192.168.100.16:3000
✓ Starting...
✓ Ready in 1625ms
[Error: Cannot find module '/Users/hristiyan.dodov/Projects/test-nextjs/.next/file:/Users/hristiyan.dodov/Projects/test-nextjs/cache-handler.js' imported from /Users/hristiyan.dodov/Projects/test-nextjs/node_modules/next/dist/server/next-server.js] {
code: 'ERR_MODULE_NOT_FOUND',
url: 'file:///Users/hristiyan.dodov/Projects/test-nextjs/.next/file:/Users/hristiyan.dodov/Projects/test-nextjs/cache-handler.js'
}
```
…and I also get it if I try `npm run build` directly:
```
$ npm run build
> [email protected] build
> next build
▲ Next.js 15.1.0
Creating an optimized production build ...
✓ Compiled successfully
✓ Linting and checking validity of types
> Build error occurred
[Error: Cannot find module '/Users/hristiyan.dodov/Projects/test-nextjs/file:/Users/hristiyan.dodov/Projects/test-nextjs/cache-handler.js' imported from /Users/hristiyan.dodov/Projects/test-nextjs/node_modules/next/dist/export/helpers/create-incremental-cache.js] {
type: 'Error',
code: 'ERR_MODULE_NOT_FOUND',
url: 'file:///Users/hristiyan.dodov/Projects/test-nextjs/file:/Users/hristiyan.dodov/Projects/test-nextjs/cache-handler.js'
}
```
Check these two branches:
- https://github.com/hdodov/test-nextjs/tree/cache-handler-working — cache handler works here
- https://github.com/hdodov/test-nextjs/tree/cache-handler-not-working — here it doesn't
Here's a diff between the two: https://github.com/hdodov/test-nextjs/compare/cache-handler-working...cache-handler-not-working
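As a possible workaround (my assumption, not confirmed: the config wants a plain filesystem path, while `import.meta.resolve` returns a `file:` URL that then gets prefixed rather than parsed), converting the URL back with `node:url`'s `fileURLToPath` is one sketch that keeps the ESM setup:

```js
// next.config.mjs — sketch, assuming cacheHandler accepts an absolute path
import { fileURLToPath } from 'node:url'

export default {
  cacheHandler: fileURLToPath(import.meta.resolve('./cache-handler.js')),
  cacheMaxMemorySize: 0, // disable default in-memory caching
}
```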
### Provide environment information
```
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 23.6.0: Thu Sep 12 23:34:49 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_X86_64
Available memory (MB): 32768
Available CPU cores: 16
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: N/A
pnpm: 9.15.0
Relevant Packages:
next: 15.1.0 // Latest available version is detected (15.1.0).
eslint-config-next: 15.1.0
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Module Resolution
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_ | bug,Module Resolution | low | Critical |
2,732,902,405 | pytorch | DISABLED test_flip_cpu (__main__.CpuTests) | Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_flip_cpu&suite=CpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/34233139841).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 12 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_flip_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @wdvr @malfet @albanD @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,module: flaky-tests,module: macos,skipped,oncall: pt2,module: inductor,oncall: cpu inductor | low | Critical |
2,732,975,224 | react-native | iOS jsi value ~shared_ptr Crash when RCTHost dealloc | ### Description
When a `shared_ptr` is created from a JSI `Value`, a crash occurs when that `shared_ptr` is destroyed during `RCTHost` deallocation.
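A minimal sketch of the pattern I mean (hypothetical names, not code from the app): a `jsi::Value` copied into a `shared_ptr` that can outlive the runtime owned by `RCTHost`.

```cpp
#include <jsi/jsi.h>
#include <memory>

using facebook::jsi::Runtime;
using facebook::jsi::Value;

struct Holder {
  std::shared_ptr<Value> retained; // released whenever the last owner drops
};

void retain(Runtime &rt, const Value &v, Holder &holder) {
  // Copying a jsi::Value requires the Runtime, so the copy's destructor also
  // depends on rt still being alive.
  holder.retained = std::make_shared<Value>(rt, v);
}
// If `holder.retained` is released after RCTHost tears the runtime down,
// ~Value touches freed runtime state -> EXC_BAD_ACCESS as in the stack below.
```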
### Steps to reproduce
no
### React Native Version
0.74.3
### Affected Platforms
Runtime - iOS
### Areas
JSI - Javascript Interface
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
CPU: (12) arm64 Apple M3 Pro
Memory: 133.20 MB / 18.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 23.3.0
path: /opt/homebrew/bin/node
Yarn:
version: 1.22.22
path: /opt/homebrew/bin/yarn
npm:
version: 10.9.0
path: /opt/homebrew/bin/npm
Watchman:
version: 2024.11.18.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.12.1
path: /Users/didi/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK: Not Found
IDEs:
Android Studio: Not Found
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java: Not Found
Ruby:
version: 2.6.10
path: /Users/xxx/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.3
wanted: 0.74.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: true
info React Native v0.76.5 is now available (your project is running on v0.74.3).
info Changelog: https://github.com/facebook/react-native/releases/tag/v0.76.5
info Diff: https://react-native-community.github.io/upgrade-helper/?from=0.74.3
info For more info, check out "https://reactnative.dev/docs/upgrading?os=macos".
```
### Stacktrace or Logs
```text
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x0000000000000010
Exception Codes: 0x0000000000000001, 0x0000000000000010
VM Region Info: 0x10 is not in any region. Bytes before following region: 68719476720
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
commpage (reserved) 1000000000-7000000000 [384.0G] ---/--- SM=NUL ...(unallocated)
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [61876]
Triggered by Thread: 23
Crash Stack:
Thread 23 Crashed:
0 App 0x000000010516bc44 std::__1::__shared_weak_count::__release_shared[abi:ne180100]() + 56
1 App 0x000000010727a43c std::__1::shared_ptr<reanimated::Shareable>::~shared_ptr[abi:ne180100]() + 28
2 App 0x00000001072972d0 std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>::~pair() + 24
3 App 0x0000000107297588 std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>, std::__1::allocator<std::__1::pai... + 40
4 App 0x0000000107297538 std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>, std::__1::allocator<std::__1::pai... + 32
5 App 0x0000000107297504 std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>, std::__1::allocator<std::__1::pai... + 32
6 App 0x0000000107295f28 reanimated::ShareableObject::~ShareableObject() + 48
7 App 0x000000010516bc50 std::__1::__shared_weak_count::__release_shared[abi:ne180100]() + 68
8 App 0x000000010727a43c std::__1::shared_ptr<reanimated::Shareable>::~shared_ptr[abi:ne180100]() + 28
9 App 0x0000000107296ff0 std::__1::vector<std::__1::shared_ptr<reanimated::Shareable>, std::__1::allocator<std::__1::shared_ptr<reanimated::Shareable>>>::__base_destruct_at_end[abi:ne180100](std::__1::shared_ptr<reanimated... + 40
10 App 0x0000000107296fa0 std::__1::vector<std::__1::shared_ptr<reanimated::Shareable>, std::__1::allocator<std::__1::shared_ptr<reanimated::Shareable>>>::__destroy_vector::operator()[abi:ne180100]() + 32
11 App 0x0000000107296f6c std::__1::vector<std::__1::shared_ptr<reanimated::Shareable>, std::__1::allocator<std::__1::shared_ptr<reanimated::Shareable>>>::~vector[abi:ne180100]() + 32
12 App 0x000000010729786c reanimated::ShareableArray::~ShareableArray() + 32
13 App 0x000000010516bc50 std::__1::__shared_weak_count::__release_shared[abi:ne180100]() + 68
14 App 0x000000010727a43c std::__1::shared_ptr<reanimated::Shareable>::~shared_ptr[abi:ne180100]() + 28
15 App 0x0000000107296ff0 std::__1::vector<std::__1::shared_ptr<reanimated::Shareable>, std::__1::allocator<std::__1::shared_ptr<reanimated::Shareable>>>::__base_destruct_at_end[abi:ne180100](std::__1::shared_ptr<reanimated... + 40
16 App 0x0000000107296fa0 std::__1::vector<std::__1::shared_ptr<reanimated::Shareable>, std::__1::allocator<std::__1::shared_ptr<reanimated::Shareable>>>::__destroy_vector::operator()[abi:ne180100]() + 32
17 App 0x0000000107296f6c std::__1::vector<std::__1::shared_ptr<reanimated::Shareable>, std::__1::allocator<std::__1::shared_ptr<reanimated::Shareable>>>::~vector[abi:ne180100]() + 32
18 App 0x000000010729786c reanimated::ShareableArray::~ShareableArray() + 32
19 App 0x000000010516bc50 std::__1::__shared_weak_count::__release_shared[abi:ne180100]() + 68
20 App 0x000000010727a43c std::__1::shared_ptr<reanimated::Shareable>::~shared_ptr[abi:ne180100]() + 28
21 App 0x00000001072972d0 std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>::~pair() + 24
22 App 0x0000000107297588 std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>, std::__1::allocator<std::__1::pai... + 40
23 App 0x0000000107297538 std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>, std::__1::allocator<std::__1::pai... + 32
24 App 0x0000000107297504 std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>, std::__1::allocator<std::__1::pai... + 32
25 App 0x0000000107295f28 reanimated::ShareableObject::~ShareableObject() + 48
26 App 0x000000010516bc50 std::__1::__shared_weak_count::__release_shared[abi:ne180100]() + 68
27 App 0x000000010727a43c std::__1::shared_ptr<reanimated::Shareable>::~shared_ptr[abi:ne180100]() + 28
28 App 0x00000001072972d0 std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>::~pair() + 24
29 App 0x0000000107297588 std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>, std::__1::allocator<std::__1::pai... + 40
30 App 0x0000000107297538 std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>, std::__1::allocator<std::__1::pai... + 32
31 App 0x0000000107297504 std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<reanimated::Shareable>>, std::__1::allocator<std::__1::pai... + 32
32 App 0x0000000107295f28 reanimated::ShareableObject::~ShareableObject() + 48
33 App 0x000000010516bc50 std::__1::__shared_weak_count::__release_shared[abi:ne180100]() + 68
34 App 0x000000010727a43c std::__1::shared_ptr<reanimated::Shareable>::~shared_ptr[abi:ne180100]() + 28
35 App 0x0000000107295624 reanimated::ShareableJSRef::~ShareableJSRef() + 32
36 hermes 0x000000010d7b655c 0x10d7a0000 + 91484
37 hermes 0x000000010d90a5d0 0x10d7a0000 + 1484240
38 hermes 0x000000010d88234c 0x10d7a0000 + 926540
39 hermes 0x000000010d8827dc 0x10d7a0000 + 927708
40 hermes 0x000000010d7b3b08 0x10d7a0000 + 80648
41 hermes 0x000000010d7b02cc 0x10d7a0000 + 66252
42 App 0x000000010516bc50 std::__1::__shared_weak_count::__release_shared[abi:ne180100]() + 68
43 App 0x0000000107512610 std::__1::shared_ptr<facebook::hermes::HermesRuntime>::~shared_ptr[abi:ne180100]() + 28
44 App 0x00000001074fe844 facebook::react::HermesJSRuntime::~HermesJSRuntime() + 48
45 App 0x00000001074fe570 facebook::react::HermesJSRuntime::~HermesJSRuntime() + 12
46 App 0x000000010516bc50 std::__1::__shared_weak_count::__release_shared[abi:ne180100]() + 68
47 App 0x00000001074f2818 std::__1::shared_ptr<facebook::react::JSRuntime>::~shared_ptr[abi:ne180100]() + 28
48 App 0x00000001074f269c facebook::react::ReactInstance::~ReactInstance() + 80
49 App 0x00000001074f2644 std::__1::default_delete<facebook::react::ReactInstance>::operator()[abi:ne180100](facebook::react::ReactInstance*) const + 20
50 App 0x00000001074efd70 __25-[RCTInstance invalidate]_block_invoke + 48
51 App 0x00000001073841ac facebook::react::tryAndReturnError(std::__1::function<void ()> const&) + 16
52 App 0x00000001074f4a28 -[RCTJSThreadManager _tryAndHandleError:] + 56
53 App 0x00000001074f4c00 std::__1::__function::__func<-[RCTJSThreadManager dispatchToJSThread:]::$_0, std::__1::allocator<-[RCTJSThreadManager dispatchToJSThread:]::$_0>, void ()>::operator()() + 36
54 App 0x00000001073841ac facebook::react::tryAndReturnError(std::__1::function<void ()> const&) + 16
55 App 0x00000001073901ec facebook::react::RCTMessageThread::tryFunc(std::__1::function<void ()> const&) + 24
56 App 0x0000000107390070 invocation function for block in facebook::react::RCTMessageThread::runAsync(std::__1::function<void ()>) + 32
57 CoreFoundation 0x00000001b4cb1138 __CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__ + 28 (CFRunLoop.c:1805)
58 CoreFoundation 0x00000001b4caf92c __CFRunLoopDoBlocks + 356 (CFRunLoop.c:1847)
59 CoreFoundation 0x00000001b4cad808 __CFRunLoopRun + 812 (CFRunLoop.c:2953)
60 CoreFoundation 0x00000001b4cad3f8 CFRunLoopRunSpecific + 608 (CFRunLoop.c:3420)
61 App 0x00000001074f4990 +[RCTJSThreadManager runRunLoop] + 204
62 Foundation 0x00000001b3cc3d40 __NSThread__start__ + 732 (NSThread.m:991)
63 libsystem_pthread.dylib 0x000000021ed744d4 _pthread_start + 136 (pthread.c:904)
64 libsystem_pthread.dylib 0x000000021ed73a10 thread_start + 8 (:-1)
Thread 23 crashed with ARM Thread State (64-bit):
x0: 0x0000000281b8d300 x1: 0x00000002826b7980 x2: 0x0000000000000000 x3: 0x000000028117d980
x4: 0x00000002829c9940 x5: 0x000000011c1f84a0 x6: 0x0000000000000365 x7: 0x0000000000ffff7f
x8: 0x0000000000000000 x9: 0x0000000000000000 x10: 0xffffffffffffffff x11: 0x0000000000000000
x12: 0x00000000000007fb x13: 0x00000000000007fd x14: 0x0000000097c11866 x15: 0x0000000000000066
x16: 0x0000000097a1103d x17: 0x0000000000011800 x18: 0x0000000000000000 x19: 0x0000000281b8d300
x20: 0x00000002829c97e8 x21: 0x000000014b5c20d0 x22: 0x000000014b5c20e0 x23: 0x000000010dabb0a0
x24: 0x00000000141300cd x25: 0x0000000000000000 x26: 0x000000011a31d040 x27: 0x00000002811b9dc0
x28: 0x0000000000000000 fp: 0x000000016bd456a0 lr: 0x000000010727a43c
sp: 0x000000016bd45690 pc: 0x000000010516bc44 cpsr: 0x20000000
esr: 0x92000006 (Data Abort) byte read Translation fault
```
### Reproducer
no
### Screenshots and Videos
```c++
#include "Shareables.h"
using namespace facebook;
namespace reanimated {
jsi::Function getValueUnpacker(jsi::Runtime &rt) {
auto valueUnpacker = rt.global().getProperty(rt, "__valueUnpacker");
assert(valueUnpacker.isObject() && "valueUnpacker not found");
return valueUnpacker.asObject(rt).asFunction(rt);
}
#ifndef NDEBUG
static const auto callGuardLambda = [](facebook::jsi::Runtime &rt,
const facebook::jsi::Value &thisVal,
const facebook::jsi::Value *args,
size_t count) {
return args[0].asObject(rt).asFunction(rt).call(rt, args + 1, count - 1);
};
jsi::Function getCallGuard(jsi::Runtime &rt) {
auto callGuard = rt.global().getProperty(rt, "__callGuardDEV");
if (callGuard.isObject()) {
// Use JS implementation if `__callGuardDEV` has already been installed.
// This is the desired behavior.
return callGuard.asObject(rt).asFunction(rt);
}
// Otherwise, fallback to C++ JSI implementation. This is necessary so that we
// can install `__callGuardDEV` itself and should happen only once. Note that
// the C++ implementation doesn't intercept errors and simply throws them as
// C++ exceptions which crashes the app. We assume that installing the guard
// doesn't throw any errors.
return jsi::Function::createFromHostFunction(
rt, jsi::PropNameID::forAscii(rt, "callGuard"), 1, callGuardLambda);
}
#endif // NDEBUG
jsi::Value makeShareableClone(
jsi::Runtime &rt,
const jsi::Value &value,
const jsi::Value &shouldRetainRemote,
const jsi::Value &nativeStateSource) {
std::shared_ptr<Shareable> shareable;
if (value.isObject()) {
auto object = value.asObject(rt);
if (!object.getProperty(rt, "__workletHash").isUndefined()) {
shareable = std::make_shared<ShareableWorklet>(rt, object);
} else if (!object.getProperty(rt, "__init").isUndefined()) {
shareable = std::make_shared<ShareableHandle>(rt, object);
} else if (object.isFunction(rt)) {
auto function = object.asFunction(rt);
if (function.isHostFunction(rt)) {
shareable =
std::make_shared<ShareableHostFunction>(rt, std::move(function));
} else {
shareable =
std::make_shared<ShareableRemoteFunction>(rt, std::move(function));
}
} else if (object.isArray(rt)) {
if (shouldRetainRemote.isBool() && shouldRetainRemote.getBool()) {
shareable = std::make_shared<RetainingShareable<ShareableArray>>(
rt, object.asArray(rt));
} else {
shareable = std::make_shared<ShareableArray>(rt, object.asArray(rt));
}
} else if (object.isArrayBuffer(rt)) {
shareable =
std::make_shared<ShareableArrayBuffer>(rt, object.getArrayBuffer(rt));
} else if (object.isHostObject(rt)) {
if (object.isHostObject<ShareableJSRef>(rt)) {
return object;
}
shareable =
std::make_shared<ShareableHostObject>(rt, object.getHostObject(rt));
} else {
if (shouldRetainRemote.isBool() && shouldRetainRemote.getBool()) {
shareable = std::make_shared<RetainingShareable<ShareableObject>>(
rt,
object
#if SUPPORTS_NATIVE_STATE
,
nativeStateSource
#endif // SUPPORTS_NATIVE_STATE
);
} else {
shareable = std::make_shared<ShareableObject>(
rt,
object
#if SUPPORTS_NATIVE_STATE
,
nativeStateSource
#endif // SUPPORTS_NATIVE_STATE
);
}
}
} else if (value.isString()) {
shareable = std::make_shared<ShareableString>(value.asString(rt).utf8(rt));
} else if (value.isUndefined()) {
shareable = std::make_shared<ShareableScalar>();
} else if (value.isNull()) {
shareable = std::make_shared<ShareableScalar>(nullptr);
} else if (value.isBool()) {
shareable = std::make_shared<ShareableScalar>(value.getBool());
} else if (value.isNumber()) {
shareable = std::make_shared<ShareableScalar>(value.getNumber());
} else if (value.isBigInt()) {
shareable = std::make_shared<ShareableBigInt>(rt, value.getBigInt(rt));
} else if (value.isSymbol()) {
// TODO: this is only a placeholder implementation, here we replace symbols
// with strings in order to make certain objects to be captured. There isn't
// yet any usecase for using symbols on the UI runtime so it is fine to keep
// it like this for now.
shareable =
std::make_shared<ShareableString>(value.getSymbol(rt).toString(rt));
} else {
throw std::runtime_error(
"[Reanimated] Attempted to convert an unsupported value type.");
}
return ShareableJSRef::newHostObject(rt, shareable);
}
std::shared_ptr<Shareable> extractShareableOrThrow(
jsi::Runtime &rt,
const jsi::Value &maybeShareableValue,
const std::string &errorMessage) {
if (maybeShareableValue.isObject()) {
auto object = maybeShareableValue.asObject(rt);
if (object.isHostObject<ShareableJSRef>(rt)) {
return object.getHostObject<ShareableJSRef>(rt)->value();
}
throw std::runtime_error(
"[Reanimated] Attempted to extract from a HostObject that wasn't converted to a Shareable.");
} else if (maybeShareableValue.isUndefined()) {
return Shareable::undefined();
}
throw std::runtime_error(errorMessage);
}
Shareable::~Shareable() {}
std::shared_ptr<Shareable> Shareable::undefined() {
static auto undefined = std::make_shared<ShareableScalar>();
return undefined;
}
template <typename BaseClass>
jsi::Value RetainingShareable<BaseClass>::toJSValue(jsi::Runtime &rt) {
if (&rt == primaryRuntime_) {
// TODO: it is suboptimal to generate new object every time getJS is
// called on host runtime – the objects we are generating already exists
// and we should possibly just grab a hold of such object and use it here
// instead of creating a new JS representation. As far as I understand the
// only case where it can be realistically called this way is when a
// shared value is created and then accessed on the same runtime
return BaseClass::toJSValue(rt);
}
if (secondaryValue_ == nullptr) {
auto value = BaseClass::toJSValue(rt);
secondaryValue_ = std::make_unique<jsi::Value>(rt, value);
secondaryRuntime_ = &rt;
return value;
}
if (&rt == secondaryRuntime_) {
return jsi::Value(rt, *secondaryValue_);
}
return BaseClass::toJSValue(rt);
}
ShareableJSRef::~ShareableJSRef() {}
ShareableArray::ShareableArray(jsi::Runtime &rt, const jsi::Array &array)
: Shareable(ArrayType) {
auto size = array.size(rt);
data_.reserve(size);
for (size_t i = 0; i < size; i++) {
data_.push_back(extractShareableOrThrow(rt, array.getValueAtIndex(rt, i)));
}
}
jsi::Value ShareableArray::toJSValue(jsi::Runtime &rt) {
auto size = data_.size();
auto ary = jsi::Array(rt, size);
for (size_t i = 0; i < size; i++) {
ary.setValueAtIndex(rt, i, data_[i]->toJSValue(rt));
}
return ary;
}
jsi::Value ShareableArrayBuffer::toJSValue(jsi::Runtime &rt) {
auto size = static_cast<int>(data_.size());
auto arrayBuffer = rt.global()
.getPropertyAsFunction(rt, "ArrayBuffer")
.callAsConstructor(rt, size)
.getObject(rt)
.getArrayBuffer(rt);
memcpy(arrayBuffer.data(rt), data_.data(), size);
return arrayBuffer;
}
ShareableObject::ShareableObject(jsi::Runtime &rt, const jsi::Object &object)
: Shareable(ObjectType) {
auto propertyNames = object.getPropertyNames(rt);
auto size = propertyNames.size(rt);
data_.reserve(size);
for (size_t i = 0; i < size; i++) {
auto key = propertyNames.getValueAtIndex(rt, i).asString(rt);
auto value = extractShareableOrThrow(rt, object.getProperty(rt, key));
data_.emplace_back(key.utf8(rt), value);
}
#if SUPPORTS_NATIVE_STATE
if (object.hasNativeState(rt)) {
nativeState_ = object.getNativeState(rt);
}
#endif // SUPPORTS_NATIVE_STATE
}
#if SUPPORTS_NATIVE_STATE
ShareableObject::ShareableObject(
jsi::Runtime &rt,
const jsi::Object &object,
const jsi::Value &nativeStateSource)
: ShareableObject(rt, object) {
if (nativeStateSource.isObject() &&
nativeStateSource.asObject(rt).hasNativeState(rt)) {
nativeState_ = nativeStateSource.asObject(rt).getNativeState(rt);
}
}
#endif // SUPPORTS_NATIVE_STATE
jsi::Value ShareableObject::toJSValue(jsi::Runtime &rt) {
auto obj = jsi::Object(rt);
for (size_t i = 0, size = data_.size(); i < size; i++) {
obj.setProperty(rt, jsi::String::createFromUtf8(rt, data_[i].first), data_[i].second->toJSValue(rt));
}
#if SUPPORTS_NATIVE_STATE
if (nativeState_ != nullptr) {
obj.setNativeState(rt, nativeState_);
}
#endif // SUPPORTS_NATIVE_STATE
return obj;
}
jsi::Value ShareableHostObject::toJSValue(jsi::Runtime &rt) {
return jsi::Object::createFromHostObject(rt, hostObject_);
}
jsi::Value ShareableHostFunction::toJSValue(jsi::Runtime &rt) {
return jsi::Function::createFromHostFunction(
rt, jsi::PropNameID::forUtf8(rt, name_), paramCount_, hostFunction_);
}
jsi::Value ShareableWorklet::toJSValue(jsi::Runtime &rt) {
assert(
std::any_of(
data_.cbegin(),
data_.cend(),
[](const auto &item) { return item.first == "__workletHash"; }) &&
"ShareableWorklet doesn't have `__workletHash` property");
jsi::Value obj = ShareableObject::toJSValue(rt);
return getValueUnpacker(rt).call(
rt, obj, jsi::String::createFromAscii(rt, "Worklet"));
}
jsi::Value ShareableRemoteFunction::toJSValue(jsi::Runtime &rt) {
if (&rt == runtime_) {
return jsi::Value(rt, *function_);
} else {
#ifndef NDEBUG
return getValueUnpacker(rt).call(
rt,
ShareableJSRef::newHostObject(rt, shared_from_this()),
jsi::String::createFromAscii(rt, "RemoteFunction"),
jsi::String::createFromUtf8(rt, name_));
#else
return ShareableJSRef::newHostObject(rt, shared_from_this());
#endif
}
}
jsi::Value ShareableHandle::toJSValue(jsi::Runtime &rt) {
if (remoteValue_ == nullptr) {
auto initObj = initializer_->toJSValue(rt);
auto value = std::make_unique<jsi::Value>(getValueUnpacker(rt).call(
rt, initObj, jsi::String::createFromAscii(rt, "Handle")));
// We are locking the initialization here since the thread that is
// initalizing can be pre-empted on runtime lock. E.g.
// UI thread can be pre-empted on initialization of a shared value and then
// JS thread can try to access the shared value, locking the whole runtime.
// If we put the lock on `getValueUnpacker` part (basically any part that
// requires runtime) we would get a deadlock since UI thread would never
// release it.
std::unique_lock<std::mutex> lock(initializationMutex_);
if (remoteValue_ == nullptr) {
remoteValue_ = std::move(value);
remoteRuntime_ = &rt;
}
}
return jsi::Value(rt, *remoteValue_);
}
jsi::Value ShareableString::toJSValue(jsi::Runtime &rt) {
return jsi::String::createFromUtf8(rt, data_);
}
jsi::Value ShareableBigInt::toJSValue(jsi::Runtime &rt) {
return rt.global()
.getPropertyAsFunction(rt, "BigInt")
.call(rt, jsi::String::createFromUtf8(rt, string_));
}
jsi::Value ShareableScalar::toJSValue(jsi::Runtime &) {
switch (valueType_) {
case Shareable::UndefinedType:
return jsi::Value();
case Shareable::NullType:
return jsi::Value(nullptr);
case Shareable::BooleanType:
return jsi::Value(data_.boolean);
case Shareable::NumberType:
return jsi::Value(data_.number);
default:
throw std::runtime_error(
"[Reanimated] Attempted to convert object that's not of a scalar type.");
}
}
} /* namespace reanimated */
``` | Platform: iOS,API: Share,Needs: Author Feedback,Needs: Repro,Newer Patch Available,Type: New Architecture | low | Critical |
2,732,985,737 | storybook | [Bug]: Focused Test results leak to global results | When running Focused Tests and they _pass_, the global Testing Module UI will show Component Tests as passing/green, even though there are still failing tests in other places. (The reverse is probably also true.)
https://github.com/user-attachments/assets/5ebab7b5-476f-4201-8a2f-fc97c299af97
| bug,sev:S2,addon: test | low | Critical |
2,733,050,931 | langchain | AzureAIDocumentIntelligenceLoader is calling the wrong endpoint and we can't change anything | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

di_endpoint = 'https://<endpoint>.cognitiveservices.azure.com/'
key = 'api_key'
file_url = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"

# Step 2: Load the file with AzureAIDocumentIntelligenceLoader
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=di_endpoint,
    url_path=file_url,
    api_key=key,
    api_version='2023-07-31',
    api_model='prebuilt-layout'
)

documents = loader.load()
```
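For what it's worth, one hedged guess (an assumption on my part, not confirmed): the trace below shows the request going through the newer `azure-ai-documentintelligence` SDK's `/documentintelligence/` route while still pinning the old Form Recognizer `api_version='2023-07-31'`, which that route may not serve — hence the 404. A sketch of the same call without the version override:

```python
# Sketch under the assumption above: let the documentintelligence SDK use its
# own default API version instead of forcing the 2023-07-31 Form Recognizer one.
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=di_endpoint,
    url_path=file_url,
    api_key=key,
    api_model='prebuilt-layout',
)
documents = loader.load()
```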
### Error Message and Stack Trace (if applicable)
```
2024-12-11 14:44:00,968 [MainThread ] [INFO ] Request URL: 'https://amirrahnamadisweden.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=REDACTED&outputContentFormat=REDACTED'
Request method: 'POST'
Request headers:
'content-type': 'application/json'
'Content-Length': '146'
'Accept': 'application/json'
'x-ms-client-request-id': 'fa6d3882-b7c5-11ef-9cb0-76d7b7a30648'
'x-ms-useragent': 'REDACTED'
'User-Agent': 'azsdk-python-ai-documentintelligence/1.0.0b4 Python/3.11.11 (macOS-14.7-arm64-arm-64bit)'
'Ocp-Apim-Subscription-Key': 'REDACTED'
A body is sent with the request
2024-12-11 14:44:00,977 [MainThread ] [DEBUG] Starting new HTTPS connection (1): amirrahnamadisweden.cognitiveservices.azure.com:443
2024-12-11 14:44:01,298 [MainThread ] [DEBUG] https://amirrahnamadisweden.cognitiveservices.azure.com:443 "POST /documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&outputContentFormat=markdown HTTP/1.1" 404 56
2024-12-11 14:44:01,300 [MainThread ] [INFO ] Response status: 404
Response headers:
'Content-Length': '56'
'Content-Type': 'application/json'
'apim-request-id': 'REDACTED'
'Strict-Transport-Security': 'REDACTED'
'x-content-type-options': 'REDACTED'
'Date': 'Wed, 11 Dec 2024 13:44:00 GMT'
---------------------------------------------------------------------------
ResourceNotFoundError Traceback (most recent call last)
Cell In[30], line 18
9 # Step 2: Load the file with AzureAIDocumentIntelligenceLoader
10 loader = AzureAIDocumentIntelligenceLoader(
11 api_endpoint=di_endpoint,
12 url_path = file_url,
(...)
15 api_model='prebuilt-layout'
16 )
---> 18 documents = loader.load()
File [~/code/rag_azure/venv/lib/python3.11/site-packages/langchain_core/document_loaders/base.py:31](http://localhost:8888/lab/tree/~/code/rag_azure/venv/lib/python3.11/site-packages/langchain_core/document_loaders/base.py#line=30), in BaseLoader.load(self)
29 def load(self) -> list[Document]:
30 """Load data into Document objects."""
---> 31 return list(self.lazy_load())
File [~/code/rag_azure/venv/lib/python3.11/site-packages/langchain_community/document_loaders/doc_intelligence.py:103](http://localhost:8888/lab/tree/~/code/rag_azure/venv/lib/python3.11/site-packages/langchain_community/document_loaders/doc_intelligence.py#line=102), in AzureAIDocumentIntelligenceLoader.lazy_load(self)
101 yield from self.parser.parse(blob)
102 elif self.url_path is not None:
--> 103 yield from self.parser.parse_url(self.url_path) # type: ignore[arg-type]
104 elif self.bytes_source is not None:
105 yield from self.parser.parse_bytes(self.bytes_source)
File [~/code/rag_azure/venv/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/doc_intelligence.py:98](http://localhost:8888/lab/tree/~/code/rag_azure/venv/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/doc_intelligence.py#line=97), in AzureAIDocumentIntelligenceParser.parse_url(self, url)
95 def parse_url(self, url: str) -> Iterator[Document]:
96 from azure.ai.documentintelligence.models import AnalyzeDocumentRequest
---> 98 poller = self.client.begin_analyze_document(
99 self.api_model,
100 AnalyzeDocumentRequest(url_source=url),
101 # content_type="application[/octet-stream](http://localhost:8888/octet-stream)",
102 output_content_format="markdown" if self.mode == "markdown" else "text",
103 )
104 result = poller.result()
106 if self.mode in ["single", "markdown"]:
File [~/code/rag_azure/venv/lib/python3.11/site-packages/azure/core/tracing/decorator.py:105](http://localhost:8888/lab/tree/~/code/rag_azure/venv/lib/python3.11/site-packages/azure/core/tracing/decorator.py#line=104), in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs)
103 span_impl_type = settings.tracing_implementation()
104 if span_impl_type is None:
--> 105 return func(*args, **kwargs)
107 # Merge span is parameter is set, but only if no explicit parent are passed
108 if merge_span and not passed_in_parent:
File [~/code/rag_azure/venv/lib/python3.11/site-packages/azure/ai/documentintelligence/_operations/_patch.py:537](http://localhost:8888/lab/tree/~/code/rag_azure/venv/lib/python3.11/site-packages/azure/ai/documentintelligence/_operations/_patch.py#line=536), in DocumentIntelligenceClientOperationsMixin.begin_analyze_document(self, model_id, analyze_request, pages, locale, string_index_type, features, query_fields, output_content_format, output, **kwargs)
535 cont_token: Optional[str] = kwargs.pop("continuation_token", None)
536 if cont_token is None:
--> 537 raw_result = self._analyze_document_initial(
538 model_id=model_id,
539 analyze_request=analyze_request,
540 pages=pages,
541 locale=locale,
542 string_index_type=string_index_type,
543 features=features,
544 query_fields=query_fields,
545 output_content_format=output_content_format,
546 output=output,
547 content_type=content_type,
548 cls=lambda x, y, z: x,
549 headers=_headers,
550 params=_params,
551 **kwargs,
552 )
553 raw_result.http_response.read() # type: ignore
554 kwargs.pop("error_map", None)
File [~/code/rag_azure/venv/lib/python3.11/site-packages/azure/ai/documentintelligence/_operations/_operations.py:713](http://localhost:8888/lab/tree/~/code/rag_azure/venv/lib/python3.11/site-packages/azure/ai/documentintelligence/_operations/_operations.py#line=712), in DocumentIntelligenceClientOperationsMixin._analyze_document_initial(self, model_id, analyze_request, pages, locale, string_index_type, features, query_fields, output_content_format, output, **kwargs)
711 except (StreamConsumedError, StreamClosedError):
712 pass
--> 713 map_error(status_code=response.status_code, response=response, error_map=error_map)
714 error = _deserialize(_models.ErrorResponse, response.json())
715 raise HttpResponseError(response=response, model=error)
File [~/code/rag_azure/venv/lib/python3.11/site-packages/azure/core/exceptions.py:163](http://localhost:8888/lab/tree/~/code/rag_azure/venv/lib/python3.11/site-packages/azure/core/exceptions.py#line=162), in map_error(status_code, response, error_map)
161 return
162 error = error_type(response=response)
--> 163 raise error
ResourceNotFoundError: (404) Resource not found
Code: 404
Message: Resource not found
...
```
### Description
The problem here is that my documentintelligence resource is of type FormRecognizer. I have double checked that in AI studio, the api key and endpoint work perfectly fine (see image below).

But the problem is that LangChain calls the Document Intelligence endpoint path by default, even though the endpoint URL and API version are passed explicitly:
https://amirrahnamadisweden.cognitiveservices.azure.com:443 "POST /documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2023-07-31&outputContentFormat=markdown
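As a temporary workaround, I can call the Form Recognizer API directly and wrap the result myself. A rough sketch (reusing `di_endpoint`, `key`, and `file_url` from the snippet above; assumes the `azure-ai-formrecognizer` package is installed):
```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient
from langchain_core.documents import Document

# DocumentAnalysisClient talks to the /formrecognizer/ route that
# FormRecognizer-kind resources actually expose.
client = DocumentAnalysisClient(endpoint=di_endpoint, credential=AzureKeyCredential(key))
poller = client.begin_analyze_document_from_url("prebuilt-layout", file_url)
result = poller.result()

# Wrap the extracted text in a single LangChain Document.
documents = [Document(page_content=result.content)]
```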
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Wed Jul 31 20:48:04 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6030
> Python Version: 3.11.11 (main, Dec 3 2024, 17:20:40) [Clang 16.0.0 (clang-1600.0.26.4)]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.9
> langchain_community: 0.3.9
> langsmith: 0.1.147
> langchain_openai: 0.2.11
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.57.0
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
``` | 🤖:bug | low | Critical |
2,733,073,167 | create-react-app | web-vital.js compatibility issue <with solution> | # Step 1: Run
```bash
npm install web-vitals@latest
```
I found that web-vitals has changed its exported functions, so the generated reportWebVitals file crashes with the latest updates.
# Step 2: change the file "reportWebVitals"
```typescript
// reportWebVitals.ts
import { onCLS, onFCP, onINP, onLCP, onTTFB } from 'web-vitals';

const reportWebVitals = (onPerfEntry?: (metric: any) => void) => {
  if (onPerfEntry && typeof onPerfEntry === 'function') {
    onCLS(onPerfEntry);
    onFCP(onPerfEntry);
    onINP(onPerfEntry); // onINP replaces onFID, which web-vitals v4 removed
    onLCP(onPerfEntry);
    onTTFB(onPerfEntry);
  }
};
export default reportWebVitals;
```
The code above would resolve the web-vitals issues with the latest update.
| needs triage,issue: bug report | low | Critical |
2,733,100,818 | three.js | For reference: glTF Extension `KHR_gaussian_splatting` for 3DGS scene representation | ### Description
Following thread https://github.com/NASA-AMMOS/3DTilesRendererJS/issues/863 I just opened, @gkjohnson suggested I should cross-post it directly on the three.js repo. Feel free to close this and anyone interested can watch the universal splat thread instead - link [below](https://github.com/mkkellogg/GaussianSplats3D/issues/47) - since this 3DGS feature might be out-of-scope for the threejs library at the moment.
### Solution
Here https://github.com/mkkellogg/GaussianSplats3D/issues/47 is a mega discussion regarding a universal gaussian-splat format, with a lot of great takes and mentions, mostly regarding compression (like Niantic SPZ) and standardization.
An interesting recent [comment](https://github.com/mkkellogg/GaussianSplats3D/issues/47#issuecomment-2532791378) by lilleyse (Cesium team member) redirects to the draft [KHR_gaussian_splatting](https://github.com/CesiumGS/glTF/tree/proposal-KHR_gaussian_splatting/extensions/2.0/Khronos/KHR_gaussian_splatting) glTF extension as well as an implementation within CesiumJS in the experimental [splat-shader branch](https://github.com/CesiumGS/cesium/tree/splat-shader). This extension adds `_ROTATION` and `_SCALE` attributes to glTF `POINTS` primitives, so these can be used to represent a gaussian-splat scene. If or once that standard settles and makes its way to an official KHR extension, it would be useful for three.js to interpret this extension and give access to these extra attributes.
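For a rough idea of what consuming this in three.js could look like, a sketch assuming `GLTFLoader` keeps its current behavior of exposing non-standard accessor semantics under lowercased attribute names (untested against the draft extension):
```js
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('splats.gltf', (gltf) => {
  gltf.scene.traverse((object) => {
    if (object.isPoints) {
      const geometry = object.geometry;
      // _ROTATION / _SCALE would land in the lowercased attribute slots.
      const rotation = geometry.getAttribute('_rotation'); // per-splat quaternions
      const scale = geometry.getAttribute('_scale');       // per-splat scales
      console.log(rotation?.count, scale?.count);
    }
  });
});
```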
The CesiumJS implementation extends that `KHR_gaussian_splatting` extension to an HLOD (Hierarchical LoD) tileset container of mesh scenes, OGC 3D Tiles, where every leaf-node tile is a glTF scene that supports that gaussian-splat extension.
### Alternatives
The CesiumJS viewer will probably be the first reference implementation for the `KHR_gaussian_splatting` extension + OGC 3D Tiles combination.
### Additional context
Following the original release of the [3DGS paper](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/) by INRIA team at SIGGRAPH 2023, there have been multiple JS renderers and viewers implemented, some based on threejs like [mkkellogg/GaussianSplats3D ](https://github.com/mkkellogg/GaussianSplats3D/) and some others which aren't, eg [antimatter15/splat](https://github.com/antimatter15/splat) [lumalabs](https://lumalabs.ai/luma-web-library) or [babylonjs](https://forum.babylonjs.com/t/gaussian-splatting-in-babylon-js/45027). | Loaders | low | Minor |
2,733,130,184 | react | Bug: "reject is not a function" in react-server | We've been investigating a `t is not a function` error in production logs of our Next.js application. After some digging, we found that the `t` corresponds to `reject` in [react-server/src/ReactFlightReplyServer.js:166](https://github.com/facebook/react/blob/79ddf5b574db085104d917c24a964cbd5b824e09/packages/react-server/src/ReactFlightReplyServer.js#L166).
The error in our case appears when someone (likely a bot) sends a POST request to a server action, as if it were submitting a form on our page, but sends the JSON values for `$ACTION_1:0` and `$ACTION_1:1` with the quotation marks encoded as HTML entities (this is probably the trigger for the error, though it could be something with newline characters; not sure).
React version: 19.0.0-rc.1
## Steps To Reproduce
1. Download/fork [this sandbox](https://codesandbox.io/p/devbox/blissful-gauss-6cvc8v) locally.
2. Install dependencies `npm i --legacy-peer-deps --force`
3. Run in development `npm run dev`
4. Download, unzip and put this [requestBody.zip](https://github.com/user-attachments/files/18098248/requestBody.zip) into the project.
5. Send a curl request using the requestBody file:
```
curl -X POST http://localhost:3000/ -H 'Content-Type: multipart/form-data; boundary=60eb717e9b0dc' --data-binary @requestBody
```
6. Watch the logs of the application, you should see the mentioned error:

## The current behavior
There's an unhandled error in the logs when action's metadata comes in unexpected format.
## The expected behavior
The situation is handled, and there's no errors in the logs.
## Possible solution
Similar to what's been added by @sebmarkbage in [ReactFlightClient.js](https://github.com/facebook/react/blob/79ddf5b574db085104d917c24a964cbd5b824e09/packages/react-client/src/ReactFlightClient.js#L262) in [this commit](https://github.com/facebook/react/pull/28847/commits/8e33e92aee3c00def90d2ab189eca8bca7380703#diff-4bc3e2f66cfadc63bd7abcc0387fcc7c1373092f832cac1b34614d8cddca5eacR229-R231), wrap the reject call in an `if (reject)` check. Additionally, the underlying error thrown through this reject (`SyntaxError: Expected property name or '}' in JSON at position 1`) should also be handled.
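A rough sketch of what the guard could look like (names and structure assumed; this is not the actual React source):
```js
// Hypothetical shape of the fix in ReactFlightReplyServer.js: skip listeners
// that have no reject handler instead of calling them unconditionally.
function rejectChunkListeners(listeners, error) {
  for (let i = 0; i < listeners.length; i++) {
    const reject = listeners[i];
    if (reject) {
      // Previously called unconditionally, producing "reject is not a function".
      reject(error);
    }
  }
}
```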
| Status: Unconfirmed | medium | Critical |
2,733,146,174 | create-react-app | npx create-react-app <app-name> --template typescript "Unable to generate a basic tsconfig.json file"; with solutions | # Error Arising in typescript project
I am new to React, so apologies if I am going out of bounds. Based on older videos, the project seemed to work flawlessly after running the command shown below; instead I ran into the following issues:
1. logo.svg: TypeScript is unable to interpret the `logo.svg` import.
2. The "web-vitals" update renders "reportWebVitals.ts" outdated and crash-prone (it throws an error).
3. The absence of a "tsconfig.json" file makes it impossible to run the React app right after creation.
4. Conflicts between packages are caused by varying updates across packages (fixed with the package.json provided below).
5. Calls in reportWebVitals.ts need modification for the new web-vitals API.
The changes below will render the project on the computer smoothly, with new updates in the packages.
```bash
npx create-react-app <app-name> --template typescript
```
But with current updates the above triggers a lot of errors, including TypeScript errors and the absence of a tsconfig.json file, which we have to add and configure ourselves.
# I would suggest the following changes to the template download
## Step 1: package.json file
```json
{
"name": "<app-name>",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^6.6.3",
"@testing-library/react": "^16.1.0",
"react": "^18.0.0",
"react-dom": "^18.0.0",
"react-scripts": "^5.0.1",
"typescript": "^4.7.2",
"web-vitals": "^4.2.4"
},
"devDependencies": {
"@types/jest": "^29.5.14",
"@types/mocha": "^10.0.10",
"@types/react": "^18.0.0",
"@types/react-dom": "^18.0.0",
"eslint": "^8.57.1",
"eslint-config-react-app": "^7.0.1"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": [
"react-app",
"react-app/jest"
]
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}
```
## Step 2: tsconfig.json
```json
{
"compilerOptions": {
"target": "es5",
"lib": ["dom", "es2015"],
"allowJs": true,
"jsx": "react-jsx",
"moduleResolution": "node",
"esModuleInterop": true,
"skipLibCheck": true,
"strict": true,
"forceConsistentCasingInFileNames": true,
"module": "esnext" // Change this to "esnext" or "commonjs"
},
"include": [
"src"
]
}
```
## Step 3: Add a declaration file "src/declarations.d.ts" to fix the logo.svg error
```typescript
declare module '*.svg' {
const content: string;
export default content;
}
```
## Step 4: Modify the reportWebVitals.ts file for the updated web-vitals package
```typescript
import { onCLS, onFCP, onINP, onLCP, onTTFB } from 'web-vitals';

const reportWebVitals = (onPerfEntry?: (metric: any) => void) => {
  if (onPerfEntry && typeof onPerfEntry === 'function') {
    onCLS(onPerfEntry);
    onFCP(onPerfEntry);
    onINP(onPerfEntry); // onINP replaces onFID, which web-vitals v4 removed
    onLCP(onPerfEntry);
    onTTFB(onPerfEntry);
  }
};
export default reportWebVitals;
``` | needs triage,issue: bug report | low | Critical |
2,733,149,854 | rust | Tracking Issue for rustdoc `--doctest-compilation-args` command line option | ### Steps
[Original issue](https://github.com/rust-lang/rust/issues/67533)
- [ ] [Implementation PR](https://github.com/rust-lang/rust/pull/128780)
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
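As a rough example of the intended usage (a hypothetical invocation; the option is nightly-only behind `-Z unstable-options`, and the exact syntax may differ from the implementation PR):
```bash
rustdoc --test -Z unstable-options \
    --doctest-compilation-args="--cfg=special_cfg -C opt-level=2" \
    src/lib.rs
```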
### Unresolved Questions
* [ ] Should argument parsing be closer to what already exists for [RUSTFLAGS](https://doc.rust-lang.org/nightly/cargo/reference/environment-variables.html?highlight=rustflags) (whitespace splitting and nothing else) or should it be closer to how a shell parses input (take into account escaped characters and quote characters)? Or should we offer two options, one that works like RUSTFLAGS for human users and one that works like CARGO_ENCODED_RUSTFLAGS for tool usage? | T-rustdoc,C-tracking-issue | low | Minor |
2,733,205,717 | pytorch | onnx export broken for Upsample layers when using deterministic algorithms | ### 🐛 Describe the bug
Using torch==2.5.1+cu121 I get the following error
```
Exception has occurred: RuntimeError
(note: full exception trace is shown but execution is paused at: _run_module_as_main)
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and
cpu! (when checking argument for argument max in method wrapper_CUDA_clamp_Tensor)
torch\_decomp\decompositions.py line 3657
xp1 = (x + 1).clamp(max=inp_size - 1)
```
When running
```
import torch
import torch.nn as nn
torch.use_deterministic_algorithms(True)
upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True)
x = torch.randn(1, 3, 64, 64).to("cuda:0")
type(x.size()[0])
y = upsample(x)
torch.onnx.export(
upsample,
(x,),
"test.onnx",
input_names=["x"],
output_names=["y"],
opset_version=18,
)
```
While the export works when using `torch.use_deterministic_algorithms(False)`
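So a possible workaround sketch, until the decomposition is fixed, is to toggle determinism off only for the export:
```python
torch.use_deterministic_algorithms(False)
torch.onnx.export(
    upsample,
    (x,),
    "test.onnx",
    input_names=["x"],
    output_names=["y"],
    opset_version=18,
)
torch.use_deterministic_algorithms(True)  # restore for the rest of the program
```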
For some reason, `osize` in the function `_upsample_linear_vec` in `torch/_decomp/decompositions.py` is a list of single-element CPU tensors `[tensor(128), tensor(128)]` instead of a list of integers, causing the error down the line.
### Versions
I am using torch==2.5.1+cu121 on Windows. | module: onnx,triaged | low | Critical |
2,733,240,570 | PowerToys | Outlook Reminders Pop-Up not possible to exclude from snap area full screen with FancyZones | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
This problem returned a few days ago. Excluding "Reminders" used to keep the reminders pop-up at its original size; now the small fixed reminders pop-up window again expands to fill the zone, even when listed in the exclusions.
### ✔️ Expected Behavior
Outlook Reminders pop-up should not expand when listed under FancyZones exclusions.
### ❌ Actual Behavior
Outlook Reminders pop-up expands
### Other Software
_No response_ | Issue-Bug,Product-FancyZones,Needs-Triage | low | Minor |
2,733,362,528 | angular | Canonical: Data Dependencies on Views and Components | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
Currently there's no way to identify the data dependencies of a component or a view since we do not have a data primitive like a suspense boundary. `Resource` is a good start, but in order to solve things like when to invalidate hydration data caches independently for incremental hydration, we'll need some solution for this problem.
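For illustration, the experimental `resource()` API introduced in Angular 19 is roughly the shape of such a data primitive, but it does not yet expose which views transitively depend on it (sketch; the API is experimental and subject to change):
```ts
import { resource, signal } from '@angular/core';

const userId = signal('1');

// The loader is the data dependency; today there is no way to ask a view
// or component which of these resources it transitively depends on.
const user = resource({
  request: () => userId(),
  loader: ({ request }) => fetch(`/api/users/${request}`).then((r) => r.json()),
});
```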
| area: core,canonical | low | Critical |
2,733,368,266 | next.js | [bug]: intercepting routes cannot go back outside dynamic params | ### Link to the code that reproduces this issue
https://github.com/juliusmarminge/next-intercepting-routes
### To Reproduce
1. Deploy the project. This issue works when running locally (both `next dev` and `next start`).
2. Navigate down the tree until you get to an "app"-segment (e.g. https://next-intercepting-routes-one.vercel.app/dashboard/1/1)
3. Click on a card, it should open as a modal but instead it hard navigates
### Current vs. Expected behavior
Locally it does what you expect, deployed it hard-navigates and checking server logs reveals this error:

But that doesn't make any sense since the `/dashboard/[team]/audit` page doesn't expect the `appId` param, and on the `/dashboard/[team]/[appId]` page it should be present. Are you only tracking 1 URL?
Demo:
https://github.com/user-attachments/assets/22a21b29-a18f-4d62-8db6-f469f80fc014
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Nov 15 19:28:48 PST 2024; root:xnu-11215.61.3~1/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.18.0
npm: 10.8.2
Yarn: N/A
pnpm: 10.0.0-alpha.0
Relevant Packages:
next: 15.1.1-canary.0 // Latest available version is detected (15.1.1-canary.0).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.5.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
I cannot move the modal up above the `appId` segment since I don't want the modal to show when navigating from `/[team]` to `/[team]/audit` for example.
I also tried creating a route group between the two and mounting the modal there, without success.
In the end I even tried a hacky workaround doing conditional hard navigations, but that resulted in a whole other class of problems. | bug,Parallel & Intercepting Routes | low | Critical |
2,733,376,302 | go | x/crypto/x509roots/fallback: should not exclude roots with Distrust After dates | Due to <https://github.com/golang/crypto/blob/7042ebcbe097f305ba3a93f9a22b4befa4b83d29/x509roots/gen_fallback_bundle.go#L129-L134>, roots in the Mozilla trust store with Distrust After dates, such as Entrust, are being excluded from the fallback bundle, meaning certificates that Firefox would accept will be incorrectly rejected by Go programs which use x509roots/fallback. I believe this creates a compatibility risk for the WebPKI and the correct thing to do until #70623 is fixed is to include roots with constraints.
This does mean that Distrust After dates would be ignored, but the security value of Distrust After is practically nil due to backdating, and the real point of Distrust After is to pave the way for an uneventful root removal 398 days in the future.
(Apologies for not filing this sooner; when I [did my review last month](https://sslmate.com/blog/post/entrust_distrust_more_disruptive_than_intended) I unfortunately looked only at x509roots/nss/parser.go and missed the code in gen_fallback_bundle.go)
cc @rolandshoemaker @FiloSottile | NeedsInvestigation | low | Minor |
2,733,427,066 | godot | A type error reported when GDScript preload an scene that used this GDScript | ### Tested versions
- 4.3 stable
### System information
MacOS 15, develop enviroment
### Issue description
hex_grid.gd
```gdscript
class_name HexGrid
extends Node2D
# hex_grid.tscn uses this GDScript
const HEX_GRID = preload("res://scene/hex_grid.tscn")
```
test.gd
```gdscript
class_name Module
extends Node2D
# the $HexGrid is hex_grid.tscn
@onready var hex_grid: HexGrid = $HexGrid
```
The editor reports: Trying to assign value of type 'Node2D' to a variable of type 'hex_grid.gd'

### Steps to reproduce
as reported above
### Minimal reproduction project (MRP)
NA | bug,topic:gdscript | low | Critical |
2,733,435,962 | go | proposal: unicode: add CategoryAliases, LC, Cn | The Unicode specification defines aliases for some of the general category names. For example the category "L" has alias "Letter".
The regexp package supports \p{L} but not \p{Letter}, because there is nothing in the Unicode tables that lets regexp know about Letter.
In order to support \p{Letter}, I propose to add a new, small table to unicode:
```go
var CategoryAliases = map[string]string{
	"Other":   "C",
	"Control": "Cc",
	...,
	"Letter": "L",
	...
}
```
This would be auto-generated from the Unicode database like all our other tables. For Unicode 15, the table would have only 38 entries, listed below.
```
% grep '^gc' PropertyValueAliases.txt
gc ; C ; Other # Cc | Cf | Cn | Co | Cs
gc ; Cc ; Control ; cntrl
gc ; Cf ; Format
gc ; Cn ; Unassigned
gc ; Co ; Private_Use
gc ; Cs ; Surrogate
gc ; L ; Letter # Ll | Lm | Lo | Lt | Lu
gc ; LC ; Cased_Letter # Ll | Lt | Lu
gc ; Ll ; Lowercase_Letter
gc ; Lm ; Modifier_Letter
gc ; Lo ; Other_Letter
gc ; Lt ; Titlecase_Letter
gc ; Lu ; Uppercase_Letter
gc ; M ; Mark ; Combining_Mark # Mc | Me | Mn
gc ; Mc ; Spacing_Mark
gc ; Me ; Enclosing_Mark
gc ; Mn ; Nonspacing_Mark
gc ; N ; Number # Nd | Nl | No
gc ; Nd ; Decimal_Number ; digit
gc ; Nl ; Letter_Number
gc ; No ; Other_Number
gc ; P ; Punctuation ; punct # Pc | Pd | Pe | Pf | Pi | Po | Ps
gc ; Pc ; Connector_Punctuation
gc ; Pd ; Dash_Punctuation
gc ; Pe ; Close_Punctuation
gc ; Pf ; Final_Punctuation
gc ; Pi ; Initial_Punctuation
gc ; Po ; Other_Punctuation
gc ; Ps ; Open_Punctuation
gc ; S ; Symbol # Sc | Sk | Sm | So
gc ; Sc ; Currency_Symbol
gc ; Sk ; Modifier_Symbol
gc ; Sm ; Math_Symbol
gc ; So ; Other_Symbol
gc ; Z ; Separator # Zl | Zp | Zs
gc ; Zl ; Line_Separator
gc ; Zp ; Paragraph_Separator
gc ; Zs ; Space_Separator
%
``` | Proposal | low | Major |
2,733,441,677 | go | proposal: regexp/syntax: recognize Unicode category aliases | The Unicode specification defines aliases for some of the general category names. For example the category "L" has alias "Letter".
The regexp package supports \p{L} but not \p{Letter}, because there is nothing in the Unicode tables that lets regexp know about Letter.
Package regexp would be a permitted implementation for use in JSON-API Schema implementations, except that the tests use aliases like \p{Letter} instead of \p{L}.
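For example (a sketch of the desired behavior):
```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Today this panics because \p{Letter} is not a recognized class name;
	// with the aliases recognized it would behave exactly like \p{L}.
	re := regexp.MustCompile(`\p{Letter}+`)
	fmt.Println(re.FindString("abc123")) // "abc"
}
```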
In #70780 I proposed adding a new CategoryAliases table to package unicode.
If that is accepted, I propose to also recognize the category aliases in regexp/syntax, which will make them work in package regexp. | Proposal | low | Major |
2,733,442,994 | flutter | Flutter tool times out installing | https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_pixel_7pro%20cubic_bezier_perf__timeline_summary/6670/overview
https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_pixel_7pro%20cubic_bezier_perf__timeline_summary/6612/overview
```
[2024-12-09 19:56:23.297900] [STDOUT] stdout: [ +6 ms] Stopping app 'app-profile.apk' on Pixel 7 Pro.
[2024-12-09 19:56:23.298321] [STDOUT] stdout: [ ] executing: /opt/s/w/ir/cache/android/sdk/platform-tools/adb -s 34011FDH3000Q5 shell am force-stop com.example.macrobenchmarks
[2024-12-09 19:56:23.374855] [STDOUT] stdout: [ +76 ms] executing: /opt/s/w/ir/cache/android/sdk/platform-tools/adb -s 34011FDH3000Q5 shell pm list packages com.example.macrobenchmarks
[2024-12-09 19:56:23.475442] [STDOUT] stdout: [ +100 ms] Installing APK.
[2024-12-09 19:56:23.478810] [STDOUT] stdout: [ +3 ms] Installing build/app/outputs/flutter-apk/app-profile.apk...
[2024-12-09 19:56:23.479510] [STDOUT] stdout: [ ] executing: /opt/s/w/ir/cache/android/sdk/platform-tools/adb -s 34011FDH3000Q5 install -t -r /opt/s/w/ir/x/w/rc/tmpsljs0cs5/flutter sdk/dev/benchmarks/macrobenchmarks/build/app/outputs/flutter-apk/app-profile.apk
[2024-12-09 19:56:26.658732] [STDOUT] stdout: [+3178 ms] Performing Streamed Install
[2024-12-09 19:56:26.658883] [STDOUT] stdout: Success
[2024-12-09 19:56:26.659757] [STDOUT] stdout: [ +1 ms] Installing build/app/outputs/flutter-apk/app-profile.apk... (completed in 3.2s)
[2024-12-09 19:56:26.663357] [STDOUT] stdout: [ +4 ms] executing: /opt/s/w/ir/cache/android/sdk/platform-tools/adb -s 34011FDH3000Q5 shell echo -n df528eedf59425eaa213578b93161f1a2a1dd980 > /data/local/tmp/sky.com.example.macrobenchmarks.sha1
[2024-12-09 19:56:26.681340] [STDOUT] stdout: [ +17 ms] executing: /opt/s/w/ir/cache/android/sdk/platform-tools/adb -s 34011FDH3000Q5 shell -x logcat -v time -t 1
[2024-12-09 19:56:26.728586] [STDOUT] stdout: [ +46 ms] --------- beginning of main
[2024-12-09 19:56:26.728680] [STDOUT] stdout: 12-09 20:01:17.755 D/ControlsListingControllerImpl( 1752): ServiceConfig reloaded, count: 0
[2024-12-09 19:56:26.756237] [STDOUT] stdout: [ +27 ms] executing: /opt/s/w/ir/cache/android/sdk/platform-tools/adb -s 34011FDH3000Q5 shell am start -a android.intent.action.MAIN -c android.intent.category.LAUNCHER -f 0x20000000 --ez enable-dart-profiling true --ez trace-startup true --ez start-paused true --ez verbose-logging true com.example.macrobenchmarks/com.example.macrobenchmarks.MainActivity
[2024-12-09 19:56:26.825625] [STDOUT] stdout: [ +69 ms] Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x20000000 cmp=com.example.macrobenchmarks/.MainActivity (has extras) }
[2024-12-09 19:56:26.826119] [STDOUT] stdout: [ ] Waiting for VM Service port to be available...
Step failed (timeout) (retcode: -15)
``` | P2,team-tool,triaged-tool | low | Critical |
2,733,461,659 | angular | Add display contents by default to all host elements | ### Which @angular/* package(s) are relevant/related to the feature request?
elements, common, forms
### Description
Hi,
since I have noticed that I normally don't want my host element to interrupt my styles, as it is mostly just a wrapper around my component's inner HTML, I suggest using `display: contents;` by default so the host does not interfere with layout styles like `display: flex;` and so on.
More information:
https://developer.mozilla.org/en-US/docs/Web/CSS/display#contents
### Proposed solution
Set all host elements `display: contents` by default.
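Today this can be opted into per component; the proposal would effectively make this boilerplate the default (sketch):
```ts
import { Component } from '@angular/core';

@Component({
  selector: 'app-card',
  standalone: true,
  template: `<ng-content></ng-content>`,
  // Manual opt-in today so the host element doesn't break flex/grid parents.
  styles: [`:host { display: contents; }`],
})
export class CardComponent {}
```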
### Alternatives considered
Leave it as it is. | area: core,core: host and host bindings | low | Minor |
2,733,463,402 | next.js | [bug]: parallel/intercepting routes requires server reboot to work | ### Link to the code that reproduces this issue
https://github.com/juliusmarminge/next-intercepting-routes
### To Reproduce
1. Start the dev server
2. Navigate to the parallel segment (e.g http://localhost:3000/dashboard/1/1)
3. Click on a card, it opens in a modal
4. Now suppose we change the URL structure, so rename `audit` to `auditt` (in both places) and update the link to point to the new pathname. Refresh the page and click the card again, it opens in the full-page view
5. Restart the dev server, refresh the page again and now it opens in the modal view
Demo:
https://github.com/user-attachments/assets/e4513513-561d-4eff-bc0b-45e5ed0b8aa5
### Current vs. Expected behavior
It requires a server reboot when it shouldn't
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Nov 15 19:28:48 PST 2024; root:xnu-11215.61.3~1/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.18.0
npm: 10.8.2
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.1.1-canary.0 // Latest available version is detected (15.1.1-canary.0).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.5.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
Changing the path like this might not be very realistic, but I wasted a lot of time trying to understand why my parallel routes weren't working when adding them yesterday, and to understand the necessary folder structure. Then I think I went and added a dep, and when I started the server back up it was working all of a sudden... It wasn't until I was fighting another issue (https://github.com/vercel/next.js/issues/73801) and ran into it again when trying other stuff that I understood that this was the issue. | bug,Parallel & Intercepting Routes | low | Critical |
2,733,499,474 | flutter | [web][test][firefox] `bootstrapAndRunApp` hangs in `test/engine/` tests | Switching `test/engine/` tests from HTML to CanvasKit made Firefox unhappy ([PR](https://github.com/flutter/engine/pull/54786)). Calling `bootstrapAndRunApp` in a test causes Firefox to hang.
More info:
- It gets stuck exactly at [this promise](https://github.com/flutter/engine/blob/7941d7801b0826b21d82f84bc0d160fe599918c1/lib/web_ui/lib/src/engine/canvaskit/renderer.dart#L81-L82) that never completes.
Odd things I have noticed:
- This only happens in the `firefox-dart2js-canvaskit-engine` suite (other firefox suites work fine).
- Some tests only fail in CI and not locally. Other tests fail in both environments.
After spending much time on this without luck, I decided to skip a few tests to unblock myself. But we need to revisit this and try to figure out a way to fix it. | a: tests,engine,platform-web,e: web_canvaskit,browser: firefox,P2,team-web,triaged-web | low | Minor |
2,733,522,551 | pytorch | [user empathy day] No speedups for SAM2 model | Internal doc: https://docs.google.com/document/d/1T-jo9d2mVdgiDB27fdrDI9k60ZVVqjNhoBYvEZ_sktQ/edit?usp=sharing
Repo: https://github.com/facebookresearch/sam2
Benchmark script:
```python
import sys
import time

import torch
import numpy as np
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor
from PIL import Image

import torch._dynamo.config

if len(sys.argv) != 3:
    print(f"Usage: {sys.argv[0]} <do compile> <iters>")
    sys.exit(1)

do_compile = bool(int(sys.argv[1]))
iters = int(sys.argv[2])
assert iters > 0

##
## Most of the following adapted from:
## https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb
##

assert torch.cuda.is_available()
device = torch.device("cuda")

# use bfloat16 for the entire notebook
torch.autocast("cuda", dtype=torch.bfloat16).__enter__()

# turn on tfloat32 for Ampere GPUs (https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices)
if torch.cuda.get_device_properties(0).major >= 8:
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

# Looks like they only have 3 sample images:
images = [
    Image.open('./notebooks/images/cars.jpg'),
    Image.open('./notebooks/images/groceries.jpg'),
    Image.open('./notebooks/images/truck.jpg'),
]

input_point = np.array([[500, 375]])
input_label = np.array([1])

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

if do_compile:
    predictor.predict = torch.compile(predictor.predict)

times = []
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # Warmup:
    for _ in range(5):
        for image in images:
            predictor.set_image(image)
            masks, scores, logits = predictor.predict(
                point_coords=input_point,
                point_labels=input_label,
            )

    # Benchmark:
    for _ in range(iters):
        for image in images:
            start = time.time()
            predictor.set_image(image)
            masks, scores, logits = predictor.predict(
                point_coords=input_point,
                point_labels=input_label,
            )
            times.append(time.time() - start)

print(f"Avg: {sum(times) / len(times)}")

if do_compile:
    torch._dynamo.utils.print_time_report()
```
Results:
- Eager: 0.167
- Compile: 0.176
- Cold-start compile time: 38s
- Warm-start compile time: 28s
Graph breaks:
- `dynamic shape operator: aten.nonzero.default; to enable, set torch._dynamo.config.capture_dynamic_output_shape_ops = True` (see the sketch after this list)
- `torch.* op returned non-Tensor _GeneratorContextManager call_function`
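Applying the flag named in the first graph-break message removes that break (whether it improves end-to-end time is untested):
```python
import torch._dynamo.config

# Allow dynamo to capture ops with data-dependent output shapes (aten.nonzero).
torch._dynamo.config.capture_dynamic_output_shape_ops = True
```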
cc @msaroufim @chauhang @penguinwu | module: performance,triaged,oncall: pt2,empathy-day | low | Minor |
2,733,522,704 | neovim | LSP: inhibit language servers with vim.lsp.config / vim.lsp.enable (project-specific) | ### Problem
I've read https://github.com/neovim/neovim/pull/31031 and realize that it's meant to be a 90% solution. I also realize that my use-case are likely not shared by that 90%. But I also feel it could be captured by "project-specific setup", which was mentioned in the PR as a desired use case.
My current configuration is simple conceptually, but looks ugly in code. It satisfies the following needs:
- (OSS development use-case) Start configured LSPs (gopls, luals, ...) when certain file-types and allowed roots are signaled, using `vim.lsp.start()`. This is straightforward, and would be simplified a little more with `vim.lsp.config()` and `vim.lsp.enable()`.
- ($DAY_JOB use-case) Allow supplying an [override LSP](https://github.com/aktau/dotfiles/blob/47b85e2c733f8e66b7176da08c4812679d41d316/.vim/lua/lsp.lua#L151-L179) that inhibits the previously configured LSPs from running under the given condition (matches root path, has matching filetype).
This override LSP is a company-internal one that can handle multiple languages. Open-source LSPs don't work with the source code in this tree at all.
This looks akin to the project-specific configuration discussed in https://github.com/neovim/neovim/pull/31031. Where I'd define this special project as "anything under `$SPECIAL_DIR`". But I can't figure out how to do it with the new APIs given that they're all global.
To make it more concrete, If I want that when a file under the "override" root is opened, only the override LSP should start **for that specific buffer**, even if the root patterns and filetypes match for other LSPs:
- `vim.lsp.enable()` seems to be a global property, but I want buffer-local decisions.
- I could imagine a hack where I override the `vim.lsp.config`'s for the other LSPs so that they match only a dummy file-type (will that (de)register the auto-commands?), but too won't allow them to start.
- I could imagine inhibiting the start from `vim.lsp.ClientConfig.before_init`. But I don't see it documented that `before_init` is able to do that.
If `root_marker` were changed to accept a function, I can imagine doing something like this (pseudo-code):
```lua
for i, v in pairs(vim.lsp.config) do
local old_root_marker = v.root_marker
v.root_marker = function(fname)
return not vim.startswith(fname, override_root) and old_root_marker(fname) or nil
end
vim.lsp.config[i] = v
end
```
It's fine if the response is: this use case isn't and won't be supported, just use `vim.lsp.start()`. Just thought I'd bring it up in case I missed something or this type of flexibility is desired in the end.
cc @lewis6991
### Expected behavior
I expect either:
- Someone telling me that these types of use-cases is exactly what the lower-level `vim.lsp.start` is for.
- Someone telling me that I misunderstand things and it's easy to use `vim.lsp.config` and `vim.lsp.enable` to do what I want. Or
- It's not available right now and may come in the future. | enhancement,lsp,options | low | Minor |
2,733,529,914 | rust | ICE: `Encountered error SignatureMismatch` | <!--
[31mICE[0m: Rustc ./a.rs '-Zvalidate-mir -Zinline-mir=yes -ooutputfile -Zdump-mir-dir=dir' 'error: internal compiler error: compiler/rustc_traits/src/codegen.rs:45:13: Encountered error `SignatureMismatch(SignatureMismatchData { found_trait_ref: <{closure@/tmp/im/a.rs:17:15: 17:20} as std::ops::FnMut<(i32, i32)>>, expected_trait_ref: <{closure@/tmp/im/a.rs:17:15: 17:20} as std::ops::FnMut<(i32, i16)>>, terr: Sorts(ExpectedFound { expected: i16, found: i32 }) })` selecting `<{closure@/tmp/im/a.rs:17:15: 17:20} as std::ops::FnMut<(i32, i16)>>` during codegen', 'error: internal compiler error: compiler/rustc_traits/src/codegen.rs:45:13: Encountered error `SignatureMismatch(SignatureMismatchData { found_trait_ref: <{closure@/tmp/im/a.rs:17:15: 17:20} as std::ops::FnMut<(i32, i32)>>, expected_trait_ref: <{closure@/tmp/im/a.rs:17:15: 17:20} as std::ops::FnMut<(i32, i16)>>, terr: Sorts(ExpectedFound { expected: i16, found: i32 }) })` selecting `<{closure@/tmp/im/a.rs:17:15: 17:20} as std::ops::FnMut<(i32, i16)>>` during codegen'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
//@compile-flags: -Zvalidate-mir -Zinline-mir=yes
use std::vec::IntoIter;
pub(crate) trait Foo: Iterator<Item = <Self as Foo>::Key> {
type Key;
}
impl Foo for IntoIter<i16> {}
fn sum_foo<F: Foo<Key = i32>>(f: F) -> i32 {
f.fold(0, |a, b| a + b)
}
fn main() {
let x = sum_foo(vec![11, 10, 1].into_iter());
}
````
original:
````rust
//@ run-pass
// Test case where an associated type is referenced from within the
// supertrait definition. Issue #20220.
use std::vec::IntoIter;
pub(crate) trait Foo: Iterator<Item=<Self as Foo>::Key> {
type Key;
}
impl Foo for IntoIter<i16> {
type Key = i32;
}
fn sum_foo<F:Foo<Key=i32>>(f: F) -> i32 {
f.fold(0, |a,b| a + b)
}
fn main() {
let x = sum_foo(vec![11, 10, 1].into_iter());
assert_eq!(x, 22);
}
````
Version information
````
rustc 1.85.0-nightly (5a6036a18 2024-12-11)
binary: rustc
commit-hash: 5a6036a1802262f8cf02192b02026688d396f1d7
commit-date: 2024-12-11
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.5
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/5a6036a1802262f8cf02192b02026688d396f1d7/compiler/rustc_traits/src/codegen.rs#L39-L51
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zvalidate-mir -Zinline-mir=yes`
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0046]: not all trait items implemented, missing: `Key`
--> /tmp/icemaker_global_tempdir.xwRxwlsen7Ee/rustc_testrunner_tmpdir_reporting.HhkZSZGzNZAb/mvce.rs:7:1
|
4 | type Key;
| -------- `Key` from trait
...
7 | impl Foo for IntoIter<i16> {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ missing `Key` in implementation
warning: unused variable: `x`
--> /tmp/icemaker_global_tempdir.xwRxwlsen7Ee/rustc_testrunner_tmpdir_reporting.HhkZSZGzNZAb/mvce.rs:14:9
|
14 | let x = sum_foo(vec![11, 10, 1].into_iter());
| ^ help: if this is intentional, prefix it with an underscore: `_x`
|
= note: `#[warn(unused_variables)]` on by default
error: internal compiler error: compiler/rustc_traits/src/codegen.rs:45:13: Encountered error `SignatureMismatch(SignatureMismatchData { found_trait_ref: <{closure@/tmp/icemaker_global_tempdir.xwRxwlsen7Ee/rustc_testrunner_tmpdir_reporting.HhkZSZGzNZAb/mvce.rs:10:15: 10:21} as std::ops::FnMut<(i32, i32)>>, expected_trait_ref: <{closure@/tmp/icemaker_global_tempdir.xwRxwlsen7Ee/rustc_testrunner_tmpdir_reporting.HhkZSZGzNZAb/mvce.rs:10:15: 10:21} as std::ops::FnMut<(i32, i16)>>, terr: Sorts(ExpectedFound { expected: i16, found: i32 }) })` selecting `<{closure@/tmp/icemaker_global_tempdir.xwRxwlsen7Ee/rustc_testrunner_tmpdir_reporting.HhkZSZGzNZAb/mvce.rs:10:15: 10:21} as std::ops::FnMut<(i32, i16)>>` during codegen
thread 'rustc' panicked at compiler/rustc_traits/src/codegen.rs:45:13:
Box<dyn Any>
stack backtrace:
0: 0x7bfe54f5c11a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hc45fac407a6ff7a2
1: 0x7bfe55613c26 - core::fmt::write::h06d264594044da0f
2: 0x7bfe565c8a51 - std::io::Write::write_fmt::h996e36892eed4c22
3: 0x7bfe54f5bf72 - std::sys::backtrace::BacktraceLock::print::hef4faa4abc10c4e1
4: 0x7bfe54f5e48a - std::panicking::default_hook::{{closure}}::h93c9dc5f751f518c
5: 0x7bfe54f5e2d3 - std::panicking::default_hook::h7e0cdf55d7d92b2f
6: 0x7bfe540c1bc8 - std[2b58fe5c94a29799]::panicking::update_hook::<alloc[fb981c72c3bff65d]::boxed::Box<rustc_driver_impl[497e5fb2fdc9e592]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7bfe54f5ec88 - std::panicking::rust_panic_with_hook::he1642df02554cf98
8: 0x7bfe540f77d1 - std[2b58fe5c94a29799]::panicking::begin_panic::<rustc_errors[252cf4afcb204d9b]::ExplicitBug>::{closure#0}
9: 0x7bfe540ec976 - std[2b58fe5c94a29799]::sys::backtrace::__rust_end_short_backtrace::<std[2b58fe5c94a29799]::panicking::begin_panic<rustc_errors[252cf4afcb204d9b]::ExplicitBug>::{closure#0}, !>
10: 0x7bfe540e9409 - std[2b58fe5c94a29799]::panicking::begin_panic::<rustc_errors[252cf4afcb204d9b]::ExplicitBug>
11: 0x7bfe54101771 - <rustc_errors[252cf4afcb204d9b]::diagnostic::BugAbort as rustc_errors[252cf4afcb204d9b]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7bfe546f0ad3 - rustc_middle[fc7bfb690a1873b0]::util::bug::opt_span_bug_fmt::<rustc_span[7131546b166d82b0]::span_encoding::Span>::{closure#0}
13: 0x7bfe546d69da - rustc_middle[fc7bfb690a1873b0]::ty::context::tls::with_opt::<rustc_middle[fc7bfb690a1873b0]::util::bug::opt_span_bug_fmt<rustc_span[7131546b166d82b0]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x7bfe546d686b - rustc_middle[fc7bfb690a1873b0]::ty::context::tls::with_context_opt::<rustc_middle[fc7bfb690a1873b0]::ty::context::tls::with_opt<rustc_middle[fc7bfb690a1873b0]::util::bug::opt_span_bug_fmt<rustc_span[7131546b166d82b0]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x7bfe529ee880 - rustc_middle[fc7bfb690a1873b0]::util::bug::bug_fmt
16: 0x7bfe52d8cfcf - rustc_traits[84112b02c32ecba7]::codegen::codegen_select_candidate
17: 0x7bfe55fd6951 - rustc_query_impl[cb988c35710418b0]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cb988c35710418b0]::query_impl::codegen_select_candidate::dynamic_query::{closure#2}::{closure#0}, rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 16usize]>>
18: 0x7bfe55fd6cf7 - rustc_query_system[104428f5be780cfb]::query::plumbing::try_execute_query::<rustc_query_impl[cb988c35710418b0]::DynamicConfig<rustc_query_system[104428f5be780cfb]::query::caches::DefaultCache<rustc_middle[fc7bfb690a1873b0]::ty::PseudoCanonicalInput<rustc_type_ir[d093507361b4f614]::predicate::TraitRef<rustc_middle[fc7bfb690a1873b0]::ty::context::TyCtxt>>, rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[cb988c35710418b0]::plumbing::QueryCtxt, false>
19: 0x7bfe55fd68ef - rustc_query_impl[cb988c35710418b0]::query_impl::codegen_select_candidate::get_query_non_incr::__rust_end_short_backtrace
20: 0x7bfe530dd6e0 - rustc_ty_utils[7184913b0844be97]::instance::resolve_instance_raw
21: 0x7bfe55910632 - rustc_query_impl[cb988c35710418b0]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cb988c35710418b0]::query_impl::resolve_instance_raw::dynamic_query::{closure#2}::{closure#0}, rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 32usize]>>
22: 0x7bfe559109ec - rustc_query_system[104428f5be780cfb]::query::plumbing::try_execute_query::<rustc_query_impl[cb988c35710418b0]::DynamicConfig<rustc_query_system[104428f5be780cfb]::query::caches::DefaultCache<rustc_middle[fc7bfb690a1873b0]::ty::PseudoCanonicalInput<(rustc_span[7131546b166d82b0]::def_id::DefId, &rustc_middle[fc7bfb690a1873b0]::ty::list::RawList<(), rustc_middle[fc7bfb690a1873b0]::ty::generic_args::GenericArg>)>, rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 32usize]>>, false, false, false>, rustc_query_impl[cb988c35710418b0]::plumbing::QueryCtxt, false>
23: 0x7bfe559105b0 - rustc_query_impl[cb988c35710418b0]::query_impl::resolve_instance_raw::get_query_non_incr::__rust_end_short_backtrace
24: 0x7bfe5565955f - <rustc_middle[fc7bfb690a1873b0]::ty::instance::Instance>::try_resolve
25: 0x7bfe561b3626 - rustc_mir_transform[cc0fe7e8983fa4e4]::inline::cycle::mir_callgraph_reachable::process
26: 0x7bfe561b435a - rustc_mir_transform[cc0fe7e8983fa4e4]::inline::cycle::mir_callgraph_reachable::process
27: 0x7bfe561b2fb1 - rustc_mir_transform[cc0fe7e8983fa4e4]::inline::cycle::mir_callgraph_reachable
28: 0x7bfe561b2ea9 - rustc_query_impl[cb988c35710418b0]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cb988c35710418b0]::query_impl::mir_callgraph_reachable::dynamic_query::{closure#2}::{closure#0}, rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 1usize]>>
29: 0x7bfe561b2e6b - <rustc_query_impl[cb988c35710418b0]::query_impl::mir_callgraph_reachable::dynamic_query::{closure#2} as core[9c63f579735f14fb]::ops::function::FnOnce<(rustc_middle[fc7bfb690a1873b0]::ty::context::TyCtxt, (rustc_middle[fc7bfb690a1873b0]::ty::instance::Instance, rustc_span[7131546b166d82b0]::def_id::LocalDefId))>>::call_once
30: 0x7bfe561b21e1 - rustc_query_system[104428f5be780cfb]::query::plumbing::try_execute_query::<rustc_query_impl[cb988c35710418b0]::DynamicConfig<rustc_query_system[104428f5be780cfb]::query::caches::DefaultCache<(rustc_middle[fc7bfb690a1873b0]::ty::instance::Instance, rustc_span[7131546b166d82b0]::def_id::LocalDefId), rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[cb988c35710418b0]::plumbing::QueryCtxt, false>
31: 0x7bfe561b1f33 - rustc_query_impl[cb988c35710418b0]::query_impl::mir_callgraph_reachable::get_query_non_incr::__rust_end_short_backtrace
32: 0x7bfe5612f23a - <rustc_mir_transform[cc0fe7e8983fa4e4]::inline::Inliner>::try_inlining
33: 0x7bfe56120c68 - <rustc_mir_transform[cc0fe7e8983fa4e4]::inline::Inliner>::process_blocks
34: 0x7bfe56120119 - <rustc_mir_transform[cc0fe7e8983fa4e4]::inline::Inline as rustc_mir_transform[cc0fe7e8983fa4e4]::pass_manager::MirPass>::run_pass
35: 0x7bfe55607eee - rustc_mir_transform[cc0fe7e8983fa4e4]::pass_manager::run_passes_inner
36: 0x7bfe55bae8e0 - rustc_mir_transform[cc0fe7e8983fa4e4]::optimized_mir
37: 0x7bfe55bae1b1 - rustc_query_impl[cb988c35710418b0]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cb988c35710418b0]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 8usize]>>
38: 0x7bfe55641d5f - rustc_query_system[104428f5be780cfb]::query::plumbing::try_execute_query::<rustc_query_impl[cb988c35710418b0]::DynamicConfig<rustc_query_system[104428f5be780cfb]::query::caches::DefIdCache<rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[cb988c35710418b0]::plumbing::QueryCtxt, false>
39: 0x7bfe5564121f - rustc_query_impl[cb988c35710418b0]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
40: 0x7bfe52c9d589 - <rustc_middle[fc7bfb690a1873b0]::ty::context::TyCtxt>::instance_mir
41: 0x7bfe55a80a58 - rustc_interface[e1f6b3f7cc83500f]::passes::run_required_analyses
42: 0x7bfe5655351e - rustc_interface[e1f6b3f7cc83500f]::passes::analysis
43: 0x7bfe565534ef - rustc_query_impl[cb988c35710418b0]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cb988c35710418b0]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 0usize]>>
44: 0x7bfe56635915 - rustc_query_system[104428f5be780cfb]::query::plumbing::try_execute_query::<rustc_query_impl[cb988c35710418b0]::DynamicConfig<rustc_query_system[104428f5be780cfb]::query::caches::SingleCache<rustc_middle[fc7bfb690a1873b0]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[cb988c35710418b0]::plumbing::QueryCtxt, false>
45: 0x7bfe5663564e - rustc_query_impl[cb988c35710418b0]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
46: 0x7bfe5668016e - rustc_interface[e1f6b3f7cc83500f]::interface::run_compiler::<(), rustc_driver_impl[497e5fb2fdc9e592]::run_compiler::{closure#0}>::{closure#1}
47: 0x7bfe564ef507 - std[2b58fe5c94a29799]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[e1f6b3f7cc83500f]::util::run_in_thread_with_globals<rustc_interface[e1f6b3f7cc83500f]::util::run_in_thread_pool_with_globals<rustc_interface[e1f6b3f7cc83500f]::interface::run_compiler<(), rustc_driver_impl[497e5fb2fdc9e592]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
48: 0x7bfe564ef9a2 - <<std[2b58fe5c94a29799]::thread::Builder>::spawn_unchecked_<rustc_interface[e1f6b3f7cc83500f]::util::run_in_thread_with_globals<rustc_interface[e1f6b3f7cc83500f]::util::run_in_thread_pool_with_globals<rustc_interface[e1f6b3f7cc83500f]::interface::run_compiler<(), rustc_driver_impl[497e5fb2fdc9e592]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[9c63f579735f14fb]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
49: 0x7bfe564f0f6f - std::sys::pal::unix::thread::Thread::new::thread_start::hef76b0be97717a17
50: 0x7bfe5086a39d - <unknown>
51: 0x7bfe508ef49c - <unknown>
52: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.85.0-nightly (5a6036a18 2024-12-11) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z validate-mir -Z inline-mir=yes -Z dump-mir-dir=dir
query stack during panic:
#0 [codegen_select_candidate] computing candidate for `<{closure@/tmp/icemaker_global_tempdir.xwRxwlsen7Ee/rustc_testrunner_tmpdir_reporting.HhkZSZGzNZAb/mvce.rs:10:15: 10:21} as core::ops::function::FnMut<(i32, i16)>>`
#1 [resolve_instance_raw] resolving instance `<{closure@/tmp/icemaker_global_tempdir.xwRxwlsen7Ee/rustc_testrunner_tmpdir_reporting.HhkZSZGzNZAb/mvce.rs:10:15: 10:21} as core::ops::function::FnMut<(i32, i16)>>::call_mut`
end of query stack
error: aborting due to 2 previous errors; 1 warning emitted
For more information about this error, try `rustc --explain E0046`.
```
</p>
</details>
<!--
query stack:
#0 [codegen_select_candidate] computing candidate for `<{closure@/tmp/icemaker_global_tempdir.xwRxwlsen7Ee/rustc_testrunner_tmpdir_reporting.HhkZSZGzNZAb/mvce.rs:10:15: 10:21} as core::ops::function::FnMut<(i32, i16)>>`
#1 [resolve_instance_raw] resolving instance `<{closure@/tmp/icemaker_global_tempdir.xwRxwlsen7Ee/rustc_testrunner_tmpdir_reporting.HhkZSZGzNZAb/mvce.rs:10:15: 10:21} as core::ops::function::FnMut<(i32, i16)>>::call_mut`
-->
| I-ICE,T-compiler,C-bug,A-mir-opt-inlining,S-bug-has-test,T-types | low | Critical |
2,733,604,398 | go | crypto/tls: VerifyClientCertIfGiven with "bad" client certificate | ### Go version
go version go1.21.11 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/.cache/go-build'
GOENV='/home/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/dn/sdk/go1.21.11'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/sdk/go1.21.11/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.21.11'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3756811533=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
I ran a server configured with multiple client authentication modes, one of which is verify-if-given using `VerifyClientCertIfGiven`. To test its behavior, I used a client that presented an invalid certificate, meaning a certificate that could not be verified against the server's stored CA. Given that, I expected the TLS handshake to fail.
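The relevant part of the server setup looked roughly like this (a sketch; `caPEM` and `serverCert` stand in for the real values):
```go
pool := x509.NewCertPool()
pool.AppendCertsFromPEM(caPEM) // the server's stored CA

cfg := &tls.Config{
	Certificates: []tls.Certificate{serverCert},
	ClientAuth:   tls.VerifyClientCertIfGiven,
	ClientCAs:    pool,
}
ln, err := tls.Listen("tcp", ":8443", cfg)
```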
### What did you see happen?
The TLS handshake did not fail. Instead it continued to the application authentication logic, and when I checked the certificate metadata, I noticed that `len(tlsInfo.State.PeerCertificates)` was equal to zero, treating it as if no certificate had been provided at all.
### What did you expect to see?
Per the official docs I expect the handshake to fail (and not fall back), since once the client provides a certificate it must be a valid one against the stored CA:
```
// VerifyClientCertIfGiven indicates that a client certificate should be requested
// during the handshake, but does not require that the client sends a
// certificate. If the client does send a certificate it is required to be
// valid.
VerifyClientCertIfGiven
``` | NeedsInvestigation | low | Critical |
2,733,607,797 | flutter | [ios] Flutter framework feeds wrong text range into the engine | This is a follow up issue for https://github.com/flutter/flutter/issues/138464
Flutter framework feeds invalid text range into the engine. The [temp workaround here](https://github.com/flutter/engine/pull/55909) stopped the crash by handling the invalid range in the engine. However, we should further investigate by figuring out where the invalid range came from and fixing it upstream.
See discussion thread at: https://github.com/flutter/engine/pull/55909#discussion_r1829749289
### Expected results
NA
### Actual results
NA
### Code sample
NA
### Screenshots or Video
NA
### Logs
NA
### Flutter Doctor output
NA | platform-ios,engine,P2,team-ios,triaged-ios | low | Critical |
2,733,635,127 | pytorch | DataLoader hangs when object fails during pickling | ### 🐛 Describe the bug
Multiprocess `torch.utils.data.DataLoader` hangs indefinitely when an object fails during pickling, tested on Linux.
Notably, this issue was caught while a PIL.Image was being pickled, because an OS Error was raised in the `__getstate__` method: https://github.com/python-pillow/Pillow/blob/d66c51a35663247357bf6b625ef093df7ea12a45/src/PIL/Image.py#L751
However we can repro this with the following code easily with any function that fails pickling, see below.
Root cause:
When an object is put on an mp.Queue, the object is serialized with Pickle, but **in a background thread**: https://github.com/python/cpython/blob/dd9da738ad1d420fabafaded3fe63912b2b17cfb/Lib/multiprocessing/queues.py#L230-L262
~~The background thread dies, and the parent worker process has no chance to catch it~~
The exception is caught by this handler which prints the stack trace, but then the queue continues to run: https://github.com/python/cpython/blob/dd9da738ad1d420fabafaded3fe63912b2b17cfb/Lib/multiprocessing/queues.py#L293
~~The good news is that the parent worker process should die~~ The parent worker stays alive, which is why it's not being caught here: https://github.com/pytorch/pytorch/blob/main/torch/utils/data/dataloader.py#L1259
DataLoader just hangs at this point, when it should fail instead. We should investigate how to deal with this scenario.
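To make the root cause concrete, here is a minimal plain-multiprocessing sketch of the behavior described above (our own illustration, not the contents of the linked gist); the `FailsPickling` class is a stand-in, and the timeout exists only so the example terminates:
```python
import multiprocessing as mp
import queue


class FailsPickling:
    def __reduce__(self):
        raise OSError("fails pickling")


def worker(q):
    # put() only buffers the object in-process; pickling happens later in the
    # queue's background feeder thread, so this call returns successfully.
    q.put(FailsPickling())


if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    p.join()  # the worker exits normally despite the pickling failure
    try:
        q.get(timeout=5)  # would block forever without the timeout
    except queue.Empty:
        print("nothing arrived: the feeder thread swallowed the OSError")
```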
Minimal Repro plain python: https://gist.github.com/andrewkho/0a57cb936802f4b81fe4f84cee727625
Minimal Repro with torch DataLoader:
```python
from torch.utils.data import DataLoader, Dataset


class FailsPickling:
    def __init__(self):
        self.x = 1

    def __getstate__(self):
        print("Entered getstate")
        raise OSError("Fails pickling")

    def __setstate__(self, state):
        # Never reached: __getstate__ always raises before serialization completes.
        print("Entered setstate")
        self.x = state["1"]


class MyDataset(Dataset):
    def __getitem__(self, i):
        # Every sample carries an object whose pickling raises in the worker.
        return {"i": i, "bad_obj": FailsPickling()}

    def __len__(self):
        return 100


def main():
    dataset = MyDataset()
    dl = DataLoader(dataset, batch_size=None, num_workers=1)
    for i, batch in enumerate(dl):
        # Hangs before the first batch: the worker's feeder thread swallowed
        # the pickling error, so nothing ever arrives in the main process.
        print(f"{i}, {batch=}")


if __name__ == "__main__":
    main()
```
### Versions
Tested on nightly, linux
cc @divyanshk @SsnL @VitalyFedyunin @dzhulgakov | module: dataloader,triaged | low | Critical |
2,733,640,781 | react-native | FlatList scroll does not work | ### Description
I used a `FlatList` and scrolling does not work.
This is my code snippet implementing the `FlatList` (imports and the component header below are reconstructed for completeness; `Entries` and `renderCategoryItem` are defined elsewhere in the original file):
```
import { FlatList, StyleSheet, View } from 'react-native';

// Header reconstructed; Entries and renderCategoryItem come from the original file.
function AllQuote() {
  return (
    <View style={{ flex: 1 }}>
      <FlatList
        data={Entries}
        keyExtractor={(item) => item.id}
        renderItem={renderCategoryItem}
        numColumns={1}
        contentContainerStyle={{ paddingBottom: 10 }}
        style={styles.flatlist}
        scrollEnabled={true}
      />
    </View>
  );
}

export default AllQuote;

const styles = StyleSheet.create({
  flatlist: {
    flex: 1,
    marginVertical: 8,
  },
});
```
And this is the component I'm using (again with the header and imports reconstructed):
```
import { Platform, Pressable, StyleSheet, Text, View } from 'react-native';
// `colors` is imported from the project's constants; the import was elided
// in the original snippet.

function CompButton({ title, subtitle, onPress }) {
  return (
    <View style={styles.gridItem}>
      <Pressable
        // The original passed the whole `colors` object as the ripple color,
        // which is likely unintended; a single color value is expected here.
        android_ripple={{ color: colors.khaki }}
        style={({ pressed }) => [
          styles.button,
          pressed ? styles.buttonPress : null,
        ]}
        onPress={onPress}
      >
        <View style={styles.innerContainer}>
          <Text style={styles.title}>{title}</Text>
          {subtitle && <Text style={styles.authorText}> -{subtitle}</Text>}
        </View>
      </Pressable>
    </View>
  );
}

export default CompButton;

const styles = StyleSheet.create({
  gridItem: {
    flex: 1,
    margin: 16,
    marginVertical: 8,
    borderRadius: 8,
    elevation: 4,
    backgroundColor: 'white',
    shadowColor: 'black',
    shadowOpacity: 0.25,
    shadowOffset: { width: 0, height: 2 },
    shadowRadius: 8,
    overflow: Platform.OS === 'android' ? 'hidden' : 'visible',
  },
  button: {
    flex: 1,
    shadowColor: 'black',
    shadowOpacity: 0.25,
    shadowOffset: { width: 0, height: 2 },
    shadowRadius: 8,
    overflow: Platform.OS === 'android' ? 'hidden' : 'visible',
  },
  buttonPress: {
    opacity: 0.75,
  },
  innerContainer: {
    flex: 1,
    padding: 16,
    borderRadius: 8,
    justifyContent: 'center',
    alignItems: 'center',
    backgroundColor: colors.darkOliveGreen,
  },
  title: {
    fontWeight: 'bold',
    fontSize: 18,
    color: colors.khaki,
  },
  authorText: {
    fontSize: 20,
    color: colors.darkKhaki,
    fontStyle: 'italic',
  },
});
```
### Expected behavior
It should scroll without any problem, but it does not.
It stays put, as if it were a static element.
### Steps to reproduce
[this is the link to the repository](https://github.com/BKazimy/Learn)
BTW, I'm using the web for development, since I can't use my phone as both a network and a testing device.
### React Native Version
0.76.3
### Output of `npx react-native info`
```text
⚠️ react-native depends on @react-native-community/cli for cli commands. To fix update your package.json to include:
"devDependencies": {
"@react-native-community/cli": "latest",
}
```
### Screenshots and Videos
_No response_ | Needs: Author Feedback,Newer Patch Available | low | Minor |
2,733,655,435 | go | x/tools/gopls/internal/extract: extract L-value should replace its uses with dereferenced access | ### gopls version
.
### go env
```shell
.
```
### What did you do?
```go
type foo struct {
bar int
}
f := foo{bar: 1}
f.bar = 2 // extract f.bar
```
### What did you see happen?
Currently the produced code has a type error; this messy output occurs for any LHS of an assignment:
```go
type foo struct {
bar int
}
f := foo{bar: 1}
xf.bar
f.bar = 2 // extract f.bar
```
But even if we fix the indent/newline formatting bug, the result would be:
```go
type foo struct {
bar int
}
f := foo{bar: 1}
x := f.bar
x = 2 // semantics differ: f.bar remains 1
```
### What did you expect to see?
```go
type foo struct {
bar int
}
f := foo{bar: 1}
x := &f.bar
*x = 2
```
Reference https://go-review.googlesource.com/c/tools/+/624035/comments/51ffbf6c_61124c11
> The crucial point is that any time we extract an addressable expression to a new variable, we may need to extract its address, and replace all uses by *newVar.
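As an illustration of that point (a sketch of the desired refactoring behavior, not actual gopls output), the same address-taking applies to any addressable expression, e.g. a slice element:
```go
s := []int{1, 2, 3}

// Before extraction:
//   s[0] = 42
//
// After extracting the addressable expression s[0] as an L-value, the
// refactorer should take its address and dereference at each use, so the
// write still lands in the slice:
x := &s[0]
*x = 42 // s is now []int{42, 2, 3}
```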
### Editor and settings
_No response_
### Logs
_No response_ | gopls,Tools,Refactoring | low | Critical |
2,733,656,217 | PowerToys | "PowerToys Run" plugin activation lag when not using it for an extended period | ### Description of the new feature / enhancement
The PowerToys Run plugin should respond instantly when invoked using the "Alt + Space" shortcut, even after being idle for an extended period. Currently, there is a noticeable delay (lag) before the Run window appears after it has not been used for some time. Improving its responsiveness will ensure a smoother and more reliable user experience.
### Scenario when this would be used?
This issue affects productivity during workflows that rely on quick access to PowerToys Run. As a power user, I frequently use "Alt + Space" to quickly open applications, search for files, or execute commands. The delay after the tool has been idle disrupts this flow, making it less reliable and slowing down tasks that require fast execution.
### Supporting information
I am using the latest version of PowerToys on Windows 11. The delay occurs consistently after the Run plugin has been idle for a while. This might be related to resource management or sleep states. A fix or optimization for responsiveness would greatly enhance the tool's usability. | Product-PowerToys Run,Needs-Triage | low | Major |
2,733,672,816 | flutter | `Windows tool_integration_tests_2_9` failing on wrong Android NDK version | https://ci.chromium.org/ui/p/flutter/builders/prod/Windows%20tool_integration_tests_2_9/874/infra
```
09:54 +6 -1: test/integration.shard/android_plugin_example_app_build_test.dart: FFI plugin example can be built using current Flutter Gradle plugin [E]
Exception: flutter build failed: 1
Your project is configured with Android NDK 26.1.10909125, but the following plugin(s) depend on a different Android NDK version:
- plugin_ffi_test requires Android NDK 27.0.12077973
Fix this issue by using the highest Android NDK version (they are backward compatible).
Add the following to C:\b\s\w\ir\x\t\flutter_flutter_ffi_plugin_test.7d47a08a\plugin_ffi_test\example\android\app\build.gradle.kts:
android {
ndkVersion = "27.0.12077973"
...
}
FAILURE: Build failed with an exception.
``` | team-infra,P2,triaged-infra | low | Critical |