id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,809,764,819 | node | parallel.test-tls-min-max-version is flaky | ### Test
parallel.test-tls-min-max-version
### Platform
SmartOS
### Console output
```console
---
duration_ms: 300088.347
exitcode: -15
severity: fail
stack: |-
timeout
test: U U U U U SSLv2_method U expect U U ERR_TLS_INVALID_PROTOCOL_METHOD
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:90:1)
client undefined
server ERR_TLS_INVALID_PROTOCOL_METHOD
test: U U U U U SSLv3_method U expect U U ERR_TLS_INVALID_PROTOCOL_METHOD
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:92:1)
client undefined
server ERR_TLS_INVALID_PROTOCOL_METHOD
test: U U U U U hokey-pokey U expect U U ERR_TLS_INVALID_PROTOCOL_METHOD
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:100:1)
client undefined
server ERR_TLS_INVALID_PROTOCOL_METHOD
test: U U U U U %s_method U expect U U ERR_TLS_INVALID_PROTOCOL_METHOD
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:105:1)
client undefined
server ERR_TLS_INVALID_PROTOCOL_METHOD
test: U U U U TLSv1.2 TLS1_2_method U expect U U ERR_TLS_PROTOCOL_VERSION_CONFLICT
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:109:1)
client undefined
server ERR_TLS_PROTOCOL_VERSION_CONFLICT
test: U U U TLSv1.2 U TLS1_2_method U expect U U ERR_TLS_PROTOCOL_VERSION_CONFLICT
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:111:1)
client undefined
server ERR_TLS_PROTOCOL_VERSION_CONFLICT
test: U U SSLv2_method U U U U expect U ERR_TLS_INVALID_PROTOCOL_METHOD U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:94:1)
client ERR_TLS_INVALID_PROTOCOL_METHOD
server undefined
test: U U SSLv3_method U U U U expect U ERR_TLS_INVALID_PROTOCOL_METHOD U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:96:1)
client ERR_TLS_INVALID_PROTOCOL_METHOD
server undefined
test: U U hokey-pokey U U U U expect U ERR_TLS_INVALID_PROTOCOL_METHOD U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:98:1)
client ERR_TLS_INVALID_PROTOCOL_METHOD
server undefined
test: U TLSv1.2 TLS1_2_method U U U U expect U ERR_TLS_PROTOCOL_VERSION_CONFLICT U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:113:1)
client ERR_TLS_PROTOCOL_VERSION_CONFLICT
server undefined
test: TLSv1.2 U TLS1_2_method U U U U expect U ERR_TLS_PROTOCOL_VERSION_CONFLICT U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:115:1)
client ERR_TLS_PROTOCOL_VERSION_CONFLICT
server undefined
test: U U TLSv1_1_method U U SSLv23_method U expect U ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION ERR_SSL_UNSUPPORTED_PROTOCOL
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:154:3)
client ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION
server ERR_SSL_UNSUPPORTED_PROTOCOL
test: U U TLSv1_method U U SSLv23_method U expect U ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION ERR_SSL_UNSUPPORTED_PROTOCOL
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:157:3)
client ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION
server ERR_SSL_UNSUPPORTED_PROTOCOL
test: U U TLSv1_1_method U U U U expect U ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION ERR_SSL_UNSUPPORTED_PROTOCOL
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:191:3)
client ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION
server ERR_SSL_UNSUPPORTED_PROTOCOL
test: U U TLSv1_method U U U U expect U ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION ERR_SSL_UNSUPPORTED_PROTOCOL
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:194:3)
client ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION
server ERR_SSL_UNSUPPORTED_PROTOCOL
test: U U U U U TLSv1_1_method U expect U ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION ERR_SSL_UNSUPPORTED_PROTOCOL
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:205:5)
client ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION
server ERR_SSL_UNSUPPORTED_PROTOCOL
test: U U U U U TLSv1_method U expect U ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION ERR_SSL_UNSUPPORTED_PROTOCOL
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:208:5)
client ERR_SSL_TLSV1_ALERT_PROTOCOL_VERSION
server ERR_SSL_UNSUPPORTED_PROTOCOL
test: U U U U U U U expect TLSv1.3 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:87:1)
test: U U SSLv23_method U U TLSv1_1_method ALL@SECLEVEL=0 expect U ERR_SSL_UNSUPPORTED_PROTOCOL ERR_SSL_WRONG_VERSION_NUMBER
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:160:3)
client ERR_SSL_UNSUPPORTED_PROTOCOL
server ERR_SSL_WRONG_VERSION_NUMBER
test: U U SSLv23_method U U TLSv1_method ALL@SECLEVEL=0 expect U ERR_SSL_UNSUPPORTED_PROTOCOL ERR_SSL_WRONG_VERSION_NUMBER
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:162:3)
client ERR_SSL_UNSUPPORTED_PROTOCOL
server ERR_SSL_WRONG_VERSION_NUMBER
test: TLSv1 TLSv1.3 U TLSv1.3 TLSv1.3 U U expect TLSv1.3 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:263:1)
test: TLSv1 TLSv1.3 U TLSv1.2 TLSv1.3 U U expect TLSv1.3 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:264:1)
test: TLSv1.3 TLSv1.3 U TLSv1 TLSv1.3 U U expect TLSv1.3 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:270:1)
test: U U TLSv1_2_method U U TLS_method U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:119:1)
test: U U TLSv1_1_method U U TLS_method ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:120:1)
test: U U TLSv1_method U U TLS_method ALL@SECLEVEL=0 expect TLSv1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:121:1)
test: U U TLS_method U U TLSv1_2_method U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:122:1)
test: U U TLS_method U U TLSv1_1_method ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:123:1)
test: U U TLS_method U U TLSv1_method ALL@SECLEVEL=0 expect TLSv1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:124:1)
test: U U TLSv1_2_method U U SSLv23_method U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:139:3)
test: U U TLSv1_2_method U U TLSv1_2_method U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:185:1)
test: U U TLSv1_1_method U U TLSv1_1_method ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:186:1)
test: U U TLSv1_method U U TLSv1_method ALL@SECLEVEL=0 expect TLSv1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:187:1)
test: TLSv1 TLSv1.2 U U U TLSv1_method ALL@SECLEVEL=0 expect TLSv1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:242:1)
test: TLSv1 TLSv1.2 U U U TLSv1_1_method ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:243:1)
test: TLSv1 TLSv1.2 U U U TLSv1_2_method U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:244:1)
test: TLSv1 TLSv1.2 U U U TLS_method U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:245:1)
test: U U TLSv1_1_method TLSv1 TLSv1.2 U ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:248:1)
test: U U TLSv1_2_method TLSv1 TLSv1.2 U U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:249:1)
test: TLSv1 TLSv1.1 U TLSv1 TLSv1.3 U ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:251:1)
test: TLSv1 TLSv1.1 U TLSv1 TLSv1.2 U ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:252:1)
test: TLSv1 TLSv1.2 U TLSv1 TLSv1.1 U ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:253:1)
test: TLSv1 TLSv1.3 U TLSv1 TLSv1.1 U ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:254:1)
test: TLSv1 TLSv1 U TLSv1 TLSv1.1 U ALL@SECLEVEL=0 expect TLSv1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:255:1)
test: TLSv1 TLSv1.2 U TLSv1 TLSv1 U ALL@SECLEVEL=0 expect TLSv1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:256:1)
test: TLSv1 TLSv1.3 U TLSv1 TLSv1 U ALL@SECLEVEL=0 expect TLSv1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:257:1)
test: TLSv1.1 TLSv1.1 U TLSv1 TLSv1.2 U ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:258:1)
test: TLSv1 TLSv1.2 U TLSv1.1 TLSv1.1 U ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:259:1)
test: TLSv1 TLSv1.2 U TLSv1 TLSv1.3 U U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:260:1)
test: TLSv1 TLSv1.3 U TLSv1.2 TLSv1.2 U U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:265:1)
test: TLSv1 TLSv1.3 U TLSv1.1 TLSv1.1 U ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:266:1)
test: TLSv1 TLSv1.3 U TLSv1 TLSv1 U ALL@SECLEVEL=0 expect TLSv1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:267:1)
test: TLSv1.2 TLSv1.2 U TLSv1 TLSv1.3 U U expect TLSv1.2 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:271:1)
test: TLSv1.1 TLSv1.1 U TLSv1 TLSv1.3 U ALL@SECLEVEL=0 expect TLSv1.1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:272:1)
test: TLSv1 TLSv1 U TLSv1 TLSv1.3 U ALL@SECLEVEL=0 expect TLSv1 U U
(/home/iojs/build/workspace/node-test-commit-smartos/nodes/smartos22-x64/test/parallel/test-tls-min-max-version.js:273:1)
(node:92424) [DEP0060] DeprecationWarning: The `util._extend` API is deprecated. Please use Object.assign() instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
...
```
### Build links
https://ci.nodejs.org/job/node-test-commit-smartos/58807/
### Additional information
_No response_ | smartos,flaky-test | low | Minor |
2,809,773,697 | vscode | Feature Request: Implement Extended Search Options for Git-Modified Files |
Add new scopes in the existing Search panel to specifically target files recognized by Git as modified or limit the search to the current HEAD. For instance, in the Files to include section (currently offering an “Open Editors” button), offer toggle options for (Open Editors), (Git: modified), and (Git: current head). This makes it simpler to locate and remove debugging code or other changes by focusing on the most relevant files only. | feature-request,git | low | Critical |
2,809,791,154 | deno | Unexpected SyntaxError while compiling `await (() => 42)` | Version: Deno 2.1.7
```
> () => 42
[Function (anonymous)]
> await (() => 42)
Uncaught SyntaxError: Malformed arrow function parameter list
> await [() => 42]
[ [Function (anonymous)] ]
> const f = () => 42; await f
[Function: f]
> await function() { return 42 }
[Function (anonymous)]
```
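For comparison, the same expression is accepted outside the REPL — a minimal sketch as a plain ES module (the file name `demo.mjs` is arbitrary; run it with e.g. `deno run demo.mjs` or `node demo.mjs`):
```js
// demo.mjs — awaiting a non-thenable just resolves to the value itself,
// so `f` is the arrow function and calling it returns 42.
const f = await (() => 42);
console.log(f()); // 42
```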
`await (() => 42)` should be treated as valid syntax. | repl | low | Critical |
2,809,798,882 | vscode | request fail |
Type: <b>Bug</b>
I am getting `Sorry, your request failed. Please try again` many times.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (8 x 2419)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.75GB (7.36GB free)|
|Process Argv|--crash-reporter-id 79bb009f-2083-4c99-8f47-5b9ed7aad7d0|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (24)</summary>
Extension|Author (truncated)|Version
---|---|---
code-runner|for|0.12.2
copilot|Git|1.259.0
copilot-chat|Git|0.23.2
data-workspace-vscode|ms-|0.5.0
mssql|ms-|1.27.0
sql-bindings-vscode|ms-|0.4.0
sql-database-projects-vscode|ms-|1.4.5
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.394.0
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
windows-ai-studio|ms-|0.8.4
vscode-yaml|red|1.15.0
r|REd|2.8.4
(2 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,809,814,422 | godot | Immediate usage of a duplicated `ViewportTexture` results in null access, despite being instantiated. | ### Tested versions
4.3.stable
### System information
linux
### Issue description
This may occur with any node, and is not tested, as it appeared in this context.
But when duplicating a ViewportTexture, one can't access it immediately.
Despite it showing a reference id:
REF1 <ViewportTexture#-9223372012745915145>
VAR1 2987
REF2 <ViewportTexture#-9223372012309707516>
the error:
"
Attempt to call 'save_png_to_buffer' in base 'null instance' on a null instance
"
is thrown, even though the reference "9223372012309707516" is already confirmed as instantiated in memory.
So, what?
### Steps to reproduce
This code:
```gdscript
extends Node

func _ready() -> void:
    var aa = get_viewport().get_texture()
    printt("REF1", aa)
    printt("VAR1", aa.get_image().save_png_to_buffer().size())  # works
    var bb = aa.duplicate()
    printt("REF2", bb)
    printt("VAR2", bb.get_image().save_png_to_buffer().size())  # fails: get_image() returns null here
```
### Minimal reproduction project (MRP)
[Duplicate Bug.zip](https://github.com/user-attachments/files/18538682/Duplicate.Bug.zip) | needs testing,topic:gui | low | Critical |
2,809,839,203 | storybook | [Bug]: Storybook build fails in monorepo with pnpm | ### Describe the bug
Storybook build fails in the monorepo environment.
While storybook dev works fine, storybook build crashes with the following error.
```
=> Failed to build the preview
Could not resolve "./button.css" from "../../node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected][email protected]._6b4vza6cwbojtrjbcynfhg3dfy/node_modules/@storybook/react/template/cli/ts-3-8/Button.tsx"
file: /Users/johndoe/Projects/portal/node_modules/.pnpm/@[email protected]_@[email protected][email protected][email protected][email protected]._6b4vza6cwbojtrjbcynfhg3dfy/node_modules/@storybook/react/template/cli/ts-3-8/Button.tsx
at getRollupError (file:///Users/johndoe/Projects/portal/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/parseAst.js:396:41)
at error (file:///Users/johndoe/Projects/portal/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/parseAst.js:392:42)
at ModuleLoader.handleInvalidResolvedId (file:///Users/johndoe/Projects/portal/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:20216:24)
at file:///Users/johndoe/Projects/portal/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:20176:26
```
The node_modules folder looks like this. Seems like there are no *.css files.
<img width="953" alt="Image" src="https://github.com/user-attachments/assets/a55bc57b-d160-4692-b715-fc2ce95d4a89" />
### Reproduction link
https://github.com/vladmiller/storybook-monorepo-reproduction
### Reproduction steps
1. pnpm install
2. cd apps/storybook
3. pnpm build-storybook
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.2
CPU: (8) arm64 Apple M2
Shell: 5.9 - /bin/zsh
Binaries:
Node: 23.3.0 - ~/.nvm/versions/node/v23.3.0/bin/node
npm: 10.9.0 - ~/.nvm/versions/node/v23.3.0/bin/npm
pnpm: 9.15.4 - ~/Library/pnpm/pnpm <----- active
Browsers:
Safari: 18.2
npmPackages:
@storybook/addon-a11y: ^8.5.1 => 8.5.1
@storybook/addon-actions: ^8.5.1 => 8.5.1
@storybook/addon-backgrounds: ^8.5.1 => 8.5.1
@storybook/addon-docs: ^8.5.1 => 8.5.1
@storybook/addon-essentials: ^8.5.1 => 8.5.1
@storybook/addon-interactions: ^8.5.1 => 8.5.1
@storybook/addon-links: ^8.5.1 => 8.5.1
@storybook/blocks: ^8.5.1 => 8.5.1
@storybook/react: ^8.5.1 => 8.5.1
@storybook/react-vite: ^8.5.1 => 8.5.1
@storybook/test: ^8.5.1 => 8.5.1
storybook: ^8.5.1 => 8.5.1
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,809,858,479 | pytorch | Local config flags for torch.compile | internal x-post: https://fb.workplace.com/groups/1075192433118967/posts/1589701705001368/
Pitch: a new decorator to allow user code to change configs midway through tracing.
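For context, a rough sketch of what is possible today with the existing `torch._dynamo.config.patch` helper, which applies an override around a whole compiled call from the outside; the pitch is for a decorator that code *inside* the traced region could use, so an override only takes effect for the part of the trace it wraps (`capture_scalar_outputs` below is just an arbitrary example flag):

```python
import torch
import torch._dynamo as dynamo

@torch.compile
def f(x):
    # .item() benefits from capture_scalar_outputs=True during tracing
    return x.sum().item() + 1.0

# Today: the override wraps the whole compiled call from the outside.
# The proposed decorator would let user code apply it midway through tracing.
with dynamo.config.patch(capture_scalar_outputs=True):
    print(f(torch.ones(3)))
```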
cc @chauhang @penguinwu | triaged,oncall: pt2 | low | Minor |
2,809,882,361 | opencv | Inconsistency between imwrite and imwriteanimation functions | ### System Information
OpenCV version: 4.12.0-dev
### Detailed description
When I tried to measure the performance of the `imwriteanimation` function with the code below, I ran into an unexpected error (indeed, I accidentally created the Mat variable with the wrong type). I am not sure whether this can be considered a bug.
test code output:
```
[ WARN:[email protected]] global loadsave.cpp:848 cv::imwrite_ Unsupported depth image for selected encoder is fallbacked to CV_8U.
0.0543036sec
0.0541429sec
[ERROR:[email protected]] global loadsave.cpp:973 cv::imwriteanimation_ imwriteanimation_('read_c.png'): can't write data: OpenCV(4.12.0-dev)
C:\projects\opencv\modules\imgproc\src\color.simd_helpers.hpp:94: error: (-2:Unspecified error) in function '__cdecl cv::impl::`anonymous-
namespace'::CvtHelper<struct cv::impl::`anonymous namespace'::Set<3,4,-1>,struct cv::impl::A0xf824bd31::Set<3,4,-1>,
struct cv::impl::A0xf824bd31::Set<0,2,5>,4>::CvtHelper(const class cv::_InputArray &,const class cv::_OutputArray &,int)'
> Unsupported depth of input image:
> 'VDepth::contains(depth)'
> where
> 'depth' is 1 (CV_8S)
0.0131287sec
```
### Steps to reproduce
```cpp
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, const char** argv)
{
    Mat m(2000, 3000, CV_8SC4);  // note: CV_8S depth is the accidental "wrong type"

    TickMeter tm;
    tm.start();
    imwrite("read_a.png", m);  // single-frame PNG write (falls back to CV_8U with a warning)
    tm.stop();
    cout << tm << endl;

    Animation tanimation;
    tanimation.frames.push_back(m);
    tanimation.durations.push_back(10);

    TickMeter tm1;
    tm1.start();
    imwritemulti("read_b.png", tanimation.frames);  // multi-page PNG write
    tm1.stop();
    cout << tm1 << endl;

    TickMeter tm2;
    tm2.start();
    imwriteanimation("read_c.png", tanimation);  // animated PNG write — fails for CV_8S input (see error above)
    tm2.stop();
    cout << tm2 << endl;

    return 0;
}
```
### Issue submission checklist
- [x] I report the issue, it's not a question
- [ ] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [x] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: imgcodecs | low | Critical |
2,809,882,517 | TypeScript | Display default values for property types. | ### 🔎 Search Terms
- jsdoc property and default tooltip
- jsdoc declare default values on object properties
- use jsdoc property with default
### 🕗 Version & Regression Information
Unsure if it's always been this way.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?filetype=js#code/LAKFHoCpNACTYAEAuBPADgUwCaYGawDeA9gEYBWmAxsgL6wAKATsegM5wKLotZNpEA5GwC2g2AB9YgkdnFTBAGwDmg+gG1mrNgDo2ASwBemALwy5AXVgAhAK7JkxAHawDxnZyQ9WmfqiJsyEz6TsoaWuw6igCGpJiKVnYOzrAxcYoeIPDgoBDQnohMmMi2TE5sRBFstAW4eNG2ishErkaYAFzSsoIANKmx8Z22TnUhOLA1WZA5IHjDNPopysVVABQAlERwsDtFJWVEk5OgisUtbpiwJl1yfWnxE1ewy8hr66BAA
### 💻 Code
```ts
/**
* @typedef {object} Props
* @property {'sm' | 'md' | 'lg'} [Props.size='md'] Button size.
* @property {string} [Props.label] Button label.
*/
/**
* @returns {Props}
* @default { size: 'md', label: undefined }
*/
function getProps() {
return {}
}
let { size = 'md', label } = getProps()
```
### 🙁 Actual behavior
Tooltip when hovering `size` destructured property:
```
let size: "sm" | "md" | "lg"
Button size.
```
### 🙂 Expected behavior
Tooltip when hovering `size` destructured property:
```
let size: "sm" | "md" | "lg"
Button size.
---
@default 'md'
```
### Additional information about the issue
Potentially relates to https://github.com/microsoft/TypeScript/issues/24746
When defining object property types using JSDoc default value notation, such as `@property {'sm' | 'md' | 'lg'} [Props.size='md']`, I'd like to see tooltip details about the default value. I assume the only way to denote default values with `@property` is to use `[key=value]` notation since `@default` cannot be inlined. | Suggestion,Awaiting More Feedback | low | Minor |
2,809,883,852 | rust | Tracking issue for release notes of #134272: make rustc_encodable_decodable feature properly unstable |
This issue tracks the release notes text for #134272.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Library, Language
- [Remove `RustcDecodable` and `RustcEncodable`](https://github.com/rust-lang/rust/pull/134272)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @RalfJung, @ibraheemdev -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,needs-triage,relnotes-tracking-issue | low | Minor |
2,809,888,782 | rust | Tracking issue for release notes of #134679: Windows: remove readonly files |
This issue tracks the release notes text for #134679.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [On recent versions of Windows `std::fs::remove_file` will now remove readonly files](https://github.com/rust-lang/rust/pull/134679)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @ChrisDenton, @Mark-Simulacrum -- origin issue/PR authors and assignees for starting to draft text
| O-windows,T-libs-api,relnotes,needs-triage,relnotes-tracking-issue | low | Minor |
2,809,894,965 | go | bufio: Reader.WriteTo makes an initial empty write | ### Go version
go version go1.24rc2 linux/amd64
### Output of `go env` in your module/workspace:
```shell
AR='ar'
CC='gcc'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_ENABLED='1'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
CXX='g++'
GCCGO='gccgo'
GO111MODULE=''
GOAMD64='v1'
GOARCH='amd64'
GOAUTH='netrc'
GOBIN=''
GOCACHE='/root/.cache/go-build'
GOCACHEPROG=''
GODEBUG=''
GOENV='/root/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFIPS140='off'
GOFLAGS=''
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3215251769=/tmp/go-build -gno-record-gcc-switches'
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMOD='/dev/null'
GOMODCACHE='/root/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/root/go'
GOPRIVATE=''
GOPROXY='direct'
GOROOT='/usr/lib/golang'
GOSUMDB='off'
GOTELEMETRY='local'
GOTELEMETRYDIR='/root/.config/go/telemetry'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/usr/lib/golang/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.24rc2'
GOWORK=''
PKG_CONFIG='pkg-config'
```
### What did you do?
Passing a bufio.Reader to io.Copy() results in an empty write() being issued. An empty write is normally not really an issue; however, in my case the writer is a unixpacket (SOCK_SEQPACKET) socket connection. In that case the empty write causes an empty read on the server. An empty read() returns 0, which means the server misinterprets that message as EOF and closes the socket.
Consider this small example program: it reads from stdin and writes that to the server running in the goroutine via io.Copy().
```go
package main

import (
	"bufio"
	"flag"
	"fmt"
	"io"
	"log"
	"net"
	"os"
)

const socketName = "/tmp/testsock123"
const socketType = "unixpacket" // switch to "unix" and it works

func main() {
	buffered := flag.Bool("bufio", false, "use bufio")
	flag.Parse()

	os.Remove(socketName)
	socket, err := net.ListenUnix(socketType, &net.UnixAddr{Name: socketName, Net: socketType})
	if err != nil {
		log.Fatalln(err)
	}

	serverChan := make(chan struct{})
	// server routine
	go func() {
		conn, err := socket.Accept()
		if err != nil {
			log.Fatalln(err)
		}
		buf := make([]byte, 1024)
		for {
			i, err := conn.Read(buf)
			if err != nil {
				if err == io.EOF {
					fmt.Println("server got EOF")
					serverChan <- struct{}{}
					return
				}
				log.Fatalln(err)
			}
			fmt.Printf("read %d bytes, msg: %s\n", i, string(buf[:i]))
		}
	}()

	conn, err := net.DialUnix(socketType, nil, &net.UnixAddr{Name: socketName, Net: socketType})
	if err != nil {
		log.Fatalln(err)
	}

	var reader io.Reader = os.Stdin
	if *buffered {
		reader = bufio.NewReader(os.Stdin)
	}
	_, err = io.Copy(conn, reader)
	if err != nil {
		log.Fatalln(err)
	}
	conn.Close()
	<-serverChan
}
```
### What did you see happen?
Only when the reader is wrapped in a bufio.Reader is there an extra empty write made to the server. The server considers this to be an EOF and closes before the actual writes are made.
Not using bufio makes it work.
The example program:
```
$ go run main.go <<<"test"
read 5 bytes, msg: test
server got EOF
# using bufio makes it no longer work
$ go run main.go -bufio <<<"test"
server got EOF
```
strace clearly shows the behavior
```
...
write(4, "", 0) = 0
server got EOF
read(0, "test\n", 32768) = 5
write(4, "test\n", 5) = 5
...
```
I guess for most cases the empty write does not matter (e.g. files or other stream-based sockets), so most would not notice this, but in the case of SOCK_SEQPACKET/unixpacket it is important.
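In the meantime, a possible workaround sketch (using the same imports as the example program above) is to wrap the connection in a writer that drops zero-length writes before handing it to io.Copy:

```go
// skipEmptyWriter suppresses zero-length writes so that bufio.Reader.WriteTo
// cannot emit an empty packet on the unixpacket connection.
type skipEmptyWriter struct{ w io.Writer }

func (s skipEmptyWriter) Write(p []byte) (int, error) {
	if len(p) == 0 {
		return 0, nil
	}
	return s.w.Write(p)
}

// in main: _, err = io.Copy(skipEmptyWriter{conn}, reader)
```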
### What did you expect to see?
Using bufio.Reader should not result in empty writes during io.Copy(). As far as I can tell this is because io.Copy() uses the WriteTo() implementation here:
https://cs.opensource.google/go/go/+/refs/tags/go1.23.5:src/bufio/bufio.go;l=518-521
So I think bufio's WriteTo() should be fixed to not cause empty write()s. It should only write there if there is something in the buffer. | NeedsInvestigation,BugReport | low | Critical |
2,809,901,607 | react | Bug: Support use in act testing API #25523 |
React version:
Main branch (as of October 21, 2022, when PR #25523 was merged).
## Steps To Reproduce
1. Write a test that uses the use hook inside a component.
2. Wrap the test in act but do not await the thenable returned by act.
3. Observe that the test may fail or behave inconsistently due to improper task scheduling.
Link to code example:
```javascript
import { act, render } from '@testing-library/react';
import React, { use } from 'react';

async function fetchData() {
  return Promise.resolve('data');
}

function Component() {
  const data = use(fetchData());
  return <div>{data}</div>;
}

test('use hook in act', async () => {
  await act(async () => {
    render(<Component />);
  });
});
```
## The current behavior
If the thenable returned by act is not awaited, tasks scheduled by use may not be flushed correctly, leading to inconsistent test behavior.
No warning is shown unless use is called, which can make debugging difficult.
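A minimal sketch of the problematic pattern (same imports and `Component` as in the example above; the only difference is that the thenable returned by `act` is not awaited):
```javascript
test('use hook in act (thenable not awaited)', () => {
  // The async act callback returns a thenable that is dropped here, so work
  // scheduled by use() may never be flushed before the test finishes.
  act(async () => {
    render(<Component />);
  });
});
```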
## The expected behavior
The act API should fully support the use hook, ensuring tasks are properly scheduled and flushed.
A warning should be shown if the thenable returned by act is not awaited, helping developers identify potential issues. | Status: Unconfirmed,Resolution: Needs More Information | medium | Critical |
2,809,911,541 | opencv | imread animation PNG writing speed performance is dramatically bad | ### Describe the feature and motivation
Using the test code below, the results are as follows.
```
0.0455906sec
0.0416175sec
0.465386sec
```
APNG encoder is 10x slower than the PNG encoder.
### Additional context
```cpp
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, const char** argv)
{
    Mat m(2000, 3000, CV_8UC4);

    TickMeter tm;
    tm.start();
    imwrite("read_a.png", m);  // single-frame PNG write
    tm.stop();
    cout << tm << endl;

    Animation tanimation;
    tanimation.frames.push_back(m);
    tanimation.durations.push_back(10);

    TickMeter tm1;
    tm1.start();
    imwritemulti("read_b.png", tanimation.frames);  // multi-page PNG write
    tm1.stop();
    cout << tm1 << endl;

    TickMeter tm2;
    tm2.start();
    imwriteanimation("read_c.png", tanimation);  // animated PNG (APNG) write — ~10x slower
    tm2.stop();
    cout << tm2 << endl;

    return 0;
}
``` | optimization,category: imgcodecs | low | Major |
2,809,912,676 | rust | Tracking issue for release notes of #135415: Add `File already exists` error doc to `hard_link` function |
This issue tracks the release notes text for #135415.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Add `File already exists` error doc to `hard_link` function](https://github.com/rust-lang/rust/pull/135415)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @Harshit933, @ChrisDenton -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,needs-triage,relnotes-tracking-issue | low | Critical |
2,809,921,273 | ollama | ollama with cpu to utilize the models locally | please make a more compatible version of ollama with cpu to utilize the models locally | feature request | low | Minor |
2,809,924,016 | react | Bug Report: decodeReply Causes Infinite Hang with FormData | ### React version
React 19 Beta
### Description
When calling decodeReply with a raw FormData object (without a zero chunk), it leads to the promise never rejecting, causing the application to hang indefinitely. This issue happens because the getRoot function is called after close(response), which leaves the connection open.
### Steps to Reproduce
Call decodeReply with a raw FormData object.
Ensure no zero chunk is passed.
Observe that the promise never rejects and the application hangs.
### Link to Code Example
missing
### The current behavior
When decodeReply is called with a raw FormData object, the promise remains unresolved, causing the application to hang indefinitely without rejecting.
### The expected behavior
The function should reject the promise appropriately when using FormData, preventing any indefinite hangs. The issue has been resolved by reordering the function calls so that close(response) occurs after getRoot(response). | Status: Unconfirmed,Resolution: Needs More Information | medium | Critical |
2,809,925,557 | flutter | SelectableText should have a selectionColor property | ### Use case
If a Text can have a selectionColor, then a SelectableText should also be able to set this property, controlling the color it displays when selected.
### Proposal
I should be able to set a selection color for SelectableText
```dart
const SelectableText('Selectable Text') // No selectionColor
```

```dart
const SelectionArea(
child: Text(
'Text wrapped in SelectionArea',
selectionColor: Colors.red,
),
)
```
 | c: new feature,framework,f: material design,c: proposal,f: selection,team-framework | low | Minor |
2,809,928,290 | flutter | conductor should run create_updated_flutter_deps.py | ### Use case
When updating the dart_revision in DEPS for a release, there is a comment about running create_updated_flutter_deps.py:
https://github.com/flutter/flutter/blob/c1ffaa9d9deb3e0853176271922e0b1c1356d21f/DEPS#L53-L59
### Proposal
That script should be automatically run when using [conductor](https://github.com/flutter/flutter/tree/master/dev/conductor) to automate this process. | team-release | low | Minor |
2,809,945,248 | PowerToys | Cursor crosshair activate only selected lines, vertical and/or horizontal line | ### Description of the new feature / enhancement
For example, allow using only the vertical line.
### Scenario when this would be used?
I screenshot stock charts as pictures to review and don't need the horizontal line, only the vertical line, to help my eye line up the same timestamp across the price, MACD, etc. panels.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,809,955,566 | vscode | Output filter- support negative and multiple filters | I love the new output filter, and it would be really useful to have negative filter patterns with `!` or multiple comma-separated patterns. Useful enough that I might do a PR if you don't have time and can point me to the code.
For example, I always have Copilot Chat in trace mode, but there are some very noisy logs that I'd like to filter out. | feature-request,output | low | Minor |
2,809,962,178 | godot | Custom Resource.tres not updating values when the Resource.gd script values are changed | ### Tested versions
Godot v4.3.stable - Windows 10.0.26100 - GLES3 (Compatibility)
### System information
Godot v4.3.stable - Windows 10.0.26100 - GLES3 (Compatibility) - Radeon RX 580 Series (Advanced Micro Devices, Inc.; 31.0.21921.1000) - 12th Gen Intel(R) Core(TM) i7-12700K (20 Threads)
### Issue description
When changing the default value of a variable in a custom resource script (.gd), existing .tres resource files display the new default value correctly in the Inspector, but at runtime, the old value is still applied.
Example:
- health_resource.gd { var value: int = 10}
- player_health.tres { displays 10 in the inspector }
- print(player_health} output = 10
Change the value:
- health_resource.gd { var value: int = 20}
- player_health.tres { displays 20 in the inspector }
- print(player_health} output = 10
### Steps to reproduce
1. Create a custom resource script "health_resource.gd".
2. set a variable in health_resource.gd, eg: var value: int = 10
3. Create a resource file "player_health.tres" from health_resource.gd:
- The Inspector correctly displays value = 10.
- At runtime, print(player_health.value) outputs = 10.
4. Modify the script to change the default value.
5. var value: int = 20
6. Observe the changes in the Inspector:
- Open "player_health.tres".
- The Inspector now displays "value = 20".
7. Run the game and print the value:
print(player_health.value)
- Expected output: 20
- Actual output: 10 (old value)
Additional Notes
- The `.tres` file retains the old value even though the "Inspector" shows the updated default.
- Even when explicitly modifying the `.tres` file, the old value is applied at runtime.
- Deleting and recreating the `.tres` file applies the new default correctly.
### Minimal reproduction project (MRP)
Basic starter project. | bug,topic:core,needs testing | low | Minor |
2,809,971,292 | PowerToys | Rotate videos on explorer | ### Description of the new feature / enhancement
Rotate videos on explorer in the same way as we can with photos.
### Scenario when this would be used?
Hi, I'm a content creator, and when I shoot videos with my professional camera, it doesn't automatically rotate the videos; I always have to rotate them manually in a program. It would be great if there was an option to select several videos and simply rotate them in the same way as you can with photos.
Here's my idea of a tool for future implementation. :)
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,809,973,175 | vscode | VS Code APT sources.list: add signed-by field and remove unnecessary architectures |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96.4
- OS Version: Debian sid
Steps to Reproduce:
VS Code's APT sources.list has two problems. Now it is:
```sh
~> cat /etc/apt/sources.list.d/vscode.list
### THIS FILE IS AUTOMATICALLY CONFIGURED ###
# You may comment out this entry, but any other modifications may be lost.
deb [arch=amd64,arm64,armhf] https://packages.microsoft.com/repos/code stable main
```
### Problem 1: Missing `signed-by` field
The recent [APT 2.9.24 update](https://mastodon.social/@juliank/113866133456641922) introduced a [change](https://salsa.debian.org/apt-team/apt/-/commit/61f8f40f921cde13c5b97abbdf900646745e8e30#b3f55b8d9783f2ed27acfd1f0fe06dfc461e2aba_1_6):
> /etc/apt/trusted.gpg is no longer used as a source of signers.
> Sources without Signed-By are fully deprecated, and therefore /etc/apt/trusted.gpg.d is deprecated.
This has led to warnings when running `apt update`, such as:
```
Notice: Missing Signed-By in the sources.list(5) entry for 'https://packages.microsoft.com/repos/edge'
Notice: Missing Signed-By in the sources.list(5) entry for 'https://packages.microsoft.com/repos/code'
Notice: Consider migrating all sources.list(5) entries to the deb822 .sources format
Notice: The deb822 .sources format supports both embedded as well as external OpenPGP keys
Notice: See apt-secure(7) for best practices in configuring repository signing.
```
---
### Problem 2: Unnecessary architectures
The current configuration includes three architectures: `amd64`, `arm64`, and `armhf`. This results in downloading unnecessary index files for architectures that are not in use, wasting bandwidth and server resources. Should these be removed?
---
### Solution
According to the [Debian Wiki](https://wiki.debian.org/DebianRepository/UseThirdParty), the `microsoft.gpg` key should be moved from `/etc/apt/trusted.gpg.d` to `/usr/share/keyrings`, and then you can choose one of the following two solutions:
1. Update `/etc/apt/sources.list.d/vscode.list` to the following:
```
deb [signed-by=/usr/share/keyrings/microsoft.gpg] https://packages.microsoft.com/repos/code stable main
```
2. Completely remove `/etc/apt/sources.list.d/vscode.list` and switch to the DEB822 format. For more information on the DEB822 format, refer to `source.list(5)`. The DEB822 format also allows embedding the public key within the source list file. | debt,linux,deb,packaging | low | Critical |
2,810,044,624 | go | x/tools/gopls: crashes due to apparent data race(s) | While we've found and fixed many gopls crashes thanks to telemetry, as we gain more users it appears we're accumulating a long tail of issues that "can't happen" based on easily verifiable local invariants.
In some cases, these were related to misreporting of stacks by the runtime (#70637), but others cannot be explained by a misreported stack, and indicate some form of memory corruption or (most likely) data race.
This issue is an umbrella for those bugs. @adonovan and I will consolidate them here. | gopls,Tools,gopls/telemetry-wins,BugReport | low | Critical |
2,810,046,789 | react-native | [iOS] a problem with codegen? | ### Description
Hey folks, will you please have a brief look at the issue I've created in `creact-react-native-library` repo: https://github.com/callstack/react-native-builder-bob/issues/755 (not sure whether that issue belongs there, or to the RN repo, as it seems to be caused by Codegen outputs).
Essentially, I am trying to upgrade my fork of `react-native-fs` to [email protected], on the way resetting the library scaffolding to the one generated by the latest version of the `create-react-native-library` generator. The upgraded code is here: https://github.com/birdofpreyru/react-native-fs/tree/dev-v2.31. I'm trying to build its Example App then, and it fails at the linking step, due to a clash between the same-named objects (like `RCTAppDependencyProvider`) present both in files generated by the codegen for the library itself, and for smth else.
I have confirmed that builds passes fine and the app works, if I just comment out the content of involved `.mm` files generated under `ios/generated` folder (relative to the repo's root); but I have no idea what is the root cause of the problem? Is it a bug inside the Codegen generating for the library some files it is not supposed to generate for it? Is it some mistake in my lib's configuration causing that Codegen behavior? Have I messed-up any other config in the project? Is it a problem with the project template used by `creact-react-native-library` (same problem happens with the new Turbo Module lib project generated by `create-react-native-library`, without any other modifications)?
I guess, it is very easy issue to narrow down for anybody who knows how exactly Codegen is supposed to behave here :) (thus, not for me 😅 )
### Steps to reproduce
N/A
### React Native Version
0.77.0
### Affected Platforms
Build - MacOS
### Areas
Codegen
### Output of `npx react-native info`
```text
N/A
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
N/A
### Screenshots and Videos
_No response_ | Platform: iOS,Resolution: PR Submitted,Type: New Architecture | low | Critical |
2,810,049,937 | PowerToys | Unable to copy and paste images between windows machines | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
copy image from one machine, attempt to paste on another
### ✔️ Expected Behavior
image pasted on second computer
### ❌ Actual Behavior
nothing happens
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,810,062,119 | rust | Tracking issue for release notes of #134283: fix(libtest): Deprecate '--logfile' |
This issue tracks the release notes text for #134283.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [x] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Libraries
- [Deprecate libtest's '--logfile' option](https://github.com/rust-lang/rust/pull/134283)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @epage, @Amanieu -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,A-libtest,I-release-nominated,relnotes-tracking-issue | low | Minor |
2,810,062,256 | rust | Port tests to use the intrinsic macro | > `tests/ui/simd` has a bunch of test cases that need porting, I would suggest those be ported in a separate PR before we do the final big PR that removes support for this ABI from the compiler entirely.
_Originally posted by @RalfJung in [#132735](https://github.com/rust-lang/rust/issues/132735#issuecomment-2609964087)_
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"vayunbiyani"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | C-cleanup | low | Minor |
2,810,083,366 | go | cmd/link/internal: panic in loader.(*SymbolBuilder).SetBytesAt | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/gopls/internal/test/integration/template" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8724903366910269521)):
FAIL golang.org/x/tools/gopls/internal/test/integration/template [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,Tools,compiler/runtime | low | Critical |
2,810,094,105 | yt-dlp | [TikTok] 403 Forbidden on all friends-only videos | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
(I censored the username and video id from the output log)
I suspect TikTok has changed something in their API, as both yt-dlp and a separate TikTok scraper I've been using refuse to download friends-only videos now.
It doesn't work on other friends-only videos, whether from the same user or from another user.
It doesn't work on videos that used to work before.
I tried using a VPN in case of a region lock; however, it also doesn't work (and the video plays normally from the app, so it's not an IP issue).
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.tiktok.com/@ju********lc/video/7463************414', '--cookies', 'cookies.txt']
[debug] Encodings: locale cp1250, fs utf-8, pref cp1250, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [de82acf87] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.14393-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1844 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[TikTok] Extracting URL: https://www.tiktok.com/@ju********lc/video/7463************414
[TikTok] 7463************414: Downloading webpage
[debug] [TikTok] Found universal data for rehydration
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: best/bestvideo+bestaudio
[info] 7463************414: Downloading 1 format(s): h264_540p_1859833-1
[debug] Invoking http downloader on "https://v16-web-newkey.tiktokcdn.com/caaaed6374c9c9c98f138265d65ccf37/6796785c/tos-no1a-ve-0068-no/ooIJGezrIIWIRcpxKAPALDOOgej1AT0AZLeNqz?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&br=3632&bt=1816&cs=0&ds=6&ft=bL4kamaRPD12NHyZtE-UxQv5SY3W3wv25xcAp&mime_type=video_mp4&qs=0&rc=cnF8b2hsc2d3SkBwaHIxaDFybndmOTc1aWlpZWg2ZWU1Z2c5O0BpM2ZkO3Y5cjh0eDMzbzczNUBjRl5Nc3FePmJKYSNvYF90aHFmOiNfNGIuNV9gNi8xLmItMTNfYSM1bnFtMmRjZjZgLS1kMTFzcw%3D%3D&l=20250124180044FD1A996A24987634921B&btag=2000b8000&ply_type=3&policy=eyJ2bSI6MywidWlkIjoiNjg2ODY3NzA5MzI5MjYzOTIzNyJ9"
ERROR: unable to download video data: HTTP Error 403: Forbidden
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 3492, in process_info
File "yt_dlp\YoutubeDL.py", line 3212, in dl
File "yt_dlp\downloader\common.py", line 464, in download
File "yt_dlp\downloader\http.py", line 367, in real_download
File "yt_dlp\downloader\http.py", line 118, in establish_connection
File "yt_dlp\YoutubeDL.py", line 4175, in urlopen
File "yt_dlp\networking\common.py", line 117, in send
File "yt_dlp\networking\_helper.py", line 208, in wrapper
File "yt_dlp\networking\common.py", line 340, in send
File "yt_dlp\networking\_requests.py", line 365, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
``` | account-needed,site-bug,triage | low | Critical |
2,810,098,888 | flutter | [Impeller] entitypass clip stack should pop the appropriate number of clips on restore. | Currently when we pop a save stack, we only remove a single clip from the clip record-replay system. This was the cause of the issue worked around in https://github.com/flutter/flutter/pull/162113 .
We should already be tracking clip depth with enough granularity to know how many entries to pop. Low priority as the workaround is fairly straightforward. | P3,e: impeller,team-engine,triaged-engine | low | Minor |
2,810,101,120 | flutter | Screen flickering on iOS and iPhone | ### Steps to reproduce
This was reported by a priority partner.
As described by the partner: "Since end of last year, the number of occurrences of a “screen flickering” issue of our iOS testers increased significantly. This problem is not reliably reproducible"
We have seen this issue on Flutter 3.19.5 and 3.22.3.
Devices that reported the mentioned issues:
iPhone 15 Pro, iOS 18.2.1
iPhone 14 Pro Max, iOS 17.4
iPhone 13 mini, iOS 18.2
iPhone 13, iOS 18.2.1
This issue seems to be independent of the configuration of our app (brand, environment) but as far as we know only happens in Release builds (as mentioned in the linked GH issues)
This problem also happens with the official AppStore releases.
Even our testers are not able to reliably reproduce the issue.
Potential related issues:
https://github.com/flutter/flutter/issues/159184
https://github.com/flutter/flutter/issues/159061
### Expected results
No screen flickering.
### Actual results
Flickering. We have received videos of the behavior, which are shared internally.
### Code sample
N/A
### Screenshots or Video
Videos have been shared privately internally.
### Logs
N/A
### Flutter Doctor output
Will ask the partner. | platform-ios,engine,customer: vroom,c: rendering,a: production,P1,e: impeller,team-engine,triaged-engine | medium | Minor |
2,810,101,759 | vscode | Support setting user.name/user.email in the config for the built-in git extension, per user-profile | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Like many of us, I commit to both personal repositories and to work repositories. However, these are in two different "personas", which need two different names/email addresses. Currently, I've set up profiles in VSCode to reflect this, but I haven't found a way to change the settings in git to reflect this.
- It _can_ be manually configured for each repository via `git config user.email ....`. Unfortunately, these are all in devcontainers, so not only does it have to be done for each one, but it has to be re-done each time the container is rebuilt.
- There are ways to conditionally configure git (see the sketch after this list)... unfortunately this has to be handled through separate config files, which the devcontainer extension doesn't copy (although I've submitted an issue with them)
- It can be configured with environment variables; `GIT_AUTHOR_EMAIL`/`GIT_COMMITTER_EMAIL`/etc. Unfortunately, the only way I've found to do this per-profile so far is the `terminal.integrated.env.<os>`/`terminal.integrated.profiles.<os>.<shell>.env` settings, which the built-in git extension (of course?) ignores.
- Currently if I try to use this it means that commits between the UI and the CLI have mismatching authors/emails! Fun times.
- Setting this as an environment variable on the devcontainer isn't an option, for obvious reasons.
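For reference, the conditional configuration referred to above is git's `includeIf` mechanism; a minimal sketch (paths and file names are hypothetical) looks like the following, and the included file is exactly the kind of separate config file that the devcontainer extension doesn't copy:

```ini
# ~/.gitconfig: pull in the work identity only inside work checkouts
[includeIf "gitdir:~/work/"]
    path = ~/.gitconfig-work

# ~/.gitconfig-work (the separate file the devcontainer extension doesn't copy)
[user]
    name = Work Name
    email = work.name@example.com
```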
So, I'd prefer some way to be able to set this per-profile to support working in different personas this way. | feature-request,git | low | Minor |
2,810,110,789 | transformers | Mllama training via FSDP device and dtype misassignment | ### System Info
- `transformers` version: 4.48.1
- Platform: Linux-5.15.0-1073-azure-x86_64-with-glibc2.35
- Python version: 3.11.0rc1
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Yes
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-40GB
### Who can help?
@amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Shell command used to launch FSDP:
```bash
accelerate launch --config_file "configs/a100_config.yaml" train.py \
--seed 100 \
--model_name_or_path "/path/to/model" \
--dataset_path "/path/to/dataset" \
--add_special_tokens False \
--append_concat_token False \
--max_seq_len 1024 \
--num_train_epochs 15 \
--logging_steps 50 \
--log_level "info" \
--logging_strategy "steps" \
--evaluation_strategy "epoch" \
--save_strategy "epoch" \
--bf16 True\
--fp16 False \
--packing False \
--learning_rate 7e-5 \
--lr_scheduler_type "linear" \
--weight_decay 0.0 \
--warmup_ratio 0.0 \
--max_grad_norm 1.0 \
--output_dir "/path/to/output" \
--per_device_train_batch_size 1 \
--gradient_checkpointing True \
--use_reentrant True \
--dataset_text_field "content" \
--use_flash_attn False \
--use_peft_lora False \
--report_to "none"
```
`train.py` uses a standard `SFTTrainer()` trainer object called via `trainer.train()` after loading the processor and model, nothing fancy there.
```python
def main(model_args, data_args, training_args):
... # dataset, model, processor initialization
sft_training_args = SFTConfig(
output_dir=training_args.output_dir,
gradient_checkpointing=training_args.gradient_checkpointing,
bf16=training_args.bf16,
remove_unused_columns=False,
report_to=training_args.report_to,
num_train_epochs=training_args.num_train_epochs,
logging_steps=training_args.logging_steps,
evaluation_strategy = training_args.evaluation_strategy,
save_strategy = training_args.save_strategy,
max_seq_length=data_args.max_seq_length
)
trainer = SFTTrainer(
model=model,
peft_config=peft_config,
args=sft_training_args,
data_collator=collate_fn,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
processing_class=processor.tokenizer,
)
trainer.train()
```
The FSDP config `a100_config.yaml` is as follows:
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: true
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: 'bf16'
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Note that we can specify `mixed_precision: 'no'` in the config and still get the same error.
### Expected behavior
FSDP (with offloaded parameters) training halts during the first forward pass. The resulting error,
```
[rank4]: File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 549, in _conv_forward
[rank4]: return F.conv2d(
[rank4]: ^^^^^^^^^
[rank4]: RuntimeError: Input type (torch.FloatTensor) and weight type (CUDABFloat16Type) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
```
is caused by `modeling_mllama.py` line 1541,
```python
patch_embeds = self.patch_embedding(pixel_values.to(self.dtype).to(self.device))
```
AFAIK we cannot generally use `.to()` assignments referencing the model's device or datatype at the start of the forward pass when training via FSDP with offloaded params, as the parameters may not yet be on the device, or in the dtype, that is used during the forward pass.
The simplest solution here is to simply omit both assignments: if we change line 1541 to
```python
patch_embeds = self.patch_embedding(pixel_values)
```
the problem is eliminated. One would need to change the documentation for Llama 3.2 Vision, as inference involves sending processed inputs to the model's device but not dtype. For example, the official inference code snippet [here](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision)
```python
inputs = processor(image, prompt, return_tensors="pt").to(model.device)
```
needs to become
```python
inputs = processor(image, prompt, return_tensors="pt").to(model.dtype).to(model.device)
```
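In the meantime, a possible user-side mitigation for the dtype half of the mismatch (a sketch only, untested with parameter offloading, and assuming the collator emits a `pixel_values` key) is to cast the vision inputs inside the data collator instead of relying on the `.to(self.dtype)` call in the forward pass:

```python
import torch

def cast_pixel_values_to_bf16(collate_fn):
    """Wrap an existing collator so pixel_values arrive already in bf16.

    Hypothetical helper, not part of the proposed transformers fix;
    `collate_fn` is the same collator passed to SFTTrainer above.
    """
    def wrapped(examples):
        batch = collate_fn(examples)
        if "pixel_values" in batch:
            batch["pixel_values"] = batch["pixel_values"].to(torch.bfloat16)
        return batch
    return wrapped

# e.g. data_collator=cast_pixel_values_to_bf16(collate_fn) in the SFTTrainer call above
```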
If this is an acceptable approach I would be happy to put in the PR, just let me know!
| bug | low | Critical |
2,810,112,050 | terminal | After trying out Quake mode, running `wt -w 0 sp -s 0.4 pwsh` from any non-quake mode WT terminal always opens a new terminal window | ### Windows Terminal version
1.22.3232.0
### Windows build number
10.0.26100.0
### Other Software
_No response_
### Steps to reproduce
1. Run `wt --window _quake` to try out the Quake mode
2. Close all Quake mode windows, then open a new regular WT window and run `wt -w 0 sp -s 0.4 pwsh` within it
### Expected Behavior
The command `wt -w 0 sp -s 0.4 pwsh` should split the current window and run PowerShell in the new pane
### Actual Behavior
After trying out Quake mode, running `wt -w 0 sp -s 0.4 pwsh` from WT (a regular, non-Quake-mode window) always starts a new window and performs the split there, instead of splitting the current window. | Issue-Bug,Needs-Tag-Fix | low | Major |
2,810,113,565 | pytorch | Enable `sm_89` support for relevant ops in PyTorch | Please add sm_89 to the list of target architectures for the stable, nightly, and Docker images. While I’ve seen references indicating that sm_89 might not need explicit builds due to binary compatibility with sm_86 and sm_80, that compatibility does not hold for FP8-related features on sm_89.
For more details, see this comment: https://github.com/pytorch/ao/issues/1057#issuecomment-2613095265.
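For anyone triaging, a quick way to confirm which architectures a given binary actually ships kernels for (a diagnostic sketch, not from the original report):

```python
import torch

# Architectures the installed wheel was built for, e.g. ['sm_80', 'sm_86', 'sm_90', ...];
# if sm_89 is absent, Ada-specific (e.g. FP8) kernels are not bundled, per the linked discussion.
print(torch.cuda.get_arch_list())

# Compute capability of the local GPU; Ada-generation cards report (8, 9).
print(torch.cuda.get_device_capability(0))
```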
Thanks in advance!
cc: @alexsamardzic
cc @malfet @seemethere @ptrblck @msaroufim @eqy | module: build,module: cuda,triaged,module: m1 | low | Major |
2,810,119,273 | vscode | PreparedToolInvocation.pasteTenseMessage should take a callback | A find text tool should be able to say "Found 123 results" or something like that as the complete tool message | chat-tools | low | Minor |
2,810,124,917 | flutter | [skwasm] RuntimeError: illegal cast in `layers.dart` | ### Steps to reproduce
I've recently switched to the wasm build of DevTools and am occasionally encountering an issue where it fails to render (I need to reload the page to get it to render). I see an error in the Chrome console which looks like an issue in the engine, but the stack trace locations don't correspond to the current engine source locations (screenshot attached).

For reference, my Flutter version:
```
Flutter 3.29.0-1.0.pre.2 • channel [user-branch] • unknown source
Framework • revision 5517cc9b3b (9 days ago) • 2025-01-15 22:17:29 +0100
Engine • revision 5517cc9b3b
Tools • Dart 3.7.0 (build 3.7.0-323.0.dev) • DevTools 2.42.0
```
### Expected results
DevTools should render with wasm build
### Actual results
DevTools occasionally fails to render
### Code sample
No reliable repro
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
[!] Flutter (Channel [user-branch], 3.29.0-1.0.pre.2, on macOS 15.2 24C101 darwin-arm64, locale en) [190ms]
! Flutter version 3.29.0-1.0.pre.2 on channel [user-branch] at /Users/elliottbrooks/dev/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/setup.
! Unknown upstream repository.
Reinstall Flutter by following instructions at https://flutter.dev/setup.
• Framework revision 5517cc9b3b (9 days ago), 2025-01-15 22:17:29 +0100
• Engine revision 5517cc9b3b
• Dart version 3.7.0 (build 3.7.0-323.0.dev)
• DevTools version 2.42.0
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update
checks and upgrades.
| waiting for customer response,in triage | low | Critical |
2,810,127,849 | kubernetes | CPU starvation on worker nodes caused by the Kubelet not setting cpu.cfs_quota_us in the kubepods.slice cgroup. | Any idea why the kubelet isn't setting a value for cpu.cfs_quota_us on the parent cgroup "kubepods.slice", leaving it at the default of -1? This is leading to CPU starvation on the node: burstable pods end up consuming 100% of the CPU despite CPU reservations being configured via the kubelet's kubeReserved and systemReserved, as shown below. These reservations aren't being enforced because the parent cgroup has no CPU quota set, so pods consume 100% of the CPU and nothing is reserved for system processes or the kubelet.
**Kubelet config:**
```yaml
kubeReserved:
  cpu: "2000m"
systemReserved:
  cpu: "2000m"
```
**cgroup "kubepods.slice" CPU quota setting:**
```shell
$ cat /sys/fs/cgroup/cpu/kubepods.slice/cpu.cfs_quota_us
-1
``` | sig/node,needs-triage | low | Minor |
2,810,135,882 | storybook | [Bug]: runtime and iframe 404 on first load | ### Describe the bug
I am trying to upgrade from Storybook 8.3.5 to 8.5.1. When I upgrade and run Storybook, it opens the browser and stays on the loading spinner with the errors shown in the screenshot. It corrects itself on refresh. I have tried other suggestions from past issues, like `vite --force` and tracking down and removing caches, but those do not resolve the issue. This happens for all versions higher than 8.3.5 for me.
<img width="1269" alt="Image" src="https://github.com/user-attachments/assets/53df2de7-f1bb-4160-99f8-97081808ac08" />
### Reproduction link
no
### Reproduction steps
1. upgrade to any storybook version higher than 8.3.5
2. run storybook dev
3. See the loading spinner, with no errors other than those shown in the console.
### System
```bash
Storybook Environment Info:
System:
OS: macOS 14.7
CPU: (12) arm64 Apple M3 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 23.5.0 - /opt/homebrew/bin/node
npm: 10.9.2 - /opt/homebrew/bin/npm <----- active
pnpm: 9.12.1 - /opt/homebrew/bin/pnpm
Browsers:
Chrome: 129.0.6668.90
Safari: 17.6
npmPackages:
@storybook/addon-a11y: 8.4.7 => 8.4.7
@storybook/addon-essentials: 8.4.7 => 8.4.7
@storybook/addon-interactions: 8.4.7 => 8.4.7
@storybook/addon-links: 8.4.7 => 8.4.7
@storybook/addon-onboarding: 8.4.7 => 8.4.7
@storybook/blocks: 8.4.7 => 8.4.7
@storybook/builder-vite: 8.4.7 => 8.4.7
@storybook/react: 8.4.7 => 8.4.7
@storybook/react-vite: 8.4.7 => 8.4.7
@storybook/test: 8.4.7 => 8.4.7
@storybook/test-runner: 0.19.1 => 0.19.1
chromatic: 11.10.4 => 11.10.4
storybook: 8.4.7 => 8.4.7
storybook-addon-tailwind-autodocs: 1.0.8 => 1.0.8
storybook-react-i18next: 3.1.7 => 3.1.7
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,810,148,003 | node | Mark test-without-async-context-frame flaky on Windows | ### Test
test-without-async-context-frame
### Platform
Windows x64
### Console output
```console
duration_ms: 31534.251
exitcode: 1
severity: fail
stack: "\u25B6 without AsyncContextFrame\n \u2714 async-hooks\\test-async-local-storage-args.js\
\ (4219.419ms)\n \u2716 async-hooks\\test-async-local-storage-async-await.js (1885.5393ms)\n\
\ AssertionError [ERR_ASSERTION]: Test async-hooks\\test-async-local-storage-async-await.js\
\ failed with exit code 3221225477\n\n 3221225477 !== 0\n\n at TestContext.<anonymous>\
\ (file:///C:/workspace/node-test-binary-windows-js-suites/node/test/parallel/test-without-async-context-frame.mjs:59:7)\n\
\ at process.processTicksAndRejections (node:internal/process/task_queues:105:5)\n\
\ at async Test.run (node:internal/test_runner/test:981:9)\n at async\
\ Suite.processPendingSubtests (node:internal/test_runner/test:678:7) {\n generatedMessage:\
\ false,\n code: 'ERR_ASSERTION',\n actual: 3221225477,\n expected:\
\ 0,\n operator: 'strictEqual'\n }\n\n \u2714 async-hooks\\test-async-local-storage-async-functions.js\
\ (1434.8943ms)\n \u2714 async-hooks\\test-async-local-storage-dgram.js (1082.7731ms)\n\
\ \u2714 async-hooks\\test-async-local-storage-enable-disable.js (1100.3309ms)\n\
\ \u2714 async-hooks\\test-async-local-storage-enter-with.js (1097.935ms)\n \u2714\
\ async-hooks\\test-async-local-storage-errors.js (1042.0805ms)\n \u2714 async-hooks\\\
test-async-local-storage-gcable.js (1169.3682ms)\n \u2714 async-hooks\\test-async-local-storage-http-agent.js\
\ (1185.2085ms)\n \u2714 async-hooks\\test-async-local-storage-http.js (1189.2949ms)\n\
\ \u2714 async-hooks\\test-async-local-storage-misc-stores.js (1108.8861ms)\n \
\ \u2714 async-hooks\\test-async-local-storage-nested.js (1116.2834ms)\n \u2714\
\ async-hooks\\test-async-local-storage-no-mix-contexts.js (1496.0037ms)\n \u2714\
\ async-hooks\\test-async-local-storage-promises.js (1108.314ms)\n \u2714 async-hooks\\\
test-async-local-storage-socket.js (1192.8815ms)\n \u2714 async-hooks\\test-async-local-storage-thenable.js\
\ (1088.5363ms)\n \u2714 async-hooks\\test-async-local-storage-tlssocket.js (1322.2423ms)\n\
\ \u2714 parallel\\test-async-local-storage-bind.js (1154.576ms)\n \u2714 parallel\\\
test-async-local-storage-contexts.js (1169.0852ms)\n \u2714 parallel\\test-async-local-storage-deep-stack.js\
\ (1103.2735ms)\n \u2714 parallel\\test-async-local-storage-exit-does-not-leak.js\
\ (1084.1799ms)\n \u2714 parallel\\test-async-local-storage-http-multiclients.js\
\ (1451.2025ms)\n \u2714 parallel\\test-async-local-storage-snapshot.js (1146.4508ms)\n\
\u2716 without AsyncContextFrame (30962.8639ms)\n\u2139 tests 23\n\u2139 suites\
\ 1\n\u2139 pass 22\n\u2139 fail 1\n\u2139 cancelled 0\n\u2139 skipped 0\n\u2139\
\ todo 0\n\u2139 duration_ms 30977.1768\n\n\u2716 failing tests:\n\ntest at test\\\
parallel\\test-without-async-context-frame.mjs:49:5\n\u2716 async-hooks\\test-async-local-storage-async-await.js\
\ (1885.5393ms)\n AssertionError [ERR_ASSERTION]: Test async-hooks\\test-async-local-storage-async-await.js\
\ failed with exit code 3221225477\n\n 3221225477 !== 0\n\n at TestContext.<anonymous>\
\ (file:///C:/workspace/node-test-binary-windows-js-suites/node/test/parallel/test-without-async-context-frame.mjs:59:7)\n\
\ at process.processTicksAndRejections (node:internal/process/task_queues:105:5)\n\
\ at async Test.run (node:internal/test_runner/test:981:9)\n at async\
\ Suite.processPendingSubtests (node:internal/test_runner/test:678:7) {\n generatedMessage:\
\ false,\n code: 'ERR_ASSERTION',\n actual: 3221225477,\n expected: 0,\n\
\ operator: 'strictEqual'\n }"
```
### Build links
- https://ci.nodejs.org/job/node-test-binary-windows-js-suites/32254/RUN_SUBSET=0,nodes=win2019-COMPILED_BY-vs2022/testReport/(root)/parallel/test_without_async_context_frame/
### Additional information
_No response_ | flaky-test,async_local_storage | low | Critical |
2,810,150,561 | node | A way to forget a variable in Node.js REPL? | ### What is the problem this feature will solve?
It's just a tiny improvement for the REPL.
Take the following sample of REPL usage when a typo happened.
```sh
> const { DatabaseSync } = require('node:sqliite');
Uncaught Error [ERR_UNKNOWN_BUILTIN_MODULE]: No such built-in module: node:sqliite
at Function._load (node:internal/modules/cjs/loader:1068:13)
at TracingChannel.traceSync (node:diagnostics_channel:315:14)
at wrapModuleLoad (node:internal/modules/cjs/loader:218:24)
at Module.require (node:internal/modules/cjs/loader:1340:12)
at require (node:internal/modules/helpers:141:16) {
code: 'ERR_UNKNOWN_BUILTIN_MODULE'
}
> const { DatabaseSync } = require('node:sqlite');
Uncaught SyntaxError: Identifier 'DatabaseSync' has already been declared
```
Note that due to a typo, I can't use require again to import `DatabaseSync`. Instead, I usually close and reopen the REPL.
### What is the feature you are proposing to solve the problem?
Implementing a way to forget a variable would solve it.
The Erlang programming language doesn't allow reassignment, so a situation like the one described above might occur. To solve this, they have a utility function for forgetting a variable.
In the example below we can reuse `Message` after calling `f(Message)`.

NOTE: That would be good. However, I don't know of a way to do this in the current REPL. Please let me know if there's one.
### What alternatives have you considered?
Using `let` when importing in REPL :) | feature request | low | Critical |
2,810,167,295 | puppeteer | [Bug]: remove deprecated clickCount attribute | ### Minimal, reproducible example
```TypeScript
import puppeteer from 'puppeteer';
const puppeteerOptions = {
headless: false,
protocol: 'webDriverBiDi',
}
const browser = await puppeteer.launch(puppeteerOptions);
const page = await browser.newPage();
await page.goto('data:text/html,<label id=lbl ondblclick="output.innerText += \'dblclick\'">click me</label><div id="output"></div>')
await page.click('#lbl', { clickCount: 2 });
console.log(await page.evaluate(() => output.innerText));
await browser.close();
```
### Background
I've been trying to get vue's tests running using WebDriver BiDi
### Expectation
The code above should print "dblclick"
### Reality
It prints nothing but does print "dblclick" if I'm not using `protocol: 'webDriverBiDi',`
### Puppeteer configuration file (if used)
```TypeScript
```
### Puppeteer version
24.1.1
### Node version
23.5.0
### Package manager
yarn
### Package manager version
1.22.22
### Operating system
macOS | bug,confirmed,P3 | low | Critical |
2,810,194,256 | vscode | Paste is extremely laggy |
Type: <b>Bug</b>
Paste is extremely laggy. Fix it.
Please add a test; it's one of the most used operations.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz (12 x 3600)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.59GB (36.26GB free)|
|Process Argv|--crash-reporter-id c7111d2b-8353-4f60-bf14-9bf1bbea5cf8|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (16)</summary>
Extension|Author (truncated)|Version
---|---|---
Bookmarks|ale|13.5.0
markdown-mermaid|bie|1.27.0
vscode-eslint|dba|3.0.10
vscode-playwright-test-snippets|dee|0.0.3
xml|Dot|2.5.1
gitlens|eam|16.2.1
code-runner|for|0.12.2
copilot|Git|1.259.0
copilot-chat|Git|0.23.2
prettify-json|moh|0.0.3
powershell|ms-|2025.0.0
vsliveshare|ms-|1.0.5948
vscode-yaml|red|1.15.0
vscode-playwright-test-runner|sak|1.4.1
code-spell-checker|str|4.0.34
markdown-all-in-one|yzh|3.6.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaat:30438848
c4g48928:30535728
azure-dev_surveyonecf:30548226
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
hdaa2157:31222309
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,810,228,151 | neovim | Allow non-ASCII `iskeyword` values | ### Problem
The `iskeyword` option currently accepts only ASCII characters, but languages like Agda and Pie use non-ASCII characters in identifier names.
### Expected behavior
Allow non-ASCII `iskeyword` values. | needs:vim-patch | low | Minor |
2,810,236,415 | godot | Unable to reload project with save after changing resource script | ### Tested versions
4.3 stable, 4.4 beta1
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 Threads)
### Issue description
After changing the custom resource's script, it is impossible to reload the project while saving it: the pop-up selection window keeps appearing even after making a selection.
https://github.com/user-attachments/assets/5c17a701-a523-4b5a-9619-1874d1f71f2b
### Steps to reproduce
1. Open MRP
2. Open the resource `new_resource.tres` and open its script
3. Edit the resource script
4. Reload the project using `Project->Reload Current Project`
5. The pop-up window will appear again.
### Minimal reproduction project (MRP)
[bug.zip](https://github.com/user-attachments/files/18540803/bug.zip) | bug,topic:editor | low | Critical |
2,810,246,289 | PowerToys | Power Toys conficts with PowerPoint or Adobe Creative Cloud in Win11? | ### Microsoft PowerToys version
0.87.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
[2025-01-24.txt](https://github.com/user-attachments/files/18540826/2025-01-24.txt)
### ✔️ Expected Behavior
I was using ordinary apps in Windows and didn't anticipate the glitch or conflict with Power Toys.
### ❌ Actual Behavior
Displays blacked out and flickered before restoring without the previous monitor settings. PowerToys advised me to upload the log. There was also an error message from Adobe Cloud.
### Other Software
I was using PowerPoint. I previously had Adobe Illustrator open but had closed it. I was creating an image.
| Issue-Bug,Needs-Triage | low | Critical |
2,810,250,341 | transformers | Request to add Co-DETR | ### Model description
> A collaborative hybrid assignments training scheme, namely **Co-DETR**, learns more efficient and effective DETR-based detectors from versatile label assignment manners. This new training scheme can easily enhance the encoder’s learning ability in end-to-end detectors by training the multiple parallel auxiliary heads supervised by one-to-many label assignments such as ATSS and Faster RCNN. In addition, we conduct extra customized positive queries by extracting the positive coordinates from these auxiliary heads to improve the training efficiency of positive samples in the decoder. In inference, these auxiliary heads are discarded and thus our method introduces no additional parameters and computational cost to the original detector while requiring no hand-crafted non-maximum suppression (NMS).
Quote from their paper: https://arxiv.org/pdf/2211.12860
SotA on **Object Detection on COCO test-dev**: https://paperswithcode.com/sota/object-detection-on-coco
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Repository & Weight Links: https://github.com/Sense-X/Co-DETR
MMDet Implementation: https://github.com/open-mmlab/mmdetection/tree/main/projects/CO-DETR
Author: Zhuofan Zong https://github.com/TempleX98 | New model | low | Minor |
2,810,256,425 | flutter | Android tests failing to build because ninja cannot be found |
Linux_android_emu flutter_driver_android_test
```
Execution failed for task ':app:configureCMakeDebug[arm64-v8a]'.
> [CXX1416] Could not find Ninja on PATH or in SDK CMake bin folders.
```
Examples:
https://ci.chromium.org/ui/p/flutter/builders/staging/Linux_android_emu%20flutter_driver_android_test/2724/overview
https://ci.chromium.org/ui/p/flutter/builders/staging/Linux_android_emu%20flutter_driver_android_test/2734/overview | P2,team: presubmit flakes,c: flake,team-android,triaged-android,fyi-infra | low | Critical |
2,810,262,863 | flutter | Investigate spotlight running on chromium macOS VMs | This may be causing https://github.com/flutter/flutter/issues/157636 | team-infra | low | Minor |
2,810,272,546 | kubernetes | [CLE] Randomize lease candidate ping order and parallelize | In the leader election controller: https://github.com/kubernetes/kubernetes/blob/master/pkg/controlplane/controller/leaderelection/leaderelection_controller.go#L267-L293, candidates are iterated through sequentially and `LeaseCandidates.Update()` is a blocking operation that is also called sequentially. This can lead to cases where certain candidates do not have enough time to respond. We should parallelize this entire operation and iterate through the list in a random order to ensure every candidate has a fair chance of responding.
/cc @Henrywu573
/triage accepted
/sig api-machinery
| sig/api-machinery,triage/accepted | low | Minor |
2,810,289,662 | next.js | attempting to use import.meta.resolve results in an error | ### Link to the code that reproduces this issue
https://github.com/souporserious/next-import-meta-resolve-bug
### To Reproduce
Attempt to use `import.meta.resolve('./any-path')` in a Server or Client Component
### Current vs. Expected behavior
`import.meta.resolve('./any-path')` should work as expected with Node.js `20.8.0` and above; currently this error is thrown when attempting to use it:
<img width="950" alt="Image" src="https://github.com/user-attachments/assets/d523b3ba-8d05-4d4e-b748-2c3e29a18c06" />
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:01:59 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 21.2.0
npm: 10.8.1
Yarn: 1.22.21
pnpm: 9.9.0
Relevant Packages:
next: 15.1.6 // Latest available version is detected (15.1.6).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Webpack, Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_ | Webpack,Turbopack | low | Critical |
2,810,302,725 | go | x/text/message: scientific format is incorrect | ### Go version
N/A
### Output of `go env` in your module/workspace:
```shell
N/A
```
### What did you do?
Formatting a float in scientific notation results in extra spaces around the superscriptingExponent. ICU4C does not produce these spaces when using `unicode/scientificnumberformatter.h`, and current CLDR standards do not suggest them either. See [cel-go](https://github.com/google/cel-go/blob/2f7606a838a90cfabca5146a89365a9c074197a1/ext/formatting.go#L336) for the Go code.
### What did you see happen?
1.052033 × 10⁰³
Note the extraneous `NARROW NO-BREAK SPACE` around the exponent separator.
### What did you expect to see?
1.052033×10⁰³
Note the removal of the extraneous `NARROW NO-BREAK SPACE` around the exponent separator. | NeedsInvestigation | low | Minor |
2,810,307,411 | tensorflow | TF 2.18 with GPU does not detect GPU, Cannot dlopen some GPU libraries, in a container | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18
### Custom code
Yes
### OS platform and distribution
Linux Centos 7.9, RHEL 8, RHEL 9
### Mobile device
_No response_
### Python version
3.11.0rc1
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
550.90.07
### GPU model and memory
_No response_
### Current behavior?
After discussing this on the [Apptainer Git](https://github.com/apptainer/apptainer/issues/2706#issuecomment-2613253071) we determined the latest TF-GPU running 2.18.0 does not register any GPUs. Older versions like 2.7.1-gpu work just fine.
`apptainer run --nv /apps/Miniforge/lib/python3.12/site-packages/containers/tensorflow/tensorflow/latest-gpu/tensorflow-tensorflow-latest-gpu-sha256\:1f16fbd9be8bb84891de12533e332bbd500511caeb5cf4db501dbe39d422f9c7.sif python`
```
import tensorflow as tf
2025-01-24 15:03:27.629215: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1737749008.639844 35316 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1737749008.847756 35316 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-24 15:03:31.499335: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
```
```
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
W0000 00:00:1737749068.599039 35316 gpu_device.cc:2344] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
Num GPUs Available: 0
```
```
>>> print(tf.__version__)
2.18.0
```
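One additional diagnostic that may help narrow down which libraries fail to dlopen (not part of the original report): printing the CUDA/cuDNN versions the wheel was built against, for comparison with what `--nv` bind-mounts in from the host.

```python
import tensorflow as tf

# Versions this wheel was compiled against; compare with the driver and
# libraries that apptainer's --nv flag bind-mounts into the container.
build = tf.sysconfig.get_build_info()
print(build.get("is_cuda_build"), build.get("cuda_version"), build.get("cudnn_version"))
```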
### Standalone code to reproduce the issue
```shell
shpc install tensorflow/tensorflow:latest-gpu
or
apptainer pull docker://tensorflow/tensorflow:latest-gpu
apptainer run --nv /apps/Miniforge/lib/python3.12/site-packages/containers/tensorflow/tensorflow/latest-gpu/tensorflow-tensorflow-latest-gpu-sha256\:1f16fbd9be8bb84891de12533e332bbd500511caeb5cf4db501dbe39d422f9c7.sif python
python
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
```
### Relevant log output
```shell
apptainer --debug exec --nv /apps/Miniforge/lib/python3.12/site-packages/containers/tensorflow/tensorflow/latest-gpu/tensorflow-tensorflow-latest-gpu-sha256:1f16fbd9be8bb84891de12533e332bbd500511caeb5cf4db501dbe39d422f9c7.sif python
DEBUG [U=0,P=1355208] persistentPreRun() Apptainer version: 1.3.6-1
DEBUG [U=0,P=1355208] persistentPreRun() Parsing configuration file /etc/apptainer/apptainer.conf
DEBUG [U=0,P=1355208] SetBinaryPath() Setting binary path to /usr/libexec/apptainer/bin:/usr/share/Modules/bin:/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/TurboVNC/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DEBUG [U=0,P=1355208] SetBinaryPath() Using that path for all binaries
DEBUG [U=0,P=1355208] handleConfDir() /root/.apptainer already exists. Not creating.
DEBUG [U=0,P=1355208] handleRemoteConf() Ensuring file permission of 0600 on /root/.apptainer/remote.yaml
DEBUG [U=0,P=1355208] setUmask() Saving umask 0002 for propagation into container
DEBUG [U=0,P=1355208] checkEncryptionKey() Checking for encrypted system partition
DEBUG [U=0,P=1355208] Init() Image format detection
DEBUG [U=0,P=1355208] Init() Check for sandbox image format
DEBUG [U=0,P=1355208] Init() sandbox format initializer returned: not a directory image
DEBUG [U=0,P=1355208] Init() Check for sif image format
DEBUG [U=0,P=1355208] Init() sif image format detected
VERBOSE [U=0,P=1355208] SetGPUConfig() 'always use nv = yes' found in apptainer.conf
DEBUG [U=0,P=1355208] setNVLegacyConfig() Using legacy binds for nv GPU setup
VERBOSE [U=0,P=1355208] NvidiaIpcsPath() persistenced socket /var/run/nvidia-persistenced/socket not found
DEBUG [U=0,P=1355208] findOnPath() Found "ldconfig" at "/sbin/ldconfig"
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding SHELL environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding SUDO_GID environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding HISTCONTROL environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding no_proxy environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding HOSTNAME environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding HISTSIZE environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding SBATCH_PARTITION environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding GUESTFISH_OUTPUT environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding SLURM_PARTITION environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding SUDO_COMMAND environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding SUDO_USER environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LMOD_DIR environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding PWD environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LOGNAME environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding MODULESHOME environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding MANPATH environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding GUESTFISH_RESTORE environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding __MODULES_SHARE_MANPATH environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding SSH_ASKPASS environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LANG environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LS_COLORS environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LMOD_SETTARG_FULL_SUPPORT environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding GUESTFISH_PS1 environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding https_proxy environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LMOD_VERSION environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding MODULEPATH_ROOT environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LMOD_PKG environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding TERM environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LESSOPEN environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding USER environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding NO_PROXY environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding MODULES_RUN_QUARANTINE environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LOADEDMODULES environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding SHLVL environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding BASH_ENV environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LMOD_sys environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding HTTPS_PROXY environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding GUESTFISH_INIT environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding HTTP_PROXY environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding http_proxy environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding S_COLORS environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding __MODULES_LMINIT environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding which_declare environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding XDG_DATA_DIRS environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding MODULEPATH environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding SUDO_UID environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding LMOD_CMD environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding MAIL environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding MODULES_CMD environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding BASH_FUNC_ml%% environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding BASH_FUNC_which%% environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding BASH_FUNC_module%% environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding BASH_FUNC_scl%% environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding BASH_FUNC__module_raw%% environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding _ environment variable
VERBOSE [U=0,P=1355208] SetContainerEnv() Not forwarding APPTAINER_DEBUG environment variable
DEBUG [U=0,P=1355208] SetContainerEnv() Forwarding USER_PATH environment variable
VERBOSE [U=0,P=1355208] SetContainerEnv() Setting HOME=/root
VERBOSE [U=0,P=1355208] SetContainerEnv() Setting PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DEBUG [U=0,P=1355208] InitImageDrivers() Skipping installing fuseapps image driver because running as root
DEBUG [U=0,P=1355208] SetuidMountAllowed() Kernel squashfs mount allowed because running as root
DEBUG [U=0,P=1355208] init() Use starter binary /usr/libexec/apptainer/bin/starter
VERBOSE [U=0,P=1355208] print() Set messagelevel to: 5
VERBOSE [U=0,P=1355208] init() Starter initialization
VERBOSE [U=0,P=1355208] is_suid() Check if we are running as setuid: 0
DEBUG [U=0,P=1355208] read_engine_config() Read engine configuration
DEBUG [U=0,P=1355208] init() Wait completion of stage1
DEBUG [U=0,P=1355224] set_parent_death_signal() Set parent death signal to 9
VERBOSE [U=0,P=1355224] init() Spawn stage 1
DEBUG [U=0,P=1355224] func1() executablePath is /usr/libexec/apptainer/bin/starter
DEBUG [U=0,P=1355224] func1() starter was not relocated from /usr/libexec
DEBUG [U=0,P=1355224] func1() Install prefix is /usr
DEBUG [U=0,P=1355224] startup() apptainer runtime engine selected
VERBOSE [U=0,P=1355224] startup() Execute stage 1
DEBUG [U=0,P=1355224] StageOne() Entering stage 1
DEBUG [U=0,P=1355224] InitImageDrivers() Skipping installing fuseapps image driver because running as root
DEBUG [U=0,P=1355224] prepareRootCaps() Root full capabilities
DEBUG [U=0,P=1355224] prepareAutofs() Found "/proc/sys/fs/binfmt_misc" as autofs mount point
DEBUG [U=0,P=1355224] prepareAutofs() Found "/home" as autofs mount point
DEBUG [U=0,P=1355224] prepareAutofs() Found "/share" as autofs mount point
DEBUG [U=0,P=1355224] prepareAutofs() Found "/misc" as autofs mount point
DEBUG [U=0,P=1355224] prepareAutofs() Found "/net" as autofs mount point
DEBUG [U=0,P=1355224] prepareAutofs() Found "/mnt/smb/locker" as autofs mount point
DEBUG [U=0,P=1355224] prepareAutofs() Found "/mnt/smb/labshare" as autofs mount point
DEBUG [U=0,P=1355224] prepareAutofs() Found "/mnt/smb/staging" as autofs mount point
DEBUG [U=0,P=1355224] prepareAutofs() Could not keep file descriptor for bind path /etc/localtime: no mount point
DEBUG [U=0,P=1355224] prepareAutofs() Could not keep file descriptor for bind path /etc/hosts: no mount point
DEBUG [U=0,P=1355224] prepareAutofs() Could not keep file descriptor for home directory /root: no mount point
DEBUG [U=0,P=1355224] prepareAutofs() Could not keep file descriptor for current working directory /root: no mount point
DEBUG [U=0,P=1355224] Init() Image format detection
DEBUG [U=0,P=1355224] Init() Check for sandbox image format
DEBUG [U=0,P=1355224] Init() sandbox format initializer returned: not a directory image
DEBUG [U=0,P=1355224] Init() Check for sif image format
DEBUG [U=0,P=1355224] Init() sif image format detected
DEBUG [U=0,P=1355224] setSessionLayer() Using overlay because it is not disabled
DEBUG [U=0,P=1355224] PrepareConfig() image driver is
VERBOSE [U=0,P=1355208] wait_child() stage 1 exited with status 0
DEBUG [U=0,P=1355208] cleanup_fd() Close file descriptor 4
DEBUG [U=0,P=1355208] cleanup_fd() Close file descriptor 5
DEBUG [U=0,P=1355208] cleanup_fd() Close file descriptor 6
DEBUG [U=0,P=1355208] init() Set child signal mask
DEBUG [U=0,P=1355208] init() Create socketpair for master communication channel
DEBUG [U=0,P=1355208] init() Create RPC socketpair for communication between stage 2 and RPC server
VERBOSE [U=0,P=1355208] init() Spawn master process
DEBUG [U=0,P=1355230] set_parent_death_signal() Set parent death signal to 9
VERBOSE [U=0,P=1355230] create_namespace() Create mount namespace
VERBOSE [U=0,P=1355208] enter_namespace() Entering in mount namespace
DEBUG [U=0,P=1355208] enter_namespace() Opening namespace file ns/mnt
VERBOSE [U=0,P=1355230] create_namespace() Create mount namespace
VERBOSE [U=0,P=1355231] init() Spawn RPC server
DEBUG [U=0,P=1355208] func1() executablePath is /usr/libexec/apptainer/bin/starter
DEBUG [U=0,P=1355208] func1() starter was not relocated from /usr/libexec
DEBUG [U=0,P=1355208] func1() Install prefix is /usr
DEBUG [U=0,P=1355231] func1() executablePath is /usr/libexec/apptainer/bin/starter
DEBUG [U=0,P=1355231] func1() starter was not relocated from /usr/libexec
DEBUG [U=0,P=1355231] func1() Install prefix is /usr
DEBUG [U=0,P=1355208] startup() apptainer runtime engine selected
VERBOSE [U=0,P=1355208] startup() Execute master process
DEBUG [U=0,P=1355231] startup() apptainer runtime engine selected
VERBOSE [U=0,P=1355231] startup() Serve RPC requests
DEBUG [U=0,P=1355208] InitImageDrivers() Skipping installing fuseapps image driver because running as root
DEBUG [U=0,P=1355208] setupSessionLayout() Using Layer system: overlay
DEBUG [U=0,P=1355208] setupOverlayLayout() Creating overlay SESSIONDIR layout
DEBUG [U=0,P=1355208] addRootfsMount() Mount rootfs in read-only mode
DEBUG [U=0,P=1355208] addRootfsMount() Image type is 4096
DEBUG [U=0,P=1355208] addRootfsMount() Mounting block [squashfs] image: /share/apps/Miniforge/lib/python3.12/site-packages/containers/tensorflow/tensorflow/latest-gpu/tensorflow-tensorflow-latest-gpu-sha256:1f16fbd9be8bb84891de12533e332bbd500511caeb5cf4db501dbe39d422f9c7.sif
DEBUG [U=0,P=1355208] addKernelMount() Checking configuration file for 'mount proc'
DEBUG [U=0,P=1355208] addKernelMount() Adding proc to mount list
VERBOSE [U=0,P=1355208] addKernelMount() Default mount: /proc:/proc
DEBUG [U=0,P=1355208] addKernelMount() Checking configuration file for 'mount sys'
DEBUG [U=0,P=1355208] addKernelMount() Adding sysfs to mount list
VERBOSE [U=0,P=1355208] addKernelMount() Default mount: /sys:/sys
DEBUG [U=0,P=1355208] addDevMount() Checking configuration file for 'mount dev'
DEBUG [U=0,P=1355208] addDevMount() Adding dev to mount list
VERBOSE [U=0,P=1355208] addDevMount() Default mount: /dev:/dev
DEBUG [U=0,P=1355208] addHostMount() Not mounting host file systems per configuration
VERBOSE [U=0,P=1355208] addBindsMount() Found 'bind path' = /etc/localtime, /etc/localtime
VERBOSE [U=0,P=1355208] addBindsMount() Found 'bind path' = /etc/hosts, /etc/hosts
DEBUG [U=0,P=1355208] addHomeStagingDir() Staging home directory (/root) at /var/lib/apptainer/mnt/session/root
DEBUG [U=0,P=1355208] addHomeMount() Adding home directory mount [/var/lib/apptainer/mnt/session/root:/root] to list using layer: overlay
DEBUG [U=0,P=1355208] addTmpMount() Checking for 'mount tmp' in configuration file
DEBUG [U=0,P=1355208] addScratchMount() Not mounting scratch directory: Not requested
DEBUG [U=0,P=1355208] addLibsMount() Checking for 'user bind control' in configuration file
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libOpenCL.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libOpenGL.so.0 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-cfg.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libEGL.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-eglcore.so.550.120 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-ml.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvcuvid.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-gtk3.so.550.120 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-glvkspirv.so.550.120 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libcudadebugger.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLESv2.so.2 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-ptxjitcompiler.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLESv2.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGL.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLX_nvidia.so.0 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLESv1_CM.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-tls.so.550.120 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLdispatch.so.0 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-encode.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libOpenCL.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-ptxjitcompiler.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-encode.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-nvvm.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-glsi.so.550.120 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-opticalflow.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-opencl.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-egl-wayland.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLX.so.0 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvoptix.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-gpucomp.so.550.120 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGL.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLESv2_nvidia.so.2 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLESv1_CM_nvidia.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-nvvm.so.4 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libEGL.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-gtk2.so.550.120 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-ml.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-fbc.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-fbc.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvcuvid.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-opticalflow.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLX.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLESv1_CM.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-cfg.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libcuda.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libvdpau_nvidia.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-rtcore.so.550.120 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libcuda.so.1 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libOpenGL.so to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libEGL_nvidia.so.0 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libnvidia-glcore.so.550.120 to mount list
DEBUG [U=0,P=1355208] addLibsMount() Add library /lib64/libGLdispatch.so to mount list
DEBUG [U=0,P=1355208] addFilesMount() Checking for 'user bind control' in configuration file
DEBUG [U=0,P=1355208] addFilesMount() Adding file /bin/nvidia-persistenced:/usr/bin/nvidia-persistenced to mount list
DEBUG [U=0,P=1355208] addFilesMount() Adding file /bin/nvidia-cuda-mps-control:/usr/bin/nvidia-cuda-mps-control to mount list
DEBUG [U=0,P=1355208] addFilesMount() Adding file /bin/nvidia-cuda-mps-server:/usr/bin/nvidia-cuda-mps-server to mount list
DEBUG [U=0,P=1355208] addFilesMount() Adding file /bin/nvidia-smi:/usr/bin/nvidia-smi to mount list
DEBUG [U=0,P=1355208] addFilesMount() Adding file /bin/nvidia-debugdump:/usr/bin/nvidia-debugdump to mount list
DEBUG [U=0,P=1355208] addResolvConfMount() Adding /etc/resolv.conf to mount list
VERBOSE [U=0,P=1355208] addResolvConfMount() Default mount: /etc/resolv.conf:/etc/resolv.conf
DEBUG [U=0,P=1355208] addHostnameMount() Skipping hostname mount, not virtualizing UTS namespace on user request
DEBUG [U=0,P=1355208] create() Mount all
DEBUG [U=0,P=1355208] mountGeneric() Mounting tmpfs to /var/lib/apptainer/mnt/session
DEBUG [U=0,P=1355208] mountImage() Mounting loop device /dev/loop0 to /var/lib/apptainer/mnt/session/rootfs of type squashfs
DEBUG [U=0,P=1355208] createCwdDir() Using /root as current working directory
DEBUG [U=0,P=1355208] mountGeneric() Mounting overlay to /var/lib/apptainer/mnt/session/final
DEBUG [U=0,P=1355208] mountGeneric() Unmounting and remounting overlay
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final
DEBUG [U=0,P=1355208] setPropagationMount() Set RPC mount propagation flag to SLAVE
VERBOSE [U=0,P=1355208] Passwd() Checking for template passwd file: /var/lib/apptainer/mnt/session/rootfs/etc/passwd
VERBOSE [U=0,P=1355208] Passwd() Creating passwd content
VERBOSE [U=0,P=1355208] Passwd() Creating template passwd file and injecting user data: /var/lib/apptainer/mnt/session/rootfs/etc/passwd
DEBUG [U=0,P=1355208] addIdentityMount() Adding /etc/passwd to mount list
VERBOSE [U=0,P=1355208] addIdentityMount() Default mount: /etc/passwd:/etc/passwd
VERBOSE [U=0,P=1355208] Group() Checking for template group file: /var/lib/apptainer/mnt/session/rootfs/etc/group
VERBOSE [U=0,P=1355208] Group() Creating group content
DEBUG [U=0,P=1355208] addIdentityMount() Adding /etc/group to mount list
VERBOSE [U=0,P=1355208] addIdentityMount() Default mount: /etc/group:/etc/group
DEBUG [U=0,P=1355208] mountGeneric() Mounting /dev to /var/lib/apptainer/mnt/session/final/dev
DEBUG [U=0,P=1355208] mountGeneric() Mounting /etc/localtime to /var/lib/apptainer/mnt/session/final/etc/localtime
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/etc/localtime
DEBUG [U=0,P=1355208] mountGeneric() Mounting /etc/hosts to /var/lib/apptainer/mnt/session/final/etc/hosts
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/etc/hosts
DEBUG [U=0,P=1355208] mountGeneric() Mounting /proc to /var/lib/apptainer/mnt/session/final/proc
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/proc
DEBUG [U=0,P=1355208] mountGeneric() Mounting sysfs to /var/lib/apptainer/mnt/session/final/sys
DEBUG [U=0,P=1355208] mountGeneric() Mounting /root to /var/lib/apptainer/mnt/session/root
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/root
DEBUG [U=0,P=1355208] mountGeneric() Mounting /var/lib/apptainer/mnt/session/root to /var/lib/apptainer/mnt/session/final/root
DEBUG [U=0,P=1355208] func1() Container /tmp resolves to "/tmp"
DEBUG [U=0,P=1355208] func1() Container /var/tmp resolves to "/var/tmp"
VERBOSE [U=0,P=1355208] func1() Default mount: /tmp:/tmp
VERBOSE [U=0,P=1355208] func1() Default mount: /var/tmp:/var/tmp
DEBUG [U=0,P=1355208] mountGeneric() Mounting /tmp to /var/lib/apptainer/mnt/session/final/tmp
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/tmp
DEBUG [U=0,P=1355208] mountGeneric() Mounting /var/tmp to /var/lib/apptainer/mnt/session/final/var/tmp
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/var/tmp
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libOpenCL.so to /var/lib/apptainer/mnt/session/libs/libOpenCL.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libOpenCL.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libOpenGL.so.0 to /var/lib/apptainer/mnt/session/libs/libOpenGL.so.0
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libOpenGL.so.0
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-cfg.so.1 to /var/lib/apptainer/mnt/session/libs/libnvidia-cfg.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-cfg.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libEGL.so.1 to /var/lib/apptainer/mnt/session/libs/libEGL.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libEGL.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-eglcore.so.550.120 to /var/lib/apptainer/mnt/session/libs/libnvidia-eglcore.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-eglcore.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-ml.so to /var/lib/apptainer/mnt/session/libs/libnvidia-ml.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-ml.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvcuvid.so.1 to /var/lib/apptainer/mnt/session/libs/libnvcuvid.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvcuvid.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-gtk3.so.550.120 to /var/lib/apptainer/mnt/session/libs/libnvidia-gtk3.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-gtk3.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-glvkspirv.so.550.120 to /var/lib/apptainer/mnt/session/libs/libnvidia-glvkspirv.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-glvkspirv.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libcudadebugger.so.1 to /var/lib/apptainer/mnt/session/libs/libcudadebugger.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libcudadebugger.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLESv2.so.2 to /var/lib/apptainer/mnt/session/libs/libGLESv2.so.2
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLESv2.so.2
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-ptxjitcompiler.so.1 to /var/lib/apptainer/mnt/session/libs/libnvidia-ptxjitcompiler.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-ptxjitcompiler.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLESv2.so to /var/lib/apptainer/mnt/session/libs/libGLESv2.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLESv2.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGL.so to /var/lib/apptainer/mnt/session/libs/libGL.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGL.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLX_nvidia.so.0 to /var/lib/apptainer/mnt/session/libs/libGLX_nvidia.so.0
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLX_nvidia.so.0
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLESv1_CM.so.1 to /var/lib/apptainer/mnt/session/libs/libGLESv1_CM.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLESv1_CM.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-tls.so.550.120 to /var/lib/apptainer/mnt/session/libs/libnvidia-tls.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-tls.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLdispatch.so.0 to /var/lib/apptainer/mnt/session/libs/libGLdispatch.so.0
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLdispatch.so.0
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-encode.so.1 to /var/lib/apptainer/mnt/session/libs/libnvidia-encode.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-encode.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libOpenCL.so.1 to /var/lib/apptainer/mnt/session/libs/libOpenCL.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libOpenCL.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-ptxjitcompiler.so to /var/lib/apptainer/mnt/session/libs/libnvidia-ptxjitcompiler.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-ptxjitcompiler.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-encode.so to /var/lib/apptainer/mnt/session/libs/libnvidia-encode.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-encode.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-nvvm.so to /var/lib/apptainer/mnt/session/libs/libnvidia-nvvm.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-nvvm.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-glsi.so.550.120 to /var/lib/apptainer/mnt/session/libs/libnvidia-glsi.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-glsi.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-opticalflow.so to /var/lib/apptainer/mnt/session/libs/libnvidia-opticalflow.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-opticalflow.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-opencl.so.1 to /var/lib/apptainer/mnt/session/libs/libnvidia-opencl.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-opencl.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-egl-wayland.so.1 to /var/lib/apptainer/mnt/session/libs/libnvidia-egl-wayland.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-egl-wayland.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLX.so.0 to /var/lib/apptainer/mnt/session/libs/libGLX.so.0
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLX.so.0
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvoptix.so.1 to /var/lib/apptainer/mnt/session/libs/libnvoptix.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvoptix.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-gpucomp.so.550.120 to /var/lib/apptainer/mnt/session/libs/libnvidia-gpucomp.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-gpucomp.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGL.so.1 to /var/lib/apptainer/mnt/session/libs/libGL.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGL.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLESv2_nvidia.so.2 to /var/lib/apptainer/mnt/session/libs/libGLESv2_nvidia.so.2
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLESv2_nvidia.so.2
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLESv1_CM_nvidia.so.1 to /var/lib/apptainer/mnt/session/libs/libGLESv1_CM_nvidia.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLESv1_CM_nvidia.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-nvvm.so.4 to /var/lib/apptainer/mnt/session/libs/libnvidia-nvvm.so.4
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-nvvm.so.4
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libEGL.so to /var/lib/apptainer/mnt/session/libs/libEGL.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libEGL.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-gtk2.so.550.120 to /var/lib/apptainer/mnt/session/libs/libnvidia-gtk2.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-gtk2.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-ml.so.1 to /var/lib/apptainer/mnt/session/libs/libnvidia-ml.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-ml.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-fbc.so to /var/lib/apptainer/mnt/session/libs/libnvidia-fbc.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-fbc.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-fbc.so.1 to /var/lib/apptainer/mnt/session/libs/libnvidia-fbc.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-fbc.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvcuvid.so to /var/lib/apptainer/mnt/session/libs/libnvcuvid.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvcuvid.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-opticalflow.so.1 to /var/lib/apptainer/mnt/session/libs/libnvidia-opticalflow.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-opticalflow.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLX.so to /var/lib/apptainer/mnt/session/libs/libGLX.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLX.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLESv1_CM.so to /var/lib/apptainer/mnt/session/libs/libGLESv1_CM.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLESv1_CM.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-cfg.so to /var/lib/apptainer/mnt/session/libs/libnvidia-cfg.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-cfg.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libcuda.so to /var/lib/apptainer/mnt/session/libs/libcuda.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libcuda.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libvdpau_nvidia.so to /var/lib/apptainer/mnt/session/libs/libvdpau_nvidia.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libvdpau_nvidia.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-rtcore.so.550.120 to /var/lib/apptainer/mnt/session/libs/libnvidia-rtcore.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-rtcore.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libcuda.so.1 to /var/lib/apptainer/mnt/session/libs/libcuda.so.1
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libcuda.so.1
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libOpenGL.so to /var/lib/apptainer/mnt/session/libs/libOpenGL.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libOpenGL.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libEGL_nvidia.so.0 to /var/lib/apptainer/mnt/session/libs/libEGL_nvidia.so.0
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libEGL_nvidia.so.0
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libnvidia-glcore.so.550.120 to /var/lib/apptainer/mnt/session/libs/libnvidia-glcore.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libnvidia-glcore.so.550.120
DEBUG [U=0,P=1355208] mountGeneric() Mounting /lib64/libGLdispatch.so to /var/lib/apptainer/mnt/session/libs/libGLdispatch.so
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/libs/libGLdispatch.so
DEBUG [U=0,P=1355208] mountGeneric() Mounting /var/lib/apptainer/mnt/session/libs to /var/lib/apptainer/mnt/session/final/.singularity.d/libs
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/.singularity.d/libs
DEBUG [U=0,P=1355208] mountGeneric() Mounting /bin/nvidia-persistenced to /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-persistenced
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-persistenced
DEBUG [U=0,P=1355208] mountGeneric() Mounting /bin/nvidia-cuda-mps-control to /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-cuda-mps-control
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-cuda-mps-control
DEBUG [U=0,P=1355208] mountGeneric() Mounting /bin/nvidia-cuda-mps-server to /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-cuda-mps-server
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-cuda-mps-server
DEBUG [U=0,P=1355208] mountGeneric() Mounting /bin/nvidia-smi to /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-smi
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-smi
DEBUG [U=0,P=1355208] mountGeneric() Mounting /bin/nvidia-debugdump to /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-debugdump
DEBUG [U=0,P=1355208] mountGeneric() Remounting /var/lib/apptainer/mnt/session/final/usr/bin/nvidia-debugdump
DEBUG [U=0,P=1355208] mountGeneric() Mounting /var/lib/apptainer/mnt/session/etc/resolv.conf to /var/lib/apptainer/mnt/session/final/etc/resolv.conf
DEBUG [U=0,P=1355208] mountGeneric() Mounting /var/lib/apptainer/mnt/session/etc/passwd to /var/lib/apptainer/mnt/session/final/etc/passwd
DEBUG [U=0,P=1355208] mountGeneric() Mounting /var/lib/apptainer/mnt/session/etc/group to /var/lib/apptainer/mnt/session/final/etc/group
VERBOSE [U=0,P=1355208] addCwdMount() /root found within container
DEBUG [U=0,P=1355208] create() Chroot into /var/lib/apptainer/mnt/session/final
DEBUG [U=0,P=1355231] Chroot() Hold reference to host / directory
DEBUG [U=0,P=1355231] Chroot() Called pivot_root on /var/lib/apptainer/mnt/session/final
DEBUG [U=0,P=1355231] Chroot() Change current directory to host / directory
DEBUG [U=0,P=1355231] Chroot() Apply slave mount propagation for host / directory
DEBUG [U=0,P=1355231] Chroot() Called unmount(/, syscall.MNT_DETACH)
DEBUG [U=0,P=1355231] Chroot() Changing directory to / to avoid getpwd issues
DEBUG [U=0,P=1355208] create() Chdir into / to avoid errors
VERBOSE [U=0,P=1355230] wait_child() rpc server exited with status 0
DEBUG [U=0,P=1355230] init() Set container privileges
DEBUG [U=0,P=1355230] apply_privileges() Effective capabilities: 0x000001ffffffffff
DEBUG [U=0,P=1355230] apply_privileges() Permitted capabilities: 0x000001ffffffffff
DEBUG [U=0,P=1355230] apply_privileges() Bounding capabilities: 0x000001ffffffffff
DEBUG [U=0,P=1355230] apply_privileges() Inheritable capabilities: 0x000001ffffffffff
DEBUG [U=0,P=1355230] apply_privileges() Ambient capabilities: 0x000001ffffffffff
DEBUG [U=0,P=1355230] apply_privileges() Set user ID to 0
DEBUG [U=0,P=1355230] set_parent_death_signal() Set parent death signal to 9
DEBUG [U=0,P=1355230] func1() executablePath is /usr/libexec/apptainer/bin/starter
DEBUG [U=0,P=1355230] func1() executablePath does not exist, assuming default prefix
DEBUG [U=0,P=1355230] startup() apptainer runtime engine selected
VERBOSE [U=0,P=1355230] startup() Execute stage 2
DEBUG [U=0,P=1355230] StageTwo() Entering stage 2
DEBUG [U=0,P=1355230] StartProcess() Setting umask in container to 0002
DEBUG [U=0,P=1355230] func4() Not exporting "BASH_FUNC__module_raw%%" to container environment: invalid key
DEBUG [U=0,P=1355230] func4() Not exporting "BASH_FUNC_ml%%" to container environment: invalid key
DEBUG [U=0,P=1355230] func4() Not exporting "BASH_FUNC_module%%" to container environment: invalid key
DEBUG [U=0,P=1355230] func4() Not exporting "BASH_FUNC_scl%%" to container environment: invalid key
DEBUG [U=0,P=1355230] func4() Not exporting "BASH_FUNC_which%%" to container environment: invalid key
DEBUG [U=0,P=1355230] sylogBuiltin() Sourcing /.singularity.d/env/01-base.sh
DEBUG [U=0,P=1355230] sylogBuiltin() Sourcing /.singularity.d/env/10-docker2singularity.sh
DEBUG [U=0,P=1355230] sylogBuiltin() Sourcing /.singularity.d/env/90-environment.sh
DEBUG [U=0,P=1355230] sylogBuiltin() Sourcing /.singularity.d/env/94-appsbase.sh
DEBUG [U=0,P=1355230] sylogBuiltin() Sourcing /.singularity.d/env/95-apps.sh
DEBUG [U=0,P=1355230] sylogBuiltin() Sourcing /.singularity.d/env/99-base.sh
DEBUG [U=0,P=1355230] sylogBuiltin() Sourcing /.singularity.d/env/99-runtimevars.sh
DEBUG [U=0,P=1355230] sylogBuiltin() Running action command exec
DEBUG [U=0,P=1355208] PostStartProcess() Post start process
Python 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
``` | stat:awaiting tensorflower,type:build/install,comp:gpu,TF 2.18 | low | Critical |
2,810,323,575 | vscode | emojies |
Type: <b>Bug</b>
Hello,
VS Code doesn't show any emojis in either the terminal or the editor windows. I tried extensions, but I still have the same issue.
Visual Studio Code: Version: 1.96.4 (Universal)
Mac OS: Sequoia 15.2
Mac Mini M2 Pro
Thank you
VS Code version: Code 1.96.4 (Universal) (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Darwin arm64 24.2.0
Modes:
Remote OS version: Linux x64 6.5.0-1025-azure
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 Pro (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|2, 3, 2|
|Memory (System)|16.00GB (0.12GB free)|
|Process Argv|--crash-reporter-id 6dee1f0c-e526-49b3-82ac-a4e0d66f2b64|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|Codespaces: urban space robot|
|OS|Linux x64 6.5.0-1025-azure|
|CPUs|AMD EPYC 7763 64-Core Processor (2 x 0)|
|Memory (System)|7.74GB (6.00GB free)|
|VM|0%|
</details><details><summary>Extensions (37)</summary>
Extension|Author (truncated)|Version
---|---|---
emojisense|bie|0.10.0
codespaces|Git|1.17.3
cs50|CS5|0.0.1
ddb50|CS5|2.0.0
explain50|CS5|1.0.0
extension-uninstaller|CS5|1.0.9
phpliteadmin|CS5|0.0.1
style50|CS5|0.0.1
codespaces|Git|1.17.3
vscode-pull-request-github|Git|0.102.0
prettier-sql-vscode|inf|1.6.0
vscode-pdf|mat|0.1.2
vscode-docker|ms-|1.29.4
vscode-language-pack-bg|MS-|1.48.3
vscode-language-pack-cs|MS-|1.96.2024121109
vscode-language-pack-de|MS-|1.96.2024121109
vscode-language-pack-es|MS-|1.96.2024121109
vscode-language-pack-fr|MS-|1.96.2024121109
vscode-language-pack-hu|MS-|1.48.3
vscode-language-pack-it|MS-|1.96.2024121109
vscode-language-pack-ja|MS-|1.96.2024121109
vscode-language-pack-ko|MS-|1.96.2024121109
vscode-language-pack-pl|MS-|1.96.2024121109
vscode-language-pack-pt-BR|MS-|1.96.2024121109
vscode-language-pack-ru|MS-|1.96.2024121109
vscode-language-pack-zh-hans|MS-|1.96.2024121109
vscode-language-pack-zh-hant|MS-|1.96.2024121109
autopep8|ms-|2024.0.0
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
cpptools|ms-|1.22.11
hexeditor|ms-|1.11.1
vsliveshare|ms-|1.0.5948
java|red|1.39.0
vscode-java-debug|vsc|0.58.1
gitdoc|vsl|0.2.3
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | bug,info-needed,font-rendering | low | Critical |
2,810,323,711 | storybook | [Bug]: Type errors in stories where component has discriminated union in its prop type definition, and custom properties are used | ### Describe the bug
It seems to be impossible to properly type a story when a component:
1. has a discriminated union in its props type definition, and
2. its story contains a custom, storybook-only property (see the sketch below).
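To illustrate the shape of the setup, here is a minimal sketch, assuming the storybook-only property is an extra arg consumed by a custom render; the `Props`, `Component`, and `debugLabel` names are hypothetical and not taken from the reproduction linked below:
```ts
import { createElement } from 'react';
import type { Meta, StoryObj } from '@storybook/react';

// Hypothetical component whose props are a discriminated union.
type Props =
  | { kind: 'link'; href: string }
  | { kind: 'button'; onClick: () => void };

const Component = (props: Props) => createElement('div', null, props.kind);

// The story args add a storybook-only property on top of the component props.
type StoryArgs = Props & { debugLabel: string };

const meta: Meta<StoryArgs> = {
  component: Component,
  // In a real setup the extra arg would be consumed here (or in a decorator)
  // rather than forwarded to the component.
  render: (args) => createElement(Component, args),
};
export default meta;

type Story = StoryObj<StoryArgs>;

// Typing `args` for stories in setups like this is where the reported
// type errors show up.
export const AsLink: Story = {
  args: { kind: 'link', href: 'https://example.com', debugLabel: 'link case' },
};
```
The reproduction below contains the exact component and story that fail.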
### Reproduction link
https://stackblitz.com/edit/storybook-type-errors?file=src%2Fstories%2FComponent.tsx,src%2Fstories%2FComponent.stories.tsx
### Reproduction steps
_No response_
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.2
CPU: (14) arm64 Apple M4 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 22.11.0 - ~/.local/state/fnm_multishells/52967_1737709857083/bin/node
npm: 10.9.0 - ~/.local/state/fnm_multishells/52967_1737709857083/bin/npm
pnpm: 9.14.2 - ~/.local/state/fnm_multishells/52967_1737709857083/bin/pnpm <----- active
Browsers:
Chrome: 132.0.6834.110
Safari: 18.2
npmPackages:
@storybook/addon-docs: ^8.6.0-alpha.2 => 8.6.0-alpha.2
@storybook/addon-essentials: ^8.6.0-alpha.2 => 8.6.0-alpha.2
@storybook/addon-interactions: ^8.6.0-alpha.2 => 8.6.0-alpha.2
@storybook/addon-themes: ^8.6.0-alpha.2 => 8.6.0-alpha.2
@storybook/blocks: ^8.6.0-alpha.2 => 8.6.0-alpha.2
@storybook/nextjs: ^8.6.0-alpha.2 => 8.6.0-alpha.2
@storybook/react: ^8.6.0-alpha.2 => 8.6.0-alpha.2
@storybook/test: ^8.6.0-alpha.2 => 8.6.0-alpha.2
eslint-plugin-storybook: ^0.11.2 => 0.11.2
storybook: ^8.6.0-alpha.2 => 8.6.0-alpha.2
```
### Additional context
Originally found on v8.5.0 of the Storybook packages; I tried upgrading to the latest versions to see whether it had been fixed. | bug,needs triage | low | Critical |
2,810,325,074 | flutter | Scaling Issue with Native UiKitView on External Displays with Different Scale Factors | ### Environment:
- MacOS Version: 14.6.1 (23G93)
- Display Configuration: 1728 x 1117 (main), 3820 x 1600 (external)
- Flutter Version: 3.27.3
### Analysis:
A potential cause of the issue lies within the [FlutterPlatformViewController](https://github.com/flutter/flutter/blob/master/engine/src/flutter/shell/platform/darwin/ios/framework/Source/FlutterPlatformViewsController.mm) implementation. It appears the view controller obtains the screenScale from the possibly deprecated [mainScreen](https://developer.apple.com/documentation/uikit/uiscreen/main?language=objc) property. Consequently, the [scaleX](https://github.com/flutter/flutter/blob/74ade4334155a71dbfdfa019af940f3df666e652/engine/src/flutter/shell/platform/darwin/ios/framework/Source/FlutterPlatformViewsController.mm#L49) and [scaleY](https://github.com/flutter/flutter/blob/74ade4334155a71dbfdfa019af940f3df666e652/engine/src/flutter/shell/platform/darwin/ios/framework/Source/FlutterPlatformViewsController.mm#L55) values are calculated incorrectly for external displays with a different scale factor. This mismatch results in the observed rendering issues.
### Steps to reproduce
1. Connect a Mac to an external monitor with a different scale factor than the Mac's main display.
2. Run the attached Flutter application on the Mac in iPad mode.
3. Observe the scaling of the UiKitView on the external display.
### Expected results
The `UiKitView` should render at the correct size and scale consistently, regardless of the external monitor's scale factor.
### Actual results
The `UiKitView` scales incorrectly on the external display, as seen in the screenshot provided. The rendered view is noticeably smaller than expected, occupying only a quarter of the expected area.
### Code sample
<details open><summary>Code sample</summary>
https://github.com/ogBurzmali/window_scale_bug
</details>
### Screenshots or Video
<details open>
<summary>Screenshots</summary>

</details>
### Suggested Fix:
Update the `FlutterPlatformViewController` to use the correct display's `screenScale` when rendering views on external monitors. This may involve dynamically obtaining the scale factor for the specific screen where the `UiKitView` is being displayed, instead of relying on the `mainScreen` property.
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.3, on macOS 14.6.1 23G93 darwin-arm64, locale en-US)
• Flutter version 3.27.3 on channel stable at /Users/thomasariston/dev/sdks/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision c519ee916e (3 days ago), 2025-01-21 10:32:23 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/thomasariston/Library/Android/sdk
• Platform android-34, build-tools 35.0.0
• Java binary at: /opt/homebrew/opt/openjdk/bin/java
• Java version OpenJDK Runtime Environment Homebrew (build 23.0.2)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/macos-android-setup for detailed instructions).
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| platform-mac,a: platform-views,has reproducible steps,team-macos,found in release: 3.27,found in release: 3.29 | low | Critical |
2,810,342,682 | vscode | Another instance of Code is already - Copilot |
Type: <b>Bug</b>
Another instance of Code is already

It does not matter whether I run as Administrator or not.
Firefox 134.0.2
W11
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
Extension|Author (truncated)|Version
---|---|---
status-bar-tasks|Gua|0.3.0
alchemy65|alc|1.0.8
Bookmarks|ale|13.5.0
numbered-bookmarks|ale|8.5.0
project-manager|ale|12.8.0
arepl|alm|3.0.0
arm-debugger|Arm|1.4.4
cmsis-csolution|Arm|1.46.0
device-manager|Arm|1.13.1
environment-manager|Arm|1.10.0
keil-studio-pack|Arm|1.18.4
virtual-hardware|Arm|0.6.3
gitstash|art|5.2.0
language-gas-x86|bas|0.0.2
vscode-django|bat|1.15.0
git-temporal-vscode|bee|1.0.0
masm|blt|0.0.4
gitignore|cod|0.9.0
dscodegpt|Dan|3.7.16
vscode-markdownlint|Dav|0.58.2
cals-table-viewer|del|0.0.9
f5anything|dis|1.2.0
xml|Dot|2.5.1
gitlens|eam|16.2.1
memory-inspector|ecl|1.1.0
peripheral-inspector|ecl|1.5.1
git-project-manager|fel|1.8.2
vscode-firefox-debug|fir|2.14.0
vscode-gedcom|flo|0.0.4
code-runner|for|0.12.2
terminal|for|0.0.10
copilot|Git|1.259.0
copilot-chat|Git|0.23.2
remotehub|Git|0.65.2024112101
todo-tree|Gru|0.0.226
vscode-git-cruise|Guo|0.2.4
vscode-test-explorer|hbe|2.22.1
vscode-git-tags|how|1.4.4
githd|hui|2.5.5
python-coding-conventions|igr|0.0.4
vscode-chat-gpt|ika|1.2.0
go-to-word|jak|0.2.6
latex-workshop|Jam|10.7.5
compilemql4|Kei|0.0.1
vscode-sshfs|Kel|1.26.1
vscode-python-test-adapter|lit|0.8.2
vscode-clangd|llv|0.1.33
v-snippets|lor|0.1.20
MagicPython|mag|1.1.0
cortex-debug|mar|1.12.1
start-git-bash|McC|1.2.1
debug-tracker-vscode|mcu|0.0.15
memory-view|mcu|0.0.25
peripheral-viewer|mcu|1.4.6
rtos-views|mcu|0.0.7
rainbow-csv|mec|3.14.0
json|Mee|0.1.2
git-graph|mhu|1.30.0
file-downloader|min|1.0.13
vscode-todo-parser|min|1.9.1
vs-deploy|mkl|14.0.0
vscode-remote-workspace|mkl|0.42.0
vscode-edge-devtools|ms-|2.1.6
debugpy|ms-|2024.14.0
isort|ms-|2023.10.1
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-ssh|ms-|0.116.1
cmake-tools|ms-|1.19.52
cpptools|ms-|1.23.4
cpptools-extension-pack|ms-|1.3.0
hexeditor|ms-|1.11.1
powershell|ms-|2025.0.0
remote-explorer|ms-|0.4.3
remote-repositories|ms-|0.42.0
test-adapter-converter|ms-|0.2.1
debugger-for-chrome|msj|4.13.0
go-doc|msy|1.0.1
sftp|Nat|1.16.3
mq4|ner|1.0.4
terminal-keeper|ngu|1.1.53
pyside2-vsc|Oll|0.1.0
material-icon-theme|PKi|5.18.0
geo-data-viewer|Ran|2.6.0
vscode-yaml|red|1.15.0
renesas-build-utilities|Ren|25.0.1
renesas-debug|Ren|25.2.1
bookmarksng|RK|0.0.24
gi|rub|0.2.11
vscode-gitflow|Ser|1.3.27
git-merger|sha|0.4.1
trailing-spaces|sha|0.4.1
profitrobots-mq5-snippets|sib|1.5.0
code-ca65|tlg|1.2.6
cmake|twx|0.0.17
vscode-terminal-here|Tyr|0.2.4
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-icons|vsc|12.10.0
quick-python-print|Wei|0.3.1
jinja|who|0.0.8
vscode-go-autotest|win|1.6.0
php-debug|xde|1.35.0
json|Zai|2.0.2
pyqt-integration|zho|0.2.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | triage-needed | low | Critical |
2,810,347,102 | flutter | [SwiftPM] Xcode build does not update the generated package's supported platforms | ### Problem
When using SwiftPM, the Flutter tool generates a `Package.swift` file that adds plugins to the app's builds. This `Package.swift`'s supported platforms should match the Xcode project's Minimum Deployments.
However, if in Xcode I update my Minimum Deployments and then do a build, the `Package.swift` file isn't updated to the latest supported platforms. See: https://github.com/flutter/flutter/issues/162072
### Repro
1. Create an app with a plugin:
```
flutter create my_app
cd my_app
flutter pub add url_launcher
```
2. Generate the project configuration with SwiftPM on:
```
flutter config --enable-swift-package-manager
flutter build ios --config-only
```
3. Open the project in Xcode:
```
open ios/Runner.xcworkspace
```
4. In Xcode, open Project Navigator > Package Dependencies > FlutterGeneratedPluginSwiftPackage.
Check the generated package's supported platforms:
```swift
let package = Package(
name: "FlutterGeneratedPluginSwiftPackage",
platforms: [
.iOS("12.0")
],
...
)
```
5. In Xcode, open Runner target > General > Minimum Deployments and set iOS to `15.0`
6. In Xcode, build the app
7. Check the generated package's supported platforms. It will still be `.iOS("12.0")` ❌
### Workaround
To fix this, build the app using the Flutter tool:
```
flutter build ios --config-only
```
This regenerates the `Package.swift` file and updates its supported platforms. | platform-ios,tool,platform-mac,P3,team-ios,triaged-ios | low | Minor |
2,810,371,060 | flutter | [Text Selection] Text selection height should be based on the line height, not the character height | ### Steps to reproduce
Using a Flutter TextField, when you add both text and emojis and then select a region, the selection height appears inconsistent between characters and emojis. This is because the selection is applied by changing the background color of the character itself, with a few extra pixels added. This approach leads to an issue when both emojis and text appear on the same line, causing a mismatch in their selection heights.
### Expected results
The selection height should be uniform, as is seen with native text selection behavior.
### Actual results
Non-uniform selection that looks non-native. See the attached picture: selection areas 1 and 2 should have the same height.
<img src="https://github.com/user-attachments/assets/99542c77-f534-40c8-ab0e-a6efb67728c5" width ="300">
### Code sample
<details open><summary>Code sample</summary>
```dart
Code is not needed
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
Not needed.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel [user-branch], 3.24.5, on macOS 14.2.1 23C71 darwin-arm64, locale en-IT)
! Flutter version 3.24.5 on channel [user-branch] at /Users/delfme/Development/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/setup.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] Connected device (4 available)
! Error: Browsing on the local area network for iPhone. Ensure the device is unlocked and attached with a cable or associated with the same local area
network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
```
</details>
| a: text input,platform-ios,framework,f: cupertino,a: quality,has reproducible steps,team-text-input,a: macros,found in release: 3.27,found in release: 3.28 | low | Critical |
2,810,398,557 | langchain | Tool calling broken for Gemini with legacy agent | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.agents.agent import AgentExecutor
from langchain.agents import create_tool_calling_agent
model = ChatGoogleGenerativeAI(model="gemini-2.0-flash-exp")
prompt = ChatPromptTemplate.from_messages(
[
("system", PROMPT_TEMPLATE),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
tools = [my_tool, ...]
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
response = agent_executor.invoke({"input": "call my_tool"})
```
### Error Message and Stack Trace (if applicable)
`langchain_google_genai.chat_models.ChatGoogleGenerativeAIError: Invalid argument provided to Gemini: 400 * GenerateContentRequest.contents[2].parts[0].function_response.name: Name cannot be empty.`
### Description
Tool calling with Gemini requires the tool name to be returned in the tool response, but for some reason `ToolMessage.name` is not being set.
`ToolMessage.additional_kwargs` is set to: `{"name": "my_tool"}`, but this isn't used when building the message Parts to send to Gemini.
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
> Python Version: 3.11.9 (main, Jun 20 2024, 15:53:48) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.2.11
> langchain_anthropic: 0.3.3
> langchain_google_genai: 2.0.9
> langchain_google_vertexai: 2.0.7
> langchain_openai: 0.3.1
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.43.1
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> filetype: 1.2.0
> google-cloud-aiplatform: 1.77.0
> google-cloud-storage: 2.19.0
> google-generativeai: 0.8.4
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langchain-mistralai: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.9
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available | 🤖:bug | low | Critical |
2,810,401,951 | next.js | Cache Poisoning and XSS attacks caused a 500 status code (QoS issue) | ### Link to the code that reproduces this issue
https://github.com/ale-grosselle/next-js-bug-500
### To Reproduce
# Fetch Request Example
1. Run the application locally:
```bash
npm run dev
```
2. Send the following fetch request (you can use Curl or send it using the browser console):
```javascript
fetch("http://localhost:3002/gssp", {
method: "GET",
headers: {
"x-now-route-matches": "-1"
}
})
.then((response) => {
console.log("Status Code:", response.status);
return response.text();
})
.then((data) => {
console.log("Response Body:", data);
})
.catch((error) => {
console.error("Error:", error);
});
```
3. Observe the 500 error in the response and the logs.
### Current vs. Expected behavior
<img width="1507" alt="Image" src="https://github.com/user-attachments/assets/fe7dc09f-da93-4c32-b2aa-c8d828fdc69a" />
<img width="1433" alt="Image" src="https://github.com/user-attachments/assets/16261bdc-68a0-471e-8361-99408cd18687" />
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Thu Sep 12 23:35:29 PDT 2024; root:xnu-10063.141.1.701.1~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.12.1
npm: 10.5.1
Yarn: 1.22.22
pnpm: 9.15.4
Relevant Packages:
next: 14.2.23 // An outdated version detected (latest is 15.1.6), upgrade is highly recommended!
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.7.3
Next.js Config:
output: N/A
⚠ An outdated version detected (latest is 15.1.6), upgrade is highly recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Pages Router, Runtime
### Which stage(s) are affected? (Select all that apply)
Other (Deployed)
### Additional context
Only version 14 (latest) has this issue, while version 15 does not have this problem.
On our side, it's a major problem because it generates many 500 errors when there's a potential attack, disrupts our QoS, and triggers the onColl (likely "on Collaboration" or "on Collect") alerts.
https://cyberpress.org/critical-vulnerability-in-next-js-framework-exposes-websites/ | Runtime,Pages Router | low | Critical |
2,810,412,684 | vscode | A developer command to show telemetry view | It is a bit unreasonable to have to set the log level to trace just to see telemetry in the output. Instead, there should be a developer command that shows telemetry output and sets the log level to trace for telemetry only, rather than for all outputs. | feature-request,telemetry | low | Minor |
2,810,429,807 | rust | Docs on safety of BufWriter are misleading | ### Location
https://github.com/rust-lang/rust/blob/8231e8599e238ff4e717639bd68c6abb8579fe8d/library/std/src/io/buffered/bufwriter.rs#L9
### Summary
This one is long winded. Sorry. A reader may decide this represents an actual code bug. I believe it is *at least* a documentation bug. I may just be rediscovering something every other programmer already knows.
```
use std::io::Write; // needed for write_all() / flush() below
let mut file = std::fs::File::create("test")?;
file.write_all(b"Hello, world!")?;
file.flush()?; // This does nothing. It is implemented as return Ok()
file.sync_all()?; // This issues an fsync
drop(file); // Close happens here. Docs correctly say that errors are ignored, sync_all *must* have happened to force the fsync.
```
In my scenario I am writing a file to an NFS mount from Linux. The syscalls are of the pattern open+write+write+...+close.
Close *can* fail, and that *can* prevent data from previous writes (which reported success) from being actually saved. I have no way of seeing the output of close. So I need to call sync_all, giving me a safe open+write+write+...+fsync+close.
https://github.com/rust-lang/rust/blob/8231e8599e238ff4e717639bd68c6abb8579fe8d/library/std/src/io/buffered/bufwriter.rs#L9
> /// It can be excessively inefficient to work directly with something that
/// implements [`Write`]. [...]
///
/// `BufWriter<W>` can improve the speed of programs that make *small* and
/// *repeated* write calls to the same file or network socket. [...]
///
/// It is critical to call [`flush`] before `BufWriter<W>` is dropped. Though
/// dropping will attempt to flush the contents of the buffer, any errors
/// that happen in the process of dropping will be ignored. Calling [`flush`]
/// ensures that the buffer is empty and thus dropping will not even attempt
/// file operations.
This really advertises itself for the use case
```
let mut buf = BufWriter::new(std::fs::File::create("test")?)
```
and then gives guidance on how to use it safely. This talks about errors during drop being lost (correct), says that calling flush is critical (true, but in a misleading way as the call to the inner file.flush is pointless). It does talk about *files*, not any old Write, steering me further towards madness.
Because I need to call
```
buf.into_inner()?.sync_all()?;
```
to avoid risk of silent data loss that is otherwise unobservable to the application.
I propose changing the wording at https://github.com/rust-lang/rust/blob/8231e8599e238ff4e717639bd68c6abb8579fe8d/library/std/src/io/buffered/bufwriter.rs#L21 to indicate that flush is necessary _but not sufficient_, and perhaps also adding the use of into_inner() as an example. | A-docs,T-libs | low | Critical |
2,810,510,773 | tauri | [bug] Error: 1/0 error Operation not permitted (os error 1) on iOS | ### Describe the bug
Trying to generate embeddings with embedanything in the Rust backend. Works fine on MacOS, throws
`Error: 1/0 error Operation not permitted (os error 1)`
on iOS.

### Reproduction
Test repo with instructions: https://github.com/do-me/tauri-embedanything-ios/tree/main
### Expected behavior
All permissions are given, so it's supposed to work.
### Full `tauri info` output
```text
cargo tauri info
[✔] Environment
- OS: Mac OS 15.1.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.86.0-nightly (419b3e2d3 2025-01-15)
✔ cargo: 1.86.0-nightly (088d49608 2025-01-10)
✔ rustup: 1.27.1 (2024-04-24)
✔ Rust toolchain: nightly-aarch64-apple-darwin (environment override by RUSTUP_TOOLCHAIN)
- node: 20.10.0
- npm: 10.2.3
- deno: deno 2.1.6
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- tauri-cli 🦀: 2.2.4
[-] Plugins
- tauri-plugin-log 🦀: 2.2.0
- tauri-plugin-os 🦀: 2.2.0
- tauri-plugin-http 🦀: 2.2.0
- tauri-plugin-opener 🦀: 2.2.5
- tauri-plugin-fs 🦀: 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../src-www
- devUrl: http://0.0.0.0:3000/
[-] iOS
- Developer Teams: Dominik .... (ID: ...)
```
### Stack trace
```text
App installed:
• bundleID: app.somethin.ggggg
• installationURL: file:///private/var/containers/Bundle/Application/70694BF9-C4BE-4A64-BC7D-10FBB3836550/Offline%20OSM.app/
• launchServicesIdentifier: unknown
• databaseUUID: A409327F-6C64-4235-A282-608D7112A156
• databaseSequenceNumber: 2964
• options:
23:44:20 Enabling developer disk image services.
23:44:20 Acquired usage assertion.
Launched application with app.somethin.ggggg bundle identifier.
Info Watching /Users/dome/work/general/tauri/tauri-offline-maps/src-tauri for changes...
[connected]
```
### Additional context
_No response_ | type: bug,status: needs triage,platform: iOS | low | Critical |
2,810,522,193 | next.js | next lint fails when using ESLint v8 flat config | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/xenodochial-voice-lfgj88?workspaceId=ws_XCEs1GT1FEw6tFStZG4Cnd
### To Reproduce
1. Open the terminal
2. Run linting using Next.js and observe that it fails to run, indicating there are invalid ESLint config options
```
$ pnpm exec next lint
Invalid Options:
- Unknown options: useEslintrc, extensions, resolvePluginsRelativeTo, rulePaths, ignorePath, reportUnusedDisableDirectives
- 'extensions' has been removed.
- 'resolvePluginsRelativeTo' has been removed.
- 'ignorePath' has been removed.
- 'rulePaths' has been removed. Please define your rules using plugins.
- 'reportUnusedDisableDirectives' has been removed. Please use the 'overrideConfig.linterOptions.reportUnusedDisableDirectives' option instead.
```
3. Run linting using ESLint and observe that it runs successfully
```
$ pnpm exec eslint .
```
### Current vs. Expected behavior
### Current behavior
`next lint` does not work with an ESLint flat config (e.g., `eslint.config.js`) on `eslint@^8.21.0`.
It does work successfully with `eslint@^9`.
### Expected behavior
It is expected that `next lint` runs successfully with a valid ESLint flat config on `eslint@^8` and `eslint@^9`.
ESLint introduced the flat config in [v8.21.0](https://eslint.org/blog/2022/08/eslint-v8.21.0-released/) and it is the default format in v9.x.
Although ESLint v8.x has reached end-of-life, some widely used shared configs have not yet migrated to v9.x (e.g., [eslint-config-airbnb](https://github.com/airbnb/javascript/issues/2961)). Flat config compatibility with `next lint` and ESLint v8.x allows codebases tied to ESLint v8.x to avoid or move away from the deprecated eslintrc format.
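For context, a flat config can be as small as the following sketch (illustrative, not taken from the reproduction); `eslint .` accepts configs like this on the ESLint 8.57.1 listed in the environment info below, while `next lint` rejects the project with the "Invalid Options" error shown above:
```js
// eslint.config.js: a minimal flat config (illustrative)
module.exports = [
  {
    files: ['**/*.{js,jsx,ts,tsx}'],
    rules: {
      'no-unused-vars': 'warn',
    },
  },
];
```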
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4242
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.2.0-canary.25 // Latest available version is detected (15.2.0-canary.25).
eslint: 8.57.1
eslint-config-next: 15.1.6
react: 19.0.0
react-dom: 19.0.0
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Linting
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
I believe the culprit is the following ESLint v9.x version check:
https://github.com/vercel/next.js/blob/v15.2.0-canary.25/packages/next/src/lib/eslint/runLintCheck.ts#L178-L193
Patching it to accept ESLint versions from v8.21.0 onwards worked in my case:
```diff
- if (_semver.default.gte(eslintVersion, '9.0.0') && useFlatConfig) {
+ if (_semver.default.gte(eslintVersion, '8.21.0') && useFlatConfig) {
for (const option of [
'useEslintrc',
'extensions',
``` | Linting,linear: next | low | Minor |
2,810,539,897 | next.js | console.error in catch block triggers ErrorBoundary in strict mode when using Pages Router | ### Link to the code that reproduces this issue
https://github.com/lukasb/error-repro
### To Reproduce
1. npm run dev
2. navigate to app
### Current vs. Expected behavior
- Pages router, current: ErrorBoundary is shown
- Pages router, expected: ErrorBoundary is not shown, since the error was caught
- App router, current: "1 error" badge shown
- App router, expected: no badge shown, since the error was caught
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.18.1
npm: 10.8.2
Yarn: 1.22.5
pnpm: 9.4.0
Relevant Packages:
next: 15.1.4 // There is a newer version (15.1.6) available, upgrade recommended!
eslint-config-next: 15.1.4
react: 18.3.1
react-dom: 18.3.1
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Pages Router
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
see investigation and patches by @icyJoseph here: https://github.com/vercel/next.js/discussions/75060#discussioncomment-11935972 | Pages Router | low | Critical |
2,810,542,687 | storybook | [Bug]: experimental test addon ^8.5.0 prevents custom vitest commands being registered | ### Describe the bug
The changes in 8.5.0 to the Vite plugin that updates the Vitest config overwrite the `commands` setting applied in a workspace file.
There is a workaround, which is to register a plugin after the `storybookTest` plugin and inject the commands there, but this shouldn't be required; the merged config should retain the commands that were already registered:
```ts
import { storybookTest } from "@storybook/experimental-addon-test/vitest-plugin";
import { defineWorkspace } from "vitest/config";
import * as commands from "./.storybook/vitest.commands";
export default defineWorkspace([
"vite.config.ts",
{
extends: "vite.config.ts",
plugins: [
storybookTest({ configDir: ".storybook" }),
// having to register my commands here because merged config has them removed.
{
enforce: "pre",
name: "patch-vitest-browser-commands",
config(config) {
config.test.browser.commands = {
...config.test.browser.commands,
...commands,
};
return config;
},
},
],
test: {
//...
browser: {
// these are ignored/removed from the merged config
commands
}
}
}
]);
```
### Reproduction link
na
### Reproduction steps
1. register commands in vitest browser config object
2. note these commands are not available during tests
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.2
CPU: (16) arm64 Apple M4 Max
Shell: 5.9 - /bin/zsh
Binaries:
Node: 23.6.0 - ~/.nvm/versions/node/v23.6.0/bin/node
Yarn: 1.22.22 - ~/.nvm/versions/node/v23.6.0/bin/yarn <----- active
npm: 10.9.2 - ~/.nvm/versions/node/v23.6.0/bin/npm
Browsers:
Chrome: 131.0.6778.265
Safari: 18.2
npmPackages:
@storybook/addon-a11y: ^8.5.1 => 8.5.1
@storybook/addon-coverage: ^1.0.4 => 1.0.5
@storybook/addon-essentials: ^8.5.1 => 8.5.1
@storybook/addon-links: ^8.5.1 => 8.5.1
@storybook/blocks: ^8.5.1 => 8.5.1
@storybook/builder-vite: ^8.5.1 => 8.5.1
@storybook/experimental-addon-test: ^8.5.1 => 8.5.1
@storybook/manager-api: ^8.5.1 => 8.5.1
@storybook/react: ^8.5.1 => 8.5.1
@storybook/react-vite: ^8.5.1 => 8.5.1
@storybook/test: ^8.5.1 => 8.5.1
@storybook/theming: ^8.5.1 => 8.5.1
eslint-plugin-storybook: ^0.11.1 => 0.11.2
storybook: ^8.5.1 => 8.5.1
storybook-addon-data-theme-switcher: ^1.0.0 => 1.0.0
storybook-addon-tag-badges: ^1.2.1 => 1.4.0
```
### Additional context
_No response_ | bug,addon: test | low | Critical |
2,810,547,015 | flutter | CupertinoSheetRoute reacts to touch events while opening and closing | The sheet reacts to touches (dragging up or down) while it is animating open or closed, which isn't the correct behaviour on native iOS. In this case it causes unexpected behaviour when dragging the sheet: it becomes impossible to close it, because if you drag it the sheet becomes sticky and stops wherever the drag ends, and if you drag it while it is opening it causes some visual stutters. I have provided videos that visualize the problem.
Both recordings are from Flutter.
This video shows what happens when trying to drag the sheet while it's closing:
https://github.com/user-attachments/assets/af1a30a9-d955-4a88-b314-cf6c59cc736d
This one shows what happens when trying to drag the sheet while it's opening:
https://github.com/user-attachments/assets/7745c40f-a858-43cc-8d43-07e1cf30fe60
<details>
<summary>Flutter doctor -v output:</summary>
```
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale
en-PS)
• Flutter version 3.27.1 on channel stable at /Users/maherr/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (6 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/maherr/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /Users/maherr/Library/Android/sdk
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2023.3.2)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.96.4)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (6 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 •
android-arm64 • Android 15 (API 35) (emulator)
• iPhone (mobile) • 00008110-0011384834E2401E • ios
• iOS 18.0.1 22A3370
• iPhone (mobile) • 00008110-000208513621801E • ios
• iOS 18.2 22C152
• macOS (desktop) • macos • darwin-arm64
• macOS 15.2 24C101 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin
• macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome •
web-javascript • Google Chrome 131.0.6778.265
! Error: Browsing on the local area network for Dalia’s iPhone. Ensure the
device is unlocked and attached with a cable or associated with the same
local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
! Error: Browsing on the local area network for iPhone. Ensure the device is
unlocked and attached with a cable or associated with the same local area
network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
<details>
<summary>Reproducible code</summary>
```dart
import 'package:cupertino_onboarding/cupertino_onboarding.dart';
import 'package:flutter/cupertino.dart';
import '../Widgets/cupertino_sheet.dart';
void main() {
runApp(CupertinoApp(
home: CupertinoOnboardingPage(),
));
}
class CupertinoOnboardingPage extends StatefulWidget {
const CupertinoOnboardingPage({super.key});
@override
State<CupertinoOnboardingPage> createState() => _CupertinoOnboardingPageState();
}
class _CupertinoOnboardingPageState extends State<CupertinoOnboardingPage> {
@override
Widget build(BuildContext context) {
return CupertinoPageScaffold(
child: CustomScrollView(
slivers: [
CupertinoSliverNavigationBar(
previousPageTitle: 'Back',
// transitionBetweenRoutes: false,
stretch: true,
largeTitle: Text('Onboarding'),
),
SliverFillRemaining(
fillOverscroll: false,
hasScrollBody: false,
child: Center(
child: Column(
spacing: 20,
mainAxisSize: MainAxisSize.min,
children: [
CupertinoButton(
onPressed: () {
showCupertinoSheet(
context: context,
pageBuilder: (context) {
return CalendarOnboarding();
},
);
},
child: Text('Show onboarding'),
),
],
),
),
)
],
),
);
}
}
class CalendarOnboarding extends StatelessWidget {
const CalendarOnboarding({
super.key,
});
@override
Widget build(BuildContext context) {
return CupertinoOnboarding(
widgetAboveBottomButton: CupertinoButton(
onPressed: () {},
child: Text(
'About Privacy & Policy',
style: TextStyle(
color: CupertinoColors.systemRed.resolveFrom(context),
),
),
),
bottomButtonPadding: EdgeInsets.only(left: 30, right: 30, bottom: 70),
bottomButtonColor: CupertinoColors.systemRed.resolveFrom(context),
onPressedOnLastPage: () => Navigator.pop(context),
pages: [
WhatsNewPage(
titleTopIndent: 40,
titleFlex: 4,
title: const Text("What's New in Calendar"),
features: [
WhatsNewFeature(
icon: Icon(
CupertinoIcons.mail,
color: CupertinoColors.systemRed.resolveFrom(context),
),
title: const Text('Found Events'),
description: const Text(
'Siri suggests events found in Mail, Messages, and Safari, so you can add them easily, such as flight reservations and hotel bookings.',
),
),
WhatsNewFeature(
icon: Icon(
CupertinoIcons.time,
color: CupertinoColors.systemRed.resolveFrom(context),
),
title: const Text('Time to Leave'),
description: const Text(
"Calendar uses Apple Maps to look up locations, traffic conditions, and transit options to tell you when it's time to leave.",
),
),
WhatsNewFeature(
icon: Icon(
CupertinoIcons.location,
color: CupertinoColors.systemRed.resolveFrom(context),
),
title: const Text('Location Suggestions'),
description: const Text(
'Calendar suggests locations based on your past events and significant locations.',
),
),
],
),
],
);
}
}
```
</details>
| waiting for customer response,in triage | low | Critical |
2,810,547,962 | godot | RichTextLabel has no property for `line_spacing`, just `line_separation` | ### Tested versions
- Reproducible in `v4.4.beta1.mono.official [d33da79d3]`
### System information
macOS Sonoma 14.6.1
### Issue description
`RichTextLabel` has no `line_spacing` property, which would set the space only _between_ lines. Instead, it has `line_separation`, which adds that amount of space after _each_ line. E.g. in this picture, there is a lot of blank space below "Line 3":
<img width="308" alt="Image" src="https://github.com/user-attachments/assets/8f9eb18d-4894-4847-9818-8b0dad135af8" />
The best workaround I could find is making a `FontVariation` that has `spacing_bottom` set to a negative value while keeping `line_separation` at a positive value. I believe this is similar to specifying the BBCode `[font bt=5]` but in such a way that you don't need to modify every `RichTextLabel` in your game.
### Steps to reproduce
- Make a `RichTextLabel`
- Increase its `line_separation` property to make it more obvious
### Minimal reproduction project (MRP)
[richtextlabelspacingrepro.zip](https://github.com/user-attachments/files/18543572/richtextlabelspacingrepro.zip) | enhancement,feature proposal,discussion,topic:gui | low | Minor |
2,810,556,419 | flutter | CupertinoSliverNavigationBar/CupertinoNavigationBar bottom is not displayed during nav bar flying hero transitions | ### Use case
In the flying hero transition between two pages with CupertinoSliverNavigationBar/CupertinoNavigationBar, the bottom widget is not displayed:
https://github.com/user-attachments/assets/73854a78-b9de-4fd9-b684-c1f2883d2b07
### Proposal
The bottom widget should sync up with the large title during the hero transition. | framework,f: cupertino,c: proposal,team-design | low | Minor |
2,810,559,437 | langchain | Callback error when using a `return_direct` tool and `RunnableWithMessageHistory` | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from pydantic import BaseModel
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.tools import BaseTool
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain.agents.agent import AgentExecutor
from langchain.agents import create_tool_calling_agent
# Minimal stand-ins for values defined elsewhere in the original code.
PROMPT_TEMPLATE = "You are a helpful assistant."  # placeholder; the real prompt is not included in the report
class LLMResponse(BaseModel):
    message: str
    end_conversation: bool
model = ChatGoogleGenerativeAI(model="gemini-2.0-flash-exp")
prompt = ChatPromptTemplate.from_messages(
[
("system", PROMPT_TEMPLATE),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
class StructuredResponseTool(BaseTool):
"""
Tool that can be used by the LLM to provide a structured response to the user.
"""
name: str = "respond_to_user"
description: str = (
"Always use this tool to provide response to the user, and to indicate whether the conversation should end. "
"The `message` content should be what you would normally respond with in a conversation. "
"The `end_conversation` flag should be set to True if the conversation should end after this response."
)
args_schema: type[BaseModel] = LLMResponse
return_direct: bool = True # Causes the tool result to be returned directly to the user
def _run(self, message: str, end_conversation: bool) -> LLMResponse:
return LLMResponse(message=message, end_conversation=end_conversation)
tools = [StructuredResponseTool()]
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# Use a single message history (no need for multiple threads)
message_history = ChatMessageHistory()
def get_session_history(_) -> BaseChatMessageHistory:
return message_history
agent_with_chat_history = RunnableWithMessageHistory(
agent_executor,
get_session_history,
input_messages_key="input",
history_messages_key="chat_history",
)
response = agent_with_chat_history.invoke(
    {"input": "Hello"},
    # session_id is required by RunnableWithMessageHistory's default config
    config={"configurable": {"session_id": "test"}},
)
```
### Error Message and Stack Trace (if applicable)
`Error in RootListenersTracer.on_chain_end callback: ValueError("Expected str, BaseMessage, List[BaseMessage], or Tuple[BaseMessage]. Got message='Hello, welcome to Le Bistro! Do you have any dietary restrictions?' end_conversation=False.")`
### Description
The `RootListenersTracer.on_chain_end` callback reports an error when a tool with `return_direct=True` returns a non-`str` value and the agent is wrapped in `RunnableWithMessageHistory`.
This is due to how `RunnableWithMessageHistory` [validates the message history](https://github.com/langchain-ai/langchain/blob/dbb6b7b103d9c32cea46d3848839a4c9cbb493c3/libs/core/langchain_core/runnables/history.py#L486). A possible workaround is sketched below.
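One workaround, not part of the original report and only a sketch (the `StringResponseTool` name is hypothetical, and `LLMResponse` mirrors the schema from the example above), is to have the `return_direct` tool serialize its structured result to a JSON string, so the value that ends up in the chat history is a plain `str` that passes the validation:
```python
from pydantic import BaseModel
from langchain_core.tools import BaseTool

class LLMResponse(BaseModel):
    message: str
    end_conversation: bool

class StringResponseTool(BaseTool):
    """Variant of the structured tool that returns a JSON string instead of a model instance."""
    name: str = "respond_to_user"
    description: str = "Respond to the user and indicate whether the conversation should end."
    args_schema: type[BaseModel] = LLMResponse
    return_direct: bool = True

    def _run(self, message: str, end_conversation: bool) -> str:
        # A plain str is accepted by RunnableWithMessageHistory's history validation;
        # callers can recover the object with LLMResponse.model_validate_json(...).
        return LLMResponse(message=message, end_conversation=end_conversation).model_dump_json()
```
The caller then parses the returned string back into an `LLMResponse` instead of receiving the model instance directly.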
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
> Python Version: 3.11.9 (main, Jun 20 2024, 15:53:48) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.2.11
> langchain_anthropic: 0.3.3
> langchain_google_genai: 2.0.9
> langchain_google_vertexai: 2.0.7
> langchain_openai: 0.3.1
> langchain_text_splitters: 0.3.5
> langgraph_sdk: 0.1.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.43.1
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> filetype: 1.2.0
> google-cloud-aiplatform: 1.77.0
> google-cloud-storage: 2.19.0
> google-generativeai: 0.8.4
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langchain-mistralai: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.9
> orjson: 3.10.15
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available | 🤖:bug | low | Critical |
2,810,585,740 | flutter | [Beta] flutter upgrade has broken flutter command | me@linux-dev:~$ flutter version
cat: /home/me/flutter/bin/internal/engine.version: No such file or directory
me@linux-dev:~$ flutter upgrade
cat: /home/me/flutter/bin/internal/engine.version: No such file or directory
me@linux-dev:~$ dart --version
cat: /home/me/flutter/bin/internal/engine.version: No such file or directory
I was on the beta channel. I saw an upgrade banner earlier today and ran "flutter upgrade"; now neither dart nor flutter works, and they only print the text noted above. | tool,has reproducible steps,team-release,workaround available,found in release: 3.28 | low | Critical |
2,810,605,670 | react-native | On iOS, TextInput stops responding to user input when maxLength > 5.5 quadrillion | ### Description
In iOS, the TextInput component stops responding to user input if `maxLength` is greater than 5.50 quadrillion (5500000000000000). The exact number is somewhere between 5.49 and 5.5 quadrillion, but I'll leave that as an exercise for the reader.
### Steps to reproduce
1. Add TextInput component
2. Set `maxLength` prop, greater than 5500000000000000
3. Attempt to use TextInput field to enter text
### React Native Version
0.76.6
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.0.1
CPU: (12) arm64 Apple M2 Pro
Memory: 78.22 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.12.0
path: ~/.local/share/mise/installs/node/22.12.0/bin/node
Yarn:
version: 4.6.0
path: ~/.local/share/mise/installs/node/22.12.0/bin/yarn
npm:
version: 10.9.0
path: ~/.local/share/mise/installs/node/22.12.0/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.16.2
path: ~/.local/share/mise/installs/ruby/3.3.6/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12700392
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: ~/.local/share/mise/installs/java/zulu-17.54.21/bin/javac
Ruby:
version: 3.3.6
path: ~/.local/share/mise/installs/ruby/3.3.6/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 16.0.2
wanted: ^16.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.6
wanted: 0.76.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://snack.expo.dev/9RnT_dmqTwVK8of5iVUJ-
### Screenshots and Videos
_No response_ | Resolution: PR Submitted,Component: TextInput | low | Minor |
2,810,606,894 | ollama | can not run this on intel Xe gpu | ### What is the issue?
## Hardware environment
- CPU: Intel i5-1240P / AMD
- GPU: Intel Iris Xe / AMD Radeon
- Memory: 16GB DDR5
- OS: Windows 11
## Reproduction steps
1. setx OLLAMA_DEBUG 1
2. ollama serve 2> debug.log
3. ollama run deepseek-r1:7b
4. Then the error appeared (see the attached debug log)
[debug.txt](https://github.com/user-attachments/files/18544023/debug.txt)
### OS
Windows
### GPU
Intel
### CPU
Intel
### Ollama version
ollama version is 0.5.1-ipexllm-20250123 Warning: client version is 0.5.7 | bug | low | Critical |
2,810,620,860 | ollama | running deepseek r1 671b on 64GB / 128GB ram mac gives `Error: llama runner process has terminated: signal: killed` | ### What is the issue?
After waiting all day for the model to download, `ollama run deepseek-r1:671b` fails to run with the error `Error: llama runner process has terminated: signal: killed`.
I can run the deepseek-r1:70b llama model just fine.
I'm running a MacBook Pro with an M3 Pro and 64GB of RAM, so I'm assuming it's failing due to lack of memory?
- How do I know the real memory requirements for a model? I don't think it's obvious on the Ollama page. (One rough way to check is sketched below.)
- Is there any way to fix this at all? I tried it on my 128GB M1 Ultra Mac Studio and got the same error. I'd really love to run this locally, so I would appreciate any help!
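As a rough gauge (not from the original report, just a sketch that assumes Ollama's default local API at `http://localhost:11434`), the on-disk size of each pulled model can be read from the `/api/tags` endpoint; the weights' size is only a lower bound, since the runtime also needs memory for the KV cache / context window on top of it:
```python
# List locally pulled models and their on-disk sizes via Ollama's REST API.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.load(resp)["models"]

for m in models:
    # "size" is the download size in bytes; actual RAM needed at runtime is higher.
    print(f"{m['name']}: {m['size'] / 1024**3:.1f} GiB on disk")
```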
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7 | bug | low | Critical |
2,810,635,118 | stable-diffusion-webui | [Bug]: random crash with "press any key to continue" | ### Checklist
- [x] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I get a random crash with no error other than "Press any key to continue..." appearing in the console. It seems to happen regardless of whether extensions are active, and at completely random times (even with zero activity in the program), but much more often during startup or generation.
### Steps to reproduce the problem
The crash happens at random; I have not found reliable steps to reproduce it.
### What should have happened?
It shouldn't crash.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
[sysinfo-2025-01-25-00-25.json](https://github.com/user-attachments/files/18544173/sysinfo-2025-01-25-00-25.json)
### Console logs
```Shell
venv "J:\Ai\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --xformers --ckpt-dir 'J:\Ai\checkpoint' --embeddings-dir 'J:\Ai\embeddings' --autolaunch
*** "Disable all extensions" option was set, will only load built-in extensions ***
Press any key to continue . . .
```
### Additional information
_No response_ | bug-report | low | Critical |
2,810,635,232 | electron | Custom protocols failing to register | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10
### What arch are you using?
x64
### Last Known Working Electron version
31.7.5
### Expected Behavior
WebContentsViews are able to load files through custom protocols that are successfully registered
### Actual Behavior
Our custom protocols are failing to register, so when we attempt to load a file through one of them, the request gets sent to the OS URL handlers and we cannot load it in our app.

### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/windows,bug :beetle:,33-x-y | low | Critical |
2,810,644,168 | go | proposal: cmd/vet: warn about cases where err is set but not used | ### Proposal Details
While working on an experimental tool for testing a new error handling syntax (https://go.dev/cl/643996), I came across three different instances of incorrect handling of `err` in Go code maintained by the Go team:
* https://go.dev/cl/643995
* https://go.dev/cl/643259
* https://go.dev/cl/644155
These cases all took the form
```Go
// Some declaration and use of a variable named "err" of type "error",
// followed by:
err = F()
// More code that does not look at "err".
```
In other words, the code was written to collect an error value, but the error value was never tested or used.
The Go compiler will warn about cases in which an err variable is set but never used, following the spec implementation restriction "A compiler may make it illegal to declare a variable inside a function body if the variable is never used." However, in these cases the variable is in fact used. It's just not used after the final case in which it is set.
I think it would be difficult for us to tighten up that implementation restriction to say something like "A compiler may make it illegal to assign a value to a variable and then never use that variable again." I think that would likely break a lot of existing working Go code.
But I think that for the subset of cases in which a variable of type `error` is set and then never checked, it would be reasonable for vet to produce an error. Those cases are almost certainly mistakes. And even in the likely rare cases when they are not mistakes, the code would be clearer if it assigned to error result to `_` rather than to `err`.
(I discovered these cases because there is no simple way to convert them to the proposed error handling syntax. Simple attempts to do so produce a variable that is set and never used again, triggering the compiler error.)
CC @adonovan | Proposal | low | Critical |
2,810,654,191 | PowerToys | Custom CSS Support for Markdown Preview in File Explorer Add-On | ### Description of the new feature / enhancement
Please provide a means to inject a custom stylesheet into the markdown preview that can be used system wide for all previewed markdown files.
This will allow users to override and augment the layout and appearance of previews with personalized and in some cases more accessible styles.
In **PowerToys Settings** > **File Explorer add-ons**, the **Scalable Vector Graphics** and **Source code files (Monaco)** previewers have options that allow for some preview customization; however, the **Markdown** previewer does not. Please consider adding a text box option that allows the user to define a custom stylesheet. The previewer could then use the contents of that text box as the body of a `<style>` block which can be injected into markdown file previews.
Users can already define custom styles on a per-file basis by including such a `<style>` block in each markdown file. That's not a scalable solution, though. We need the ability to set a custom stylesheet once and allow it to be used by all markdown files previewed using the PowerToys previewer.
### Scenario when this would be used?
Whenever markdown files are previewed.
**Accessibility Enhancements:** Custom stylesheets can be tailored to improve accessibility for users with specific needs, such as adjusting font sizes, colors, and contrast to better suit visually impaired users. This ensures that markdown documents are inclusive and accessible to a wider audience. Benefits include:
- **Improved Readability**: Custom stylesheets allow you to adjust font sizes, line heights, and spacing to make the text more readable for users with visual impairments. For instance, increasing font size can be crucial for users with low vision.
- **Color Contrast**: Ensuring sufficient color contrast is vital for readability, especially for users with color blindness or other visual challenges. Custom stylesheets enable you to choose color schemes that meet accessibility guidelines, making content more accessible to a broader audience.
**Enhanced Productivity:** By applying custom stylesheets, users can streamline their workflow by instantly seeing their markdown content rendered in their preferred style, reducing the need to manually format documents for different outputs. For example, _Visual Studio Code_ allows for such customizations, enhancing the markdown editing experience, but requiring users to open markdown documents in tools such as Visual Studio Code to view them defeats the point of the preview. The preview pane in Windows Explorer is a great way to browse and view documents without having to open them in a third-party tool.
### Supporting information
- [Custom Markdown Preview Styles - Foam](https://foambubble.github.io/foam/user/features/custom-markdown-preview-styles.html?form=MG0AV3)
Just ask Bing "[benefits of customizing markdown previews with stylesheets](https://www.bing.com/search?q=benefits+of+customizing+markdown+previews+with+stylesheets)" and you'll get the following response:
> ### Customizing Markdown previews with stylesheets in VSCode has the following benefits:
> **Improved readability:** You can choose themes that highlight syntax.
> **Customization:** You can create unique styles for your files.
> **Enhanced visual appeal:** Styling Markdown documents with CSS makes your content more visually appealing.
| Needs-Triage | low | Minor |
2,810,654,479 | PowerToys | **Shortcuts for Workspaces** | ### Description of the new feature / enhancement
I want an easier way to access my workspaces. Once I’ve set up a workspace, I don’t want to open the editor or double-click a link file every time. Instead, I’d like a window to appear when I use a shortcut. In this window, I could quickly select my workspace by clicking on it or pressing a corresponding number, for example.
### Scenario when this would be used?
**Scenario for Using Shortcuts for Workspaces**
Imagine you're a developer or designer working on multiple projects simultaneously, such as:
- Project A: Frontend development for a website.
- Project B: Backend API for a mobile app.
- Project C: Personal blog writing.
Each project has its own workspace with relevant files, configurations, and tools. Switching between them is frequent, and efficiency is key.
**How the Shortcut Improves Workflow**
1. You've already set up your workspaces for each project.
2. Instead of opening the editor manually, navigating through menus, or searching for a link file to launch the workspace, you press a keyboard shortcut (e.g., Ctrl+Alt+W).
3. A small window pops up listing your saved workspaces:
   - 1: Frontend Website
   - 2: Backend API
   - 3: Blog Writing
4. You quickly press 1, and the workspace for Project A opens immediately, ready for use.
**Benefits in This Scenario**
- Saves time by reducing repetitive actions.
- Keeps focus intact by minimizing interruptions.
- Streamlines switching between projects, especially in fast-paced or multitasking environments.
### Supporting information
https://github.com/microsoft/PowerToys/issues/34597 | Needs-Triage | low | Minor |
2,810,670,536 | go | x/build/perfdata/app: unrecognized failures | ```
#!watchflakes
default <- pkg == "golang.org/x/build/perfdata/app" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8725013058942459777)):
FAIL golang.org/x/build/perfdata/app [build failed]
# github.com/mattn/go-sqlite3
sqlite3-binding.c:46632:15: error: use of undeclared identifier 'winFile'
sqlite3-binding.c:46633:4: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46633:19: error: expected expression
sqlite3-binding.c:46633:11: error: use of undeclared identifier 'winFile'
sqlite3-binding.c:46640:17: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46641:19: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46642:2: error: call to undeclared function 'winMapfile'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
sqlite3-binding.c:46642:2: note: did you mean 'unixMapfile'?
sqlite3-binding.c:38820:12: note: 'unixMapfile' declared here
sqlite3-binding.c:46642:13: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46649:30: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46652:7: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46653:7: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46673:64: error: use of undeclared identifier 'winFile'
sqlite3-binding.c:46673:73: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46674:15: error: expected expression
sqlite3-binding.c:46674:7: error: use of undeclared identifier 'winFile'
sqlite3-binding.c:46688:10: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46696:7: error: call to undeclared function 'winUnmapfile'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
sqlite3-binding.c:46696:7: note: did you mean 'unixUnmapfile'?
sqlite3-binding.c:38703:13: note: 'unixUnmapfile' declared here
sqlite3-binding.c:46697:10: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46709:5: error: use of undeclared identifier 'winClose'
fatal error: too many errors emitted, stopping now [-ferror-limit=]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| Builders,NeedsInvestigation | low | Critical |
2,810,677,890 | pytorch | compiling for rocm gfx1010, getting cuda errors | ### 🐛 Describe the bug
I am compiling for ROCm, but I am getting a CUDA-related error.
The same error has been discussed in https://github.com/pytorch/pytorch/issues/108344, but the user said it was solved by using `USE_ROCM` and did not elaborate further.
```
export VERBOSE=1
export PYTORCH_ROCM_ARCH="gfx1010"
export USE_ROCM=1
export ROCM_PATH=/usr
export USE_CUDA=0
export USE_XPU=0
```
```
[6372/7445] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o
FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o
/usr/bin/ccache /usr/bin/c++ -DAT_PER_OPERATOR_HEADERS -DBUILD_ONEDNN_GRAPH -DCAFFE2_BUILD_MAIN_LIB -DCPUINFO_SUPPORTED_PLATFORM=1 -DFLASHATTENTION_DISABLE_ALIBI -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_RPC -DUSE_TENSORPIPE -DXNN_LOG_LEVEL=0 -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/home/user/git/pytorch/build/aten/src -I/home/user/git/pytorch/aten/src -I/home/user/git/pytorch/build -I/home/user/git/pytorch -I/home/user/git/pytorch/cmake/../third_party/benchmark/include -I/home/user/git/pytorch/third_party/onnx -I/home/user/git/pytorch/build/third_party/onnx -I/home/user/git/pytorch/nlohmann -I/home/user/git/pytorch/torch/csrc/api -I/home/user/git/pytorch/torch/csrc/api/include -I/home/user/git/pytorch/caffe2/aten/src/TH -I/home/user/git/pytorch/build/caffe2/aten/src/TH -I/home/user/git/pytorch/build/caffe2/aten/src -I/home/user/git/pytorch/build/caffe2/../aten/src -I/home/user/git/pytorch/torch/csrc -I/home/user/git/pytorch/third_party/miniz-3.0.2 -I/home/user/git/pytorch/third_party/kineto/libkineto/include -I/home/user/git/pytorch/third_party/kineto/libkineto/src -I/home/user/git/pytorch/third_party/cpp-httplib -I/home/user/git/pytorch/aten/src/ATen/.. -I/home/user/git/pytorch/third_party/FXdiv/include -I/home/user/git/pytorch/c10/.. -I/home/user/git/pytorch/third_party/pthreadpool/include -I/home/user/git/pytorch/third_party/cpuinfo/include -I/home/user/git/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/include -I/home/user/git/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/src -I/home/user/git/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/deps/clog/include -I/home/user/git/pytorch/third_party/NNPACK/include -I/home/user/git/pytorch/third_party/fbgemm/include -I/home/user/git/pytorch/third_party/fbgemm -I/home/user/git/pytorch/third_party/fbgemm/third_party/asmjit/src -I/home/user/git/pytorch/third_party/ittapi/src/ittnotify -I/home/user/git/pytorch/third_party/FP16/include -I/home/user/git/pytorch/third_party/tensorpipe -I/home/user/git/pytorch/build/third_party/tensorpipe -I/home/user/git/pytorch/third_party/tensorpipe/third_party/libnop/include -I/home/user/git/pytorch/third_party/fmt/include -I/home/user/git/pytorch/build/third_party/ideep/mkl-dnn/include -I/home/user/git/pytorch/third_party/ideep/mkl-dnn/src/../include -I/home/user/git/pytorch/third_party/flatbuffers/include -isystem /home/user/git/pytorch/build/third_party/gloo -isystem /home/user/git/pytorch/cmake/../third_party/gloo -isystem /home/user/git/pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /home/user/git/pytorch/cmake/../third_party/googletest/googlemock/include -isystem /home/user/git/pytorch/cmake/../third_party/googletest/googletest/include -isystem /home/user/git/pytorch/third_party/protobuf/src -isystem /home/user/git/pytorch/third_party/XNNPACK/include -isystem /home/user/git/pytorch/third_party/ittapi/include -isystem /home/user/git/pytorch/cmake/../third_party/eigen -isystem /home/user/git/pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /home/user/git/pytorch/third_party/ideep/include -isystem /home/user/git/pytorch/INTERFACE -isystem /home/user/git/pytorch/third_party/nlohmann/include -isystem /home/user/git/pytorch/build/include 
-D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-dangling-reference -Wno-error=dangling-reference -Wno-error=redundant-move -Wno-stringop-overflow -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -std=gnu++17 -fPIC -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -Wall -Wextra -Wdeprecated -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wunused-function -Wunused-variable -Wunused-but-set-variable -Wno-maybe-uninitialized -fvisibility=hidden -O2 -pthread -DASMJIT_STATIC -fopenmp -fopenmp -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/ir/ir.cpp.o -c /home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp: In member function 'bool torch::jit::Node::hasSideEffects() const':
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1181:16: error: 'set_stream' is not a member of 'torch::jit::cuda'; did you mean 'c10::cuda::set_stream'?
1181 | case cuda::set_stream:
| ^~~~~~~~~~
In file included from /home/user/git/pytorch/torch/csrc/jit/ir/ir.h:18,
from /home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1:
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:223:11: note: 'c10::cuda::set_stream' declared here
223 | _(cuda, set_stream) \
| ^~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:351:35: note: in definition of macro 'DEFINE_SYMBOL'
351 | namespace ns { constexpr Symbol s(static_cast<unique_t>(_keys::ns##_##s)); }
| ^
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:352:1: note: in expansion of macro 'FORALL_NS_SYMBOLS'
352 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1182:16: error: '_set_device' is not a member of 'torch::jit::cuda'; did you mean 'c10::cuda::_set_device'?
1182 | case cuda::_set_device:
| ^~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:222:11: note: 'c10::cuda::_set_device' declared here
222 | _(cuda, _set_device) \
| ^~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:351:35: note: in definition of macro 'DEFINE_SYMBOL'
351 | namespace ns { constexpr Symbol s(static_cast<unique_t>(_keys::ns##_##s)); }
| ^
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:352:1: note: in expansion of macro 'FORALL_NS_SYMBOLS'
352 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1183:16: error: '_current_device' is not a member of 'torch::jit::cuda'; did you mean 'c10::cuda::_current_device'?
1183 | case cuda::_current_device:
| ^~~~~~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:224:11: note: 'c10::cuda::_current_device' declared here
224 | _(cuda, _current_device) \
| ^~~~~~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:351:35: note: in definition of macro 'DEFINE_SYMBOL'
351 | namespace ns { constexpr Symbol s(static_cast<unique_t>(_keys::ns##_##s)); }
| ^
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:352:1: note: in expansion of macro 'FORALL_NS_SYMBOLS'
352 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
/home/user/git/pytorch/torch/csrc/jit/ir/ir.cpp:1184:16: error: 'synchronize' is not a member of 'torch::jit::cuda'; did you mean 'c10::cuda::synchronize'?
1184 | case cuda::synchronize:
| ^~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:225:11: note: 'c10::cuda::synchronize' declared here
225 | _(cuda, synchronize) \
| ^~~~~~~~~~~
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:351:35: note: in definition of macro 'DEFINE_SYMBOL'
351 | namespace ns { constexpr Symbol s(static_cast<unique_t>(_keys::ns##_##s)); }
| ^
/home/user/git/pytorch/aten/src/ATen/core/interned_strings.h:352:1: note: in expansion of macro 'FORALL_NS_SYMBOLS'
352 | FORALL_NS_SYMBOLS(DEFINE_SYMBOL)
| ^~~~~~~~~~~~~~~~~
```
<details>
<summary>output of `rocminfo`</summary>
```
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version: 1.1
Runtime Ext Version: 1.6
System Timestamp Freq.: 1000.000000MHz
Sig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model: LARGE
System Endianness: LITTLE
Mwaitx: DISABLED
DMAbuf Support: YES
==========
HSA Agents
==========
*******
Agent 1
*******
Name: AMD Ryzen 9 5950X 16-Core Processor
Uuid: CPU-XX
Marketing Name: AMD Ryzen 9 5950X 16-Core Processor
Vendor Name: CPU
Feature: None specified
Profile: FULL_PROFILE
Float Round Mode: NEAR
Max Queue Number: 0(0x0)
Queue Min Size: 0(0x0)
Queue Max Size: 0(0x0)
Queue Type: MULTI
Node: 0
Device Type: CPU
Cache Info:
L1: 32768(0x8000) KB
Chip ID: 0(0x0)
ASIC Revision: 0(0x0)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 3400
BDFID: 0
Internal Node ID: 0
Compute Unit: 32
SIMDs per CU: 0
Shader Engines: 0
Shader Arrs. per Eng.: 0
WatchPts on Addr. Ranges:1
Memory Properties:
Features: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: FINE GRAINED
Size: 65759348(0x3eb6874) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 65759348(0x3eb6874) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 3
Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED
Size: 65759348(0x3eb6874) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
Pool 4
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 65759348(0x3eb6874) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:4KB
Alloc Alignment: 4KB
Accessible by all: TRUE
ISA Info:
*******
Agent 2
*******
Name: gfx1010
Uuid: GPU-XX
Marketing Name: AMD Radeon RX 5700 XT
Vendor Name: AMD
Feature: KERNEL_DISPATCH
Profile: BASE_PROFILE
Float Round Mode: NEAR
Max Queue Number: 128(0x80)
Queue Min Size: 64(0x40)
Queue Max Size: 131072(0x20000)
Queue Type: MULTI
Node: 1
Device Type: GPU
Cache Info:
L1: 16(0x10) KB
L2: 4096(0x1000) KB
Chip ID: 29471(0x731f)
ASIC Revision: 2(0x2)
Cacheline Size: 64(0x40)
Max Clock Freq. (MHz): 2100
BDFID: 2048
Internal Node ID: 1
Compute Unit: 40
SIMDs per CU: 2
Shader Engines: 2
Shader Arrs. per Eng.: 2
WatchPts on Addr. Ranges:4
Coherent Host Access: FALSE
Memory Properties:
Features: KERNEL_DISPATCH
Fast F16 Operation: TRUE
Wavefront Size: 32(0x20)
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Max Waves Per CU: 40(0x28)
Max Work-item Per CU: 1280(0x500)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
Max fbarriers/Workgrp: 32
Packet Processor uCode:: 151
SDMA engine uCode:: 35
IOMMU Support:: None
Pool Info:
Pool 1
Segment: GLOBAL; FLAGS: COARSE GRAINED
Size: 8372224(0x7fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 2
Segment: GLOBAL; FLAGS: EXTENDED FINE GRAINED
Size: 8372224(0x7fc000) KB
Allocatable: TRUE
Alloc Granule: 4KB
Alloc Recommended Granule:2048KB
Alloc Alignment: 4KB
Accessible by all: FALSE
Pool 3
Segment: GROUP
Size: 64(0x40) KB
Allocatable: FALSE
Alloc Granule: 0KB
Alloc Recommended Granule:0KB
Alloc Alignment: 0KB
Accessible by all: FALSE
ISA Info:
ISA 1
Name: amdgcn-amd-amdhsa--gfx1010:xnack-
Machine Models: HSA_MACHINE_MODEL_LARGE
Profiles: HSA_PROFILE_BASE
Default Rounding Mode: NEAR
Default Rounding Mode: NEAR
Fast f16: TRUE
Workgroup Max Size: 1024(0x400)
Workgroup Max Size per Dimension:
x 1024(0x400)
y 1024(0x400)
z 1024(0x400)
Grid Max Size: 4294967295(0xffffffff)
Grid Max Size per Dimension:
x 4294967295(0xffffffff)
y 4294967295(0xffffffff)
z 4294967295(0xffffffff)
FBarrier Max Size: 32
*** Done ***
```
</details>
[full log](https://github.com/user-attachments/files/18544411/log.txt)
please do not tell me to use docker
### Versions
<details>
<summary>output of `python torch/utils/collect_env.py`</summary>
```
Collecting environment information... PyTorch version: 2.7.0.dev20250123+rocm6.3
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Gentoo Linux (x86_64)
GCC version: (Gentoo 13.2.1_p20240210 p14) 13.2.1 20240210
Clang version: 19.1.5
CMake version: version 3.31.4
Libc version: glibc-2.40
Python version: 3.12.8 (main, Jan 12 2025, 23:50:05) [GCC 14.2.1 20241116] (64-bit runtime)
Python platform: Linux-6.7.7-gentoo-dist-x86_64-AMD_Ryzen_9_5950X_16-Core_Processor-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 5700 XT (gfx1010:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5950X 16-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 73%
CPU max MHz: 5083.3979
CPU min MHz: 2200.0000
BogoMIPS: 6789.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] onnx==1.16.2
[pip3] optree==0.14.0
[pip3] pytorch-triton-rocm==3.2.0+git0d4682f0
[pip3] pytorch-triton-xpu==3.2.0+gite98b6fcb
[pip3] torch==2.7.0.dev20250123+rocm6.3
[pip3] torchaudio==2.6.0.dev20250124+rocm6.3
[pip3] torchsde==0.2.6
[pip3] torchvision==0.22.0.dev20250124+rocm6.3
[conda] Could not collect
```
cc @malfet @seemethere @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: build,module: rocm,triaged | low | Critical |
2,810,682,796 | godot | Update ThorVG to 1.0 when it's released to support SVG blur filter | ### Tested versions
- Reproducible in: 4.4 beta1, 4.3, 4.2
### System information
🪟 Windows 11 - Godot 4.4 beta1
### Issue description
Import an SVG with a blur filter applied to an element.
The SVG is rendered/converted by Godot as if there were no blur filter.
The image in Inkscape:

The image in Godot:

This probably affects most if not all SVG filters, but blur is by far the most useful one.
### Steps to reproduce
Import the provided image (or any other SVG with blur on an element) in Godot.
### Minimal reproduction project (MRP)
https://filebin.net/9p1yeqor2ilpzovi | enhancement,confirmed,topic:thirdparty,topic:import | low | Minor |
2,810,685,994 | create-react-app | Umbrella: CRA breaks with React 19, and CRA needs deprecation notices | Per request from @rickhanlonii :
## Background / Primary Problem
Starting with the release of React 19, users running `create-react-app my-app` began experiencing hard errors from NPM during project setup.
This is due to a combination of multiple factors:
- CRA has always installed the latest available version of React by default, and that version is not listed in the template packages
- The default templates _do_ use React Testing Library
- The default templates currently use `@testing-library/[email protected]`. However, that version has an expected peer dependency of `react@18`. That means that installing React 19 causes a peer dependency mismatch
- NPM has a notoriously inflexible stance towards peer dep mismatches, and immediately throws an error when it hits a mismatch, and NPM is the default package manager used for CRA project setup
This causes a perfect storm of incompatibility. CRA creates a new project, adds React 19, tries to use NPM to install the packages, and NPM errors due to the peer dep mismatch with RTL.
### Solution
**This can be fixed by merging a PR that updates the versions of RTL listed in the default templates, to the latest RTL releases that support React 19**. That will eliminate the peer dep mismatch and should allow project creation to succeed without errors.
There's a couple RTL template fix PRs up already. This one looks like it should work:
- https://github.com/facebook/create-react-app/pull/13738
#### Testing Template Changes
I checked the CRA CLI logic, and you can specify templates with an NPM version. So, we ought to be able to publish `cra-template@alpha` or some other similar tag to NPM, then do a CRA install with `create-react-app my-app --template cra-template@alpha` to verify it works.
## Additional Problem: Users Still Try to Use Create React App, Without Knowing It's Deprecated
### CRA Is Still Referenced in Many Tutorials
The peer dep mismatch problem is easily fixable by itself, but it's also exacerbated by the fact that CRA has been _officially_ unmaintained for 2+ years, and in practice for longer than that. Additionally, while the broader React community has known for years that "CRA is deprecated, don't use it", there's a very long tail of existing tutorials and articles that tell readers to start a new project by using CRA. Additionally, the old React docs did say "use CRA", and technically that page is even still up:
- https://legacy.reactjs.org/docs/create-a-new-react-app.html#recommended-toolchains
> The React team primarily recommends these solutions:
> - If you’re learning React or creating a new [single-page](https://legacy.reactjs.org/docs/glossary.html#single-page-application) app, use [Create React App](https://legacy.reactjs.org/docs/create-a-new-react-app.html#create-react-app).
In fact, if I do a search for `new react app`, the top hit is that legacy docs page, and the third hit is the CRA docs:

### Beginners Are Most Likely to Try Using CRA
The most likely type of user to run into this problem is a beginner who's just getting started. While I don't have hard numbers how many people _are_ running into this, I've seen numerous reports of this error across Reddit, the CRA and React repos, and Reactiflux. Some examples:
- https://www.reddit.com/r/reactjs/comments/1i49yo9/whats_going_on_with_create_react_app_errors_and/
- https://www.reddit.com/r/reactjs/comments/1hvw86m/wtf_is_up_with_basic_install_shouldnt_be_this_hard/
- https://www.reddit.com/r/reactjs/comments/1hsez1y/help_for_a_complete_noobnot_able_to_install_react/
- https://www.reddit.com/r/reactjs/comments/1hp2hcj/create_a_new_react_app_version_error/
and from the current CRA issues:

Given that, there's clearly a significant number of users being affected by these errors, enough that it's worth both fixing the immediate errors, and giving them a longer-term migration path or directions away from CRA.
### The CRA Docs, README, and CLI Do Not Declare It As Deprecated
While the React community as a whole knows CRA is deprecated, that is not actually stated anywhere in the CRA docs or README. There's nothing to tell a user "don't use CRA", or any pointers to the ["Start a New React Project" page on React.dev](https://react.dev/learn/start-a-new-react-project).
Similarly, the CRA CLI does not print any deprecation warnings.
So, there's nothing to even indicate to a user that "this is the wrong and outdated approach, don't use this, there are better tools available".
### Immediate Solution
- **The CRA CLI should be updated to print a loud and clear "CRA is deprecated, see the React docs 'Start a Project' page for alternatives" message** (but, ideally, _not_ throw an error for now)
- **The CRA docs and README should be updated to state it's deprecated**
There's a PR up to update the CLI with a deprecation message:
- https://github.com/facebook/create-react-app/pull/17003
As a longer-term step, it's worth seriously considering if the CRA repo should be archived until there's any decision or forward motion on future changes such as turning CRA into a "meta-launcher" for creating a project.
## Related: The React Docs Do Not List a Direct Equivalent to CRA for New Projects
(I know @rickhanlonii directly disagrees with me on this. Writing it up anyway as a description of why I feel this is relevant. **This should not block the work to fix the _actual_ problems described above**, but this topic does tie in to how the deprecation notices should be handled and how we want to direct users to proceed if they move away from CRA.)
Per above, the legacy React docs recommended CRA for "learning and Single-Page Apps".
The current React docs do not mention CRA. Instead, the ["Start a New React Project" page](https://react.dev/learn/start-a-new-react-project) heavily directs users to use "a production-grade React framework" (currently listing Next, Remix, Gatsby, and Expo), with the stance that it's better to use a purpose-built framework rather than cobbling one together yourself with several libraries.
While all of those frameworks can be used to create an SPA with client-side behavior in general, they also all (intentionally) add a lot of additional opinions and complexity. Whether that's good or bad, none of them are a direct 1:1 equivalent of the kind of basic client-only SPA architecture that CRA created.
Client-only SPAs have historically been an extremely common way to use React. They're also simpler, in that there's fewer moving pieces to understand, and thus more suitable for beginners.
Today, Vite is by far the best tool for creating a new basic React client SPA project. CRA projects can be straightforwardly migrated to Vite, Vite comes with basic React templates built into `create-vite`, and Vite also forms the basis for some React-based frameworks already. (RSBuild is also an option for migrating away from CRA projects, but it's admittedly much newer and less mature.)
Vite is _briefly_ mentioned in a couple places in the docs, but those mentions are buried and intentionally not given the same weight as the "framework" options:
- https://react.dev/learn/start-a-new-react-project#can-i-use-react-without-a-framework ( buried down inside of an expandable details section for "Can I Use React Without a Framework"):
> If your app has unusual constraints not served well by these frameworks, or you prefer to solve these problems yourself, you can roll your own custom setup with React. Grab react and react-dom from npm, set up your custom build process with a bundler like [Vite](https://vitejs.dev/) or [Parcel](https://parceljs.org/), and add other tools as you need them for routing, static generation or server-side rendering, and more.
- https://react.dev/learn/add-react-to-an-existing-project#step-1-set-up-a-modular-javascript-environment :
> If your app doesn’t have an existing setup for compiling JavaScript modules, set it up with [Vite](https://vitejs.dev/). The Vite community maintains [many integrations with backend frameworks](https://github.com/vitejs/awesome-vite#integrations-with-backends), including Rails, Django, and Laravel. If your backend framework is not listed, [follow this guide](https://vitejs.dev/guide/backend-integration.html) to manually integrate Vite builds with your backend.
Neither of those meaningfully presents users with a direct path away from CRA, nor do they match the widescale usage of Vite as a common tool for client-side SPAs in the React ecosystem.
### Ideal Solution
Given that, my ideal magic-wand wishlist of steps would be:
- The "Start a New React Project" page would add a section explicitly listing "client-only / SPA" apps (whatever the desired term is) as a valid category of React app
- That section would specifically recommend Vite as the best tool for building client-only / "DIY-framework" style React apps
- The CRA docs would potentially be moved to a `legacy.create-react-app.dev` subdomain, and replaced with a landing page directing to some combo of Vite for new projects, and pages for migrating existing projects to Vite, Next, Remix, etc
That way, there's a direct replacement listed for users who want to create this style of app, and beginners running into those deprecation notices are given a path to hopefully move forward with whatever tutorial or learning process they're on without having to deal with additional complexity.
<div id="vite-recommendation-wording"></div>
#### Vite Recommendation Wording
For the "Start a New React Project" page, I would specifically like to see a section added with roughly this phrasing:
> ## Single Page Applications
>
> If you are learning React, are not using JS in your backend, only need client-side functionality, or want to design your app configuration yourself, there are other options available for starting a React project.
>
> The most common build tool for this use case is Vite. Since Vite projects do not have opinions, we recommend starting with one of these templates that includes React-based routing libraries with data loading functionality built in:
>
> - Vite + React Router template
> - Vite + TanStack Router template
>
> Alternately, you can start from the basic Vite React template and choose your own libraries as needed.
I'm happy to see the exact wording tweaked, but the key points are:
- List Vite specifically as a valid option for starting a React project
- List some of the use cases why it might be a good choice
- Acknowledge that there are many ways to use React, and frameworks are not "one size fits all"
The React team has indicated in the last couple days that "Vite + a data loading router" seem to fit their definition of a suitable "framework", so I'm hopeful that this phrasing would be acceptable. | needs triage,issue: bug report | high | Critical |
2,810,699,045 | go | x/build/perfdata/db/dbtest: unrecognized failures | ```
#!watchflakes
default <- pkg == "golang.org/x/build/perfdata/db/dbtest" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8725013058942459777)):
FAIL golang.org/x/build/perfdata/db/dbtest [build failed]
# github.com/mattn/go-sqlite3
sqlite3-binding.c:46632:15: error: use of undeclared identifier 'winFile'
sqlite3-binding.c:46633:4: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46633:19: error: expected expression
sqlite3-binding.c:46633:11: error: use of undeclared identifier 'winFile'
sqlite3-binding.c:46640:17: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46641:19: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46642:2: error: call to undeclared function 'winMapfile'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
sqlite3-binding.c:46642:2: note: did you mean 'unixMapfile'?
sqlite3-binding.c:38820:12: note: 'unixMapfile' declared here
sqlite3-binding.c:46642:13: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46649:30: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46652:7: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46653:7: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46673:64: error: use of undeclared identifier 'winFile'
sqlite3-binding.c:46673:73: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46674:15: error: expected expression
sqlite3-binding.c:46674:7: error: use of undeclared identifier 'winFile'
sqlite3-binding.c:46688:10: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46696:7: error: call to undeclared function 'winUnmapfile'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
sqlite3-binding.c:46696:7: note: did you mean 'unixUnmapfile'?
sqlite3-binding.c:38703:13: note: 'unixUnmapfile' declared here
sqlite3-binding.c:46697:10: error: use of undeclared identifier 'pFd'
sqlite3-binding.c:46709:5: error: use of undeclared identifier 'winClose'
fatal error: too many errors emitted, stopping now [-ferror-limit=]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| Builders,NeedsInvestigation | low | Critical |
2,810,715,666 | godot | Signal connection fails when unbinding args and creating function | ### Tested versions
- Reproducible in v4.4.beta1.official [d33da79d3], v4.4.dev.gh-100554 [6fdbc0cc6]
### System information
Windows 11 (build 22631) - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Intel(R) Iris(R) Xe Graphics (Intel Corporation; 32.0.101.5768) - 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz (8 threads)
### Issue description
Unbinding arguments in the editor Connection Dialog causes the connection to fail *if also generating the receiving function*.
An error is printed:
`ERROR: Cannot connect to 'toggled': the provided callable is not valid: 'Control(main.gd)::_on_check_box_toggled'.`
- unbinding works fine when the receiving function already exists
- the generated function is correct
- not unbinding any parameters works without issue
- if the signal has more than one parameter, unbinding any number (more than zero) produces the error
### Steps to reproduce
0. Create a scene with a CheckBox and a node with a new script.
1. Use the Signal/Connections dock to make a connection from the CheckBox's `toggled` signal.
2. Expand the 'advanced' panel in the Connection Dialog
3. Set unbind signal arguments spinner to 1
4. Press Connect
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Critical |
2,810,729,226 | vscode | chatEditingService indicates that there's an ongoing edit until the entire response ends | 1. Have an agentic flow that makes an edit to a file
2. Notice the context key `isApplyingChatEdits` is true until the entire response ends (possibly after multiple other non-edit tool calls)
This is because the `editsSource` in this promise is not `.resolve()`d until the entire response ends:
https://github.com/microsoft/vscode/blob/a9ce0b55568c34935065bb253eea321f2eefb648/src/vs/workbench/contrib/chat/browser/chatEditing/chatEditingService.ts#L352 | debt,chat | low | Minor |
2,810,733,442 | godot | OS directory slashes on Windows are inconsistent with temp as \ and the rest are / | ### Tested versions
v4.4.beta1.official [d33da79d3]
### System information
Godot v4.4.beta1 - Windows 10 (build 19045) - Multi-window, 9 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6636) - 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz (16 threads)
### Issue description
The OS directory values work with either \ or / when used from an open shell, but when printing their values out:
- the temp dir uses \
- the rest use /
### Steps to reproduce
Print the various OS directory commands:
print(OS.get_temp_dir())
print(OS.get_cache_dir())
print(OS.get_data_dir())
print(OS.get_config_dir())
print(OS.get_user_data_dir())
output is:
C:\Users\profile\AppData\Local\Temp\
C:/Users/profile/AppData/Local
C:/Users/profile/AppData/Roaming
C:/Users/profile/AppData/Roaming
C:/Users/profile/AppData/Roaming/Godot/app_userdata/MRP directory slashes
The censor_user function I made is just to hide my profile name in the example picture.

### Minimal reproduction project (MRP)
[mrp_directory_slashes.zip](https://github.com/user-attachments/files/18544707/mrp_directory_slashes.zip) | bug,platform:windows,topic:porting,usability | low | Major |
2,810,734,877 | kubernetes | Unable to use k8s service cluster IP with ipencap/IPIP | ### What happened?
Hello, this is likely not a bug, as I am trying to do something mildly unholy and probably need to use some other k8s knob, but I wasn't able to make an account on the discuss forum for some reason, so here I am. I assume the issue is that ipencap technically has no port as part of the protocol while the inner packet does, so the Service never gets hit, but I'll explain my case since someone might have a better idea; I could easily be missing something much simpler or easier to do, as I do not have a ton of Kubernetes knowledge. I am also using k3s, which I wouldn't expect to matter for Services, but if that's an automatic rejection then fair enough.
In my case I have two pods, Pod A and Pod B, where each has a container with an IP set up on an interface named eth0 at 100.70.0.X/24 (so let's just say 100.70.0.1 and 100.70.0.2). They also have ClusterIP services set up at 200.70.0.X/24 (so again let's say 200.70.0.1 and 200.70.0.2). Traffic going to a different pod's cluster IP goes out through the eth0 IPs, so packets going to an open Service port of 5000 end up looking something like
`<IP src=100.70.0.1 dst=200.70.0.2 dport=5000>` which works.
I am attempting to use ipencap to send traffic from a container in pod A to a container in pod B, which then routes that packet out to the wider world, because I require a specific service and interface setup in the container in pod B. Using the static cluster IP fails while using the eth0 IPs directly works; the problem is that the eth0 IPs are set by an outside source and change on container restart, whereas I need something static while keeping the packet relatively simple protocol-wise. What I'm doing is vaguely router/VPN-ish, but not quite, and it's not all the traffic, just very specific traffic.
There are good/dumb reasons for this: it's complicated to wrap this packet in something more elaborate because of what the receiving service expects, although if someone has a suggestion I'm happy to hear it. I explored GRE and VXLAN with similar results, where the cluster IP doesn't seem to work (along with further issues in my case, since setting things up across the outside host to share a VXLAN or something is complicated).
If I use the eth0 IPs directly then this works as I expect, and when port 5000 is defined I can do things like `curl http://200.70.0.2:5000`. But setting up an ipencap packet like the one below, where I want to reach an outside IP such as 10.20.30.40, does not:
`<Outer IP src=100.70.0.1 dst=200.70.0.2 | Inner IP src=100.70.0.1 dst=10.20.30.40 dport=5000>`
I set up something using scapy to mess with this, but I haven't been able to get past what I think is how this internally handles the VIP and ports.
### What did you expect to happen?
I create an IPIP packet addressed to a cluster IP, with an exposed port in the inner packet, and the packet is received.
### How can we reproduce it (as minimally and precisely as possible)?
scapy makes this easy with something like below for a UDP packet.
```python3
>>> from scapy.all import *
>>> inner_ip_udp = IP(src="100.70.0.1", dst="10.20.30.40")/UDP(sport=2134, dport=5000)/"Hello World!"
>>> outer_ip_udp = IP(src="100.70.0.1", dst="200.70.0.2")/inner_ip_udp
```
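To narrow down where the encapsulated packet is dropped, one option might be to actually emit the crafted packet from pod A while capturing protocol-4 traffic on pod B, once with the outer destination set to the eth0 IP and once to the cluster IP. This is only a minimal sketch under the same addressing assumptions as above (scapy installed in both containers, root/CAP_NET_RAW for raw sockets):
```python3
from scapy.all import IP, UDP, send, sniff

# Same packet as above; stacking IP inside IP makes scapy set the outer
# protocol to 4 (ipencap) automatically.
inner = IP(src="100.70.0.1", dst="10.20.30.40") / UDP(sport=2134, dport=5000) / b"Hello World!"
outer = IP(src="100.70.0.1", dst="200.70.0.2") / inner

# On pod A: emit the raw IPIP packet.
send(outer, verbose=False)

# On pod B (run separately): show anything that arrives as IP protocol 4.
# sniff(filter="ip proto 4", prn=lambda p: p.summary(), timeout=30)
```
If the capture sees the packet when the outer destination is the eth0 IP but never when it is the cluster IP, that would support the theory that the Service's protocol/port matching is what drops it.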
Otherwise you can mess with ip link, where again using the eth0 IPs works but the cluster IPs do not, since I am 99% sure the port is the requirement for it to work.
```bash
ip link add mytun type ipip local 100.70.0.1 remote 200.70.0.2
ip link set mytun up
```
Then just force something like `curl http://10.20.30.40:5000 --interface mytun`; using the eth0 IPs I see the packet, but with the cluster IP I do not.
### Anything else we need to know?
As mentioned, this is running on k3s, and it's too complicated to change the setup for the service I'm trying to reach in order to switch away from k3s and see whether this would work on plain k8s or something else.
And thanks to whoever reads all this!
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.30.3+k3s1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.3+k3s1
```
</details>
### Cloud provider
<details>
This is running on a local machine so no cloud provider
</details>
### OS version
<details>
```console
PRETTY_NAME="Ubuntu 24.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.1 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/network,sig/node,needs-triage | low | Critical |
2,810,742,294 | vscode | Random Glitching of the Interface |
Type: <b>Performance Issue</b>
In the middle of coding sessions, sometimes the entire VS Code window freezes and only some random parts of the screen keep working: the whole interface is frozen while a few random areas still do their jobs. This fixes itself after I minimize and reopen the VS Code window. It happens frequently. All of this started mainly after installing the Copilot update in my VS Code. I'm not sure whether the update is the cause, but one of my co-workers has also had this problem. My system has sufficient specifications and I believe the hardware is not the cause of the problem.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 3 5300U with Radeon Graphics (8 x 2595)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|19.33GB (11.95GB free)|
|Process Argv|--crash-reporter-id fa83368a-f598-4cf8-88c6-78c017828d3e|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
2 162 7796 code main
1 109 1992 fileWatcher [1]
0 574 2380 extensionHost [1]
0 147 2968 electron-nodejs (tsserver.js )
0 94 5856 "C:\Users\hp\AppData\Local\Programs\Microsoft VS Code\Code.exe" "c:\Users\hp\AppData\Local\Programs\Microsoft VS Code\resources\app\extensions\json-language-features\server\dist\node\jsonServerMain" --node-ipc --clientProcessId=2380
0 93 7868 electron-nodejs (serverMain.js )
0 222 11000 electron-nodejs (tsserver.js )
0 111 18540 electron-nodejs (typingsInstaller.js typesMap.js )
0 147 6004 gpu-process
0 48 8648 utility-network-service
1 407 14292 window [1] (index.jsx - vite-react-project - Visual Studio Code)
0 115 14440 ptyHost
0 84 11932 C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command "try { . \"c:\Users\hp\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1\" } catch {}"
0 8 16656 conpty-agent
0 145 17196 shared-process
0 31 18220 crashpad-handler
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (index.jsx - vite-react-project - Visual Studio Code)
| Folder (vite-react-project): 24 files
| File types: jsx(6) js(2) json(2) svg(2) css(2) gitignore(1) html(1)
| md(1)
| Conf files: package.json(1);
```
</details>
<details><summary>Extensions (19)</summary>
Extension|Author (truncated)|Version
---|---|---
codesnap|adp|1.3.4
vscode-tailwindcss|bra|0.14.1
es7-react-js-snippets|dsz|4.4.3
vsc-material-theme|Equ|34.7.9
prettier-vscode|esb|11.0.0
font-switcher|eva|4.1.0
mithril-emmet|Fal|0.7.7
auto-rename-tag|for|0.1.10
code-runner|for|0.12.2
copilot|Git|1.259.0
copilot-chat|Git|0.23.2
live-sass|gle|6.1.2
next-js-ts-snippets|loc|2.0.3
material-icon-theme|PKi|5.18.0
LiveServer|rit|5.7.9
es7-react-js-snippets|rod|1.9.3
js-jsx-snippets|sky|11.1.3
html-to-css-autocompletion|sol|1.1.2
errorlens|use|3.22.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,810,750,132 | ollama | Ollama cannot start because it tries to create an existing directory | ### What is the issue?
Since my main partition doesn't have enough space, I changed OLLAMA_MODELS to /media/brianhuster/E/ollama/models. I created that directory and ran `mv /usr/share/ollama/.ollama/models /media/brianhuster/E/ollama/models`. However, after that I cannot restart Ollama because, according to `sudo journalctl -u ollama.service -n 50`, the error comes from
```
Error: mkdir /media/brianhuster/E: permission denied
```
The problem is that /media/brianhuster/E already exists, so why does it try to recreate it? Not to mention that I have even changed the owner of `/media/brianhuster/E` to `ollama:ollama`, and that still doesn't help.
```
drwxrwxr-x 4 ollama ollama 4096 Thg 1 25 11:00 /media/brianhuster/E/
```
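A guess rather than something confirmed from the report: on Ubuntu-based systems /media/<user> is typically created by udisks as root-owned with restrictive permissions (the logged-in user gets access via an ACL), so the ollama service user may be unable to enter /media/brianhuster at all; mkdir on the already-existing E would then fail with permission denied, which would match the log line. A small Python sketch to inspect every path component (run as root, or as the ollama user via `sudo -u ollama`):
```python
import grp, os, pwd, stat

# Print mode, owner and group for each ancestor of the models directory.
# The ollama service user needs execute (traverse) permission on every one.
path = "/media/brianhuster/E/ollama/models"  # value of OLLAMA_MODELS
cur = "/"
for part in path.strip("/").split("/"):
    cur = os.path.join(cur, part)
    try:
        st = os.stat(cur)
        print(cur, stat.filemode(st.st_mode),
              pwd.getpwuid(st.st_uid).pw_name,
              grp.getgrgid(st.st_gid).gr_name)
    except OSError as exc:
        print(cur, "->", exc)
```
If /media/brianhuster turns out to be root-owned with no group/other execute bit, something like `setfacl -m u:ollama:x /media/brianhuster` (or moving the models directory elsewhere) would be the kind of fix to try.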
### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
Warning: could not connect to a running Ollama instance Warning: client version is 0.5.7 | bug | low | Critical |
2,810,756,480 | ui | [bug]: Shadcn not validating Tailwind CSS installation after Tailwind 4 update | ### Describe the bug
After the update to Tailwind CSS 4, the `npx shadcn@latest init` command fails to validate the Tailwind CSS installation due to the removal of the Tailwind config file. This prevents the proper initialization of Shadcn. It is recommended to update the installation documentation to reflect these changes.

### Affected component/components
N/A (This issue affects the initialization process rather than specific components)
### How to reproduce
1. Update Tailwind CSS to version 4.
2. Run the command `npx shadcn@latest init`.
3. Observe that the command fails to validate the Tailwind CSS installation.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
✔ Preflight checks.
✔ Verifying framework. Found Vite.
✖ Validating Tailwind CSS.
✖ Validating import alias.
No Tailwind CSS configuration found at C:\Users\ranbi\OneDrive\Desktop\Portfolio website\client.
It is likely you do not have Tailwind CSS installed or have an invalid configuration.
Install Tailwind CSS then try again.
Visit https://tailwindcss.com/docs/guides/vite to get started.
No import alias found in your tsconfig.json file.
Visit https://ui.shadcn.com/docs/installation/vite to learn how to set an import alias.
```
### System Info
```bash
- Operating System: Windows 11
- Node.js version: (v22.12.0)
- NPM version: (10.9.0)
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug,tailwind | medium | Critical |
2,810,768,959 | ollama | see total token usage? | Hello, is there a way to see the total token usage over hours, days, or weeks?
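There doesn't seem to be a built-in usage report; one rough workaround (a sketch only, relying on the `prompt_eval_count` and `eval_count` fields in Ollama's `/api/generate` response, with an illustrative model name) is to tally the per-request counts yourself:
```python
import requests

OLLAMA = "http://localhost:11434"
totals = {"prompt_tokens": 0, "completion_tokens": 0}

def ask(model: str, prompt: str) -> str:
    # Non-streaming call so the final JSON carries the usage counters.
    r = requests.post(f"{OLLAMA}/api/generate",
                      json={"model": model, "prompt": prompt, "stream": False},
                      timeout=300)
    r.raise_for_status()
    data = r.json()
    totals["prompt_tokens"] += data.get("prompt_eval_count", 0)
    totals["completion_tokens"] += data.get("eval_count", 0)
    return data["response"]

print(ask("llama3.2", "Say hi in five words."))  # model name is only an example
print("running totals:", totals)
```
Requests coming from editor integrations bypass a script like this, so a per-day or per-week history would still have to be aggregated wherever those requests are proxied or logged.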
Mainly I would like to see how much it really uses; it's hard to grasp when I use it with VS Code and the Continue extension. | feature request | low | Minor |
2,810,773,913 | vscode | folder does not open |
Type: <b>Bug</b>
My folder does not open, and I cannot open a new file for new projects.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz (8 x 1190)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.80GB (3.46GB free)|
|Process Argv|--crash-reporter-id 78281426-714d-4839-b394-4611896cbabc|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (11)</summary>
Extension|Author (truncated)|Version
---|---|---
dart-code|Dar|3.102.0
flutter|Dar|3.102.0
vscode-language-pack-pt-BR|MS-|1.96.2024121109
csdevkit|ms-|1.15.34
csharp|ms-|2.61.28
vscode-dotnet-runtime|ms-|2.2.5
python|ms-|2024.22.2
color-highlight|nau|2.8.0
LiveServer|rit|5.7.9
intellicode-api-usage-examples|Vis|0.2.9
vscode-icons|vsc|12.10.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed,*english-please,translation-required-portuguese-brazil,triage-needed | low | Critical |
2,810,781,151 | yt-dlp | Confused about installing on Linux Mint | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
I'm trying to install this on Linux Mint, and to the best of my ability I've followed the [official instructions](https://github.com/yt-dlp/yt-dlp/wiki/Installation). Please be kind to me, as I very much am still a Linux newb. I'm still learning, but I am willing to learn and I'm trying. 🙂
Anyway, per the [Installing the release binary](https://github.com/yt-dlp/yt-dlp/wiki/Installation#installing-the-release-binary) page:
1. I made sure I have Python installed, specifically 3.12.3.
2. I've downloaded the release binary for my OS, specifically the recommended zipimport binary. I didn't know where to put it, so I placed it in my home folder at ~/yt-dlp/.
3. I copy/pasted `curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o ~/.local/bin/yt-dlp
chmod a+rx ~/.local/bin/yt-dlp # Make executable` into Terminal to put it into $PATH. (I only used that one, not the "wget" or "aria2c" ones because it said I should only use one of those three options.)
4. This didn't work because it just gave me the error messages: `Warning: Failed to open the file /home/sarah/.local/bin/yt-dlp: No such file` and `chmod: cannot access '/home/sarah/.local/bin/yt-dlp': No such file or directory`.
5. So, I created the subfolder "bin" in ~/.local/ (it didn't exist), and then I moved the file from ~/yt-dlp/ to there and tried it again. It worked.
6. To test it, I then typed `yt-dlp -U`, but it simply returned the error message, `Command 'yt-dlp' not found, but can be installed with:
sudo apt install yt-dlp`.
Why isn't the application found? As far as I can tell, I followed the instructions pretty darn near exactly (as best I could), and I thought adding it to $PATH would allow me to use its commands by simply typing the name like above (with appropriate arguments obviously). But it says it's nonexistent.
Clearly, I did something wrong. Would someone help me out here please?
Thank you!
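For what it's worth, a likely explanation (an assumption on my part, not something from the instructions): on Mint/Ubuntu the default `~/.profile` only adds `~/.local/bin` to `$PATH` if that directory exists at login, so a terminal session started before the directory was created in step 5 won't pick it up until you log out and back in (or run `source ~/.profile`). A quick Python check of what the current shell session actually sees:
```python
import os
import shutil

local_bin = os.path.expanduser("~/.local/bin")

# Is ~/.local/bin on PATH in this session?
print(local_bin in os.environ.get("PATH", "").split(os.pathsep))

# Mirrors how the shell resolves the command name.
print(shutil.which("yt-dlp"))
```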
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### My Terminal Contents
(Note: I replaced the username with `[USER]` for privacy reasons.)
[USER]@Desktop:~$ python3 --version
Python 3.12.3
[USER]@Desktop:~$ curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o ~/.local/bin/yt-dlp
chmod a+rx ~/.local/bin/yt-dlp # Make executable
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0Warning: Failed to open the file /home/[USER]/.local/bin/yt-dlp: No such file
Warning: or directory
0 2943k 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0
curl: (23) Failure writing output to destination
chmod: cannot access '/home/[USER]/.local/bin/yt-dlp': No such file or directory
[USER]@Desktop:~$ ^C
[USER]@Desktop:~$ ^C
[USER]@Desktop:~$ curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o ~/.local/bin/yt-dlp
chmod a+rx ~/.local/bin/yt-dlp # Make executable
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
100 2943k 100 2943k 0 0 854k 0 0:00:03 0:00:03 --:--:-- 2742k
[USER]@Desktop:~$ yt-dlp -U
Command 'yt-dlp' not found, but can be installed with:
sudo apt install yt-dlp
### Specifications
- OS: Linux Mint 22.1 Cinnamon
- Python: Version 3.12.3
### Complete Verbose Output
```shell
``` | question | low | Critical |
2,810,789,589 | ollama | Mini-InternVL | https://hf-mirror.com/OpenGVLab/Mini-InternVL-Chat-4B-V1-5
thanks. | model request | low | Minor |
2,810,789,793 | ollama | MiniCPM-o-2_6 | https://hf-mirror.com/openbmb/MiniCPM-o-2_6
thanks. | model request | low | Minor |
2,810,801,344 | yt-dlp | [Newgrounds] Login support is broken / site requires 2fa | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
I tried downloading a video from **Newgrounds** and it's not working. I tried both providing `username` and `password`, and providing `cookies`.
When you give `username` and `password`, you are also required to provide a **2FA** code, which arrives by email; I don't think it's possible to supply it, since there is no prompt for a **2FA code** after attempting to log in.
When I tried giving `cookies`, it also doesn't work, with the following error:
```
ERROR: [Newgrounds] 961757: This video is only available for registered users. Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (newgrounds) to provide account credentials. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
```
And this is the cookie file _with values removed_
```
# Netscape HTTP Cookie File
.newgrounds.com TRUE / TRUE 1738389207 _ngViCo-SupporterPromo ----
.newgrounds.com TRUE / TRUE 0 ng_user0 ----
.newgrounds.com TRUE / TRUE 1769317311 passport-auth ----
```
I notice that the `ng_user0` and `_ngViCo-SupporterPromo` values change after running the command. I don't know whether that is to be expected.
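One way to isolate whether the cookie file itself is the problem might be to run the same extraction through the `yt_dlp` Python API (a minimal sketch, assuming the `yt_dlp` package is installed; the URL is the one from the verbose log below):
```python
from yt_dlp import YoutubeDL

# Pass the same Netscape cookie file programmatically and only extract
# metadata; verbose=True mirrors the CLI's -v output.
opts = {"cookiefile": "./cookie.txt", "verbose": True}
with YoutubeDL(opts) as ydl:
    info = ydl.extract_info("https://newgrounds.com/portal/view/752271",
                            download=False)
    print(info.get("title"))
```
If this fails with the same "only available for registered users" error, the cookies are being read but rejected by the site (for example because they are expired or incomplete) rather than being mishandled.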
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
❯ ./yt-dlp.exe "https://newgrounds.com/portal/view/752271" --cookies "./cookie.txt" -vU
[debug] Command-line config: ['https://newgrounds.com/portal/view/752271', '--cookies', './cookie.txt', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [c8541f8b1] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[Newgrounds] Extracting URL: https://newgrounds.com/portal/view/752271
[Newgrounds] 752271: Downloading webpage
ERROR: [Newgrounds] 752271: This video is only available for registered users. Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (newgrounds) to provide account credentials. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\newgrounds.py", line 150, in _real_extract
File "yt_dlp\extractor\common.py", line 1258, in raise_login_required
``` | account-needed,site-bug,can-share-account | low | Critical |