id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,747,992,637 | vscode | Debug: Search inside collapsed nodes |
Type: <b>Feature Request</b>
Would it be possible to add an option to the Debug -> Variables view to filter/search inside collapsed nodes, matching keys & values for deeply buried data? This is available in PyCharm, so it must be possible to do ;-)
At the moment, the only way to search deep inside a large dict or array of dicts is to issue a json.dumps on the console for that variable and search outside VS Code.
There is no native search inside Data Wrangler either, and the export there is only CSV or Parquet.
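The json.dumps workaround amounts to a recursive key/value scan. Purely as an illustration of what such a deep search does (helper name hypothetical, not part of VS Code), a minimal Python sketch:

```python
def deep_search(obj, needle, path="$"):
    """Recursively collect paths whose key or value contains `needle`.

    Approximates the json.dumps-and-grep workaround described above.
    """
    hits = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            child = f"{path}.{key}"
            if needle in str(key):
                hits.append(child)
            hits.extend(deep_search(value, needle, child))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            hits.extend(deep_search(value, needle, f"{path}[{i}]"))
    elif needle in str(obj):
        hits.append(path)
    return hits

data = {"users": [{"name": "ada", "roles": {"admin": True}}]}
print(deep_search(data, "admin"))  # ['$.users[0].roles.admin']
```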
How it looks inside Pycharm:

VS Code version: Code 1.96.0 (138f619c86f1199955d53b4166bef66ef252935c, 2024-12-11T02:29:09.626Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<!-- generated by issue reporter --> | feature-request,debug | medium | Critical |
2,748,025,574 | react-native | Click events do not take effect in animation views (Some Android devices, Huawei) | ### Description
On some Android devices (Huawei), buttons in the animation view do not respond to click events reliably; one has to click many times before a tap occasionally registers.
### Steps to reproduce
1. Install the application with `yarn android`
2. Click '显示弹框' (show dialog)
3. Click '关闭' (close); it takes many clicks to close
### React Native Version
0.74.1
### Affected Platforms
Runtime - Android
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: macOS 13.4
CPU: (10) arm64 Apple M2 Pro
Memory: 90.55 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.0.0
path: /usr/local/bin/node
Yarn:
version: 1.22.19
path: /usr/local/bin/yarn
npm:
version: 8.6.0
path: /usr/local/bin/npm
Watchman:
version: 2024.01.22.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.12.1
path: /Users/01400926/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 22.4
- iOS 16.4
- macOS 13.3
- tvOS 16.4
- watchOS 9.4
Android SDK: Not Found
IDEs:
Android Studio: 2022.3 AI-223.8836.35.2231.10671973
Xcode:
version: 14.3/14E222b
path: /usr/bin/xcodebuild
Languages:
Java:
version: 20.0.2
path: /usr/bin/javac
Ruby:
version: 3.2.2
path: /Users/01400926/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.1
wanted: 0.74.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
not
```
### Reproducer
https://github.com/peaktangf/rnnotresponsedemo
### Screenshots and Videos
https://private-user-images.githubusercontent.com/14729675/370210228-5d7ec6d6-2cbd-4dbd-926d-7005db2dbf38.MOV?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzQ1MzM0MDEsIm5iZiI6MTczNDUzMzEwMSwicGF0aCI6Ii8xNDcyOTY3NS8zNzAyMTAyMjgtNWQ3ZWM2ZDYtMmNiZC00ZGJkLTkyNmQtNzAwNWRiMmRiZjM4Lk1PVj9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDEyMTglMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQxMjE4VDE0NDUwMVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWZhODE5ODBmYTA2ZGJlODVkNmEzYTliYzU3ZDY4ODRkNTBhNjdhYmJkNDBhOTY5ZTQxMjlhZTkwODBlOTAwZTEmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.lgaB0R8Y_wmD0RhvjYn9vUpMtiH1EfmeV4TxVEoyLFw | Platform: Android,Needs: Author Feedback,Needs: Repro,Newer Patch Available,Type: New Architecture | low | Major |
2,748,036,046 | rust | compiletest: debugger ignore logic papers over debugger failing to init | Example in https://github.com/rust-lang/rust/pull/134458#issuecomment-2551321173:
```
tests\debuginfo\issue-13213.rs ... ignored, ignored when the debugger is cdb (Fails with exit code 0xc0000135 ("the application failed to initialize properly"))
``` | A-testsuite,A-debuginfo,T-compiler,T-bootstrap,E-medium,C-bug,A-compiletest | low | Critical |
2,748,050,399 | vscode | terminal suggest becomes buggy when the content wraps | No issues occur if I make the terminal wider cc @Tyriar
https://github.com/user-attachments/assets/5cb4c533-181e-4b23-9376-546325b7f53e
| bug,terminal-suggest | low | Critical |
2,748,056,235 | flutter | Semantics of elements are not the same in ListView and Column | ### Steps to reproduce
1. Paste in a DartPad
### Expected results
The rendering of `Column` should be the same whether it's in a `ListView` or a `Column`.
<img width="753" alt="image" src="https://github.com/user-attachments/assets/60ea1a05-91ad-4575-880a-8c67f29670f5" />
### Actual results
The `Column` is rendered differently inside a `ListView` than inside a `Column`.
It is hard to explain precisely; items in the column appear merged when inside a list.
<img width="753" alt="image" src="https://github.com/user-attachments/assets/7fecf0d6-1fdb-4800-83f9-5167739098d0" />
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      showSemanticsDebugger: true,
      home: Scaffold(
        body: SafeArea(
          child: Column(
            children: [
              SizedBox(
                height: 100,
                child: ListView(
                  children: [_MyWidget()],
                ),
              ),
              Column(
                mainAxisSize: MainAxisSize.min,
                children: [_MyWidget()],
              ),
            ],
          ),
        ),
      ),
    );
  }
}

class _MyWidget extends StatelessWidget {
  const _MyWidget();

  @override
  Widget build(BuildContext context) {
    return Column(
      children: [
        Text('Hello, World!'),
        ElevatedButton(
          onPressed: () {},
          child: Text("Button"),
        ),
      ],
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
https://pastebin.com/qWntn72P
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Based on Dart SDK 3.7.0-243.0.dev and Flutter SDK 3.28.0-1.0.pre.104
```
</details>
| framework,a: accessibility,has reproducible steps,P3,team-accessibility,triaged-accessibility,found in release: 3.27,found in release: 3.28 | low | Critical |
2,748,057,014 | rust | Incorrect suggestion to derive `Clone` on `Vec` directly | ### Code
```Rust
pub struct NotClone {}
pub fn foo(v: &Vec<NotClone>) {
let _v = v.clone();
}
```
### Current output
```Shell
warning: call to `.clone()` on a reference in this situation does nothing
--> src/lib.rs:4:15
|
4 | let _v = v.clone();
| ^^^^^^^^
|
= note: the type `Vec<NotClone>` does not implement `Clone`, so calling `clone` on `&Vec<NotClone>` copies the reference, which does not do anything and can be removed
= note: `#[warn(noop_method_call)]` on by default
help: remove this redundant call
|
4 - let _v = v.clone();
4 + let _v = v;
|
help: if you meant to clone `Vec<NotClone>`, implement `Clone` for it
--> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/vec/mod.rs:397:1
|
39+ #[derive(Clone)]
39| pub struct Vec<T, #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global> {
|
```
### Desired output
```Shell
warning: call to `.clone()` on a reference in this situation does nothing
--> src/lib.rs:4:15
|
4 | let _v = v.clone();
| ^^^^^^^^
|
= note: the type `Vec<NotClone>` does not implement `Clone`, so calling `clone` on `&Vec<NotClone>` copies the reference, which does not do anything and can be removed
= note: `#[warn(noop_method_call)]` on by default
help: remove this redundant call
|
4 - let _v = v.clone();
4 + let _v = v;
|
help: if you meant to clone `Vec<NotClone>`, implement `Clone` for it
--> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/vec/mod.rs:397:1
|
1+ #[derive(Clone)]
2| pub struct NotClone {}
|
```
### Rationale and extra context
Encountered this while investigating around #134467. Obviously, the `derive` is suggested in the wrong place: it points at the standard library's `Vec` definition instead of the user's `NotClone` struct.
### Other cases
```Rust
```
### Rust Version
```Shell
1.85.0-nightly
(2024-12-17 a4cb3c831823d9baa56c)
```
### Anything else?
_No response_
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"cyrgani"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-diagnostics,T-compiler,A-suggestion-diagnostics,D-invalid-suggestion | low | Minor |
2,748,069,494 | TypeScript | tsc loses trailing comments in the produced .js files | ### 🔎 Search Terms
"missing comments", "missing comments in .js output", "trailing comments"
### 🕗 Version & Regression Information
Happens in every version available in the playground (3.3.3 to v5.8.0-dev.20241218)
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.7.2#code/MYGwhgzhAECC0G8CwAoa7oHpPQNYFN8AHAWggHsAnAF3wBNoJqwbUNoBbfagC3LtgAKAJSI27DMHIA7CiHwA6EOQDmggEQUa9dcIDc49AF9DWHAWJkqtBvml1URoA
### 💻 Code
```ts
class A {
// keep-sorted start
methodA() {
console.log("sorted");
}
// keep-sorted end
}
```
### 🙁 Actual behavior
`// keep-sorted end` is missing in the .js output.
### 🙂 Expected behavior
`// keep-sorted end` is retained in the .js output.
### Additional information about the issue
_No response_ | Help Wanted,Possible Improvement | low | Minor |
2,748,070,732 | stable-diffusion-webui | [Feature Request]: Maximum size for Inpaint "Resize to" | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
When inpainting and then sending the generated image back to inpaint, the "Resize to" width and height are changed to the size of the image. This should not be a huge issue, but if the image is big and one continues inpainting without paying attention to this width and height, the AUTOMATIC1111 backend crashes the computer(!).
Using right now: version: [v1.10.1](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/82a973c04367123ae98bd9abdf80d9eda9b910e2) • python: 3.10.14 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2 • checkpoint: [b1689257e6](https://google.com/search?q=b1689257e6e1b2e61544b1a41fc114e7d798f68854b3f875cd52070bfe1fbc00)
Could you please point me to the code that accepts this width/height, so I can apply a `min(received, 1024)` clamp in my local deployment? Thanks!
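Purely for illustration, the per-side clamp the author proposes would look something like this (the function name is hypothetical, not actual webui code):

```python
# Hypothetical helper, not part of stable-diffusion-webui: clamps the
# "Resize to" dimensions per side, as in the proposed min(received, 1024).
def clamp_resize_to(width, height, max_side=1024):
    """Return (width, height) with each side capped at max_side."""
    return min(width, max_side), min(height, max_side)

print(clamp_resize_to(4096, 768))  # (1024, 768)
```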
### Proposed workflow
Ideally there would be a setting, enabled by default, "Automatically limit only-masked inpaint area width and height [to prevent the computer crashing]".
### Additional information
_No response_ | enhancement | low | Critical |
2,748,103,692 | rust | bootstrap: retry `cargo` invocations if stderr contains a known pattern | ## Why
Our CI auto builds sometimes fail for known reasons that are not related to the PRs we are trying to merge.
Most of the time, these errors are hard to understand and fix (or can't be fixed at all), decreasing the success rate of the auto builds for several weeks or months.
The impact is that we lose days of parallel compute time and hours of maintainers' time, as they need to analyze the error message and reschedule the PRs in the merge queue.
## Feature
We want to list the stderr patterns of the known issues we are aware of in the `config.toml` file that bootstrap uses. These patterns can be expressed as regexes.
We want `bootstrap` to retry cargo invocations up to two times if stderr matches one of the listed patterns.
This would help reduce the failure rate of our CI because it would significantly reduce the percentage of jobs failing due to spurious errors.
The error messages need to be precise enough to avoid retrying cargo invocations over genuine problems.
Known error patterns can be found [here](https://github.com/rust-lang/rust/issues/133959). Not all of them can be listed.
As a start, we could just have 1 stderr string in the list (this one doesn't need to be a regex):
- `ranlib.exe: could not create temporary file whilst writing archive: no more archived files` which is discussed in https://github.com/rust-lang/rust/issues/108227
## Questions
- Is `config.toml` the right place to put the known stderr patterns? In Zulip, Jieyou proposed introducing another file: `retry-patterns.toml`. I'll leave it to the bootstrap team to decide.
- Which format do we use to write the stderr patterns in the `config.toml` file? For example, it could be an array of strings. It could also be an "object" if we want to customize how many times to retry per error message. I'll leave it to the bootstrap team to decide.
- How do we make sure these patterns are present in the `config.toml` used for CI? I'm not familiar with how the `config.toml` for the CI is generated.
## Zulip links
- idea proposed [here](https://rust-lang.zulipchat.com/#narrow/channel/242791-t-infra/topic/CI.20improvements/near/489445485)
- agreement reached [here](https://rust-lang.zulipchat.com/#narrow/channel/242791-t-infra/topic/CI.20improvements/near/489749580)
| E-hard,C-enhancement,T-bootstrap,T-infra,A-CI,A-bootstrap-config | low | Critical |
2,748,130,188 | vscode | Enable disposable tracking | With https://github.com/microsoft/vscode/pull/236487 I have added a disposable tracker that uses the finalization registry. The logic is that a GC'ed disposable that hasn't been disposed is a bug that we need to fix.
This should be enabled by default when running out of dev, but before that we need to mark a couple of disposables as singletons, e.g. those returned from registries (menu, command, language, editors, etc.) | debt | low | Critical |
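The tracker in question uses the JavaScript `FinalizationRegistry`; as a rough Python analogue of the same idea (an object collected before being disposed is flagged as a leak), illustrative only:

```python
import gc
import weakref

leaks = []

class Disposable:
    """Flags a leak if garbage-collected before dispose() was called."""

    def __init__(self, name):
        self.name = name
        # The finalizer closes over `state`, not `self`, so it does not
        # keep the object alive.
        state = {"disposed": False}
        self._state = state
        weakref.finalize(
            self, lambda: None if state["disposed"] else leaks.append(name)
        )

    def dispose(self):
        self._state["disposed"] = True

d1 = Disposable("tracked-and-disposed")
d1.dispose()
d2 = Disposable("forgotten")
del d1, d2
gc.collect()
print(leaks)  # ['forgotten']
```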
2,748,143,813 | vscode | Snippet Choice Traps Focus | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: **Yes**
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.0
- OS Version: Windows 11, and Mac OS
```
Version: 1.96.0 (user setup)
Commit: 138f619c86f1199955d53b4166bef66ef252935c
Date: 2024-12-11T02:29:09.626Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.22631
```
My extension's Language Server provides snippets to complete function documentation as part of its `textDocument/completion` handler. Here is an example snippet it provides:
```
;---------
; SCOPE: ${1|PRIVATE,INTERNAL,PUBLIC|}
; DESCRIPTION: ${2}
; PARAMETERS:
; subject (${3|I,O,IO|},${4|REQ,OPT|}${5}) - ${6}
; body (${7|I,O,IO|},${8|REQ,OPT|}${9}) - ${10}
; inst (${11|I,O,IO|},${12|REQ,OPT|}${13}) - ${14}
; interval (${15|I,O,IO|},${16|REQ,OPT|}${17}) - ${18}
;${0}
;---------
```
When the user accepts the snippet, the content is inserted correctly. However, when they tab to one of the lines below `PARAMETERS:` and stop on any of the `${X|I,O,IO|}` and `${X|REQ,OPT|}` tab choices, they can no longer Shift+Tab to a tab stop before the choice. Interestingly, if the user has reached the `${2}` tab stop, they can tab back to the `$1` choice.
In order to verify this was not an issue with the extension itself, I copied this snippet to be a user-configured global snippet per [these instructions](https://code.visualstudio.com/docs/editor/userdefinedsnippets#_create-your-own-snippets).
```json
{
"Function Header": {
"scope": "",
"prefix": ";",
"body": [
";---------",
"; SCOPE: ${1|PRIVATE,INTERNAL,PUBLIC|}",
"; DESCRIPTION: ${2}",
"; PARAMETERS:",
"; subject (${3|I,O,IO|},${4|REQ,OPT|}${5}) - ${6}",
"; body (${7|I,O,IO|},${8|REQ,OPT|}${9}) - ${10}",
"; inst (${11|I,O,IO|},${12|REQ,OPT|}${13}) - ${14}",
"; interval (${15|I,O,IO|},${16|REQ,OPT|}${17}) - ${18}",
";${0}",
";---------"
]
}
}
```
Steps to Reproduce:
1. Create this snippet in your global snippet configuration
2. In a new document, accept the snippet
3. Tab to the `I/O/IO` tab choice on the line `subject` parameter.
4. You can no longer shift tab back to `${2}`
| bug,snippets | low | Critical |
2,748,145,671 | yt-dlp | Can't download video (and subtitles) when many comments | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
yt-dlp downloads the video page, parses it, extracts links (subtitles, video stream), downloads comments, and then downloads the content using the extracted links.
The problem is that downloading comments can take too long, and by then the extracted links have expired, producing a 404 error.
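For illustration: YouTube media URLs carry an `expire` query parameter (a Unix timestamp), so a client could in principle detect a stale link before hitting the 404. A hedged Python sketch (helper name hypothetical, not yt-dlp code):

```python
import time
from urllib.parse import urlparse, parse_qs

def url_expired(url, now=None):
    """Return True if the URL's `expire` query parameter is in the past."""
    now = time.time() if now is None else now
    qs = parse_qs(urlparse(url).query)
    expire = qs.get("expire")
    if not expire:
        return False  # no expiry info; assume still valid
    return int(expire[0]) < now

url = "https://example.googlevideo.com/videoplayback?expire=1734540000&id=abc"
print(url_expired(url, now=1734550000))  # True
```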
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--get-comments', '--write-subs', 'https://www.youtube.com/watch?v=uD4izuDMUQA']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [542166962] (zip)
[debug] Python 3.9.4 (CPython x86_64 64bit) - macOS-10.13.6-x86_64-i386-64bit (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.4.1 (setts), ffprobe 4.4.1, phantomjs 2.1.1
[debug] Optional libraries: certifi-2021.10.08, mutagen-1.45.1, requests-2.27.1, sqlite3-3.34.0, urllib3-1.26.9
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[youtube] Extracting URL: https://www.youtube.com/watch?v=uD4izuDMUQA
[youtube] uD4izuDMUQA: Downloading webpage
[youtube] uD4izuDMUQA: Downloading ios player API JSON
[youtube] uD4izuDMUQA: Downloading mweb player API JSON
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig 1S4AiOPPlr8bwtmrL => XM5rNJw4_pWm0Q
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig smffvg_Fcc0uOm2Al => N-Y9lkp46e7kvg
[youtube] uD4izuDMUQA: Downloading m3u8 information
[info] uD4izuDMUQA: Downloading subtitles: en
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[youtube] Downloading comment section API JSON
[youtube] Downloading ~330599 comments
[youtube] Sorting comments by newest first
[youtube] Downloading comment API JSON page 1 (0/~330599)
[youtube] Downloading comment API JSON reply thread 1 (1/~330599)
[youtube] Downloading comment replies API JSON page 1 (11/~330599)
[youtube] Downloading comment replies API JSON page 2 (61/~330599)
[youtube] Downloading comment replies API JSON page 3 (111/~330599)
[youtube] Downloading comment replies API JSON page 4 (161/~330599)
[youtube] Downloading comment replies API JSON page 5 (211/~330599)
[youtube] Downloading comment replies API JSON page 6 (261/~330599)
[youtube] Downloading comment replies API JSON page 7 (311/~330599)
[youtube] Downloading comment replies API JSON page 8 (361/~330599)
[youtube] Downloading comment replies API JSON page 9 (411/~330599)
[youtube] Downloading comment replies API JSON page 10 (461/~330599)
[youtube] Downloading comment replies API JSON page 11 (511/~330599)
[youtube] Downloading comment replies API JSON page 12 (561/~330599)
[youtube] Downloading comment API JSON reply thread 2 (589/~330599)
[youtube] Downloading comment API JSON reply thread 3 (591/~330599)
[youtube] Downloading comment API JSON page 2 (600/~330599)
[youtube] Downloading comment API JSON page 3 (620/~330599)
[youtube] Downloading comment API JSON reply thread 1 (633/~330599)
[youtube] Downloading comment API JSON page 4 (641/~330599)
[youtube] Downloading comment API JSON reply thread 1 (642/~330599)
[youtube] Downloading comment API JSON page 5 (662/~330599)
[youtube] Downloading comment API JSON reply thread 1 (672/~330599)
[youtube] Downloading comment API JSON page 6 (683/~330599)
[youtube] Downloading comment API JSON reply thread 1 (688/~330599)
[youtube] Downloading comment API JSON reply thread 2 (700/~330599)
[youtube] Downloading comment API JSON reply thread 3 (707/~330599)
[youtube] Downloading comment API JSON page 7 (713/~330599)
[youtube] Downloading comment API JSON reply thread 1 (724/~330599)
[youtube] Downloading comment API JSON reply thread 2 (726/~330599)
[youtube] Downloading comment API JSON page 8 (737/~330599)
[youtube] Downloading comment API JSON reply thread 1 (747/~330599)
[youtube] Downloading comment API JSON page 9 (758/~330599)
[youtube] Downloading comment API JSON reply thread 1 (766/~330599)
[youtube] Downloading comment API JSON reply thread 2 (769/~330599)
[youtube] Downloading comment API JSON reply thread 3 (778/~330599)
[youtube] Downloading comment API JSON page 10 (781/~330599)
[youtube] Downloading comment API JSON reply thread 1 (788/~330599)
[youtube] Downloading comment API JSON page 11 (802/~330599)
[youtube] Downloading comment API JSON reply thread 1 (811/~330599)
[youtube] Downloading comment API JSON page 12 (823/~330599)
[youtube] Downloading comment API JSON page 13 (843/~330599)
[youtube] Downloading comment API JSON reply thread 1 (853/~330599)
[youtube] Downloading comment API JSON page 14 (865/~330599)
[youtube] Downloading comment API JSON reply thread 1 (870/~330599)
[youtube] Downloading comment API JSON reply thread 2 (872/~330599)
[youtube] Downloading comment API JSON reply thread 3 (882/~330599)
[youtube] Downloading comment API JSON reply thread 4 (884/~330599)
[youtube] Downloading comment API JSON page 15 (890/~330599)
[youtube] Downloading comment API JSON reply thread 1 (891/~330599)
[youtube] Downloading comment API JSON reply thread 2 (911/~330599)
[youtube] Downloading comment API JSON page 16 (913/~330599)
[youtube] Downloading comment API JSON reply thread 1 (929/~330599)
[youtube] Downloading comment API JSON page 17 (934/~330599)
[youtube] Downloading comment API JSON reply thread 1 (945/~330599)
[youtube] Downloading comment API JSON page 18 (955/~330599)
[youtube] Downloading comment API JSON reply thread 1 (961/~330599)
[youtube] Downloading comment API JSON reply thread 2 (971/~330599)
[youtube] Downloading comment API JSON page 19 (979/~330599)
[youtube] Downloading comment API JSON reply thread 1 (987/~330599)
[youtube] Downloading comment API JSON reply thread 2 (997/~330599)
[youtube] Downloading comment API JSON reply thread 3 (1003/~330599)
...
[youtube] Downloading comment API JSON page 10700 (328997/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329007/~330599)
[youtube] Downloading comment API JSON page 10701 (329018/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329028/~330599)
[youtube] Downloading comment API JSON page 10702 (329042/~330599)
[youtube] Downloading comment API JSON page 10703 (329062/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329078/~330599)
[youtube] Downloading comment API JSON page 10704 (329083/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329085/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329095/~330599)
[youtube] Downloading comment API JSON page 10705 (329112/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329124/~330599)
[youtube] Downloading comment API JSON page 10706 (329133/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329150/~330599)
[youtube] Downloading comment API JSON page 10707 (329157/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329168/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329179/~330599)
[youtube] Downloading comment API JSON page 10708 (329180/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329196/~330599)
[youtube] Downloading comment replies API JSON page 1 (329206/~330599)
[youtube] Downloading comment API JSON page 10709 (329212/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329214/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329231/~330599)
[youtube] Downloading comment API JSON page 10710 (329240/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329246/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329252/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329258/~330599)
[youtube] Downloading comment API JSON page 10711 (329263/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329267/~330599)
[youtube] Downloading comment API JSON page 10712 (329284/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329288/~330599)
[youtube] Downloading comment API JSON page 10713 (329306/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329309/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329320/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329334/~330599)
[youtube] Downloading comment API JSON page 10714 (329338/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329350/~330599)
[youtube] Downloading comment replies API JSON page 1 (329360/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329369/~330599)
[youtube] Downloading comment API JSON page 10715 (329371/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329386/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329392/~330599)
[youtube] Downloading comment API JSON page 10716 (329393/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329402/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329413/~330599)
[youtube] Downloading comment API JSON page 10717 (329419/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329427/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329432/~330599)
[youtube] Downloading comment API JSON page 10718 (329441/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329461/~330599)
[youtube] Downloading comment API JSON page 10719 (329462/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329468/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329472/~330599)
[youtube] Downloading comment API JSON page 10720 (329484/~330599)
[youtube] Downloading comment API JSON page 10721 (329504/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329524/~330599)
[youtube] Downloading comment API JSON page 10722 (329526/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329527/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329536/~330599)
[youtube] Downloading comment API JSON page 10723 (329555/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329565/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329567/~330599)
[youtube] Downloading comment API JSON page 10724 (329578/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329591/~330599)
[youtube] Downloading comment API JSON page 10725 (329599/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329600/~330599)
[youtube] Downloading comment API JSON page 10726 (329620/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329628/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329649/~330599)
[youtube] Downloading comment API JSON page 10727 (329651/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329653/~330599)
[youtube] Downloading comment replies API JSON page 1 (329663/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329673/~330599)
[youtube] Downloading comment API JSON page 10728 (329687/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329694/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329698/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329702/~330599)
[youtube] Downloading comment API JSON page 10729 (329719/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329737/~330599)
[youtube] Downloading comment replies API JSON page 1 (329747/~330599)
[youtube] Downloading comment API JSON page 10730 (329776/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329787/~330599)
[youtube] Downloading comment API JSON page 10731 (329803/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329812/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329817/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329823/~330599)
[youtube] Downloading comment API JSON page 10732 (329828/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329837/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329843/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329848/~330599)
[youtube] Downloading comment replies API JSON page 1 (329858/~330599)
[youtube] Downloading comment API JSON page 10733 (329863/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329879/~330599)
[youtube] Downloading comment replies API JSON page 1 (329889/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329912/~330599)
[youtube] Downloading comment API JSON page 10734 (329914/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329922/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329926/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329933/~330599)
[youtube] Downloading comment API JSON page 10735 (329945/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329949/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329955/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329963/~330599)
[youtube] Downloading comment API JSON reply thread 4 (329967/~330599)
[youtube] Downloading comment API JSON reply thread 5 (329970/~330599)
[youtube] Downloading comment API JSON page 10736 (329975/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329976/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329979/~330599)
[youtube] Downloading comment replies API JSON page 1 (329989/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329993/~330599)
[youtube] Downloading comment API JSON reply thread 4 (329999/~330599)
[youtube] Downloading comment replies API JSON page 1 (330009/~330599)
[youtube] Downloading comment replies API JSON page 2 (330059/~330599)
[youtube] Downloading comment API JSON reply thread 5 (330067/~330599)
[youtube] Downloading comment replies API JSON page 1 (330077/~330599)
[youtube] Downloading comment replies API JSON page 2 (330127/~330599)
[youtube] Downloading comment replies API JSON page 3 (330177/~330599)
[youtube] Downloading comment replies API JSON page 4 (330227/~330599)
[youtube] Downloading comment replies API JSON page 5 (330277/~330599)
[youtube] Downloading comment replies API JSON page 6 (330327/~330599)
[youtube] Downloading comment replies API JSON page 7 (330377/~330599)
[youtube] Downloading comment replies API JSON page 8 (330427/~330599)
[youtube] Downloading comment replies API JSON page 9 (330477/~330599)
[youtube] Downloading comment replies API JSON page 10 (330527/~330599)
[youtube] Downloading comment API JSON reply thread 6 (330538/~330599)
[youtube] Downloading comment API JSON reply thread 7 (330540/~330599)
[youtube] Downloading comment replies API JSON page 1 (330550/~330599)
[youtube] Downloading comment API JSON reply thread 8 (330557/~330599)
[youtube] Downloading comment API JSON page 10737 (330558/~330599)
[youtube] Downloading comment API JSON reply thread 1 (330564/~330599)
[youtube] Downloading comment API JSON reply thread 2 (330568/~330599)
[youtube] Downloading comment API JSON reply thread 3 (330574/~330599)
[youtube] Extracted 330580 comments
[debug] Default format spec: bestvideo*+bestaudio/best
[info] uD4izuDMUQA: Downloading 1 format(s): 401+251
[info] Writing video subtitles to: TIMELAPSE OF THE FUTURE: A Journey to the End of Time (4K) [uD4izuDMUQA].en.vtt
[debug] Invoking http downloader on "https://www.youtube.com/api/timedtext?v=uD4izuDMUQA&ei=QQxiZ_uZN-_UxN8P0ufXoQc&caps=asr&opi=112496729&exp=xbt&xoaf=5&hl=en&ip=0.0.0.0&ipbits=0&expire=1734504113&sparams=ip%2Cipbits%2Cexpire%2Cv%2Cei%2Ccaps%2Copi%2Cexp%2Cxoaf&signature=5C32FB21273E3176945C8CDF31245E71038E40BD.19F6AEC3D4E5045E0BAFC6CB0B74DAACB6B8DBAC&key=yt8&lang=en&fmt=vtt"
ERROR: Unable to download video subtitles for 'en': HTTP Error 404: Not Found
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_urllib.py", line 398, in _send
res = opener.open(urllib_req, timeout=self._calculate_timeout(request))
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 523, in open
response = meth(req, response)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 632, in http_response
response = self.parent.error(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 561, in error
return self._call_chain(*args)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 641, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4351, in _write_subtitles
self.dl(sub_filename, sub_copy, subtitle=True)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3199, in dl
return fd.download(name, new_info, subtitle)
File "/usr/local/bin/yt-dlp/yt_dlp/downloader/common.py", line 464, in download
ret = self.real_download(filename, info_dict)
File "/usr/local/bin/yt-dlp/yt_dlp/downloader/http.py", line 367, in real_download
establish_connection()
File "/usr/local/bin/yt-dlp/yt_dlp/downloader/http.py", line 118, in establish_connection
ctx.data = self.ydl.urlopen(request)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4162, in urlopen
return self._request_director.send(req)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_urllib.py", line 403, in _send
raise HTTPError(UrllibResponseAdapter(e.fp), redirect_loop='redirect error' in str(e)) from e
yt_dlp.networking.exceptions.HTTPError: HTTP Error 404: Not Found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1780, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1839, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3011, in process_video_result
self.process_info(new_info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 177, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3267, in process_info
sub_files = self._write_subtitles(info_dict, temp_filename)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4359, in _write_subtitles
raise DownloadError(msg)
yt_dlp.utils.DownloadError: Unable to download video subtitles for 'en': HTTP Error 404: Not Found
```
| question | low | Critical |
2,748,145,735 | ui | [bug]: ContextMenu fails to update location on right-click to a new location after opening | ### Describe the bug
#### Description:
When using the `ContextMenu` component in ShadCN UI, the context menu correctly opens when right-clicking. However, if the user right-clicks again at a different location within the same container, the context menu remains at the previous location instead of updating to the new one. The position is not recalculated when the context menu is opened at a new location.
#### Steps to Reproduce:
1. Right-click anywhere in the container to open the `ContextMenu`.
2. Without closing the context menu, right-click again at a different location within the container.
3. Notice that the context menu does not update its position and remains at the previous location.
#### Expected Behavior:
When right-clicking in a new location, the context menu should update its position to the new click location and display accordingly.
#### Actual Behavior:
The context menu stays at the previous location and does not update its position.
### Affected component/components
ContextMenu
### How to reproduce
See the steps, expected behavior, and actual behavior under "Describe the bug" above.
### Codesandbox/StackBlitz link
https://v0.dev/chat/ORkigkeuMgZ
### Logs
_No response_
### System Info
```bash
Windows 11, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,748,179,831 | react-native | FlatList > Orientation change > Different Item Heights for Portrait and Landscape Do Not Persist Scroll Position Correctly | ### Description
Rendered items have different heights (landscape = 50, portrait = 200).
When scrolling down a FlatList and then changing the orientation (from portrait to landscape or vice versa), the resulting scroll position is incorrect.
It seems to retain the current scroll position in pixels rather than by the items that were viewable before the orientation change.
Note that if you're at the top of the list, there is no issue. It's only when you have scrolled down the list.
I have tried capturing the top viewable item in `onViewableItemsChanged` and attempting to use the ref's `scrollToIndex`, but to no avail.
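For illustration only (plain arithmetic, not React Native API code): the symptom is consistent with the list keeping the raw pixel offset across the rotation. When item heights differ per orientation, the index-preserving remap one would want looks like this:

```python
def remap_offset(offset: float, old_item_height: float, new_item_height: float) -> float:
    """Return the scroll offset that keeps the same top item visible
    after item heights change (e.g. on an orientation change)."""
    top_index = offset / old_item_height   # item at the top before rotation
    return top_index * new_item_height     # offset that shows it after rotation

# Portrait items are 200 px tall, landscape items 50 px (as in this repro).
# 1200 px down in portrait means item 6 is at the top; keeping the raw
# 1200 px offset in landscape would instead show item 24.
print(remap_offset(1200, 200, 50))  # 300.0
```

In app code the remapped value (or better, the captured index itself) would be fed to `scrollToOffset`/`scrollToIndex` once the dimension change settles.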
### Steps to reproduce
Install and execute the application
Scroll down the list
Change orientation
The previously viewable items are no longer in view
### React Native Version
0.76.5
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
npm verbose cli C:\Program Files\nodejs\node.exe C:\Users\MarkAustin\AppData\Roaming\nvm\v22.12.0\node_modules\npm\bin\npm-cli.js
npm info using [email protected]
npm info using [email protected]
npm verbose title npm exec react-native info
npm verbose argv "exec" "--" "react-native" "info"
npm verbose logfile logs-max:10 dir:d:\workspace\npm-cache\_logs\2024-12-18T15_49_53_732Z-
npm verbose logfile d:\workspace\npm-cache\_logs\2024-12-18T15_49_53_732Z-debug-0.log
info Fetching system and libraries information...
(node:27612) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
System:
OS: Windows 11 10.0.26100
CPU: (22) x64 Intel(R) Core(TM) Ultra 7 155H
Memory: 12.54 GB / 31.70 GB
Binaries:
Node:
version: 22.12.0
path: C:\Program Files\nodejs\node.EXE
Yarn: Not Found
npm:
version: 10.9.0
path: C:\Program Files\nodejs\npm.CMD
Watchman: Not Found
SDKs:
Android SDK: Not Found
Windows SDK: Not Found
IDEs:
Android Studio: AI-242.23339.11.2421.12550806
Visual Studio:
- 17.12.35506.116 (Visual Studio Enterprise 2022)
Languages:
Java: 17.0.13
Ruby: Not Found
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
npm verbose cwd D:\Workspace\rn-replications\flatlist-orientiation\ReproducerApp
npm verbose os Windows_NT 10.0.26100
npm verbose node v22.12.0
npm verbose npm v10.9.0
npm verbose exit 0
npm info ok
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://github.com/markaustinws/flatlist-orientiation
### Screenshots and Videos

| Issue: Author Provided Repro,Component: FlatList | low | Critical |
2,748,189,710 | electron | Add flag to enable PipeWire Camera support | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
Hi,
In Chrome/Chromium, it is possible to enable the PipeWire Camera support in flags:

It allows to use camera that are supported through libcamera.
### Proposed Solution
It would be great if it was possible to add such an option to electron to enable it too.
### Alternatives Considered
Not using Electron; or possibly using libcamerify to launch the Electron app and expose the camera as a V4L device (I haven't tested this yet).
### Additional Information
I want this to be able to use https://github.com/IsmaelMartinez/teams-for-linux/ with a pipewire camera. | enhancement :sparkles: | low | Minor |
2,748,196,102 | PowerToys | Can you add an option that only displays the New+menu when holding down Shift?🙏🙏🙏 | ### Provide a description of requested docs changes
I'm sorry for the quality of the translation software I used.
I hope you can add an option so that the New+ menu only appears when the Shift key is held, since template files are not used that frequently.
Also, mistakenly creating from a New+ template can be costly: because my templates are quite complex, a single mistake can create more than 300 files totalling over 5 GB.
When I mistakenly created them on an HDD, that was really scary.
2,748,286,378 | svelte | Client errors aren't that helpful in the docs | I've managed to trigger the `Svelte error: state_unsafe_mutation` error a couple of times while migrating my app from Svelte 4 to 5. When it happens I am a little confused as to why, and when I click the link to the client errors in the docs I get no further explanation of common causes, how they can happen, or what to do to solve them: https://svelte.dev/docs/svelte/runtime-errors#Client-errors
It would be awesome if the docs had deeper explanations of these errors, so that I could use them as a resource for fixing the bugs that occur. I would imagine that people who google these errors would be pleasantly surprised to find an explanation and solution to these errors in the docs. | documentation | low | Critical |
2,748,209,037 | rust | Unsized types in required trait methods | This issue is a follow-up to #134422 (closed as not planned), suggesting a lint instead.
# What the issue talked about
Rust does not check for `T: Sized` in required trait methods
```rust
trait Foo {
fn bar(self: Self, x: str);
}
```
The above code compiles, even though `Self` and `str` are both `?Sized`.
This makes the trait unimplementable:
```rust
impl Foo for [u8] {
fn bar(self: Self, x: str) {}
}
```
Produces:
```shell
The size for values of type `[u8]` cannot be known at compile time
The size for values of type `str` cannot be known at compile time
```
For more information please see the [RFC]
# Meta
For anyone who cares, here is the output of `rustc --version --verbose`:
```
rustc 1.85.0-nightly (21fe748be 2024-12-11)
binary: rustc
commit-hash: 21fe748be15271ea5804e0507cd699b675efe038
commit-date: 2024-12-11
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.5
```
Here is the [RFC]
[RFC]: https://github.com/rust-lang/rfcs/pull/3745 | A-lints,T-lang,C-feature-request,needs-rfc,T-types,A-trait-objects | low | Major |
2,748,211,125 | kubernetes | Failing test: [sig-network] Services should fail health check node port if there are only terminating endpointsimage | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-containerd#image-validation-ubuntu-e2e
Sig network tests are failing due to issues with curl.
### Which tests are failing?
- [sig-network] Services should fail health check node port if there are only terminating endpoints
- [sig-network] Networking should check kube-proxy urls
- [sig-network] Services should implement NodePort and HealthCheckNodePort correctly when ExternalTrafficPolicy changes
### Since when has it been failing?
The test has been red as far back as the testgrid history goes.
### Testgrid link
https://testgrid.k8s.io/sig-node-containerd#image-validation-ubuntu-e2e
### Reason for failure (if possible)
_No response_
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig network | sig/network,kind/failing-test,needs-triage | low | Critical |
2,748,213,456 | ollama | falcon3:10b gives empty response sometimes | ### What is the issue?
Ollama 0.5.4 with falcon3:10b randomly gives an empty response. The question I asked was "Why is e^(B_t-t/2) a martingale? Specifically why is it finite?". Initial debugging, with help from the Ollama Discord, points to structured-output problems.
Server log: [com.i0ntpst.ollama.log](https://github.com/user-attachments/files/18185728/com.i0ntpst.ollama.log)
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.1 | bug | low | Critical |
2,748,224,813 | ollama | IBM Granite MoE & Dense-2b is very slow when KV Cache quantization is enabled | ### What is the issue?
I found that all Granite MoE models, plus dense:2b, run extremely slowly when KV cache quantization is enabled. There didn't seem to be any hit on the models' response quality, just on speed, which is kind of strange.
I'm using Windows 11 + RTX 4090
Here is an example using model: granite3.1-moe:3b-instruct-q8_0
### `set OLLAMA_FLASH_ATTENTION=1 && set OLLAMA_KV_CACHE_TYPE=q8_0 && ollama serve`
```
>>> how far is the moon
The distance from Earth to the Moon can vary due to the elliptical shape of its orbit around our planet. On
average, it's about 238,855 miles (384,400 kilometers) away from Earth. However, this is approximately 238,855
miles (384,400 kilometers) at its closest approach and can range up to 252,088 miles (405,696 kilometers) during
its farthest point in its elliptical path.
total duration: 8.3218603s
load duration: 15.7633ms
prompt eval count: 49 token(s)
prompt eval duration: 242ms
prompt eval rate: 202.48 tokens/s
eval count: 130 token(s)
eval duration: 8.005s
eval rate: 16.24 tokens/s
```
### `ollama serve`
```
>>> how far is the moon
The average distance from Earth to the Moon is approximately 238,855 miles (384,400 kilometers). However, it's
important to note that this can fluctuate slightly due to the elliptical nature of its orbit around our planet. At
its closest point, known as perigee, it's about 225,623 miles (363,104 kilometers) away from Earth, while at its
farthest point, called apogee, it can reach up to 252,088 miles (405,696 kilometers).
total duration: 4.2702016s
load duration: 805.8374ms
prompt eval count: 193 token(s)
prompt eval duration: 287ms
prompt eval rate: 672.47 tokens/s
eval count: 142 token(s)
eval duration: 3.115s
eval rate: 45.59 tokens/s
```
---
granite3.1-dense:2b also has the same issue
`ollama run granite3.1-dense:2b-instruct-q8_0 --verbose`
### `set OLLAMA_FLASH_ATTENTION=1 && set OLLAMA_KV_CACHE_TYPE=q8_0 && ollama serve`
```
>>> how far is the moon
As previously mentioned, the average distance from the Earth to the Moon is approximately 238,855 miles (384,400
kilometers). This value remains constant throughout their orbital motion around each other.
total duration: 3.7165709s
load duration: 847.2622ms
prompt eval count: 124 token(s)
prompt eval duration: 93ms
prompt eval rate: 1333.33 tokens/s
eval count: 52 token(s)
eval duration: 2.717s
eval rate: 19.14 tokens/s
```
### `ollama serve`
```
>>> how far is the moon
The average distance from the Earth to the Moon is about 238,855 miles (384,400 kilometers). This distance is
often referred to as the semi-major axis of the Moon's elliptical orbit around the Earth.
total duration: 815.8894ms
load duration: 16.3291ms
prompt eval count: 49 token(s)
prompt eval duration: 286ms
prompt eval rate: 171.33 tokens/s
eval count: 61 token(s)
eval duration: 458ms
eval rate: 133.19 tokens/s
```
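For reference, the reported eval rates are simply token counts divided by eval duration; a quick check confirms roughly a 7x slowdown in the dense:2b runs above:

```python
def eval_rate(tokens: int, seconds: float) -> float:
    """Tokens per second, rounded the way ollama --verbose reports it."""
    return round(tokens / seconds, 2)

print(eval_rate(52, 2.717))   # 19.14  (with q8_0 KV cache)
print(eval_rate(61, 0.458))   # 133.19 (default KV cache)
```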
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.4 | bug | low | Major |
2,748,236,791 | flutter | WASM + Edge: TypeError: type incompatibility when transforming from/to JS | ### Steps to reproduce
1. Run the [Superlist app](https://app.superlist.com) with Microsoft Edge version 131 + Windows 10
### Expected results
Flutter app runs for all users
### Actual results
Flutter app does not run for some users and throws an error relating to `boxed_double.dart` & `alignment.dart`. Thus far, we see users with this profile:
```
Browser: microsoft edge
Browser version: 131.0.0.0
Os: Windows 10
```
### Code sample
I do not have a minimal code sample and I cannot reproduce this error myself.
I have built a special version of our app with WASM that does not strip symbols and disables all optimizations. The logs below contain the error seen by some users. In my view, the relevant parts come from the `boxed_double.dart` through to the `alignment.dart` files.
### Logs
Errors in console below:
```
main.dart.mjs:72 JavaScriptError
main.dart.mjs:72 at module0._invokeCallback (https://deploy-preview-5924--dev-web-superlist.netlify.app/main.dart.wasm:wasm-function[1631]:0x614df5)
at https://deploy-preview-5924--dev-web-superlist.netlify.app/main.dart.mjs:986:49
boxed_double.dart:233 Uncaught Exception {}
$Error._throw @ boxed_double.dart:233
$Error._throwWithCurrentStackTrace @ box.dart:493
$_invokeCallback @ main.dart.wasm:0x614e17
(anonymous) @ main.dart.mjs:986
alignment.dart:106 Uncaught (in promise) TypeError: type incompatibility when transforming from/to JS
at module0._319 (alignment.dart:106:40)
at module0._319 (alignment.dart:523:91)
at Object.initializeEngine (main.dart.mjs:149:80)
at v.onEntrypointLoaded [as _onEntrypointLoaded] ((index):370:55)
at v.didCreateEngineInitializer (flutter.js:1:1679)
at _1516 (main.dart.mjs:463:28)
at module0.FlutterLoaderExtension|didCreateEngineInitializer (main.dart.wasm:0x117d91f)
at module0.bootstrapEngine inner (main.dart.wasm:0x1171317)
at module0.bootstrapEngine (main.dart.wasm:0x11710c8)
at module0.main inner (text_modifier_group.dart:312:64)
$_319 @ alignment.dart:106
$_319 @ alignment.dart:523
(anonymous) @ main.dart.mjs:149
onEntrypointLoaded @ (index):370
didCreateEngineInitializer @ flutter.js:1
_1516 @ main.dart.mjs:463
$FlutterLoaderExtension|didCreateEngineInitializer @ main.dart.wasm:0x117d91f
$bootstrapEngine inner @ main.dart.wasm:0x1171317
$bootstrapEngine @ main.dart.wasm:0x11710c8
$main inner @ text_modifier_group.dart:312
$main @ weak_patch.dart:151
$main tear-off trampoline @ main.dart.wasm:0x666917
$_invokeMain @ super_button_style.dart:265
invokeMain @ main.dart.mjs:1686
_loadWasmEntrypoint @ flutter.js:1
await in _loadWasmEntrypoint
```
### Flutter Doctor output
```
[!] Flutter (Channel master, 3.27.0-1.0.pre.554, on macOS 15.2 24C101 darwin-arm64, locale en-US)
• Flutter version 3.27.0-1.0.pre.554 on channel master at /Users/brian/superlist/superlist/.flutter
! Unknown upstream repository.
Reinstall Flutter by following instructions at https://flutter.dev/setup.
• Framework revision 966aeb28ea (4 weeks ago), 2024-11-19 04:17:26 -0500
• Engine revision b6723e33b8
• Dart version 3.7.0 (build 3.7.0-160.0.dev)
• DevTools version 2.41.0-dev.2
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades.
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/brian/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /Users/brian/Library/Android/sdk
• Java binary at: /usr/bin/java
✗ Could not determine java version
[!] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
! CocoaPods 1.16.1 out of date (1.16.2 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/macos-android-setup for detailed instructions).
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3.1)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 83.0.4
• Dart plugin version 243.23177
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
[✓] Connected device (5 available)
• Briphone (mobile) • 00008120-000A20980233401E • ios • iOS 18.2 22C152
• iPhone 16 (mobile) • 2D28A576-2687-46EE-9F84-CA935E5AAAF0 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-0 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.140
! Error: Browsing on the local area network for Brian’s iPad. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 4 categories.
``` | platform-windows,platform-web,assigned for triage,browser: edge,e: wasm,team-web | low | Critical |
2,748,264,068 | opencv | Unknown C++ exception from OpenCV code from python using QR Detect | ### System Information
OpenCV version: 4.10.0.82
Operating System / Platform: Windows 10 Version 22H2 (OS Build 19045.5247)
Python 3.9.13
Reproduced in Docker Image as well:
OpenCV version: 4.10.0.84
Operating System / Platform: Ubuntu 24.04.1
Python: 3.12.3
### Detailed description
Particular images hang on the call to detect and decode with the QR code detector. They run for about 10 minutes and then raise the error `cv2.error: Unknown C++ exception from OpenCV code`. Within the Docker image described above, they run for the same amount of time before throwing a segmentation fault. By contrast, when these images are run through the same call on the OpenCV Aruco-based QR detector, the errors are not encountered. In some cases we have observed that subtle changes to the images will lead to successful runs. The images are attached here.
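Until the root cause is fixed, one generic mitigation for the multi-minute hang is to bound the detect call with a timeout. A minimal stdlib sketch (the names `detect_fn` and `detect_with_timeout` are my own, not OpenCV API; note a timed-out thread keeps running in the background, so a subprocess would be needed to actually kill a hung call):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def detect_with_timeout(detect_fn, image, timeout_s=30.0):
    """Run detect_fn(image) but give up after timeout_s seconds."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(detect_fn, image).result(timeout=timeout_s)
    except FutureTimeout:
        return None  # treat a hung detector as "nothing decoded"
    finally:
        pool.shutdown(wait=False)

# e.g. detect_with_timeout(lambda im: qcd.detectAndDecodeMulti(im), img, 30.0)
```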


### Steps to reproduce
```python
qcd = cv2.QRCodeDetector()
img = cv2.imread("Path/To/Image")
retval, decoded_info, points, straight_qrcode = qcd.detectAndDecodeMulti(img)
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: objdetect | low | Critical |
2,748,264,589 | godot | Text servers can't be built as extensions | ### Tested versions
4.4.dev [6395450b10d73bc3515763c7bbd1b2f5b7425d10]
### System information
Windows 11
### Issue description
While working on https://github.com/godotengine/godot/pull/100562, I was attempting to confirm that compiling as an extension would work, but ran into unrelated problems; there are two primary ones currently:
* The buildsystem fails due to:
```
ImportError: cannot import name 'Ansi' from partially initialized module 'methods' (most likely due to a circular import) (modules\text_server_fb\gdextension_build\methods.py):
File "modules\text_server_fb\gdextension_build\SConstruct", line 7:
import methods
File "modules\text_server_fb\gdextension_build\methods.py", line 6:
from methods import Ansi
```
This should be easy to fix by renaming `modules\text_server_fb\gdextension_build\methods.py` to something specific.
* The compilation itself fails due to missing thorvg files; this is likely because the SCons file is not up to date with changes to the svg module.
I can look into fixing the specifics, but we should probably evaluate whether we want to keep this functionality buildable, drop it entirely, or just leave it as is.
### Steps to reproduce
Try to build the extension version of either text server module
### Minimal reproduction project (MRP)
N/A | bug,discussion,topic:buildsystem,topic:gdextension | low | Critical |
2,748,264,593 | godot | CanvasItem's clip-only mode on skewed styleboxes fails to clear. | ### Tested versions
- Reproducible in: 4.3.stable
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 Ti (NVIDIA; 32.0.15.6636) - AMD Ryzen 7 4800H with Radeon Graphics (16 Threads)
### Issue description


As you can see, the portions of the stylebox outside the bounding box are not being cleared; artifacts of previous frames persist.
### Steps to reproduce
Create a panel with a flat stylebox.
Skew the stylebox.
Put any visible child inside the panel.
### Minimal reproduction project (MRP)
[panel.zip](https://github.com/user-attachments/files/18186026/panel.zip)
| bug,topic:rendering,topic:2d | low | Minor |
2,748,281,390 | PowerToys | Add automatic rename mode when creating a new file type in New+ | ### Description of the new feature / enhancement
In **New+**, When a new file type is created from the template, it should _automatically_ enter "**rename** mode" to allow users to immediately rename the file from its default template name. This feature is similar to the Windows built-in behavior and would streamline workflows by eliminating unnecessary steps, such as navigating to the file and manually triggering rename.
### Scenario when this would be used?
This feature would be particularly helpful when users create multiple files in quick succession. Often, new files are created with generic, template-derived names (e.g., "New File (1)"), and users must manually rename them later. By entering rename mode automatically, users can:
- Avoid creating duplicate, undescriptive filenames.
- Save time by renaming files immediately upon creation.
This is especially beneficial for power users managing large projects with frequent file additions.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Product-New+ | low | Minor |
2,748,286,378 | pytorch | [Dynamo] torch._dynamo.exc.Unsupported: sort with non-constant keys | ### 🐛 Describe the bug
This error was encountered while trying to implement a version of [Autotuner.prune_configs](https://github.com/triton-lang/triton/blob/137bc62102f4a261cc921998221cea2b046a6c1b/python/triton/runtime/autotuner.py#L214) from Triton.
This function was modified from operating on a dict to a list (dict with config keys is also not supported).
A minimal repro would look something like:
```python
est_timing: List[Tuple[triton.runtime.Config, float]]
est_timing = [
(config, perf_model(**named_args, **kwargs, **config.all_kwargs()))
for config in configs
]
configs = sorted(est_timing, key=lambda x: est_timing[1])[:top_k]
```
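As a side note, outside Dynamo the intended top-k pruning can be written in plain Python. The key function below sorts each pair by its own timing (`pair[1]`), whereas the lambda in the repro above returns the closed-over `est_timing[1]` for every element. A minimal sketch with a toy perf model (names are illustrative, not the Triton API):

```python
from typing import Any, Callable, List, Tuple

def prune_by_perf_model(
    configs: List[Any], perf_model: Callable[[Any], float], top_k: int
) -> List[Any]:
    """Keep the top_k configs with the lowest estimated runtime."""
    est_timing: List[Tuple[Any, float]] = [(c, perf_model(c)) for c in configs]
    fastest = sorted(est_timing, key=lambda pair: pair[1])[:top_k]
    return [config for config, _ in fastest]

# Toy perf model: pretend a larger block size is slower.
configs = [{"BLOCK": b} for b in (128, 32, 64)]
print(prune_by_perf_model(configs, lambda c: c["BLOCK"], top_k=2))
# [{'BLOCK': 32}, {'BLOCK': 64}]
```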
Here is the complete function which triggered the error (for reproducibility):
```python
def call_prune_configs( # type: ignore[no-untyped-def]
autotuner,
early_config_prune: Callable,
perf_model: Callable,
top_k: float,
is_top_k_float: bool,
configs: List,
named_args: Dict,
kwargs: Dict,
):
if early_config_prune:
configs = early_config_prune(configs, named_args, **kwargs)
if perf_model:
# we assert top_k is a float before calling this
if is_top_k_float and top_k <= 1.0:
top_k = int(len(configs) * top_k)
if len(configs) > top_k:
est_timing = [
(config, perf_model(**named_args, **kwargs, **config.all_kwargs()))
for config in configs
]
configs = sorted(est_timing, key=lambda x: est_timing[1])[:top_k]
return configs
# Called in torch/_higher_order_ops/triton_kernel_wrap.py
pruned_configs = self.call_user_defined_fn(
call_prune_configs,
[
variable,
wrapped_early_configs_prune,
wrapped_perf_model,
wrapped_configs_top_k,
wrapped_is_top_k_float,
wrapped_configs,
named_args,
kwargs,
],
{},
tx,
variable.source,
)
```
Here is a stack trace of the generated bytecode leading up to the error:
```bash
"/data/users/ginzburg/pytorch/torch/_higher_order_ops/triton_kernel_wrap.py", line 1023> [BuiltinVariable(sorted), ListVariable(length=2), TupleVariable(length=1)]
V1218 08:22:05.910000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_CONST call_prune_configs.<locals>.<lambda> [BuiltinVariable(sorted), ListVariable(length=2), TupleVariable(length=1), ConstantVariable(code: <code object <lambda> at 0x7f9e3c5fbb50, file "/data/users/ginzburg/pytorch/torch/_higher_order_ops/triton_kernel_wrap.py", line 1023>)]
V1218 08:22:05.910000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE MAKE_FUNCTION 8 [BuiltinVariable(sorted), ListVariable(length=2), TupleVariable(length=1), ConstantVariable(code: <code object <lambda> at 0x7f9e3c5fbb50, file "/data/users/ginzburg/pytorch/torch/_higher_order_ops/triton_kernel_wrap.py", line 1023>), ConstantVariable(str: 'call_prune_configs.<locals>.<lambda>')]
V1218 08:22:05.910000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('key',) [BuiltinVariable(sorted), ListVariable(length=2), NestedUserFunctionVariable()]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 2 [BuiltinVariable(sorted), ListVariable(length=2), NestedUserFunctionVariable(), TupleVariable(length=1)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_DEREF est_timing []
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_CONST 1 [ListVariable(length=2)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE BINARY_SUBSCR None [ListVariable(length=2), ConstantVariable(int: 1)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TupleVariable(length=2)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_DEREF est_timing []
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE LOAD_CONST 1 [ListVariable(length=2)]
V1218 08:22:05.911000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE BINARY_SUBSCR None [ListVariable(length=2), ConstantVariable(int: 1)]
V1218 08:22:05.912000 934875 torch/_dynamo/symbolic_convert.py:956] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TupleVariable(length=2)]
inline_call [('sort with non-constant keys', 1)]
```
### Versions
PyTorch version: 2.6.0a0+git28e242f
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: 18.1.8 (CentOS 18.1.8-3.el9)
CMake version: version 3.26.4
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0a0+git28e242f
[conda] blas 1.0 mkl
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.10 py310h5eee18b_0
[conda] mkl_random 1.2.7 py310h1128e8f_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0a0+git28e242f dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,748,330,660 | godot | Stream sync play and seek is setting wrong position on certain cases | ### Tested versions
- Reproducible in: 4.3.stable and later
- Not reproducible in: 4.2.stable and earlier
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - Intel(R) HD Graphics 5500 (Intel Corporation; 20.19.15.5107) - Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz (4 Threads)
### Issue description
If you have multiple streams in an AudioStreamSynchronized with different lengths, the `play` and `seek` methods will desynchronize the sound when the position being set is greater than one of the streams' lengths.
There is an open [PR](https://github.com/godotengine/godot/pull/100534) to fix it.
### Steps to reproduce
- Add an AudioStreamSynchronized to an AudioStreamPlayer
- Add a stream with a length of 16 seconds
- Add another stream with a length of 32 seconds
- Call audio_player.play(20) or audio_player.seek(20)
### Minimal reproduction project (MRP)
You can reproduce this issue [here](https://github.com/adriano-sudario/godot_streams_issues_mrp) with detailed information by running the scene _res://bugs/stream_sync_play_and_seek_not_setting_position_right_sometimes.tscn_ | bug,topic:audio,regression | low | Critical |
2,748,334,631 | go | runtime: use runtime.AddCleanup in the standard library | AddCleanup has been added to the runtime (#67535). We should use runtime.AddCleanup instead of runtime.SetFinalizer in the standard library wherever it is possible.
@mknyszek | NeedsFix,compiler/runtime | low | Major |
2,748,362,367 | flutter | Question for the engine team about making copyPixelBuffer/textureFrameAvailable more explicit about call patterns | From https://github.com/flutter/packages/pull/7466#discussion_r1871942627:
"I think the next step is to file an issue to explore with the engine team whether making the copyPixelBuffer/textureFrameAvailable more explicit about call patterns (e.g., saying that calls will not be more frequent than ~the screen refresh rate) is something the engine team is comfortable with, so we know whether or not we need to build limiting logic at the plugin level."
In #7466 there is a subtle issue: it assumes that `copyPixelBuffer` will not be called more often than the screen refresh rate. This is not critical for functionality, but behavior would be suboptimal if the assumption did not hold. The current implementation relies on this assumption, but it is not documented.
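The "limiting logic at the plugin level" mentioned in the quoted discussion is essentially a throttle. A language-agnostic sketch in Python (the ~60 Hz cap, class name, and clock source are illustrative assumptions, not the plugin's actual code):

```python
import time

class FrameThrottle:
    """Suppress calls that arrive faster than a minimum interval (e.g. ~60 Hz)."""

    def __init__(self, min_interval_s=1 / 60, clock=time.monotonic):
        self.min_interval_s = min_interval_s
        self.clock = clock
        self._last = None

    def should_deliver(self):
        now = self.clock()
        if self._last is None or now - self._last >= self.min_interval_s:
            self._last = now
            return True   # deliver this frame
        return False      # drop: arrived faster than the cap

# Deterministic demo with a fake clock: calls at 0 ms, 5 ms, 20 ms.
ticks = iter([0.000, 0.005, 0.020])
throttle = FrameThrottle(clock=lambda: next(ticks))
print([throttle.should_deliver() for _ in range(3)])  # [True, False, True]
```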
| engine,P3,team-engine,triaged-engine | low | Major |
2,748,363,512 | flutter | Button taps have vertical offset on an emulator/simulator | ### Steps to reproduce
1. Run the example app
2. Scroll until you see the button "inside scroll"
3. Try tapping the top half of the button. Taps are not detected. Now try tapping right under the button, but outside the box. Taps go through. It seems like there is a vertical offset in its hitbox. Images attached
I'm running the demo on an Android emulator, so that we have the precision of the mouse to tap near the bounding of the button.
### Expected results
No vertical offset when tapping, just like the button outside the scroll in the demo
### Actual results
There is a vertical offset when tapping
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int numTaps = 0;
@override
Widget build(BuildContext context) {
final Widget body = Column(
children: [
Text("Button outside scroll has the correct hit box"),
FilledButton.icon(
icon: Icon(Icons.add),
onPressed: () {
setState(() {
numTaps++;
});
},
label: Text("outside scroll"),
),
Expanded(
child: Scrollbar(
child: CustomScrollView(
slivers: <Widget>[
for (int i = 0; i < 10; i++) ...[
SliverToBoxAdapter(
child: Placeholder(
fallbackHeight: 100,
)),
],
SliverToBoxAdapter(
child: FilledButton.icon(
icon: Icon(Icons.add),
onPressed: () {
setState(() {
numTaps++;
});
},
label: Text("inside scroll"),
),
),
SliverToBoxAdapter(
child: Placeholder(
fallbackHeight: 100,
)),
],
),
),
)
],
);
return Scaffold(
appBar: AppBar(
title: Text("Num taps: $numTaps"),
),
body: body,
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
#### Above mouse works:

#### Above mouse doesn't work:

</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on Fedora Linux 41 (Workstation Edition) 6.11.10-300.fc41.x86_64, locale en_US.UTF-8)
• Flutter version 3.27.1 on channel stable at /home/david/.local/share/mise/installs/flutter/3.27.1-stable
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (2 days ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /home/david/Android/Sdk/
• Platform android-35, build-tools 34.0.0
• Java binary at: /home/david/opt/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at google-chrome
[✓] Linux toolchain - develop for Linux desktop
• clang version 19.1.5 (Fedora 19.1.5-1.fc41)
• cmake version 3.30.5
• ninja version 1.12.1
• pkg-config version 2.3.0
[✓] Android Studio (version 2024.2)
• Android Studio at /home/david/opt/android-studio/
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• android-studio-dir = /home/david/opt/android-studio/
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[✓] VS Code (version 1.96.0)
• VS Code at /usr/share/code
• Flutter extension version 3.102.0
[✓] Connected device (3 available)
• sdk gphone x86 64 (mobile) • emulator-5554 • android-x64 • Android 13 (API 33) (emulator)
• Linux (desktop) • linux • linux-x64 • Fedora Linux 41 (Workstation Edition) 6.11.10-300.fc41.x86_64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.108
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,f: gestures,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Minor |
2,748,376,750 | ollama | Setup window scaling is bigger than expected. | ### What is the issue?
It's probably not causing any problems during usage, but it kind of bothers me on each update: the setup window, its text, and its buttons are larger than expected. I'm currently on `0.5.4`, but the same issue exists on previous versions. I have a dual-monitor setup, both monitors are 1080p, and I have never used scaling (default 100%). The size difference is obvious to the eye, but here is a screenshot:

The right window is for reference; every setup program I have ever seen is about that size.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4 | bug | low | Major |
2,748,443,008 | godot | [GDScript] Cannot declare variable in lambda with the same name as parent scope | ### Tested versions
Godot v4.3.stable - Windows 10.0.26100 - Vulkan (Forward+) - integrated Intel(R) Iris(R) Xe Graphics (Intel Corporation; 31.0.101.5595) - 13th Gen Intel(R) Core(TM) i5-1340P (16 Threads)
### System information
N/A
### Issue description
Declaring a variable inside a GDScript lambda with the same name as a variable in the enclosing function results in a compile error.
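For comparison, the closest analogue in Python — a nested function declaring a local with the same name as an enclosing local — is allowed and simply shadows the outer variable (Python is used here purely for contrast; GDScript's scoping rules differ):

```python
def test():
    result = []

    def inner():          # closest Python analogue of the GDScript lambda
        result = []       # shadows the outer `result`; no error in Python
        result.append(1)
        return result

    return inner(), result

print(test())  # ([1], []) -- the outer `result` is untouched
```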
### Steps to reproduce
Create the script below. It will result in the following error:
```
Parse Error: There is already a variable named "result" declared in this scope.
```
### Minimal reproduction project (MRP)
```gdscript
func test() -> void:
var result := []
var lambda := func () -> void:
var result := []
``` | discussion,topic:gdscript | low | Critical |
2,748,492,876 | PowerToys | Hang when logging in via Windows Remote Desktop to a machine with PowerToys installed | ### Microsoft PowerToys version
0.87.0.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
Log in via Remote Desktop and try to do just about anything
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
Version: 0.87.0.0
OS Version: Microsoft Windows NT 10.0.26100.0
IntPtr Length: 8
x64: True
Date: 12/18/2024 1:31:25 PM
Exception:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
---> System.Runtime.InteropServices.COMException (0xD0000701): 0xD0000701
at Standard.NativeMethods.DwmExtendFrameIntoClientArea(IntPtr hwnd, MARGINS& pMarInset)
at System.Windows.Appearance.WindowBackdropManager.UpdateGlassFrame(IntPtr hwnd, WindowBackdropType backdropType)
at System.Windows.Appearance.WindowBackdropManager.ApplyBackdrop(IntPtr hwnd, WindowBackdropType backdropType)
at System.Windows.Appearance.WindowBackdropManager.SetBackdrop(Window window, WindowBackdropType backdropType)
at System.Windows.ThemeManager.ApplyStyleOnWindow(Window window, Boolean useLightColors)
at System.Windows.ThemeManager.ApplyFluentOnWindow(Window window)
at System.Windows.ThemeManager.OnWindowThemeChanged(Window window, ThemeMode oldThemeMode, ThemeMode newThemeMode)
at System.Windows.ThemeManager.SyncWindowThemeMode(Window window)
at System.Windows.TreeWalkHelper.InvalidateOnResourcesChange(FrameworkElement fe, FrameworkContentElement fce, ResourcesChangeInfo info)
at System.Windows.ResourceDictionary.NotifyOwners(ResourcesChangeInfo info)
at System.Windows.ResourceDictionary.OnMergedDictionariesChanged(Object sender, NotifyCollectionChangedEventArgs e)
at System.Collections.ObjectModel.ObservableCollection`1.OnCollectionChanged(NotifyCollectionChangedEventArgs e)
at PowerLauncher.Helper.ThemeManager.SetSystemTheme(Theme theme)
at PowerLauncher.Helper.ThemeManager.<>c__DisplayClass12_0.<UpdateTheme>b__0()
at System.Windows.Threading.Dispatcher.Invoke(Action callback, DispatcherPriority priority, CancellationToken cancellationToken, TimeSpan timeout)
at System.Windows.Threading.Dispatcher.Invoke(Action callback)
at PowerLauncher.Helper.ThemeManager.UpdateTheme()
at PowerLauncher.Helper.ThemeManager.OnUserPreferenceChanged(Object sender, UserPreferenceChangedEventArgs e)
at InvokeStub_UserPreferenceChangedEventHandler.Invoke(Object, Span`1)
at System.Reflection.MethodBaseInvoker.InvokeWithFewArgs(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
--- End of inner exception stack trace ---
at System.Reflection.MethodBaseInvoker.InvokeWithFewArgs(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
at System.Delegate.DynamicInvokeImpl(Object[] args)
at Microsoft.Win32.SystemEvents.SystemEventInvokeInfo.InvokeCallback(Object arg)
at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
### Other Software
Remote Desktop | Issue-Bug,Product-PowerToys Run,Needs-Triage | low | Minor |
2,748,520,619 | transformers | InternVL is ExecuTorch Compatible | ### Feature request
Enable OpenGVLab/InternVL2-1B for the ["Export to ExecuTorch"](https://github.com/huggingface/transformers/issues/32253) workflow
### Motivation
See details in #32253
### Your contribution
model enablement | Feature request,ExecuTorch | low | Minor |
2,748,525,313 | ant-design | Slider: allow "included=true" to begin from a specified value | ### What problem does this feature solve?
Currently `included=true` always starts the track fill from the left-most edge (right-most if reversed). There is no way to make the fill begin from the middle of the range. For example, if you have a range from -10 to 10 and the default value is 0, you may want the fill to start from zero and extend toward the handle.
### What does the proposed API look like?
```jsx
<Slider
min={-10}
max={10}
defaultValue={0}
included={true}
includedStart={0} <-- This is the proposal
/>
```
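The track-fill geometry this proposal implies can be sketched independently of React; this plain-Python sketch (the percentage convention and function name are illustrative) shows the fill running from `includedStart` toward the handle on either side:

```python
def track_fill(min_v, max_v, included_start, value):
    """Return (start%, end%) of the filled track segment along the rail."""
    def to_pct(v):
        return (v - min_v) * 100 / (max_v - min_v)
    lo, hi = sorted((included_start, value))
    return to_pct(lo), to_pct(hi)

print(track_fill(-10, 10, 0, 7))   # (50.0, 85.0): fill from center to the handle
print(track_fill(-10, 10, 0, -4))  # (30.0, 50.0): works on either side of zero
```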
| unconfirmed | low | Minor |
2,748,543,894 | angular | Angular router leaks components and their DOM Elements | ### Which @angular/* package(s) are the source of the bug?
router
### Is this a regression?
No
### Description
Every time we navigate to a component and route away asynchronously, the Angular router is leaking the component.
This is causing severe memory issues in production code.
Steps to reproduce:
1. `git clone [email protected]:arobinson/angular-router-leak.git`
2. `pnpm install`
3. `ng serve`
4. Open http://localhost:4200/homepage in Google Chrome (or Microsoft Edge)
5. Open the chrome developer tools
6. Open the "Memory" tab
7. Take a heap snapshot
8. Filter for "andrew", verify nothing is found
9. Close the chrome developer tools
10. Click "Launch Standalone"
11. Click "Close" (it will navigate after a 2 second delay)
12. Repeat steps 10 & 11 a few times
13. Reopen the chrome developer tools
14. Open the "Memory" tab
15. Take a heap snapshot
16. Filter for "andrew", notice that the DOM element has been orphaned and will not be Garbage Collected.
For larger components this ends up leaking over 50MB of RAM per navigation in our application.

Now notice that the problem behaves differently with a synchronous navigation:
1. Reload the browser to clear the memory
2. Open dev tools
3. Click "Launch Standalone" again
4. This time click "Close Synchronously"
5. Repeat steps 3 & 4 a few times
6. Take a heap snapshot
7. Filter for "andrew", notice that only the last DOM element that was routed to is detached. So there is still a memory issue, but thankfully it does not grow every time we route.

### Please provide a link to a minimal reproduction of the bug
https://github.com/arobinson/angular-router-leak
### Please provide the exception or error you saw
```text
Detached / Leaked DOM elements and their components
```
### Please provide the environment you discovered this bug in (run `ng version`)
```text
Angular CLI: 18.2.1
Node: 22.11.0
Package Manager: pnpm 9.9.0
OS: darwin arm64
Angular: 18.2.2
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.3
@angular-devkit/build-angular 18.2.1
@angular-devkit/core 19.0.3
@angular-devkit/schematics 19.0.3
@angular/cli 18.2.1
@schematics/angular 19.0.3
rxjs 7.8.1
typescript 5.5.2
zone.js 0.14.10
```
### Anything else?
_No response_ | memory leak,area: router | low | Critical |
2,748,555,544 | rust | ICE: impossible case reached with gce + v0 symbol mangling | <!--
ICE: Rustc ./a.rs '-Cinstrument-coverage -ooutputfile -Zdump-mir-dir=dir' 'error: internal compiler error: compiler/rustc_symbol_mangling/src/v0.rs:578:42: impossible case reached', 'error: internal compiler error: compiler/rustc_symbol_mangling/src/v0.rs:578:42: impossible case reached'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
//@compile-flags: -Cinstrument-coverage
#![feature(generic_const_exprs)]
struct Inner<const N: usize, const M: usize>;
impl<const N: usize, const M: usize> Inner<N, M>
where
[(); N + M]:,
{
fn i() -> Self {
Self
}
}
struct Outer<const A: usize, const B: usize>(Inner<A, { B * 2 }>)
where
[(); A + (B * 2)]:;
impl<const A: usize, const B: usize> Outer<A, B>
where
[(); A + (B * 2)]:,
{
fn o() -> Self {
Self(Inner::i())
}
}
fn main() {
Outer::<1, 1>::o();
}
````
original:
````rust
#![allow(incomplete_features)]
#![feature(generic_const_exprs)]
struct Inner<const N: usize, const M: usize>;
impl<const N: usize, const M: usize> Inner<N, M> where [(); N + M]: {
fn i() -> Self {
Self
}
}
struct Outer<const A: usize, const B: usize>(Inner<A, { B * 2 }>) where [(); A + (B * 2)]:;
impl<const A: usize, const B: usize> Outer<A, B> where [(); A + (B * 2)]: {
fn o() -> Self {
Self(Inner::i())
}
}
fn main() {
Outer::<1, 1>::o();
}
````
Version information
````
rustc 1.85.0-nightly (057bdb37e 2024-12-18)
binary: rustc
commit-hash: 057bdb37eccff6a2bd402509bbbadb9d73ad7bf5
commit-date: 2024-12-18
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.6
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/057bdb37eccff6a2bd402509bbbadb9d73ad7bf5/compiler/rustc_symbol_mangling/src/v0.rs#L572-L584
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Cinstrument-coverage`
<details><summary><strong>Program output</strong></summary>
<p>
```
warning: the feature `generic_const_exprs` is incomplete and may not be safe to use and/or cause compiler crashes
--> /tmp/icemaker_global_tempdir.Vj5eJd7yWmCg/rustc_testrunner_tmpdir_reporting.hpXCYSBbyu42/mvce.rs:1:12
|
1 | #![feature(generic_const_exprs)]
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #76560 <https://github.com/rust-lang/rust/issues/76560> for more information
= note: `#[warn(incomplete_features)]` on by default
error: internal compiler error: compiler/rustc_symbol_mangling/src/v0.rs:578:42: impossible case reached
thread 'rustc' panicked at compiler/rustc_symbol_mangling/src/v0.rs:578:42:
Box<dyn Any>
stack backtrace:
0: 0x77006137164a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::he51ca9b0dc91b94e
1: 0x770061a13dbc - core::fmt::write::h2fe894f1a71f3577
2: 0x770062989091 - std::io::Write::write_fmt::h76ab228c5bf44dde
3: 0x7700613714a2 - std::sys::backtrace::BacktraceLock::print::h54c62b0f13f3e5cd
4: 0x77006137399a - std::panicking::default_hook::{{closure}}::hde230593691540bb
5: 0x7700613737e3 - std::panicking::default_hook::h93566d26810460b4
6: 0x7700604e01e8 - std[3b55a5d72aeb08a8]::panicking::update_hook::<alloc[33b8f9c30a0f5ec3]::boxed::Box<rustc_driver_impl[d048b04913314086]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x770061374158 - std::panicking::rust_panic_with_hook::h8f04bfe0058df3f9
8: 0x77006051ae71 - std[3b55a5d72aeb08a8]::panicking::begin_panic::<rustc_errors[84dd1c3e8f0c26d7]::ExplicitBug>::{closure#0}
9: 0x770060510056 - std[3b55a5d72aeb08a8]::sys::backtrace::__rust_end_short_backtrace::<std[3b55a5d72aeb08a8]::panicking::begin_panic<rustc_errors[84dd1c3e8f0c26d7]::ExplicitBug>::{closure#0}, !>
10: 0x77006050cc8d - std[3b55a5d72aeb08a8]::panicking::begin_panic::<rustc_errors[84dd1c3e8f0c26d7]::ExplicitBug>
11: 0x770060524dd1 - <rustc_errors[84dd1c3e8f0c26d7]::diagnostic::BugAbort as rustc_errors[84dd1c3e8f0c26d7]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x770060afb9d3 - rustc_middle[c5657185550747ed]::util::bug::opt_span_bug_fmt::<rustc_span[430234a91bb838ed]::span_encoding::Span>::{closure#0}
13: 0x770060ae133a - rustc_middle[c5657185550747ed]::ty::context::tls::with_opt::<rustc_middle[c5657185550747ed]::util::bug::opt_span_bug_fmt<rustc_span[430234a91bb838ed]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x770060ae11cb - rustc_middle[c5657185550747ed]::ty::context::tls::with_context_opt::<rustc_middle[c5657185550747ed]::ty::context::tls::with_opt<rustc_middle[c5657185550747ed]::util::bug::opt_span_bug_fmt<rustc_span[430234a91bb838ed]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x77005ebce5c0 - rustc_middle[c5657185550747ed]::util::bug::bug_fmt
16: 0x77006105929f - <rustc_symbol_mangling[d2b91f94fa5b6f43]::v0::SymbolMangler as rustc_middle[c5657185550747ed]::ty::print::Printer>::print_const
17: 0x77006105988a - <rustc_symbol_mangling[d2b91f94fa5b6f43]::v0::SymbolMangler as rustc_middle[c5657185550747ed]::ty::print::Printer>::path_generic_args::<<rustc_symbol_mangling[d2b91f94fa5b6f43]::v0::SymbolMangler as rustc_middle[c5657185550747ed]::ty::print::Printer>::default_print_def_path::{closure#3}>
18: 0x770061057151 - <rustc_symbol_mangling[d2b91f94fa5b6f43]::v0::SymbolMangler as rustc_middle[c5657185550747ed]::ty::print::Printer>::print_def_path
19: 0x770061057c4d - <rustc_symbol_mangling[d2b91f94fa5b6f43]::v0::SymbolMangler as rustc_middle[c5657185550747ed]::ty::print::Printer>::print_type
20: 0x770061057365 - <rustc_symbol_mangling[d2b91f94fa5b6f43]::v0::SymbolMangler as rustc_middle[c5657185550747ed]::ty::print::Printer>::print_impl_path
21: 0x7700610569e4 - <rustc_symbol_mangling[d2b91f94fa5b6f43]::v0::SymbolMangler as rustc_middle[c5657185550747ed]::ty::print::Printer>::print_def_path
22: 0x7700610570e7 - <rustc_symbol_mangling[d2b91f94fa5b6f43]::v0::SymbolMangler as rustc_middle[c5657185550747ed]::ty::print::Printer>::print_def_path
23: 0x770061056450 - rustc_symbol_mangling[d2b91f94fa5b6f43]::v0::mangle
24: 0x7700620c6e5f - rustc_symbol_mangling[d2b91f94fa5b6f43]::symbol_name_provider
25: 0x7700620c58ea - rustc_query_impl[7ef989d5bbee9c2]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[7ef989d5bbee9c2]::query_impl::symbol_name::dynamic_query::{closure#2}::{closure#0}, rustc_middle[c5657185550747ed]::query::erase::Erased<[u8; 16usize]>>
26: 0x7700620c4877 - rustc_query_system[ccd0af510523da55]::query::plumbing::try_execute_query::<rustc_query_impl[7ef989d5bbee9c2]::DynamicConfig<rustc_query_system[ccd0af510523da55]::query::caches::DefaultCache<rustc_middle[c5657185550747ed]::ty::instance::Instance, rustc_middle[c5657185550747ed]::query::erase::Erased<[u8; 16usize]>>, false, false, false>, rustc_query_impl[7ef989d5bbee9c2]::plumbing::QueryCtxt, false>
27: 0x7700620c44e2 - rustc_query_impl[7ef989d5bbee9c2]::query_impl::symbol_name::get_query_non_incr::__rust_end_short_backtrace
28: 0x7700628254c2 - <rustc_middle[c5657185550747ed]::mir::mono::MonoItem>::symbol_name
29: 0x770062824fbd - rustc_monomorphize[318401024c3aab57]::partitioning::assert_symbols_are_distinct::<core[d6142de48f57c781]::slice::iter::Iter<rustc_middle[c5657185550747ed]::mir::mono::MonoItem>>
30: 0x7700620e17fd - rustc_monomorphize[318401024c3aab57]::partitioning::collect_and_partition_mono_items
31: 0x7700620dfca4 - rustc_query_impl[7ef989d5bbee9c2]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[7ef989d5bbee9c2]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2}::{closure#0}, rustc_middle[c5657185550747ed]::query::erase::Erased<[u8; 24usize]>>
32: 0x7700620dfc89 - <rustc_query_impl[7ef989d5bbee9c2]::query_impl::collect_and_partition_mono_items::dynamic_query::{closure#2} as core[d6142de48f57c781]::ops::function::FnOnce<(rustc_middle[c5657185550747ed]::ty::context::TyCtxt, ())>>::call_once
33: 0x7700629b0a20 - rustc_query_system[ccd0af510523da55]::query::plumbing::try_execute_query::<rustc_query_impl[7ef989d5bbee9c2]::DynamicConfig<rustc_query_system[ccd0af510523da55]::query::caches::SingleCache<rustc_middle[c5657185550747ed]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[7ef989d5bbee9c2]::plumbing::QueryCtxt, false>
34: 0x7700629b0720 - rustc_query_impl[7ef989d5bbee9c2]::query_impl::collect_and_partition_mono_items::get_query_non_incr::__rust_end_short_backtrace
35: 0x7700629f5630 - <rustc_codegen_llvm[642dbb4fc0deb88d]::LlvmCodegenBackend as rustc_codegen_ssa[f60ac56e662ca6db]::traits::backend::CodegenBackend>::codegen_crate
36: 0x7700629e1c64 - <rustc_interface[f31e59266673035b]::queries::Linker>::codegen_and_build_linker
37: 0x77006299b3d2 - rustc_interface[f31e59266673035b]::passes::create_and_enter_global_ctxt::<core[d6142de48f57c781]::option::Option<rustc_interface[f31e59266673035b]::queries::Linker>, rustc_driver_impl[d048b04913314086]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
38: 0x770062a15b83 - rustc_interface[f31e59266673035b]::interface::run_compiler::<(), rustc_driver_impl[d048b04913314086]::run_compiler::{closure#0}>::{closure#1}
39: 0x770062918511 - std[3b55a5d72aeb08a8]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[f31e59266673035b]::util::run_in_thread_with_globals<rustc_interface[f31e59266673035b]::util::run_in_thread_pool_with_globals<rustc_interface[f31e59266673035b]::interface::run_compiler<(), rustc_driver_impl[d048b04913314086]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
40: 0x7700629189a6 - <<std[3b55a5d72aeb08a8]::thread::Builder>::spawn_unchecked_<rustc_interface[f31e59266673035b]::util::run_in_thread_with_globals<rustc_interface[f31e59266673035b]::util::run_in_thread_pool_with_globals<rustc_interface[f31e59266673035b]::interface::run_compiler<(), rustc_driver_impl[d048b04913314086]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[d6142de48f57c781]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
41: 0x770062919f6f - std::sys::pal::unix::thread::Thread::new::thread_start::he7ac21643e42d931
42: 0x77005cca339d - <unknown>
43: 0x77005cd2849c - <unknown>
44: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.85.0-nightly (057bdb37e 2024-12-18) running on x86_64-unknown-linux-gnu
note: compiler flags: -C instrument-coverage -Z dump-mir-dir=dir
query stack during panic:
#0 [symbol_name] computing the symbol for `<impl at /tmp/icemaker_global_tempdir.Vj5eJd7yWmCg/rustc_testrunner_tmpdir_reporting.hpXCYSBbyu42/mvce.rs:3:1: 5:18>::i`
#1 [collect_and_partition_mono_items] collect_and_partition_mono_items
end of query stack
error: aborting due to 1 previous error; 1 warning emitted
```
</p>
</details>
@rustbot label +F-generic_const_exprs | I-ICE,T-compiler,C-bug,F-generic_const_exprs,S-bug-has-test,requires-incomplete-features | low | Critical |
2,748,585,500 | go | x/pkgsite: code example format request returns HTTP 405 method not allowed error |
### What is the URL of the page with the issue?
https://pkg.go.dev/net/url#Values.Encode (click the "Format" button)
### What is your user agent?
Chrome on Windows
### Screenshot
<img width="1912" alt="image" src="https://github.com/user-attachments/assets/64eb69b9-a154-4cc5-a0e7-5a39913cc755" />
### What did you do?
Clicked the Format button for the code example
### What did you expect to see?
Formatted code
### What did you see instead?
An HTTP 405 error response
| help wanted,NeedsInvestigation,pkgsite | low | Critical |
2,748,599,924 | vscode | Selected tab underline thickness is hard to see |
I would like there to be a way to customize the active tab underline thickness. The current 1px thickness is difficult to notice. I think 4px makes sense.
## Existing tab UI
It's hard to tell which tab is selected, especially in light themes where a thin dark line isn't generally noticeable.

## Tab UI suggestion
There's plenty of room to add a 4px line, which fits with the [WCAG focus](https://www.w3.org/WAI/WCAG21/Understanding/focus-visible) suggestion of an indicator at least as large as the area of a 2px-thick perimeter around the shape being outlined. This is both easy to see and aesthetically pleasing, in my opinion. This isn't the same thing as component focus, but if it's good enough for focus, I think it should be good enough for the active state.

## Other small underlines
The top/bottom docked activity bar has the same issue

As do the tabs in the panel and in the terminal


## Other programs
WebStorm uses a 4px border which is easy to see.

## Other GitHub issues
A similar previous request was locked (due to inactivity?) https://github.com/microsoft/vscode/issues/130394 | feature-request,workbench-tabs | low | Major |
2,748,613,485 | godot | Alt-tabbing doesn't switch focus to different window | ### Tested versions
4.4 dev6
### System information
Godot v4.4.dev6 - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated AMD Radeon RX 580 2048SP (Advanced Micro Devices, Inc.; 31.0.21921.1000) - Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz (4 threads)
### Issue description
When switching from the script editor to the undocked shader editor, common shortcuts (undo, copy & paste) are still performed in the script editor. This is because alt-tabbing doesn't switch focus to the shader editor; I need to click on the shader editor to fully switch focus. Alt-tabbing just brings up the shader editor without giving it focus.
This isn't an issue when switching from shader editor to script editor.
### Steps to reproduce
Open the script editor and undock the shader editor. Write something in the script editor, alt-tab to the shader editor, and press Ctrl+A. Everything in the script editor will be selected, showing that focus never left it.
### Minimal reproduction project (MRP)
new project | bug,topic:editor,needs testing | low | Minor |
2,748,643,914 | rust | Equality constraints break type resolution |
Imagine a trait for serialization (AsRaw). I'm trying to make a trait for wrappers that serialize into the same encoding as their inner type. I tried this:
```rust
trait AsRaw {
type Raw;
}
trait WrapperTrait: AsRaw {
type Inner: AsRaw<Raw = Self::Raw>;
}
#[repr(u32)]
enum Foo {
Bar = 1,
Baz = 2,
}
impl AsRaw for Foo {
type Raw = u32;
}
fn noop1<P: WrapperTrait>(x: <P::Inner as AsRaw>::Raw) -> <P as AsRaw>::Raw {
x
}
fn noop2<P: WrapperTrait<Inner = Foo>>(x: <P::Inner as AsRaw>::Raw) -> <P as AsRaw>::Raw {
// ERROR: mismatched types
x
}
fn noop3<P: WrapperTrait<Inner = Foo>>(x: <P::Inner as AsRaw>::Raw) -> <P as AsRaw>::Raw {
noop1::<P>(x)
}
```
For some reason, `noop2()` fails to compile, even though `noop1()` (a strictly more general function) compiles successfully. Furthermore, `noop3()` (which has the same exact signature as `noop2()`) compiles successfully, taking advantage of `noop1()`.
Providing a specific value for an associated type seems to cause the compiler to "forget" certain bounds on the associated type, at least while checking type equality.
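For reference, the compiler's own help text points at a workaround that does compile: naming `Raw` explicitly in the bound (allowed because `Raw` is an associated type of the `AsRaw` supertrait) restores normalization. A self-contained sketch — the concrete `Wrapper` type and the unit-struct `Foo` are made up for illustration:

```rust
trait AsRaw {
    type Raw;
}

trait WrapperTrait: AsRaw {
    type Inner: AsRaw<Raw = Self::Raw>;
}

struct Foo; // trimmed to a unit struct; only the AsRaw impl matters here

impl AsRaw for Foo {
    type Raw = u32;
}

struct Wrapper; // hypothetical concrete wrapper, just to exercise noop2

impl AsRaw for Wrapper {
    type Raw = u32;
}

impl WrapperTrait for Wrapper {
    type Inner = Foo;
}

// Adding `Raw = u32` next to `Inner = Foo` makes the body typecheck:
fn noop2<P: WrapperTrait<Inner = Foo, Raw = u32>>(
    x: <P::Inner as AsRaw>::Raw,
) -> <P as AsRaw>::Raw {
    x
}

fn main() {
    assert_eq!(noop2::<Wrapper>(7), 7);
}
```

This sidesteps the bug rather than explaining it, but it confirms the compiler only "forgets" the supertrait-implied bound, not an explicitly restated one.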
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: aarch64-apple-darwin
release: 1.83.0
LLVM version: 19.1.1
```
Nightly has the same behavior.
Error message:
```
$ RUST_BACKTRACE=1 cargo build 02:53:57 PM
Compiling equality-bug v0.1.0 (/Users/jacobgreenfield/Downloads/equality-bug)
error[E0308]: mismatched types
--> src/main.rs:25:5
|
23 | fn noop2<P: WrapperTrait<Inner = Foo>>(x: <P::Inner as AsRaw>::Raw) -> <P as AsRaw>::Raw {
| ----------------- expected `<P as AsRaw>::Raw` because of return type
24 | // ERROR: mismatched types
25 | x
| ^ expected associated type, found `u32`
|
= note: expected associated type `<P as AsRaw>::Raw`
found type `u32`
= help: consider constraining the associated type `<P as AsRaw>::Raw` to `u32` or calling a method that returns `<P as AsRaw>::Raw`
= note: for more information, visit https://doc.rust-lang.org/book/ch19-03-advanced-traits.html
For more information about this error, try `rustc --explain E0308`.
error: could not compile `equality-bug` (bin "equality-bug") due to 1 previous error
``` | C-bug,T-types,needs-triage | low | Critical |
2,748,654,398 | ollama | StructuredOutputs Schema Missing in Prompt [Unlike OpenAI API Default Behavior] | When using StructuredOutputs, I noticed that the model's outputs were nonsensical and didn't align with expectations.
After debugging, I discovered that the output schema isn't included in the prompt, leaving the model unaware of its options and what it should generate. While developers could manually add the schema to the prompt, this isn't a common practice. For instance, OpenAI's API automatically includes the schema (easily verified by counting input prompt tokens), and I believe this behavior should be standard for all inference engines.
I tested Ollama in three different ways, and all of them exhibited the same behavior. Below is the code to reproduce the issue:
```python
import aiohttp
from pydantic import BaseModel, Field
from typing import Union, List, Literal
from typing import Annotated
class Action1(BaseModel):
type: Literal["action1"]
__doc__: str = "store in memory"
class Action2(BaseModel):
type: Literal["action2"]
__doc__: str = "call someone"
class Action3(BaseModel):
type: Literal["action3"]
__doc__: str = "move to a location"
class Response(BaseModel):
steps: Annotated[
List[Union[Action1, Action2, Action3]],
Field(..., description='Sequence of steps to perform', discriminator='type')
]
messages = [
{"role": "system", "content": "Decompose input request into sequence of steps. Possible steps are listed in the response schema."},
{"role": "user", "content": "I want to go to the park."},
]
payload = {
"model": 'qwen2.5-coder:7b',
"messages": messages,
"stream": False,
"options": {"temperature" : 0.0},
"format": Response.model_json_schema()
}
async def get_response(payload):
async with aiohttp.ClientSession() as session:
async with session.post('http://localhost:11434/api/chat', json=payload) as response:
json_response = await response.json()
raw_string = json_response['message']['content']
res = Response.model_validate_json(raw_string)
return json_response, res
json_response, res = await get_response(payload)
# json_response['prompt_eval_count'] shows 40 prompt tokens
# res.model_dump() shows `{'steps': [{'type': 'action1'}, {'type': 'action2'}]}`, which doesn't make any sense
# the correct answer for "I want to go to the park." is Action 3 ("move to a location"), not 1 or 2
messages_w_schema = [
{
"role": "system",
"content":
"Decompose input request into sequence of steps. Possible steps are listed in the response schema."
+ f'\nSCHEMA:\n{Response.model_json_schema()}'
},
{"role": "user", "content": "I want to go to the park."},
]
payload_new = {
"model": 'qwen2.5-coder:7b',
"messages": messages_w_schema,
"stream": False,
"options": {"temperature" : 0.0},
"format": Response.model_json_schema()
}
json_response_2, res_2 = await get_response(payload_new)
# json_response_2['prompt_eval_count'] shows 328 prompt tokens
# res_2.model_dump() correctly suggests action3
#============
from ollama import chat
chat_res_1 = chat(
model=payload['model'],
messages=messages,
format=Response.model_json_schema(),
options={'temperature': 0}
)
chat_res_2 = chat(
model=payload['model'],
messages=messages_w_schema,
format=Response.model_json_schema(),
options={'temperature': 0}
)
# chat_res_1.prompt_eval_count = 40, chat_res_2.prompt_eval_count = 328
# Response.model_validate_json(chat_res_1.message.content) again points to actions 1 & 2, not 3
#================
from openai import OpenAI
client = OpenAI(
base_url = 'http://localhost:11434/v1',
api_key='ollama',
)
oai_client_1 = client.beta.chat.completions.parse(
model=payload['model'],
messages=payload['messages'],
response_format=Response,
temperature=0
)
oai_client_2 = client.beta.chat.completions.parse(
model=payload_new['model'],
messages=payload_new['messages'],
response_format=Response,
temperature=0
)
# same here: oai_client_1.choices[0].message.parsed contains actions 1 & 2, not 3
# and input prompt token counts are the same
```
As you can see, we have 3 actions: memory, call, and navigation. We ask an LLM to decompose the input query into a sequence of actions. For this example (`I want to go to the park.`) we expect the model to output the 3rd action. There is no way for the model to know which action to choose other than reading that information from the output schema. Without it, the model just guesses randomly and cannot produce anything meaningful.
====
The biggest issue I see is that even though Ollama is OpenAI compatible, it doesn't have the same logic under the hood, which might surprise a lot of developers when switching from proprietary LLMs to local ones.
Proposal: by default, add the output schema to the inputs, and perhaps allow disabling that behaviour with a flag. Given that the functionality was added only recently, I believe it is important to fix it right now, before the workaround of patching the system prompt becomes widespread.
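Until that lands, the manual schema injection shown above can be factored into a small client-side helper. A minimal sketch — the helper name is made up, and it only mirrors what the repro does by hand with `messages_w_schema`:

```python
import json


def inject_schema(messages, schema):
    """Return a copy of `messages` with the JSON schema appended to the
    system prompt, approximating what OpenAI's API does server-side."""
    patched = [dict(m) for m in messages]
    for m in patched:
        if m.get("role") == "system":
            m["content"] = m["content"] + "\nSCHEMA:\n" + json.dumps(schema)
            return patched
    # No system message present: prepend one that carries only the schema.
    return [{"role": "system", "content": "SCHEMA:\n" + json.dumps(schema)}] + patched


# Usage with the payload from the repro:
#   payload["messages"] = inject_schema(messages, Response.model_json_schema())
```

The original message list is left untouched, so the same `messages` object can still be sent schema-free for comparison.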
===
My library versions:
- pip: ollama==0.4.4
- ollama CLI version is 0.5.1
- pip: openai=='1.57.3' | feature request | low | Critical |
2,748,661,940 | ui | [feat]: Switch to `gap-XX` instead of `space-y-XX` | ### Feature description
Tailwind `gap-XX` is the proper way to space elements within `flex` or `grid` elements.
We should refactor the codebase and switch every `space-x-XXX` or `space-y-XXX` to `gap-XXX` when appropriate.
### Affected component/components
_No response_
### Additional Context
The downside of using `space-y-XX` is that it's hard for the user to override if a change is needed for a single child.
It requires the `!important` flag to override the complex styling that the `space-y` utilities apply to the children.
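Concretely (simplified from Tailwind v3's generated output — the real rule also goes through `--tw-space-y-reverse` variables):

```css
/* space-y-4 emits a child-combinator rule, so the margin lives on every
   child; a single child can only opt out with !important: */
.space-y-4 > :not([hidden]) ~ :not([hidden]) {
  margin-top: 1rem;
}

/* gap-4 is one property on the flex/grid parent; children stay untouched: */
.gap-4 {
  gap: 1rem;
}
```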
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,748,717,020 | kubernetes | Missing imports using register-gen since v0.32.0 | I tried the register-gen and the imports `k8s.io/apimachinery/pkg/runtime` and `k8s.io/apimachinery/pkg/runtime/schema` are missing with the version 0.32.0.
This is due to the dependency `k8s.io/gengo/v2` being updated after the merge of this PR: https://github.com/kubernetes/gengo/pull/277. The field `FormatOnly` is set to true, so insertion/deletion of imports is disabled.
I could provide a fix for this issue. I am just not sure if it should be fixed directly in register-gen (by adding the imports in the code generator), or if that change should be reverted in the `k8s.io/gengo/v2` project. | sig/api-machinery,kind/regression,triage/accepted | low | Major |
2,748,725,813 | go | testing/synctest: "all goroutines in bubble are blocked" should list blocked goroutines | When all the goroutines in a synctest bubble are durably blocked and no progress can be made, synctest.Run panics. This is essentially a more limited version of the runtime's fatal error "all goroutines are asleep - deadlock!".
Currently, when this panic occurs in a test, by default you see a stack trace for the goroutine which called synctest.Run, but not for any other goroutines. This isn't especially useful. You can set GOTRACEBACK=all to see more stacks, but you shouldn't have to.
I think that ideally in this case you'd get stack traces for all goroutines in the bubble causing the panic, but not for anything else.
I'm not sure yet what the right way to implement this is: Should the panic from synctest.Run contain the relevant stack traces? Should synctest.Run print the stacks itself before panicking? Should the testing package detect the synctest.Run panic and handle printing the relevant stacks itself? | NeedsInvestigation | low | Critical |
2,748,734,800 | vscode | Expanding references in quick chat should grow picker | https://github.com/user-attachments/assets/aad8fac3-f497-4e67-9c54-f43afe7bb8df
originally internally reported by @bpasero | bug,quick-chat | low | Minor |
2,748,789,711 | vscode | Sticky scroll lines in the diff editor don't fully extend horizontally | repro:
- open diff editor to check changes for a commit
- scroll one of the diff lines to a point where it is at the top of the viewport (where the sticky elements would be)
- 🐛 see the diff peeking through between the sticky line and scroll bar

| under-discussion,editor-sticky-scroll | low | Minor |
2,748,806,848 | flutter | PR opened pre-monorepo merge but rebased to AFTER it scheduling the wrong presubmit builds | https://github.com/flutter/flutter/pull/160482
It's also scheduling framework builds immediately. | team-infra,monorepo | low | Minor |
2,748,808,347 | transformers | tokenizer decode decode with timestamp fails for extended vocabulary | ### System Info
python=3.10.13
transformers==4.44.1
torch==2.1.2
### Who can help?
@sanchit-gandhi @ylacombe @eustlb @arthurz
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Decoding with timestamps produces unexpected results when the vocabulary is extended
```
>>> from transformers import WhisperTokenizer, AddedToken
>>> tokenizer = WhisperTokenizer.from_pretrained('openai/whisper-base', language="English", task="transcribe", predict_timestamps=True)
>>> extended_vocab = ['newword1']
>>> extended_vocab = [AddedToken(t, single_word=True, lstrip=True) for t in extended_vocab]
>>> tokenizer.add_tokens(extended_vocab)
1
>>> print(len(tokenizer))
51866
>>> print(tokenizer.convert_ids_to_tokens(51865))
newword1
>>> tokens = tokenizer('<|0.00|> newword1 <|0.22|>').input_ids
>>> tokens
[50258, 50259, 50359, 50364, 51865, 220, 50375, 50257]
>>> tokenizer.decode(tokens, skip_special_tokens=True)
'newword1 '
>>> tokenizer.decode(tokens, skip_special_tokens=False)
'<|startoftranscript|><|en|><|transcribe|>newword1 <|endoftext|>'
>>> tokenizer.decode(tokens, skip_special_tokens=False, decode_with_timestamps=True)
'<|startoftranscript|><|en|><|transcribe|><|0.00|><|30.02|> <|30.24|><|endoftext|>'
>>> tokens = tokenizer('<|0.00|> word <|0.22|>').input_ids # something in the vocabulary
>>> tokenizer.decode(tokens, skip_special_tokens=True)
' word '
>>> tokenizer.decode(tokens, skip_special_tokens=False)
'<|startoftranscript|><|en|><|transcribe|> word <|endoftext|>'
>>> tokenizer.decode(tokens, skip_special_tokens=False, decode_with_timestamps=True)
'<|startoftranscript|><|en|><|transcribe|><|0.00|> word <|0.22|><|endoftext|>'
```
The problem arises in [https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/whisper/tokenization_whisper.py#L546]( https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/whisper/tokenization_whisper.py#L546)
see issue [20225](https://github.com/huggingface/transformers/issues/20225)
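A hedged reading of the linked line: `decode_with_timestamps` classifies token ids by a threshold (`all_special_ids[-1] + 1`), rendering everything at or above it as `<|t|>`. Ids of newly added tokens clear that threshold, which reproduces the exact garbage above. The constants below are assumptions matching whisper-base and this repro:

```python
# Whisper renders every id at or above `timestamp_begin` as a timestamp token.
timestamp_begin = 50364   # assumed: all_special_ids[-1] + 1 for whisper-base
new_token_id = 51865      # id the extended vocabulary assigned to "newword1"

# The added token clears the threshold, so the decoder prints it as the
# timestamp <|(id - timestamp_begin) * 0.02|> instead of its text:
offset = (new_token_id - timestamp_begin) * 0.02
print(f"<|{offset:.2f}|>")  # → <|30.02|>, exactly the output observed above
```

So any fix likely needs the timestamp check to account for tokens added after the original vocabulary, rather than relying on a single id threshold.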
### Expected behavior
I would expect the timestamps to remain consistent from tokenizing and decoding.
```
>>> tokens = tokenizer('<|0.00|> newword1 <|0.22|>').input_ids
>>> tokenizer.decode(tokens, skip_special_tokens=False, decode_with_timestamps=True)
'<|startoftranscript|><|en|><|transcribe|><|0.00|> newword1<|0.22|><|endoftext|>'
``` | bug | low | Minor |
2,748,841,449 | kubernetes | [Compatibility Version] kube version validation should be moved out of component base | xref: https://github.com/kubernetes/kubernetes/pull/128279/files#r1890633759
> is k8s.io/component-base supposed to be agnostic for in-tree or out-of-tree components? trying to make sure the n-3 / 1.31 floor we set here is enforced for ValidateKubeEffectiveVersion but wouldn't break a different component with a different versioning scheme from using this package
ref: https://github.com/kubernetes/kubernetes/pull/128279#discussion_r1890662857
> Can we open an issue to move this logic out of component-base as well? I would expect k8s components (kube-apiserver, ...) to set the limit according to the rules of that component (N-3 for k8s components), but leave it up to 3rd party component owners to decide what the limit should be for their components.
| sig/api-machinery,triage/accepted | low | Minor |
2,748,841,458 | vscode | Git - Add "Cherry Pick with No Commit" option to Source Control Graph |
### Description:
The "Source Control Graph" in Visual Studio Code is an excellent tool for visualizing and managing Git repositories. However, it currently offers only the "Cherry Pick" option, which automatically creates a commit after applying the changes from the selected commit.
I propose adding a "Cherry Pick with No Commit" option to the Source Control Graph. This feature would allow users to cherry-pick a specific commit without automatically committing the changes. Instead, the changes would be applied to the working directory and staged in the index, giving users the opportunity to:
- Review the changes
- Modify them as needed
- Combine them with other updates before creating a commit
This addition would significantly improve the flexibility of the cherry-pick operation, making it more useful for workflows that require careful adjustment of changes, such as:
- Resolving conflicts manually
- Integrating partial changes
- Customizing commits from cherry-picked content
#### Current Limitation:
As of now, the Source Control Graph only supports a standard "Cherry Pick" operation that automatically creates a new commit. Adding this new option would align VSCode's Git functionality more closely with Git's native capabilities and provide users with greater control over their version control workflows.
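The Git-native operation this request maps to is `git cherry-pick --no-commit` (alias `-n`). A minimal sketch in a throwaway repository (file and branch names are made up):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > file.txt
git add file.txt
git commit -qm "base"
git checkout -qb feature
echo feature > file.txt
git commit -qam "feature change"
sha=$(git rev-parse HEAD)
git checkout -q -                   # back to the original branch
git cherry-pick --no-commit "$sha"  # changes staged, no commit created
git status --short                  # shows file.txt staged, ready for review
```

After the last command the history is unchanged and the cherry-picked changes sit in the index, which is exactly the review-then-commit workflow the feature would expose in the UI.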
#### Benefits:
- **Greater Flexibility:** Allows users to manage cherry-picked changes before committing.
- **Improved Workflow Control:** Offers better handling of conflicts and manual adjustments.
- **Enhanced Usability:** Supports complex Git workflows commonly used in professional development environments.
This feature would be a valuable enhancement for developers using Visual Studio Code as their primary Git interface. | feature-request,git | low | Minor |
2,748,857,798 | flutter | Use of `flutter update-packages` makes code reviews/merges painful | As an example, see https://github.com/flutter/flutter/pull/160354/files.
There are 92 files changed because a new package was added, and to use uniform package versions, `flutter update-packages` was used, which in turn requires updating the entire repository. We should either stop using pinned packages (and move towards workspaces), or further improve our tooling to minimize cascading effects of doing things like adding a single package. | team-infra,c: tech-debt | low | Minor |
2,748,893,230 | tauri | [bug] Invalid scale on linux | ### Describe the bug
When I run my application everything is tiny, and when I open the devtools my <body> tag has a size of -150000 px by -40000 px, which makes it unusably tiny.
### Reproduction
1. Create a new project with `bun create tauri-app`
2. Run it on linux on wayland(i haven't tested on xorg)
3. open devtools and see the wrong scale
### Expected behavior
The <body> tag and its children having appropriate sizes that are neither negative nor more than 20 times bigger than my actual resolution.
### Full `tauri info` output
```text
$ tauri info
[✔] Environment
- OS: NixOS 25.5.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.46.4
✔ rsvg2: 2.58.3
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (1980-01-01)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 20.18.1
- npm: 10.8.2
- bun: 1.1.38
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
I tried using a webkitgtk-based browser(midori) and it didn't have this issue. | type: bug,status: upstream,platform: Nix/NixOS | low | Critical |
2,748,915,466 | go | x/build: improve the container deployment process | I propose changing how services are deployed from the x/build repository. I would like to continue/finalize the migration from building and deploying containers on local machines to doing most of the work on Cloud Build. I propose we:
Current State: Services are built and deployed as containers primarily on local machines.
Create a trigger in Cloud Build for each service, The trigger will take a x/build repo commit hash as input and:
- Checkout the commit from go.googlesource.com.
- Check if a container for the requested commit already exists.
- If a container image for the requested commit does not exist, it will build and push the container image.
- Once the container image exists, it will deploy the service using the container image.
To deploy a service, the user runs a make target which:
- If the service being deployed is from a published commit (the commit ID is available on the remote Gerrit server) and the local work tree has no uncommitted or staged changes, it will initiate a cloud build trigger with a commit hash (see previously mentioned trigger).
- If the service being deployed contains uncommitted work or staged changes (a dirty commit):
- A locally defined cloud build job will be initiated (much like many of the services are currently deployed now)
- It will upload the local cloud build definition and checked out repository.
- Cloud build will create the image and publish it.
- The XB command will be called to deploy the newly created container.
Additional changes:
- There should be a mechanism for redeploying an existing deployment. Possible solutions include:
- Each Kubernetes manifest should contain a `GO_IMAGE_DEPLOYMENT_TIMESTAMP` environment variable that is updated with the timestamp of the requested deployment. This ensures the deployment manifests are always unique.
- Deleting the pods for the deployment and waiting for the Kubernetes reconciliation loop to recreate them again.
- The canonical source for cloud build definitions will be stored in the x/build repository. Make targets should be added to apply the locally defined cloud build definitions to the existing cloud build triggers. This follows the model of work initiated by @dmitshur. How and where the various elements are stored is completely open to discussion. We should use this issue to resolve those questions.
- The deployment make files should share as much logic as possible since most of these jobs will use the same logic.
Possible Benefits:
- A more centralized build and deployment process.
- Reduced reliance on local machines. Which is great for security and reducing errors due to bespoke local machine configurations.
@golang/release | Builders,NeedsFix | low | Critical |
2,748,920,276 | flutter | Attempt to execute code removed by Dart AOT compiler (TFA) (crash after updating to 3.27.1 from 3.24.5) | ### Steps to reproduce
I get this error when launching an iOS application built with 3.27.1 (with 3.24.5 it works fine).
The stack trace points to PopupMenuButton, but that is not even used anywhere in the app.
### Expected results
The app should behave exactly as it does when built with 3.24.5.
### Actual results
The app shows a blank screen, with the above error appearing in the console.
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 darwin-arm64, locale en-US)
• Flutter version 3.27.1 on channel stable at x
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (23 hours ago)
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
```
</details>
| c: regression,c: crash,tool,dependency: dart,P1,team-tool,triaged-tool,dependency:dart-triaged | medium | Critical |
2,748,947,330 | go | proposal: x/net/http3: add HTTP/3 implementation | This is a proposal to add an HTTP/3 client and server implementation in x/net/http3.
Similarly to the accepted form of #58547 (x/net/quic), this proposal is to add an experimental package with an API subject to change during development. Once we have a complete implementation, we will file a separate proposal for API review.
Initial development will be in an internal package (x/net/internal/http3) until the details are firm enough for external testing. We will then move the package to x/net/http3 and (when we have some confidence the API is right) file the API review proposal. | Proposal | medium | Major |
2,748,977,958 | pytorch | FSDP learning hangs when the program tries to save the model | ### 🐛 Describe the bug
## TL;DR
I have a strange intermittent error which I can fix on my own (just by adding one line), but I don't know how to fix it properly in general.
## What's the problem?
Recently I tried to fine-tune some LLMs using [Accelerate](https://github.com/huggingface/accelerate) from HuggingFace. I used FSDP distributed training + a LoRA adapter to fine-tune 2 models from the Qwen series: [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) and [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). While there are no problems with the latter model, I get a strange intermittent error when trying to save the former after training (it may not happen right away, but it will happen for sure).
```bash
/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:690: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .
warnings.warn(
[rank0]:[E1219 03:01:55.989449808 ProcessGroupNCCL.cpp:616] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=600, OpType=_ALLGATHER_BASE, NumelIn=58309504, NumelOut=233238016, Timeout(ms)=1800000) ran for 1800068 milliseconds before timing out.
[rank0]:[E1219 03:01:55.990175767 ProcessGroupNCCL.cpp:1785] [PG ID 0 PG GUID 0(default_pg) Rank 0] Exception (either an error or timeout) detected by watchdog at work: 600, last enqueued NCCL work: 600, last completed NCCL work: 599.
[rank0]:[E1219 03:01:56.095109308 ProcessGroupNCCL.cpp:1834] [PG ID 0 PG GUID 0(default_pg) Rank 0] Timeout at NCCL work: 600, last enqueued NCCL work: 600, last completed NCCL work: 599.
[rank0]:[E1219 03:01:56.095141631 ProcessGroupNCCL.cpp:630] [Rank 0] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.
[rank0]:[E1219 03:01:56.095151475 ProcessGroupNCCL.cpp:636] [Rank 0] To avoid data inconsistency, we are taking the entire process down.
[rank0]:[E1219 03:01:56.096419462 ProcessGroupNCCL.cpp:1595] [PG ID 0 PG GUID 0(default_pg) Rank 0] Process group watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=600, OpType=_ALLGATHER_BASE, NumelIn=58309504, NumelOut=233238016, Timeout(ms)=1800000) ran for 1800068 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fc856081446 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7fc857394772 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7fc85739bbb3 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7fc85739d61d in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7fc89fd2a5c0 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x8609 (0x7fc8a24df609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #6: clone + 0x43 (0x7fc8a2619133 in /lib/x86_64-linux-gnu/libc.so.6)
terminate called after throwing an instance of 'c10::DistBackendError'
what(): [PG ID 0 PG GUID 0(default_pg) Rank 0] Process group watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=600, OpType=_ALLGATHER_BASE, NumelIn=58309504, NumelOut=233238016, Timeout(ms)=1800000) ran for 1800068 milliseconds before timing out.
Exception raised from checkTimeout at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:618 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fc856081446 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: c10d::ProcessGroupNCCL::WorkNCCL::checkTimeout(std::optional<std::chrono::duration<long, std::ratio<1l, 1000l> > >) + 0x282 (0x7fc857394772 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: c10d::ProcessGroupNCCL::watchdogHandler() + 0x233 (0x7fc85739bbb3 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #3: c10d::ProcessGroupNCCL::ncclCommWatchdog() + 0x14d (0x7fc85739d61d in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x145c0 (0x7fc89fd2a5c0 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #5: <unknown function> + 0x8609 (0x7fc8a24df609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #6: clone + 0x43 (0x7fc8a2619133 in /lib/x86_64-linux-gnu/libc.so.6)
Exception raised from ncclCommWatchdog at ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1601 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x7fc856081446 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0xe4271b (0x7fc85700a71b in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch_cuda.so)
frame #2: <unknown function> + 0x145c0 (0x7fc89fd2a5c0 in /mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x8609 (0x7fc8a24df609 in /lib/x86_64-linux-gnu/libpthread.so.0)
frame #4: clone + 0x43 (0x7fc8a2619133 in /lib/x86_64-linux-gnu/libc.so.6)
E1219 03:01:57.498000 786687 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: -6) local_rank: 0 (pid: 786758) of binary: /mnt/data/a.kudisov/transformers/.venv/bin/python
Traceback (most recent call last):
File "/mnt/data/a.kudisov/transformers/.venv/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/accelerate/commands/launch.py", line 1155, in launch_command
multi_gpu_launcher(args)
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/accelerate/commands/launch.py", line 793, in multi_gpu_launcher
distrib_run.run(args)
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/data/a.kudisov/transformers/.venv/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
train.py FAILED
-------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-12-19_03:01:57
host : ...
rank : 0 (local_rank: 0)
exitcode : -6 (pid: 786758)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 786758
=======================================================
```
This error occurs when the program tries to save the model: it hangs while collecting `model.state_dict()`.
I did a little investigation and found out that the main process (this is distributed training with 4 processes) successfully collects all of the model's layers on CPU except the last one. When it starts processing the last layer, the whole process hangs and crashes on timeout.
If I change the model from Qwen2.5-7B-Instruct to Qwen2.5-1.5B-Instruct (from a big one to a small one), this error disappears (there is still one more [problem](https://github.com/huggingface/transformers/pull/35234) that prevents saving the model after training, but that one is related to HuggingFace transformers, not PyTorch).
I believe this error has something to do with communication between processes.
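As a very loose, pure-Python analogy (not the actual NCCL code path), the hang can be pictured as one rank reaching a collective that a peer never joins; the waiting side eventually times out, much like the watchdog timeout above:

```python
import threading

# Two "ranks" are supposed to meet at a collective (modeled as a Barrier),
# but rank 1 never arrives -- so rank 0's wait breaks after the timeout.
barrier = threading.Barrier(parties=2)

def rank0():
    try:
        barrier.wait(timeout=0.2)
    except threading.BrokenBarrierError:
        print("rank 0: timed out waiting for peer")

t = threading.Thread(target=rank0)
t.start()
t.join()
```

This is also why the `dist.barrier()` workaround below is plausible: it forces all ranks back in sync before the per-layer state-dict collection starts.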
## How to reproduce?
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch --config_file=config.yaml --main_process_port=12355 train.py --output_dir=./save
```
Accelerate config:
```yaml
# accelerate.yaml
compute_environment: LOCAL_MACHINE
debug: true
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: NO_PREFETCH
fsdp_forward_prefetch: false
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Training script:
```python
# train.py
import argparse
from functools import partial
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
def get_data(tokenizer):
data = [{
'user_message': "Hi, how are you?",
'model_message': "I'm good, thanks. How about you?"
}] * 20
data = Dataset.from_list(data)
data = data.train_test_split(train_size=0.7, shuffle=True, seed=42)
tmp = data['test'].train_test_split(test_size=0.6, shuffle=True, seed=143)
data['validation'] = tmp['train']
data['test'] = tmp['test']
def tokenize(x):
messages = [
{'role': 'user', "content": x['user_message']},
{'role': 'assistant', "content": x['model_message']},
]
text = tokenizer.decode(tokenizer.apply_chat_template(messages))
result = tokenizer(text, return_tensors='pt')
sep = '<|im_start|>assistant\n'
input_text = text.split(sep)[0] + sep
input_len = len(tokenizer(input_text)['input_ids'])
result['labels'] = result['input_ids'].clone().detach()
result['labels'][:, :input_len] = -100
return {k: v.tolist()[0] for k, v in result.items()}
tokenized_datasets = data.map(
tokenize,
remove_columns=['user_message', 'model_message'],
)
tokenized_datasets.set_format('torch')
return tokenized_datasets
def collate_fn(data, pad_token_id):
input_ids, labels = tuple([x[key] for x in data] for key in ('input_ids', 'labels'))
input_ids = torch.nn.utils.rnn.pad_sequence(input_ids, batch_first=True, padding_value=pad_token_id)
labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=-100)
return {
'input_ids': input_ids,
'labels': labels,
'attention_mask': input_ids.ne(pad_token_id) * 1
}
def print_trainable_parameters(model):
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(f'trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}')
def training_function(args):
model_name = 'Qwen/Qwen2.5-7B-Instruct'
training_args = TrainingArguments(
output_dir=args.output_dir,
gradient_checkpointing=True,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
num_train_epochs=1,
save_strategy='no',
seed=42,
data_seed=42,
optim='adamw_8bit'
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
data = get_data(tokenizer)
model = AutoModelForCausalLM.from_pretrained(
model_name,
return_dict=True,
)
model.add_adapter(LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.1,
target_modules=['q_proj', 'k_proj']
))
trainer = Trainer(
model=model,
args=training_args,
train_dataset=data['train'],
eval_dataset=data['validation'],
data_collator=partial(collate_fn, pad_token_id=tokenizer.pad_token_id),
)
if trainer.accelerator.is_main_process:
print_trainable_parameters(model)
trainer.train()
if trainer.is_fsdp_enabled:
trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
trainer.save_model()
def main():
parser = argparse.ArgumentParser(description='Main training script.')
parser.add_argument(
'--output_dir',
type=str,
default='.',
help='Optional save directory where all checkpoint folders will be stored. Default is the current working directory.'
)
args = parser.parse_args()
training_function(args)
if __name__ == '__main__':
main()
```
Environment:
```bash
accelerate==1.1.1
torch==2.5.1+cu124
pandas==2.2.3
peft==0.13.2
datasets==3.1.0
transformers==4.46.3
tqdm==4.67.1
```
## How to fix?
I found that simply adding `dist.barrier()` in the `_pre_state_dict_hook` function (from `torch/distributed/fsdp/_state_dict_utils.py`) lets me overcome the problem. But I don't have enough expertise to be sure that this is the correct fix.
```python
# torch/distributed/fsdp/_state_dict_utils.py
@no_type_check
@torch.no_grad()
def _pre_state_dict_hook(
module: nn.Module,
*args,
**kwargs,
) -> None:
"""
This is called before the core state dict saving logic of ``module``.
``fsdp_state._state_dict_type`` is used to decide what postprocessing will
be done.
"""
fsdp_state = _get_module_fsdp_state_if_fully_sharded_module(module)
if fsdp_state.sharding_strategy == ShardingStrategy.NO_SHARD:
context = _replace_with_full_state_dict_type(fsdp_state)
warnings.warn(
"When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict will"
"be returned."
)
else:
_set_use_dtensor(fsdp_state)
context = contextlib.nullcontext()
with context:
_pre_state_dict_hook_fn = {
StateDictType.FULL_STATE_DICT: _full_pre_state_dict_hook,
StateDictType.LOCAL_STATE_DICT: _local_pre_state_dict_hook,
StateDictType.SHARDED_STATE_DICT: _sharded_pre_state_dict_hook,
}
##############################################
dist.barrier() # I add this
##############################################
_pre_state_dict_hook_fn[fsdp_state._state_dict_type](
fsdp_state,
module,
*args,
**kwargs,
)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-165-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 2527.355
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Virtualization: VT-x
L1d cache: 2.6 MiB
L1i cache: 1.8 MiB
L2 cache: 70 MiB
L3 cache: 84 MiB
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1+cu124
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn | oncall: distributed,module: distributed_checkpoint | low | Critical |
2,748,988,333 | ui | [bug]: Sidebar useIsMobile doesn't change when slowly | ### Describe the bug
When I expand the window slowly, `isMobile` doesn't change to `false`.
Adding `console.log(window.innerWidth, e.matches)` prints `767 false`, but `MOBILE_BREAKPOINT` is `768`, so `setIsMobile` is called with `true`.
Should the hook use `event.matches` or `mql.matches` (or ~~`<=`~~)?
I found a related question: [html - Media query max-width not working inclusively - Stack Overflow](https://stackoverflow.com/questions/56842906/media-query-max-width-not-working-inclusively). You can see at https://jsfiddle.net/e0hdyqc9/ that at width `767` the background is white.
It seems that with `max-width`, when the breakpoint number is even the media query includes that width, but when it is odd it does not.
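As a plain-Python numeric illustration (not the actual hook — `useIsMobile` is React/TypeScript), this shows how re-checking `window.innerWidth < MOBILE_BREAKPOINT` in the listener can disagree with what a `(max-width: 767px)` query reports when the CSS viewport width is fractional (e.g. under zoom):

```python
MOBILE_BREAKPOINT = 768

def media_query_matches(width):
    # CSS (max-width: 767px) is true only for widths <= 767
    return width <= MOBILE_BREAKPOINT - 1

def hook_is_mobile(width):
    # the listener's check: window.innerWidth < MOBILE_BREAKPOINT
    return width < MOBILE_BREAKPOINT

w = 767.5  # fractional CSS pixels can occur with zoom / DPR scaling
print(media_query_matches(w), hook_is_mobile(w))  # False True -> they disagree
```

Trusting `event.matches` (or `mql.matches`) would keep the hook's state consistent with whatever the browser actually evaluated for the query.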
### Affected component/components
Sidebar
### How to reproduce
1. Contract the window
2. Slowly enlarge the window
3. isMobile doesn't change
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows11, Edge
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,749,035,299 | flutter | Semantics onTap also triggering background content on mobile web | ### Steps to reproduce
1. Wrap a button with semantics with the following values:
```
Semantics(
label: "test",
hint: "hint test",
excludeSemantics: true,
onTap: _incrementCounter,
child: IconButton(
onPressed: _incrementCounter,
icon: const Icon(Icons.add),
),
),
```
2. Run on mobile web
2.a. Change the flutter settings so you can connect to the webserver on your phone
3. Click the button slowly a few times
### Expected results
The value should increment once for each press
### Actual results
Sometimes, the button increments twice
### Code sample
Repo available here: https://github.com/edeetee/bug_doubletap_semantics
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
void main() {
WidgetsFlutterBinding.ensureInitialized();
RendererBinding.instance.ensureSemantics();
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// TRY THIS: Try running your application with "flutter run". You'll see
// the application has a purple toolbar. Then, without quitting the app,
// try changing the seedColor in the colorScheme below to Colors.green
// and then invoke "hot reload" (save your changes or press the "hot
// reload" button in a Flutter-supported IDE, or press "r" if you used
// the command line to start the app).
//
// Notice that the counter didn't reset back to zero; the application
// state is not lost during the reload. To reset the state, use hot
// restart instead.
//
// This works for code too, not just values: Most code changes can be
// tested with just a hot reload.
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
print('Incrementing counter');
_counter++;
});
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// TRY THIS: Try changing the color here to a specific color (to
// Colors.amber, perhaps?) and trigger a hot reload to see the AppBar
// change color while the other colors stay the same.
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
//
// TRY THIS: Invoke "debug painting" (choose the "Toggle Debug Paint"
// action in the IDE, or press "p" in the console), to see the
// wireframe for each widget.
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'You have pushed the button this many times:',
),
Semantics(
label: "test",
hint: "hint test",
excludeSemantics: true,
onTap: _incrementCounter,
child: IconButton(
onPressed: _incrementCounter,
icon: const Icon(Icons.add),
),
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
],
),
),
floatingActionButton: Semantics(
button: true,
excludeSemantics: true,
onTap: _incrementCounter,
child: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/e5500247-4bd9-4722-9a00-cb6c48e113b8
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/main.dart on Chrome in debug mode...
This app is linked to the debug service: ws://127.0.0.1:60670/2n9b5kV5aos=/ws
Debug service listening on ws://127.0.0.1:60670/2n9b5kV5aos=/ws
Connecting to VM Service at ws://127.0.0.1:60670/2n9b5kV5aos=/ws
Connected to the VM Service.
4
Incrementing counter
Restarted application in 100ms.
7
Incrementing counter
Restarted application in 96ms.
Restarted application in 47ms.
```
</details>
### Flutter Doctor output
The error was initially happening on the version this test is written in, 3.24.3. I tested it in the most recent version, and it has actually regressed: now it always happens.
<details open><summary>Doctor output</summary>
```console
[flutter] flutter doctor -v
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0 24A335 darwin-arm64, locale en-NZ)
• Flutter version 3.24.3 on channel stable at /Users/edeetee/fvm/versions/3.24.3
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (3 months ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/edeetee/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[!] Android Studio (version unknown)
• Android Studio at /Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
✗ Unable to determine Android Studio version.
✗ android-studio-dir = /
✗ Android Studio not found at /Contents
• Try updating or re-installing Android Studio.
• Consider removing your android-studio-dir setting by running:
flutter config --android-studio-dir=
[✓] VS Code (version 1.96.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• Edward’s iPhone (mobile) • 00008030-000C4D3E1198C02E • ios • iOS 18.1.1 22B91
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0 24A335 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0 24A335 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.140
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
exit code 0
```
</details>
| a: accessibility,platform-web,has reproducible steps,P2,browser: safari-ios,team-web,triaged-web,found in release: 3.27,found in release: 3.28 | low | Critical |
2,749,076,401 | transformers | DeBERTa's `DisentangledSelfAttention` hardcodes `float` dtype, which causes `bfloat16` overflow error | ### System Info
transformers: 4.47.0
Python: 3.10.5
PyTorch: 2.5.1+cu124
GPU: NVIDIA GTX 980 Ti
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm training a `DebertaForMaskedLM` model within a broader experimental framework, but you can reproduce the bug with simple inference as follows: instantiate such a model with datatype `bfloat16` and send a batch through it.
```python
import torch
from transformers import DebertaConfig, DebertaForMaskedLM
model = DebertaForMaskedLM._from_config(DebertaConfig(), torch_dtype=torch.bfloat16)
model(**{"input_ids": torch.tensor([[101,102,103,104]]),
"attention_mask": torch.tensor([[1,1,1,1]])})
```
One of two errors is now thrown in `modeling_deberta.py`, both in `DisentangledSelfAttention.forward()` (and they can both be traced back to the same issue):
1. `RuntimeError: expected m1 and m2 to have the same dtype, but got: float != struct c10::BFloat16`
2. `RuntimeError: value cannot be converted to type at::BFloat16 without overflow`
Here's where they come from: two fields in DeBERTa's `DisentangledSelfAttention` are constructed by explicitly declaring their `dtype` as `torch.float`:
https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L187-L188
Then, in `forward()`, we create the two tensors `query_layer` and `key_layer` that start out with the `dtype` of the hidden states, which have the `dtype` of the model, namely `bfloat16`:
https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L258-L259
But then, one of these tensors, `query_layer`, is modified by adding `self.q_bias` into it. The resulting tensor inherits the `torch.float` data type:
https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L268
The first RuntimeError can occur on the following line, when `query_layer` (now `torch.float`) and `key_layer` (still `torch.bfloat16`) are multiplied. I've had this line crash on one machine and work on another, so perhaps this kind of mixed precision sometimes works.
https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L276
The second RuntimeError occurs even when mixed precision is supported. It happens on the following line:
https://github.com/huggingface/transformers/blob/9613933b022ddbf085e2c593ed4ceea4c734179a/src/transformers/models/deberta/modeling_deberta.py#L290
`attention_scores` is of type `bfloat16`. You then ask to fill it with the minimal value *for the data type of `query_layer`, not the data type of `attention_scores`*. Because `query_layer.dtype` is `torch.float`, that minimal value (-3.40282e+38) is *more negative than the most negative `torch.bfloat16`* (-3.38953e+38). Hence, the overflow.
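The overflow is easy to check numerically: `float32` and `bfloat16` share an 8-bit exponent, but `bfloat16` keeps only 7 mantissa bits, so its largest finite value is slightly smaller than `float32`'s (plain-Python sketch, no torch required):

```python
# Largest finite value of an IEEE-style binary format:
# (2 - 2**-mantissa_bits) * 2**max_exponent
FLOAT32_MAX  = (2 - 2**-23) * 2**127  # ~3.40282e+38
BFLOAT16_MAX = (2 - 2**-7)  * 2**127  # ~3.38953e+38

# torch.finfo(torch.float).min is -FLOAT32_MAX, which is more negative than
# anything bfloat16 can represent -- hence the masked_fill_ overflow error.
assert -FLOAT32_MAX < -BFLOAT16_MAX
```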
### Expected behavior
The `dtype` of `self.q_bias` and `self.v_bias` should be set like the rest of the modules/tensors in the model, rather than being hardcoded. That would keep everything `bfloat16`. | bug | low | Critical |
2,749,078,035 | godot | `sample_baked_with_rotation` in Curve3D gives wrong rotation near first and last points | ### Tested versions
#### Reproducible in
- 4.4.dev.custom_build.fafc07335
- 4.3.stable.official.77dcf97d8
- 4.2.2.stable.official.15073afe3
### System information
Godot v4.3.stable - Ubuntu 22.04.5 LTS 22.04 - X11 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3080 (nvidia; 550.120) - Intel(R) Core(TM) i5-8600K CPU @ 3.60GHz (6 Threads)
### Issue description
Curve3D's `sample_baked_with_rotation` returns strange rotation values near the first point and the last point of a curve.
### Steps to reproduce
1. Construct a Curve3D with at least 3 points.
2. Give the middle point a non-zero in and out vector.
3. Sample the curve with rotation using `sample_baked_with_rotation` at regular intervals to see that the rotation value veers off in a weird direction as the first and the final points are approached.
### Minimal reproduction project (MRP)
[mrp_curve3d-edge-rotation.zip](https://github.com/user-attachments/files/18191202/mrp_curve3d-edge-rotation.zip)
The attached MRP is a 3D scene with a Camera3D, a Path3D, and a MeshInstance3D. The Path3D has a script attached to it that draws basis provided by `sample_baked_with_rotation` along the entire length of the Path3D. This is not a tool script, so the project must be ran to see the basis drawn. The Path3D script has some exported parameters to help the user investigate this issue: `count` is the number of samples of the curve taken and drawn in the 3D viewport as basis gizmos, `size` is the length of the vector that represents a basis in a gizmo, and `cubic` changes whether `sample_baked_with_rotation` is called with `cubic = true` or `cubic = false` (the issue behavior does not appear to change based on this).
#### Examples from MRP
The following is the 3D scene with only 50 samples taken so the basis gizmo can be more easily seen.

As the number of samples increases so does the number of problematic rotation samples near the first and last points.

When the middle point has zero in and out vectors, there are only straight lines and this issue does not occur.

| bug,topic:3d | low | Minor |
2,749,079,736 | flutter | `AnimationStyle` API improvements | - It'd be nice to have `AnimationStyle.merge` (similar to [**TextStyle.merge**](https://main-api.flutter.dev/flutter/painting/TextStyle/merge.html))
- It'd also be nice if the `.lerp()` method were more than just a binary switch, since maybe someone would want to animate the animation style!
- Since it's an immutable configuration class, a `const` constructor would be nice to have as well. | c: new feature,framework,a: animation,c: proposal,P3,team-framework,triaged-framework | low | Minor |
2,749,081,726 | godot | LightmapGI: Custom Sky environment does not contribute to ambient light | ### Tested versions
Reproducible in the Godot v4.4.dev (7f5c46929) build (built from the AUR and currently only a few commits behind)
### System information
Godot v4.4.dev (7f5c46929) - Manjaro Linux #1 SMP PREEMPT_DYNAMIC Mon, 09 Dec 2024 11:58:37 +0000 on Wayland - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (nvidia; 565.77) - AMD Ryzen 9 9900X 12-Core Processor (24 threads)
### Issue description
Currently, setting the environment mode of the LightmapGI node to Custom Sky results in completely black bakes (assuming there are no other lights contributing bounces).
This doesn't happen when using the scene sky.
Here's a scene with the lightmapGI node set to use the scene sky:

And here is the same scene when setting the lightmapGI node to use a custom sky (which is identical to the scene sky here so it's even more clear)

### Steps to reproduce
Create an object with UV2, and LightmapGI node.
Set the environment mode to "Custom Sky" (it doesn't matter what kind of sky material you use here, they're all broken)
Bake lighting
Notice the object is dark and not illuminated by the sky at all. (Which is clear if you change the sky energy to a stupidly high number and the object still doesn't change in brightness.)
### Minimal reproduction project (MRP)
[ambient-light-baking-issue.zip](https://github.com/user-attachments/files/18202507/ambient-light-baking-issue.zip)
This is the same scene as above but with the custom sky already set. | bug,topic:rendering,topic:3d | low | Critical |
2,749,104,909 | TypeScript | Check if path is excluded via watch options before adding watchers during `watchFailedLookupLocationOfNonRelativeModuleResolutions` | ### 🔍 Search Terms
finishCachingPerDirectoryResolution
failedLookupLocation
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
## System
Typescript 5.4
## Context
We have a rather large Typescript project, with both a lot of internal as well as external dependendencies.
I have more recently started looking into the rather sluggish initial "startup times" in our IDEs.
By "startup time" I mean the initial parsing etc. before intellisense etc. can kick in and show type information.
Activating tracing and checking times from VSCode the big block of `finishCachingPerDirectoryResolution` at the end immediatelly caught my eye.
Depending on the amount of dependencies a package/project has this can take upwards of 10 seconds.
After further investigation it really boils down to the initial call to `watchFailedLookupLocationOfResolution` (https://github.com/microsoft/TypeScript/blob/0dda037f9fc53af3623b9e7371a731947a572d40/src/compiler/resolutionCache.ts#L1146-L1165)
In particular this loop: https://github.com/microsoft/TypeScript/blob/0dda037f9fc53af3623b9e7371a731947a572d40/src/compiler/resolutionCache.ts#L1155-L1157 is the main problem as for some reason, having many external dependencies can mean this array can grow beyond 100k entries.
## Solutioning
### First try
The first thing I was wondering is if I could simply use `watchOptions.excludeDirectories` to avoid having so many watchers set up for sometimes-irrelevant directories (irrelevant in the sense that I can guarantee they will never need to be watched); however, this setting is simply ignored at this point.
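For reference, this is the kind of configuration that gets ignored here (a hypothetical `tsconfig.json` excerpt; the directory list is illustrative):

```jsonc
{
  "watchOptions": {
    // Directories that should never receive file watchers.
    "excludeDirectories": ["**/node_modules", "**/.git"]
  }
}
```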
### Second try
I added an `isIgnoredByWatchOptions` check inside the above-mentioned `for` loop to avoid calling `watchFailedLookupLocation` for a location that should be ignored anyway. This did filter out calls, but did not make things faster.
To verify anything at all, I added a `!failedLookupLocation.includes('node_modules')` condition to the check before calling `isIgnoredByWatchOptions`, which brought times down by roughly ~90%.
So I dug deeper into `isIgnoredByWatchOptions`, and it looks like it is creating a rather expensive matcher every time it gets called, even though it basically gets called 100k times with the same matcher-relevant parameters but a different path to check against.
### Third try
I used the available `memoizeCached` to create a cached version of the regex creation found in the first lines of `matchesExcludeWorker` (https://github.com/microsoft/TypeScript/blob/0dda037f9fc53af3623b9e7371a731947a572d40/src/compiler/commandLineParser.ts#L4010-L4022):
```ts
const getCachedExcludeMatcher = memoizeCached((excludeSpecs, basePath, useCaseSensitiveFileNames) => {
    const usefulSpecs = filter(excludeSpecs, (spec) => !invalidDotDotAfterRecursiveWildcard(spec));
    const excludePattern = getRegularExpressionForWildcard(usefulSpecs, basePath, "exclude");
    const excludeRegex = excludePattern && getRegexFromPattern(excludePattern, useCaseSensitiveFileNames);
    return excludeRegex;
}, new MatchExcludeCache());
```
Using this cached exclude-matcher approach together with `watchOptions.excludeDirectories` and checking `isIgnoredByWatchOptions` in the `failedLookupLocations` loop brought time down about 75%, from ~8 seconds to 2 seconds.
## Ask
My questions are:
a) On a very basic level, why can there be so many (100k+) failed lookup locations, and how can that be avoided?
b) Is it OK to check whether a failed lookup location is excluded by watch options, and skip watching it if so?
c) Would you be open to a PR with the above changes to reduce watcher setup time?
### 📃 Motivating Example
Watcher setup time for larger projects is reduced, resulting in faster tsserver load times in IDEs.
### 💻 Use Cases
1. What do you want to use this for?
Improve IDE setup performance.
2. What shortcomings exist with current approaches?
The current approach of creating a fresh matcher on every call causes performance issues when called many times.
3. What workarounds are you using in the meantime?
n/a | Help Wanted,Possible Improvement | low | Critical |
2,749,114,090 | transformers | AllAboardBertweetModel | ### Model description
AllAboardBertweetModel used for AllaboardSystem
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | New model | low | Minor |
2,749,118,574 | deno | Uncaught PermissionDenied: Permission denied (os error 13) When running vite | Version: Deno 2.1.4
I have a Remix project which contains a directory with restricted permissions. When I run the project with npm, it runs without a problem, but with Deno it gives me `Uncaught PermissionDenied`. The folder itself is ignored in `.gitignore`.
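A possible workaround, assuming the watcher is set up by Vite's chokidar-based file watching rather than by Deno itself, is to tell Vite to ignore the restricted directory (hypothetical `vite.config.ts` excerpt; the glob matches the restricted path from the log below):

```ts
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    watch: {
      // chokidar "ignored" patterns; prevents the watcher from touching
      // the permission-restricted secrets directory.
      ignored: ["**/.volumes/**"],
    },
  },
});
```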
Here is the log:
```
DEBUG RS - deno_config::deno_json:731 - Config file found at '/home/nsetyo/projects/deno.jsonc'
DEBUG RS - deno_config::workspace::discovery:266 - package.json file found at '/home/nsetyo/projects/package.json'
DEBUG RS - deno::args:593 - .npmrc found at: '/home/nsetyo/.npmrc'
DEBUG RS - deno::args:937 - Finished config loading.
DEBUG RS - deno::task_runner:438 - Resolving commands in '/home/nsetyo/projects/node_modules/.bin'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'sucrase-node' to '/home/nsetyo/projects/node_modules/sucrase/bin/sucrase-node'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'jsesc' to '/home/nsetyo/projects/node_modules/jsesc/bin/jsesc'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'uuid' to '/home/nsetyo/projects/node_modules/uuid/dist/bin/uuid'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'tsc' to '/home/nsetyo/projects/node_modules/typescript/bin/tsc'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'vite' to '/home/nsetyo/projects/node_modules/vite/bin/vite.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'mkdirp' to '/home/nsetyo/projects/node_modules/mkdirp/bin/cmd.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'resolve' to '/home/nsetyo/projects/node_modules/resolve/bin/resolve'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'semver' to '/home/nsetyo/projects/node_modules/semver/bin/semver.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'jiti' to '/home/nsetyo/projects/node_modules/jiti/bin/jiti.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'pidtree' to '/home/nsetyo/projects/node_modules/pidtree/bin/pidtree.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'vite-node' to '/home/nsetyo/projects/node_modules/vite-node/vite-node.mjs'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'loose-envify' to '/home/nsetyo/projects/node_modules/loose-envify/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'migrate-flat-routes' to '/home/nsetyo/projects/node_modules/remix-flat-routes/dist/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'prisma' to '/home/nsetyo/projects/node_modules/prisma/build/index.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'json5' to '/home/nsetyo/projects/node_modules/json5/lib/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'nanoid' to '/home/nsetyo/projects/node_modules/nanoid/bin/nanoid.cjs'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'mime' to '/home/nsetyo/projects/node_modules/mime/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'astring' to '/home/nsetyo/projects/node_modules/astring/bin/astring'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'sucrase' to '/home/nsetyo/projects/node_modules/sucrase/bin/sucrase'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'husky' to '/home/nsetyo/projects/node_modules/husky/bin.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'acorn' to '/home/nsetyo/projects/node_modules/acorn/bin/acorn'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'glob' to '/home/nsetyo/projects/node_modules/glob/dist/esm/bin.mjs'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'autoprefixer' to '/home/nsetyo/projects/node_modules/autoprefixer/bin/autoprefixer'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'conventional-commits-parser' to '/home/nsetyo/projects/node_modules/conventional-commits-parser/cli.mjs'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'prettier' to '/home/nsetyo/projects/node_modules/prettier/bin/prettier.cjs'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'commitizen' to '/home/nsetyo/projects/node_modules/commitizen/bin/commitizen'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'remix-serve' to '/home/nsetyo/projects/node_modules/@remix-run/serve/dist/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'cz' to '/home/nsetyo/projects/node_modules/commitizen/bin/git-cz'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'tailwind' to '/home/nsetyo/projects/node_modules/tailwindcss/lib/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'parser' to '/home/nsetyo/projects/node_modules/@babel/parser/bin/babel-parser.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'biome' to '/home/nsetyo/projects/node_modules/@biomejs/biome/bin/biome'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'update-browserslist-db' to '/home/nsetyo/projects/node_modules/update-browserslist-db/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'rollup' to '/home/nsetyo/projects/node_modules/rollup/dist/bin/rollup'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'git-raw-commits' to '/home/nsetyo/projects/node_modules/git-raw-commits/cli.mjs'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'cssesc' to '/home/nsetyo/projects/node_modules/cssesc/bin/cssesc'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'git-cz' to '/home/nsetyo/projects/node_modules/commitizen/bin/git-cz'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'gunzip-maybe' to '/home/nsetyo/projects/node_modules/gunzip-maybe/bin.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'commitlint' to '/home/nsetyo/projects/node_modules/@commitlint/cli/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'yaml' to '/home/nsetyo/projects/node_modules/yaml/bin.mjs'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'tsconfck' to '/home/nsetyo/projects/node_modules/tsconfck/bin/tsconfck.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'tailwindcss' to '/home/nsetyo/projects/node_modules/tailwindcss/lib/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'browserslist' to '/home/nsetyo/projects/node_modules/browserslist/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'commit' to '/home/nsetyo/projects/node_modules/@commitlint/prompt-cli/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'tsserver' to '/home/nsetyo/projects/node_modules/typescript/bin/tsserver'.
DEBUG RS - deno::task_runner:481 - Failed resolving npx command 'JSONStream'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'node-which' to '/home/nsetyo/projects/node_modules/which/bin/which.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'js-yaml' to '/home/nsetyo/projects/node_modules/js-yaml/bin/js-yaml.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'remix' to '/home/nsetyo/projects/node_modules/@remix-run/dev/dist/cli.js'.
DEBUG RS - deno::task_runner:474 - Resolved npx command 'uvu' to '/home/nsetyo/projects/node_modules/uvu/bin.js'.
Task dev remix vite:dev --logLevel info
error: Uncaught PermissionDenied: Permission denied (os error 13) about ["/home/nsetyo/projects/.volumes/mssql/secrets/machine-key"]
at new FsWatcher (ext:runtime/40_fs_events.js:24:17)
at Object.watchFs (ext:runtime/40_fs_events.js:74:10)
at ext:deno_node/_fs/_fs_watch.ts:57:21
at callback (ext:deno_web/02_timers.js:58:7)
at eventLoopTick (ext:core/01_core.js:210:13)
``` | needs info | low | Critical |
2,749,127,381 | vscode | Can't install extension properly |
Type: <b>Bug</b>
1. Click the update button on the extension page
```log
2024-12-19 10:43:50.754 [info] Getting Manifest... llvm-vs-code-extensions.vscode-clangd
2024-12-19 10:43:51.633 [info] Installing extension: llvm-vs-code-extensions.vscode-clangd {"pinned":true,"productVersion":{"version":"1.96.1","date":"2024-12-17T17:50:05.206Z"},"operation":3,"isApplicationScoped":false,"profileLocation":{"$mid":1,"external":"vscode-userdata:/c%3A/Users/Administrator/.vscode/extensions/extensions.json","path":"/c:/Users/Administrator/.vscode/extensions/extensions.json","scheme":"vscode-userdata"},"installOnlyNewlyAddedFromExtensionPack":true}
2024-12-19 10:44:22.523 [error] [network] #5: https://llvm-vs-code-extensions.gallerycdn.vsassets.io/extensions/llvm-vs-code-extensions/vscode-clangd/0.1.33/1732259347312/Microsoft.VisualStudio.Services.VSIXPackage?redirect=true&update=true - error GET net::ERR_TIMED_OUT
2024-12-19 10:44:52.633 [error] [network] #6: https://llvm-vs-code-extensions.gallerycdn.vsassets.io/extensions/llvm-vs-code-extensions/vscode-clangd/0.1.33/1732259347312/Microsoft.VisualStudio.Services.VSIXPackage?update=true - error GET net::ERR_TIMED_OUT
2024-12-19 10:44:52.634 [warning] Failed downloading vsix. net::ERR_TIMED_OUT. Retry again... llvm-vs-code-extensions.vscode-clangd
2024-12-19 10:46:11.893 [info] Getting Manifest... cweijan.vscode-database-client2
2024-12-19 10:46:24.819 [info] Installing extension: cweijan.vscode-database-client2 {"pinned":true,"productVersion":{"version":"1.96.1","date":"2024-12-17T17:50:05.206Z"},"operation":3,"isApplicationScoped":false,"profileLocation":{"$mid":1,"external":"vscode-userdata:/c%3A/Users/Administrator/.vscode/extensions/extensions.json","path":"/c:/Users/Administrator/.vscode/extensions/extensions.json","scheme":"vscode-userdata"},"installOnlyNewlyAddedFromExtensionPack":true}
2024-12-19 10:46:41.221 [warning] Error while deleting the file file:///c%3A/Users/Administrator/AppData/Roaming/Code/CachedExtensionVSIXs/.06ca40a7-ce43-493f-a1d4-3a3cf884e5c7 无法删除不存在的文件 'c:\Users\Administrator\AppData\Roaming\Code\CachedExtensionVSIXs\.06ca40a7-ce43-493f-a1d4-3a3cf884e5c7'
2024-12-19 10:46:41.222 [warning] Failed downloading vsix. 无法写入文件"c:\Users\Administrator\AppData\Roaming\Code\CachedExtensionVSIXs\.06ca40a7-ce43-493f-a1d4-3a3cf884e5c7"(Error: net::ERR_HTTP2_PING_FAILED). Retry again... llvm-vs-code-extensions.vscode-clangd
2024-12-19 10:47:11.339 [error] [network] #14: https://llvm-vs-code-extensions.gallerycdn.vsassets.io/extensions/llvm-vs-code-extensions/vscode-clangd/0.1.33/1732259347312/Microsoft.VisualStudio.Services.VSIXPackage?redirect=true&update=true - error GET net::ERR_TIMED_OUT
2024-12-19 10:47:21.354 [warning] Failed downloading vsix. 无法写入文件"c:\Users\Administrator\AppData\Roaming\Code\CachedExtensionVSIXs\.ef1c2f0e-052a-470b-aede-ef4f6fdeca7d"(Error: net::ERR_HTTP2_PING_FAILED). Retry again... cweijan.vscode-database-client2
```
VS Code version: Code 1.96.1 (42b266171e51a016313f47d0c48aca9295b9cbb2, 2024-12-17T17:50:05.206Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-11500 @ 2.70GHz (12 x 2712)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.73GB (5.27GB free)|
|Process Argv|--crash-reporter-id ecca0ea2-641b-4360-8829-049c249bc706|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (91)</summary>
Extension|Author (truncated)|Version
---|---|---
better-comments|aar|3.0.2
preview-pdf|ana|1.0.0
iconify|ant|0.9.5
icons-carbon|ant|0.2.6
open-in-github-button|ant|0.1.1
slidev|ant|0.49.29
unocss|ant|0.63.6
where-am-i|ant|0.2.0
astro-vscode|ast|2.15.4
crates-io|Bar|0.7.3
markdown-emoji|bie|0.3.1
vscode-tailwindcss|bra|0.12.11
wdl|bro|1.0.0
wdl-devtools|bro|0.0.90
wdlformatter|bru|0.1.0
ruff|cha|2024.56.0
dbclient-jdbc|cwe|1.3.9
vscode-database-client2|cwe|7.6.3
vscode-eslint|dba|3.0.10
githistory|don|0.6.20
gitlens|eam|16.0.5
EditorConfig|Edi|0.16.4
graphviz-preview|EFa|1.7.4
prettier-vscode|esb|11.0.0
vscode-open-in-github|fab|2.3.0
file-icons|fil|1.1.0
code-runner|for|0.12.2
shell-format|fox|7.2.5
copilot|Git|1.242.0
copilot-chat|Git|0.23.2024102903
remotehub|Git|0.64.0
vscode-github-actions|git|0.27.0
vscode-pull-request-github|Git|0.102.0
lake-editor|hug|0.0.16
omp-pragma|idm|1.1.0
latex-workshop|Jam|10.7.0
svg|joc|1.5.4
lldb-dap|llv|0.2.6
vscode-clangd|llv|0.1.29
vscode-latex|mat|1.3.0
asm-code-lens|maz|2.6.1
vscode-github-actions|me-|3.0.1
rainbow-csv|mec|3.12.0
mongodb-vscode|mon|1.9.3
moonbit-lang|moo|0.1.202410280
vscode-language-pack-zh-hans|MS-|1.96.2024121109
debugpy|ms-|2024.12.0
python|ms-|2024.16.1
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
tensorboard|ms-|2023.10.1002992421
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
hexeditor|ms-|1.11.1
live-server|ms-|0.4.15
makefile-tools|ms-|0.11.13
remote-explorer|ms-|0.4.3
remote-repositories|ms-|0.42.0
remote-server|ms-|1.5.2
vsliveshare|ms-|1.0.5941
tinymist|myr|0.12.12
color-highlight|nau|2.8.0
nextflow|nex|1.0.3
mdc|Nux|0.2.0
amber-language|Ph0|1.2.8
material-icon-theme|PKi|5.16.0
material-product-icons|PKi|1.7.1
sqlite-viewer|qwt|0.9.5
r-debugger|RDe|0.5.5
vscode-yaml|red|1.15.0
r|REd|2.8.4
LiveServer|rit|5.7.9
rust-analyzer|rus|0.3.2212
markdown-preview-enhanced|shd|0.8.14
just|ske|2.0.0
slint|Sli|1.9.0
dot|Ste|0.0.1
even-better-toml|tam|0.19.2
xmake-vscode|tbo|2.3.9
graphviz-interactive-preview|tin|0.3.5
errorlens|use|3.21.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
volar|Vue|2.1.8
vscode-todo-highlight|way|1.0.5
vscode-zig|zig|0.6.3
(1 theme extensions excluded)
</details>
<!-- generated by issue reporter --> | info-needed | medium | Critical |
2,749,191,312 | angular | Add more ergonomic api to declare server-side services | ### Which @angular/* package(s) are relevant/related to the feature request?
core, platform-server
### Description
This is regarding the recommended way to declare server only services:
https://github.com/angular/angular/issues/53168#issuecomment-1827805974
At the moment it feels like a lot of boilerplate to create separate services for server and browser contexts. It feels like a step back to add every single service that has different logic during SSR to the app configuration, especially if it's a simple no-op on the server.
It would be great if it was possible to define on which platform a service will be injected during the service definition.
### Proposed solution
Without knowing the inner workings of the compiler and being a bit naive, something along these lines would be great:
```ts
@Injectable({
providedIn: 'root',
})
export class DataServiceService {
get(id: string): string {
return 'browser-data';
}
}
@ServerInjectable({
overrides: DataServiceService
})
export class DataServiceServiceServer extends DataServiceService {
get(id: string): string {
return 'server-data-performed-with-secret-tokens';
}
}
```
By default, services would be the same for browser and server. Only specific services would be overridden to have different behaviour on the server. This would eliminate the need to then add the services separately to the two app configurations.
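For contrast, here is a sketch of the pattern that is currently required (assumption: a standalone app with a separate server config; file and class names follow the example above):

```ts
import { mergeApplicationConfig, ApplicationConfig } from '@angular/core';
import { provideServerRendering } from '@angular/platform-server';
import { appConfig } from './app.config';
import { DataServiceService } from './data-service.service';
import { DataServiceServiceServer } from './data-service.service.server';

const serverConfig: ApplicationConfig = {
  providers: [
    provideServerRendering(),
    // Every server-only override has to be registered here by hand.
    { provide: DataServiceService, useClass: DataServiceServiceServer },
  ],
};

export const config = mergeApplicationConfig(appConfig, serverConfig);
```

The proposal would make this per-service registration unnecessary.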
### Alternatives considered
Keep it as is | area: server | low | Minor |
2,749,218,097 | react-native | property is not configurable This error is located at: in VirtualizedList (created by FlatList) | ### Description
I am using React Native 0.76, and whenever I try to use a `FlatList`, the same `TypeError: property is not configurable` issue arises. This time, I encountered the error while using the GooglePlacesAutocomplete component, which internally uses `FlatList` / `VirtualizedList` to render suggestions.
### Steps to reproduce
**Start a React Native project on v0.76**
**Try to use `FlatList` anywhere**
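A minimal component of the kind described (hypothetical sketch; per the report, any `FlatList` usage triggers the error):

```tsx
import React from 'react';
import { FlatList, Text } from 'react-native';

export default function App() {
  return (
    <FlatList
      data={[{ key: 'a' }, { key: 'b' }]}
      renderItem={({ item }) => <Text>{item.key}</Text>}
    />
  );
}
```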
### React Native Version
0.76.2
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
PS C:\Users\sofim\Desktop\Spark> npx react-native info
info Fetching system and libraries information...
System:
OS: Windows 11 10.0.26100
CPU: (8) x64 11th Gen Intel(R) Core(TM) i5-11300H @ 3.10GHz
Memory: 9.57 GB / 15.79 GB
Binaries:
Node:
version: 18.18.1
path: C:\Program Files\nodejs\node.EXE
Yarn: Not Found
npm:
version: 9.8.1
path: C:\Program Files\nodejs\npm.CMD
Watchman: Not Found
SDKs:
Android SDK: Not Found
Windows SDK: Not Found
IDEs:
Android Studio: AI-241.18034.62.2412.12266719
Visual Studio: Not Found
Languages:
Java: 17.0.10
Ruby: Not Found
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.2
wanted: 0.76.2
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
info React Native v0.76.5 is now available (your project is running on v0.76.2).
info Changelog: https://github.com/facebook/react-native/releases/tag/v0.76.5
info Diff: https://react-native-community.github.io/upgrade-helper/?from=0.76.2&to=0.76.5
info For more info, check out "https://reactnative.dev/docs/upgrading?os=windows".
```
### Stacktrace or Logs
```text
property is not configurable This error is located at: in VirtualizedList (created by FlatList)
```
### Reproducer
https://github.com/manzoorsofi/Spark-App-Yashooda
### Screenshots and Videos

| Component: VirtualizedList,Component: FlatList,Needs: Triage :mag:,Newer Patch Available | low | Critical |
2,749,219,808 | PowerToys | Launch and Edit option in Workspaces clears all configured the CLI arguments | ### Microsoft PowerToys version
0.87.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Workspaces
### Steps to reproduce
### Problem Description
Currently when using `Launch & Edit` option in Workspaces, it resets all previously configured CLI arguments.
Presumably most people set up workspaces not just to open apps, but to open a specific folder, file, or URL. Right now that can only be configured via CLI arguments. If I want to add a new app or resize the tabs, that can only be done through the `Launch & Edit` option, which resets the entire CLI configuration.
It would be much more useful if the CLI args didn't reset!
### Steps to Reproduce
1. Make a new workspace & configure CLI arguments for a few apps.
2. Use the "Launch & Edit" option.
3. It prompts you to capture the screen.
4. Observe that the CLI arguments are no longer preserved.
### ✔️ Expected Behavior
CLI arguments should persist after capturing the "Launch & Edit" screen.
### ❌ Actual Behavior
CLI arguments are cleared upon capturing the "Launch & Edit" screen.
### Other Software
_No response_ | Issue-Bug,Resolution-Fix Committed,Product-Workspaces | low | Minor |
2,749,262,131 | flutter | Menus should be configurable to not overlap with parent button | ### Steps to reproduce
Looking at the docs from `MenuStyle` it says:
```dart
/// Determines the desired alignment of the submenu when opened relative to
/// the button that opens it.
///
/// If there isn't sufficient space to open the menu with the given alignment,
/// and there's space on the other side of the button, then the alignment is
/// swapped to it's opposite (1 becomes -1, etc.), and the menu will try to
/// appear on the other side of the button. If there isn't enough space there
/// either, then the menu will be pushed as far over as necessary to display
/// as much of itself as possible, possibly overlapping the parent button.
final AlignmentGeometry? alignment;
```
The important part is the last sentence: `possibly overlapping the parent button`.
I am seeing this behaviour with `DropdownMenu`: I have lots of entries, and when I click, the entries end up on top of the `DropdownMenu`'s text field. Normally this wouldn't be an issue, but my `DropdownMenu` is filterable, so I can type into it, and when the menu covers the field I can't see what I am typing.
Thus, menus should have an option controlling whether or not they may cover the parent button that opened them, since for a filterable `DropdownMenu` covering it is undesirable.
### Actual results
Menus will decide themselves if they will cover their parent button or not.
### Logs
_No response_
### Flutter Doctor output
Flutter version 3.24.4 | c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Minor |
2,749,322,074 | ant-design | After enabling the virtual list on the Table component, scrolling stutters and drops frames on Windows | ### What problem does this feature solve?
Currently, with the Table component's virtual list enabled, there is no problem on macOS, but scrolling on Windows stutters and drops frames. The example in the antd docs has this problem too, while the 4.x version is very smooth. The Windows test machine configuration: Processor: AMD Ryzen 7 PRO 4750U with Radeon Graphics 1.70 GHz; Installed RAM: 16.0 GB.
### What does the proposed API look like?
The 5.x Table component should also scroll smoothly on Windows with the virtual list enabled.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Major |
2,749,339,758 | langchain | "TypeError in embed_documents: Nested Embedding Structure (List[List[float]]) Causes Failure in LlamaCppEmbeddings with langchain-community v0.3.12" | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import LlamaCppEmbeddings

# Load a text file
loader = TextLoader("./docs/raw.txt")
docs = loader.load()

# Split text into smaller chunks
text_splitter = CharacterTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_documents(docs)

# Prepare input for embedding
embeddings = LlamaCppEmbeddings(model_path="./models/llama-1B-q8_0.gguf")
_texts = [doc.page_content for doc in texts]

# Attempt to embed documents
embedded_texts = embeddings.embed_documents(_texts)
print(len(embedded_texts), len(embedded_texts[0]))
```
### Error Message and Stack Trace (if applicable)
```
[list(map(float, e["embedding"])) for e in embeddings["data"]]
PyDev console: starting.
Traceback (most recent call last):
  File "D:\Pycharm\PyCharm Community Edition 2023.2.3\plugins\python-ce\helpers\pydev\_pydevd_bundle\pydevd_exec2.py", line 3, in Exec
    exec(exp, global_vars, local_vars)
  File "<input>", line 1, in <module>
  File "<input>", line 1, in <listcomp>
TypeError: float() argument must be a string or a real number, not 'list'
```
### Description
When using langchain-community's LlamaCppEmbeddings with llama-cpp-python, the embed_documents method fails with a TypeError when processing certain input texts. The issue arises because the returned embedding structure from llama_cpp is unexpectedly nested (List[List[float]]), but embed_documents assumes a flat structure (List[float]).
Environment
Python version: 3.10
langchain-community: v0.3.12
llama-cpp-python: v0.3.5
Model: llama-1B-q8_0.gguf --> llama-3.2-1B
Expected Behavior
The embed_documents method should process the embeddings correctly, regardless of whether the returned embedding structure is flat (List[float]) or nested (List[List[float]]).
Actual Behavior
The embed_documents method assumes the returned embeddings are flat (List[float]), but when the structure is nested (List[List[float]]), it fails with the following error:
TypeError: float() argument must be a string or a real number, not 'list'
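A defensive workaround until this is fixed upstream (a sketch under the assumption that the nested case is exactly one level deep, i.e. each element of the nested list is a token-level vector that can be mean-pooled into a single document vector):

```python
def normalize_embedding(embedding):
    """Return a flat list[float] whether the input is flat or nested one level."""
    if embedding and isinstance(embedding[0], list):
        # Mean-pool token-level vectors into a single document vector.
        dim = len(embedding[0])
        n = len(embedding)
        return [sum(tok[i] for tok in embedding) / n for i in range(dim)]
    return [float(x) for x in embedding]
```

Applied to the result of `embed_documents` (or inside a small subclass of `LlamaCppEmbeddings`), this tolerates both return shapes.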
### System Info
(gpt310free) PS D:\Temp\Gpt> python -m langchain_core.sys_info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.25
> langchain: 0.3.12
> langchain_community: 0.3.12
> langsmith: 0.1.147
> langchain_text_splitters: 0.3.3
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.25.2
> orjson: 3.10.12
> packaging: 24.0
> pydantic: 2.10.3
> pydantic-settings: 2.7.0
> PyYAML: 6.0.1
> requests: 2.30.0
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 8.2.3
> typing-extensions: 4.12.2
| 🤖:bug | low | Critical |
2,749,354,353 | vscode | Add editable filters to the workspace (folder) |
Type: <b>Feature Request</b>
When there is a lot of content in the workspace folder, browsing becomes troublesome and difficult. At this point, we want to use a JSON-like configuration document to customize folder filters, similar to this:
```json
{
  "filter0": ["**/*"],
  "filter1": [
    "rootdir\\apps\\folder1",
    "rootdir\\apps\\folder3",
    "rootdir\\cores\\*"
  ]
}
```
When VS Code reads this custom filter configuration file, it could display the directory structure of the workspace according to the configuration above. Of course, it's best to make this switchable so the view can revert back to its original form.
VS Code version: Code 1.96.1 (42b266171e51a016313f47d0c48aca9295b9cbb2, 2024-12-17T17:50:05.206Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<!-- generated by issue reporter --> | feature-request,file-explorer | low | Minor |
2,749,371,092 | go | spec: clarify that map iteration always yields the latest version of an entry | ### Proposal Details
Consider the following program (https://go.dev/play/p/wd1Ge2PIyAK):
```go
package main
import "fmt"
func main() {
m := map[string]string{
"a": "original",
"b": "original",
}
for k, v := range m {
fmt.Println(k, v)
m["a"] = "new"
}
}
```
Map iteration order is unspecified, so if "a" is produced first it will print:
```
a original
b original
```
If "b" is produced first, the spec does not specify whether this must print
```
b original
a new
```
Or if
```
b original
a original
```
is acceptable. In other words, is iteration required to produce the latest version of an entry, or is a stale version OK?
As far as I know, the `gc` map implementation has always produced the latest version of an entry. In fact, when implementing #54766 I fixed bugs I introduced that produced stale values under the assumption that the spec required it.
The spec does specify "If a map entry that has not yet been reached is removed during iteration, the corresponding iteration value will not be produced." In my opinion it would be odd to require that deletions be tracked precisely but not changes to values.
Thus I propose that the map portion of https://go.dev/ref/spec#For_range be amended to add "Iteration always produces the latest value of a map entry."
cc @griesemer @randall77 | NeedsFix,FixPending | low | Critical |
2,749,379,013 | ollama | Unable to load dynamic library: libstdc++.so.6: cannot open | ### What is the issue?
```
ollama run llama3.1
Error: Unable to load dynamic library: Unable to load dynamic server library: libstdc++.so.6: cannot open shared object file: No such file or directory
```
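This error usually means the C++ runtime library is missing from the environment (common in minimal containers or on NixOS). A hedged first step is to install it with the system package manager; the package names below are the usual ones but may differ per distro:

```
# Debian/Ubuntu
sudo apt-get install libstdc++6

# Alpine
apk add libstdc++
```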
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug | low | Critical |
2,749,418,944 | ant-design | Segmented with many options overflows the page width | ### What problem does this feature solve?
When Segmented has many options, the horizontal layout overflows the page width.
Although this can be fixed with custom styles, it would be better to have a slider feature like Tabs, or some other improved presentation.
### What does the proposed API look like?
Taking Tabs as an example, the <Segmented> component could decide whether sliding is needed based on its own options and width.
Alternatively, a flex-wrap-like API could let users decide whether multi-line display is allowed (though I still prefer the Tabs presentation).
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 💄 Design,improvement | low | Minor |
2,749,440,079 | flutter | No top padding in `NavigationBar` and `BottomAppBar` in Android's edge to edge mode | ### Steps to reproduce
1. Create a Flutter app and add Material 3's `NavigationBar` widget or a `BottomAppBar` widget (with a `FloatingActionButton` using `FloatingActionButtonLocation.endContained` to distinguish it).
2. Run the app with edge-to-edge mode enabled. (which is the default for Android 15+ in Flutter 3.27).
### Expected results
As per the Material spec [here](https://m3.material.io/components/navigation-bar/specs#6f329e0c-c278-4ac8-9b02-1afcb2790ac3), the navigation bar should include a top padding of 12 dp.
### Actual results
However, after upgrading Flutter to 3.27 and running the app on my Pixel 8a which enables edge-to-edge mode, the `NavigationBar` and `BottomAppBar` does not include a top padding at all as visible in the attached screenshots.
*Update*: Moreover, the ink splash when an `IconButton` is tapped does not align correctly with the bounds of the icon. The ink splash appears slightly above the icon.
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
### With `NavigationBar`

### With `BottomAppBar` with a FAB with location `FloatingActionButtonLocation.endContained`

### Screenshot of Google Drive which uses the same navigation bar component

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| platform-android,e: OS-version specific,P1,team-android,triaged-android | medium | Minor |
2,749,469,350 | next.js | Route Handler PATCH export type error in Next.js 15 | ### Link to the code that reproduces this issue
https://github.com/shellen/repro-next-route-handler
### To Reproduce
After an npm install of the attached repro, just run `npm run build` to observe a failure to compile:
Linting and checking validity of types ..Failed to compile.
`src/app/api/test/[id]/route.ts`
`Type error: Route "src/app/api/test/[id]/route.ts" has an invalid "PATCH" export:`
`Type "{ params: { id: string; }; }" is not a valid type for the function's second argument.`
### Current vs. Expected behavior
# Current: Build fails with type error:
`src/app/api/test/[id]/route.ts`
```
Type error: Route "src/app/api/test/[id]/route.ts" has an invalid "PATCH" export:
Type "{ params: { id: string; }; }" is not a valid type for the function's second argument.
```
# Expected Results:
Build should succeed when using the documented type pattern for route handlers:
```
export async function PATCH(
request: NextRequest,
{ params }: { params: { id: string } }
)
```
Route handlers in Next.js 15 consistently throw a TypeScript error for the second argument's type,
specifically for PATCH methods. This occurs even when following the documented patterns and
trying multiple type approaches.
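Not part of the original report, but for comparison: in Next.js 15 dynamic route params are reportedly delivered as a Promise, so a signature along the following lines should type-check. This is a sketch using the standard `Request`/`Response` types so it stands alone; a real route file would typically use `NextRequest`, and the `{ id: string }` shape mirrors the `[id]` segment from the repro.

```typescript
// Sketch (assumption): Next.js 15 passes route params as a Promise,
// so the handler's second argument is typed { params: Promise<...> }
// and must be awaited before use.
type RouteContext = { params: Promise<{ id: string }> };

export async function PATCH(
  request: Request,
  { params }: RouteContext
): Promise<Response> {
  const { id } = await params; // await the promised params
  return Response.json({ updated: id });
}
```

Usage: calling the handler with a resolved params promise (as Next.js would) returns a JSON response echoing the segment value.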
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #26~22.04.1-Ubuntu SMP Thu Jul 11 22:33:04 UTC 2024
Available memory (MB): 7930
Available CPU cores: 2
Binaries:
Node: 20.18.1
npm: 10.8.2
Yarn: 1.22.22
pnpm: 9.15.0
Relevant Packages:
next: 15.1.1-canary.13 // Latest available version is detected (15.1.1-canary.13).
eslint-config-next: 15.1.1
react: 18.2.0
react-dom: 18.2.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
TypeScript
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
# Attempted fixes
Here are things I've tried to fix it.
| Approach | Description | Result | Note |
|----------|-------------|---------|------|
| Custom Props Type | Used Props interface for params | Failed | Type error |
| Props Interface | Added interface for parameters | Failed | Type error |
| Next.js Types | Used NextRequest and return types | Failed | Same type error persisted |
| Base Request Type | Used standard Request type | Failed | Type error remained |
| Record Type | Used Record<string, string \| string[]> | Failed | No improvement |
| Remove Dependencies | Removed @hello-pangea/dnd | Failed | Not dependency related |
| Disable TypedRoutes | Removed from next.config | Failed | Not config related |
| RouteSegment Type | Created custom RouteSegment type | Failed | Type error persisted |
| Runtime Specification | Added nodejs runtime | Failed | No effect on type error |
| Middleware Approach | Added type validation | Failed | Same issue |
| RouteContext Type | Custom context interface | Failed | Invalid second argument |
| NodeJS Runtime + NextRequest | Combined runtime with NextRequest | Failed | Same type error |
| Handler Function Pattern | Separated business logic | Failed | ESLint any error |
| Type Assertion Pattern | Used 'as unknown as' | Failed | Type error remained |
| Custom Route Context | Created RouteContext type | Failed | Not accepted by Next.js |
This table shows systematic attempts to resolve the issue through various typing approaches and configurations, all resulting in the same core type error with the route handler's second argument.
Running React 18 or React 19 does not seem to affect it. We're using a dependency that still needs React 18 but as mentioned, rolling the react version forward didn't seem to fix either. | TypeScript | low | Critical |
2,749,513,456 | PowerToys | Problems encountered when using New+on the desktop in Win10 system | ### Microsoft PowerToys version
0.87.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
New+
### Steps to reproduce
[I'm sorry, I used translation software.]
[To avoid translation errors, I tried to keep the sentences simple.]
My computer has two screens.
I tried using New+ to create a file from a template on the desktop.
I used New+ Create Template on the main screen,
but the generated file appeared on the second desktop.
And during this process, the software icons and file icons on my main screen all got scrambled.
What a strange issue:
it's as if the template file flashed from the first desktop over to the second desktop on its own.
### ✔️ Expected Behavior
The template file is created on the desktop where I right-clicked.
### ❌ Actual Behavior
The file flew off to the other desktop and even scrambled the desktop icons that I had arranged in order.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-New+ | low | Critical |
2,749,518,394 | kubernetes | Container name not showed in the `probe` event message | ### What would you like to be added?
Add the container name (e.g. `[Container abc]`) to the `probe` event message, like:
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned xxx/xxx-controller-manager-7c8fcd5fff-4fprb to cloud-dev
Normal Pulling 20m kubelet Pulling image "xxx/xxx/xxx:v0.0.1"
Normal Pulled 20m kubelet Successfully pulled image "xxx/xxx/xxx-controller:v0.0.1"
Normal Started 20m kubelet Started container manager
Normal Created 20m kubelet Created container manager
Normal Created 19m (x4 over 20m) kubelet Created container kube-rbac-proxy
Normal Started 19m (x4 over 20m) kubelet Started container kube-rbac-proxy
...
Warning Unhealthy 2m58s (x31 over 10m) kubelet [Container abc] Readiness probe failed: HTTP probe failed with statuscode: 500
```
### Why is this needed?
When there are multiple containers in a pod and we use `kubectl describe po xxx` to show the events,
the name of the failed container is not shown, even though it is useful and needed.
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned xxx/xxx-controller-manager-7c8fcd5fff-4fprb to cloud-dev
Normal Pulling 20m kubelet Pulling image "xxx/xxx/xxx:v0.0.1"
Normal Pulled 20m kubelet Successfully pulled image "xxx/xxx/xxx-controller:v0.0.1"
Normal Started 20m kubelet Started container manager
Normal Created 20m kubelet Created container manager
Normal Created 19m (x4 over 20m) kubelet Created container kube-rbac-proxy
Normal Started 19m (x4 over 20m) kubelet Started container kube-rbac-proxy
...
Warning Unhealthy 2m58s (x31 over 10m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
``` | kind/feature,sig/cli,needs-triage | low | Critical |
2,749,534,814 | ui | [bug]: When using rich text Editor inside a Dialog, the built-in popups of the rich text editor cannot gain focus | ### Describe the bug
When using rich text editor inside a Dialog, the built-in popups of the rich text editor cannot gain focus
Can these two properties be supported on Dialog, the same way as in MUI?
1.disableEnforceFocus={true}
2.disableAutoFocus={true}
### Affected component/components
Dialog
### How to reproduce
1.

2.
<img width="1285" alt="image" src="https://github.com/user-attachments/assets/d0a19e40-2213-41bd-8813-e41bdfd83511" />
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
browsers, system
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,749,564,057 | next.js | DYNAMIC_SERVER_USAGE error with parallel and/or intercepted routes | ### Link to the code that reproduces this issue
https://github.com/diamondT/with-modal-app
### To Reproduce
1. build application with `npm run build`
2. start with `npm run start`
3. navigate to `http://localhost:3005`
4. click on the "Create an address" button
### Current vs. Expected behavior
### Current behavior
`Application error: a server-side exception has occurred (see the server logs for more information).
Digest: DYNAMIC_SERVER_USAGE`
### Expected Behavior
Clicking the button should navigate to `http://localhost:3005/address/create?type=A` and update the content accordingly
The page in question is using `searchParams`.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Thu Nov 14 12:54:01 UTC 2024 (099023b)
Available memory (MB): 31992
Available CPU cores: 16
Binaries:
Node: 22.9.0
npm: 10.8.3
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.1.1 // Latest available version is detected (15.1.1).
eslint-config-next: 15.1.1
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes, Runtime
### Which stage(s) are affected? (Select all that apply)
next build (local), next start (local)
### Additional context
When running the application in dev mode with `npm run dev`, everything works as expected.
Manually marking the page as dynamic with `export const dynamic = 'force-dynamic'` also works as expected.
Since the page is using `searchParams`, it should be marked as a dynamic route. The same page as a regular route is indeed identified as dynamic, however the intercepted route is not. Build output:
```
...
✓ Compiled successfully
✓ Linting and checking validity of types
✓ Collecting page data
Error: Static generation failed due to dynamic usage on /address/create, reason: `await searchParams`, `searchParams.then`, or similar
at m (.next/server/chunks/638.js:1:41139)
at Object.get (.next/server/chunks/584.js:3:29779)
at a (.next/server/app/address/create/page.js:1:3367)
at stringify (<anonymous>)
✓ Generating static pages (7/7)
✓ Collecting build traces
✓ Finalizing page optimization
Redirects
┌ source: /:path+/
├ destination: /:path+
└ permanent: true
Rewrites
┌ source: /address/create
└ destination: /(.)address/create
Route (app) Size First Load JS
┌ ○ / 3.74 kB 109 kB
├ ○ /_not-found 979 B 106 kB
├ ○ /(.)address/create 142 B 105 kB
└ ƒ /address/create 142 B 105 kB
+ First Load JS shared by all 105 kB
├ chunks/4bd1b696-92810b4b4ece63ad.js 52.9 kB
├ chunks/517-cf90ad66b32a8bbf.js 50.5 kB
└ other shared chunks (total) 1.88 kB
○ (Static) prerendered as static content
ƒ (Dynamic) server-rendered on demand
Process finished with exit code 0
``` | Runtime,Parallel & Intercepting Routes | low | Critical |
2,749,629,972 | PowerToys | Command Not Found won't work | ### Microsoft PowerToys version
0.87.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
Command not found
### Steps to reproduce
Simply install Powertoys and enable Command-Not-Found
### ✔️ Expected Behavior
something to happen
### ❌ Actual Behavior
It used to work with my current/unchanged Profile settings. I install every PT update so I'm not sure when it stopped working.
I open the dashboard, select Advanced, Command Not Found - Installed. All following components detected.
Open a new Powershell in terminal. Type vim (or anything not installed) - "vim: The term 'vim' is not recognized as a name of a cmdlet"
Restart computer. Try again. No change.
Uninstall Powershell 7.4.6 x64. Restart system. Reinstall (tried both Winget and msi). Uninstall CNF. Uninstall Powertoys. Restart computer.
Reinstall Powershell. Reinstall Powertoys. Reinstall CNF. Verify $PROFILE has had CNF Import-Module. Tried this whole uninstall/reinstall as both admin and not.
**$PROFILE**
> oh-my-posh init pwsh --config 'C:\Users\XXXX\AppData\Local\Programs\oh-my-posh\themes\mineallmine.mytheme.omp.json' | Invoke-Expression
> Import-Module -Name Terminal-Icons
>
> if ($host.Name -eq 'ConsoleHost')
> {
> Import-Module PSReadLine
> }
>
> Set-PSReadLineOption -PredictionSource History
> Set-PSReadLineOption -PredictionViewStyle ListView
> Set-PSReadLineOption -EditMode Windows
>
> #f45873b3-b655-43a6-b217-97c00aa0db58 PowerToys CommandNotFound module
>
> Import-Module -Name Microsoft.WinGet.CommandNotFound
> #f45873b3-b655-43a6-b217-97c00aa0db58
**Get-ExperimentalFeature**
> Name Enabled Source Description
> ---- ------- ------ -----------
> PSCommandNotFoundSuggestion True PSEngine Recommend potential commands based on …
> PSCommandWithArgs False PSEngine Enable `-CommandWithArgs` parameter fo…
> PSFeedbackProvider True PSEngine Replace the hard-coded suggestion fram…
> PSLoadAssemblyFromNativeCode False PSEngine Expose an API to allow assembly loadin…
> PSModuleAutoLoadSkipOfflineFiles False PSEngine Module discovery will skip over files …
> PSSubsystemPluginModel False PSEngine A plugin model for registering and un-…
Verify Dashboard says CNF installed. Powertoys works. No change CNF doesn't do anything.
I am using oh-my-posh 24.15.1 (latest).
I see other closed reports (i.e. #35666, #31250, etc) with exactly the same symptoms described and none have a resolution beyond - it eventually fixed itself or abandoned resolution attempts
[log_2024-12-19.zip](https://github.com/user-attachments/files/18194801/log_2024-12-19.zip)
[Installation log_2024-12-19.zip](https://github.com/user-attachments/files/18195037/Installation.log_2024-12-19.zip)
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-CommandNotFound | low | Minor |
2,749,647,878 | pytorch | "aten::any" and "aten::all" behavior for unsigned tensors different than signed | ### 🐛 Describe the bug

When I use torch.any or torch.all on an int8 tensor, the result is boolean, as expected for these boolean ops. But for some reason, when I use uint8 for example, the result is a uint8 tensor. That's not how I expect it to work. Is there a reason for this behavior?
### Versions
torch: 2.4.0+cu121
python: 3.9.1 | triaged,module: reductions,module: unsigned int | low | Critical |
2,749,718,163 | vscode | Devcontainer: prompts me to open a folder without further insights | Steps to Reproduce:
1. fresh stable
2. click on remote indicator and container
🐛 I am immediately asked to open a folder
https://github.com/user-attachments/assets/3b3a8d96-d47f-4451-97e1-55003692487c
| macos,remote | low | Minor |
2,749,719,807 | yt-dlp | Request for Custom Extractor for pornhat.com in yt-dlp | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
_No response_
### Example URLs
Single Video: https://www.pornhat.com/video/cover-girl-ava-koxxx-at-milf-video/
Channel URL: https://www.pornhat.com/channels/
### Provide a description that is worded well enough to be understood
Right now it downloads a short-duration .mp4 file rather than the full video.
[debug] Invoking http downloader on "https://www.pornhat.com/get_file/13/aa271aa912c9fabdab2703672fb8ca36/507000/507630/507630_720p.mp4/"
[download] Cover-girl Ava Koxxx at milf video (1) [cover-girl-ava-koxxx-at-milf-video-1].mp4 has already been downloaded
[download] 100% of 975.00B
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.pornhat.com/video/cover-girl-ava-koxxx-at-milf-video/']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [542166962] (pip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.85-1-pve-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.pornhat.com/video/cover-girl-ava-koxxx-at-milf-video/
[generic] cover-girl-ava-koxxx-at-milf-video: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] cover-girl-ava-koxxx-at-milf-video: Extracting information
[debug] Looking for embeds
[debug] Identified a html5 embed
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] cover-girl-ava-koxxx-at-milf-video-1: Downloading 1 format(s): 720p
[debug] Invoking http downloader on "https://www.pornhat.com/get_file/13/aa271aa912c9fabdab2703672fb8ca36/507000/507630/507630_720p.mp4/"
[download] Cover-girl Ava Koxxx at milf video (1) [cover-girl-ava-koxxx-at-milf-video-1].mp4 has already been downloaded
[download] 100% of 975.00B
```
| site-request,NSFW,triage | low | Critical |
2,749,764,799 | ant-design | DatePicker disable auto convert formats | ### Reproduction link
[https://codepen.io/Feng-Zhang-the-typescripter/pen/JoPWBVj](https://codepen.io/Feng-Zhang-the-typescripter/pen/JoPWBVj)
### Steps to reproduce
I have a DatePicker whose `format` is the array ['MM/YYYY', 'MM/YY']
### What is expected?
I want to input `03/2025`, expected `03/2025`
### What is actually happening?
after I input `03/20`, it would auto expand to `03/2020`, then the final input becomes `03/202025`
| Environment | Info |
| --- | --- |
| antd | 5.22.5 |
| React | 17.0.1 |
| System | Any |
| Browser | Chrome 130 |
---
How can I disable this auto-formatting? Or customize it so formatting only happens on blur or when Enter is pressed?
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,749,793,776 | PowerToys | Workspaces do not recognize some programs | ### Microsoft PowerToys version
0.87.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Launch:
- Marmoset Toolbag 5 (5010)
- Zbrush 2023.3
Go to workspaces - create workspace - check the program list - neither of the programs launched above appears
### ✔️ Expected Behavior
Launch:
- Marmoset Toolbag 5 (5010)
- Zbrush 2023.3
Go to workspaces - create workspace - program list includes both of the programs
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,749,800,628 | pytorch | [Bug] Memory leak in C++ libtorch | ### 🐛 Describe the bug
It seems there is a bug leading to a memory leak in the (C++) libtorch.
```
#include <torch/extension.h>
#include <thread>
class FrameVector {
private:
std::vector<int> data;
public:
FrameVector() {
std::vector<int> data(7056);
this->data = data;
}
};
class FrameTensor {
private:
torch::Tensor data;
public:
FrameTensor() {
this->data = torch::zeros({1, 84, 84});
}
};
template<class T>
void f() {
int capacity = 1000000;
std::vector<std::vector<T>> frames(capacity);
for (auto i = 0; i < capacity + 1000000; i++) {
if (i == capacity) {
std::cout << "buffer is full!" << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(2));
std::cout << "restart!" << std::endl;
}
frames[i % capacity].push_back(T());
if (i >= capacity) {
frames[i % capacity].erase(frames[i % capacity].begin());
}
}
}
int main(int argc, char *argv[])
{
f<FrameTensor>(); // needs 34G to fill the replay buffer, then memory increases to around 60G
f<FrameVector>(); // needs 34G to fill the replay buffer, then memory stay constant (as it should)
}
```
The bug only seems to occur when the `torch::Tensor` is stored in nested containers, for example:
- `std::vector<std::vector<T>>`
- `std::vector<std::deque<T>>`
I believe the internal counter that keep track of the number of references to the `torch::Tensor` fail to count the correct number of references. This leads the tensors memory to never be released.
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 Ti
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 36
On-line CPU(s) list: 0-35
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 1
Stepping: 7
CPU(s) scaling MHz: 27%
CPU max MHz: 4800.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualisation: VT-x
L1d cache: 576 KiB (18 instances)
L1i cache: 576 KiB (18 instances)
L2 cache: 18 MiB (18 instances)
L3 cache: 24.8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-35
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.0
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchrl==0.6.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 1.26.4 py311h24aa872_0
[conda] numpy-base 1.26.4 py311hbfb1bba_0
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] torch 2.3.0 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
cc @jbschlosser | module: cpp,triaged | low | Critical |
2,749,822,052 | pytorch | Failed to load TorchScript of SSDLite in android | ### 🐛 Describe the bug
I tried to load SSDLite on Android but it always fails.
Here is my code for export TorchScript: [sun-jiao/pytorch_ssdlite_export](https://github.com/sun-jiao/pytorch_ssdlite_export).
Use `detection_export.py` to convert the pretrained model to TorchScript. And then use `detection.py` to check if the exported model works fine.
And here is a minimized Android project to reproduce this issue: [sun-jiao/pytorch_detection_example](https://github.com/sun-jiao/pytorch_detection_example).
Copy or move the above exported `ssdlite.pt` to `app/src/main/assets/ssdlite.pt` and run the Android project.
Here is the UI:

Click the button "Load model" and it will crash.
Here is the related log:
```
2024-12-19 17:55:53.534 7148-7148 nativeloader net.sunjiao.pytorchdetectionexample D Load /data/app/~~QEZItQNbBIyxcvyQgPi2uQ==/net.sunjiao.pytorchdetectionexample-n9cinNNg6poEN6AyrBDSpA==/base.apk!/lib/arm64-v8a/libpytorch_jni.so using ns clns-7 from class loader (caller=/data/app/~~QEZItQNbBIyxcvyQgPi2uQ==/net.sunjiao.pytorchdetectionexample-n9cinNNg6poEN6AyrBDSpA==/base.apk!classes5.dex): ok
2024-12-19 17:55:53.535 7148-7148 nativeloader net.sunjiao.pytorchdetectionexample D Load libtorch-code-gen.so using ns clns-7 from class loader (caller=/data/app/~~QEZItQNbBIyxcvyQgPi2uQ==/net.sunjiao.pytorchdetectionexample-n9cinNNg6poEN6AyrBDSpA==/base.apk!classes5.dex): dlopen failed: library "libtorch-code-gen.so" not found
2024-12-19 17:55:53.772 7148-7148 AndroidRuntime net.sunjiao.pytorchdetectionexample E FATAL EXCEPTION: main
Process: net.sunjiao.pytorchdetectionexample, PID: 7148
com.facebook.jni.CppException:
Unknown builtin op: torchvision::nms.
Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.
:
File "code/__torch__/torchvision/ops/boxes.py", line 128
_55 = __torch__.torchvision.extension._assert_has_ops
_56 = _55()
_57 = ops.torchvision.nms(boxes, scores, iou_threshold)
~~~~~~~~~~~~~~~~~~~ <--- HERE
return _57
at org.pytorch.NativePeer.initHybrid(Native Method)
at org.pytorch.NativePeer.<init>(NativePeer.java:27)
at org.pytorch.Module.load(Module.java:28)
at org.pytorch.Module.load(Module.java:38)
at net.sunjiao.pytorchdetectionexample.MainActivityKt.LoadModelButton$lambda$0(MainActivity.kt:64)
at net.sunjiao.pytorchdetectionexample.MainActivityKt.$r8$lambda$tsHf2Yc3D2EpbqvM6adjyUQecUc(Unknown Source:0)
at net.sunjiao.pytorchdetectionexample.MainActivityKt$$ExternalSyntheticLambda0.invoke(D8$$SyntheticClass:0)
at androidx.compose.foundation.ClickablePointerInputNode$pointerInput$3.invoke-k-4lQ0M(Clickable.kt:987)
at androidx.compose.foundation.ClickablePointerInputNode$pointerInput$3.invoke(Clickable.kt:981)
at androidx.compose.foundation.gestures.TapGestureDetectorKt$detectTapAndPress$2$1.invokeSuspend(TapGestureDetector.kt:255)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTaskKt.resume(DispatchedTask.kt:179)
at kotlinx.coroutines.DispatchedTaskKt.dispatch(DispatchedTask.kt:168)
at kotlinx.coroutines.CancellableContinuationImpl.dispatchResume(CancellableContinuationImpl.kt:474)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl(CancellableContinuationImpl.kt:508)
at kotlinx.coroutines.CancellableContinuationImpl.resumeImpl$default(CancellableContinuationImpl.kt:497)
at kotlinx.coroutines.CancellableContinuationImpl.resumeWith(CancellableContinuationImpl.kt:368)
at androidx.compose.ui.input.pointer.SuspendingPointerInputModifierNodeImpl$PointerEventHandlerCoroutine.offerPointerEvent(SuspendingPointerInputFilter.kt:665)
at androidx.compose.ui.input.pointer.SuspendingPointerInputModifierNodeImpl.dispatchPointerEvent(SuspendingPointerInputFilter.kt:544)
at androidx.compose.ui.input.pointer.SuspendingPointerInputModifierNodeImpl.onPointerEvent-H0pRuoY(SuspendingPointerInputFilter.kt:566)
at androidx.compose.foundation.AbstractClickablePointerInputNode.onPointerEvent-H0pRuoY(Clickable.kt:947)
at androidx.compose.foundation.AbstractClickableNode.onPointerEvent-H0pRuoY(Clickable.kt:795)
at androidx.compose.ui.input.pointer.Node.dispatchMainEventPass(HitPathTracker.kt:317)
at androidx.compose.ui.input.pointer.Node.dispatchMainEventPass(HitPathTracker.kt:303)
at androidx.compose.ui.input.pointer.Node.dispatchMainEventPass(HitPathTracker.kt:303)
at androidx.compose.ui.input.pointer.NodeParent.dispatchMainEventPass(HitPathTracker.kt:185)
at androidx.compose.ui.input.pointer.HitPathTracker.dispatchChanges(HitPathTracker.kt:104)
at androidx.compose.ui.input.pointer.PointerInputEventProcessor.process-BIzXfog(PointerInputEventProcessor.kt:113)
at androidx.compose.ui.platform.AndroidComposeView.sendMotionEvent-8iAsVTc(AndroidComposeView.android.kt:1576)
at androidx.compose.ui.platform.AndroidComposeView.handleMotionEvent-8iAsVTc(AndroidComposeView.android.kt:1527)
at androidx.compose.ui.platform.AndroidComposeView.dispatchTouchEvent(AndroidComposeView.android.kt:1466)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3122)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2803)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3122)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2803)
at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3122)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2803)
2024-12-19 17:55:53.773 7148-7148 AndroidRuntime net.sunjiao.pytorchdetectionexample E at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:3122)
at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2803)
at com.android.internal.policy.DecorView.superDispatchTouchEvent(DecorView.java:458)
at com.android.internal.policy.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1982)
at android.app.Activity.dispatchTouchEvent(Activity.java:4533)
at com.android.internal.policy.DecorView.dispatchTouchEvent(DecorView.java:416)
at android.view.View.dispatchPointerEvent(View.java:16737)
at android.view.ViewRootImpl$ViewPostImeInputStage.processPointerEvent(ViewRootImpl.java:7974)
at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:7732)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:7128)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:7185)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:7151)
at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:7317)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:7159)
at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:7374)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:7132)
at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:7185)
at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:7151)
at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:7159)
at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:7132)
at android.view.ViewRootImpl.deliverInputEvent(ViewRootImpl.java:10241)
at android.view.ViewRootImpl.doProcessInputEvents(ViewRootImpl.java:10192)
at android.view.ViewRootImpl.enqueueInputEvent(ViewRootImpl.java:10161)
at android.view.ViewRootImpl$WindowInputEventReceiver.onInputEvent(ViewRootImpl.java:10383)
at android.view.InputEventReceiver.dispatchInputEvent(InputEventReceiver.java:295)
at android.os.MessageQueue.nativePollOnce(Native Method)
at android.os.MessageQueue.next(MessageQueue.java:346)
at android.os.Looper.loopOnce(Looper.java:189)
at android.os.Looper.loop(Looper.java:317)
at android.app.ActivityThread.main(ActivityThread.java:8710)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:582)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:886)
Suppressed: kotlinx.coroutines.internal.DiagnosticCoroutineContextException: [androidx.compose.ui.platform.MotionDurationScaleImpl@c93bdeb, androidx.compose.runtime.BroadcastFrameClock@d87a748, StandaloneCoroutine{Cancelling}@16a59e1, AndroidUiDispatcher@6b10306]
2024-12-19 17:55:53.779 7148-7148 Process net.sunjiao.pytorchdetectionexample I Sending signal. PID: 7148 SIG: 9
---------------------------- PROCESS ENDED (7148) for package net.sunjiao.pytorchdetectionexample ----------------------------
```
### Versions
Failed to run it, so I'll give the environment manually:
```
Collecting environment information...
Traceback (most recent call last):
File "collect_env.py", line 692, in <module>
main()
File "collect_env.py", line 675, in main
output = get_pretty_env_info()
^^^^^^^^^^^^^^^^^^^^^
File "collect_env.py", line 670, in get_pretty_env_info
return pretty_str(get_env_info())
^^^^^^^^^^^^^^
File "collect_env.py", line 495, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "collect_env.py", line 450, in get_pip_packages
for line in out.splitlines()
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
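The `AttributeError` above comes from `get_pip_packages` calling `out.splitlines()` while `out` is `None`, i.e. the `pip` subprocess produced no output. A None-safe sketch of that helper (the `run_lambda` signature and return shape are assumptions for illustration, not the actual `collect_env.py` code):

```python
def get_pip_packages(run_lambda):
    """None-safe sketch of collect_env's helper.

    `run_lambda` is assumed to return (returncode, stdout, stderr);
    the real script crashes here when stdout is None.
    """
    rc, out, _err = run_lambda("pip list --format=freeze")
    if rc != 0 or out is None:
        # pip unavailable or produced no output: report nothing
        # instead of raising AttributeError on None.splitlines().
        return "pip", []
    return "pip", [line for line in out.splitlines() if "torch" in line]
```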
System environment:
```
sunjiao@arch-83al
-----------------
OS: Arch Linux x86_64
Host: 83AL XiaoXinPro 14 IRH8
Kernel: 6.12.4-zen1-1-zen
Uptime: 1 day, 21 hours, 59 mins
Packages: 1794 (pacman)
Shell: bash 5.2.37
Resolution: 2880x1800
DE: Plasma 6.2.4
WM: kwin
WM Theme: Lavanda-Sea-Light
Theme: [Plasma], FRESH-Blueberries [GTK2/3]
Icons: Fluent [Plasma], Fluent [GTK2/3]
Terminal: konsole
CPU: 13th Gen Intel i5-13500H (16) @ 4.700GHz
GPU: Intel Raptor Lake-P [Iris Xe Graphics]
Memory: 23426MiB / 31816MiB
```
Python version:
```
$ python --version
Python 3.12.7
```
Package version:
```
$ pip freeze | grep torch
pytorch-lightning==2.4.0
torch==2.5.1
torchaudio==2.5.1
torchmetrics==1.6.0
torchvision==0.20.1
```
Pytorch android package:
```
api("org.pytorch", "pytorch_android", "2.1.0")
api("org.pytorch", "pytorch_android_torchvision", "2.1.0")
```
| oncall: mobile | low | Critical |
2,749,836,163 | angular | Template HMR broken with Ionic, showing old template even after full page reload | ### Which @angular/* package(s) are the source of the bug?
compiler, compiler-cli, core
### Is this a regression?
No
### Description
First off, I'm not sure if this is a bug with Angular, Ionic, or both. I've created the issue here because this is still an experimental feature.
I've created a default [Ionic](https://github.com/ionic-team/ionic-framework) starter project and enabled the experimental template HMR with `NG_HMR_TEMPLATES=1`. I marked 3 different places within the app to test HMR in, all showing different behavior:
- `folder.page.html`: Testcase 1: Page component (directly navigated to)
- `my-custom-component.component.html`: Testcase 2: Nested custom component
- `app.component.html`: Testcase 3: Outside of ion-router-outlet
Details of the behavior are below and in comments in the files themselves, but a quick summary:
- Within the `ion-router-outlet`
- HMR does not work _live_ (for components currently visible on the page when the changes are made)
- HMR does not work _the first time the component is rendered_ after a full page reload, instead showing an outdated version of the component from the compiled chunk
- this may include multiple places within the application (due to component caching?)
- Outside the `ion-router-outlet` HMR works as expected, both live and after a full page reload
### Please provide a link to a minimal reproduction of the bug
https://github.com/ReneZeidler/hmr-ionic-bug
### Please provide the exception or error you saw
```
```
### Please provide the environment you discovered this bug in (run `ng version`)
```
Angular CLI: 19.0.6
Node: 20.18.1
Package Manager: npm 11.0.0
OS: linux x64
Angular: 19.0.5
... animations, common, compiler, compiler-cli, core, forms
... language-service, platform-browser, platform-browser-dynamic
... router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.6
@angular-devkit/build-angular 19.0.6
@angular-devkit/core 19.0.6
@angular-devkit/schematics 19.0.6
@angular/cli 19.0.6
@schematics/angular 19.0.6
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
### Reproduction
1. Clone the minimal reproduction repo
2. Run `npm start`
3. Go through each test case in turn by opening the corresponding template file and uncommenting the marked line
4. To start again from a clean slate, comment out all of the lines again and restart the dev server
### Testcases
Open the details to view my observations and conclusions for each testcase:
<details><summary>Testcase 1 (folder.page.html): Page component (directly navigated to)</summary>
<p>
Observations:
* The text does NOT appear on the current page
* Navigating to a different page in the sidebar causes the text to appear
* After navigating back to the first page in the sidebar, the text is still there
* After a full page reload the text does NOT appear on the current page
* Navigating to a different page causes the text to appear again
* Forcing a full rebuild of the chunk by changing the corresponding .ts file of this component "bakes" in the current version of the template as the one that is visible after a full page reload
Conclusions:
* HMR template changes are not applied live to components currently visible on the page
* After a full page reload, the first time a component is rendered it uses the original template from the compiled chunk, missing HMR changes
* Subsequent renders of the component have the HMR changes applied
</p>
</details>
<details><summary>Testcase 2 (my-custom-component.component.html): Nested custom component</summary>
<p>
Observations:
* The text does NOT appear on the current page
* The text still does NOT appear after navigating to a different page in the sidebar
* The text still does NOT appear after a full page reload
* Forcing a full rebuild of the chunk by changing the corresponding .ts file of this component "bakes" in the current version of the template as the one that is visible after a full page reload
Conclusions:
* It seems like the nested component is only rendered once across all pages (caching?), and that render uses the outdated version of the template
</p>
</details>
<details><summary>Testcase 3 (app.component.html): Outside of ion-router-outlet</summary>
<p>
Observations:
* The text DOES appear live on the current page
* The text DOES appear after a full page reload
Conclusions:
* HMR works as expected outside of ion-router-outlet
</p>
</details> | area: compiler | low | Critical |
2,749,863,038 | next.js | Can't Navigate from App Router to Pages Router in Dev Mode | ### Link to the code that reproduces this issue
https://github.com/Henrixounez/repro-next-navigating-app-to-pages-in-dev-mode
### To Reproduce
1. Start the application in development (`npm run dev`)
2. Access the Home page in App Router `/`
3. Click on links that should redirect to the About page in Pages Router
4. It doesn't work and reloads the page (and in the terminal it tries to compile the `/about` page but we are redirected back to `/`)
https://github.com/user-attachments/assets/f5416d80-23e8-497a-a7b8-ee7d3631fba0
👉 However it works when the application is built and started
### Current vs. Expected behavior
I expect to be redirected to the `/about` page when clicking on a `<Link/>` or `router.push` with href to `/about`
```tsx
<Link href="/about">
To about page (Link)
</Link>
```
```tsx
<button onClick={() => router.push("/about")}>
To about page (Router)
</button>
```
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:34 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.11.0
npm: 10.2.4
Yarn: 1.22.17
pnpm: 9.6.0
Relevant Packages:
next: 15.1.1 // Latest available version is detected (15.1.1).
eslint-config-next: 15.1.1
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | Navigation | low | Minor |
2,749,883,512 | godot | Big memory usage when there are too many `Area2D` collision pairs | ### Tested versions
- Reproducible in: v4.3.stable.mono.official [77dcf97d8]
### System information
Windows 10 - Vulkan(Foward+) - Nvidea 1070
### Issue description
~~There seems to be a big memory leak when spawning too many **Area2D's** too fast.~~
~~This leak also happens when using **JoltPhysics2D**.~~
**Edit:**
> Area2D uses too much memory when there are a lot of collision pairs.
> Comparing them with CharacterBody2D, they use more than 3 times the memory
> CharacterBody2D: 500k pairs | **279Mb** memory
> Area2D: 500k pairs | **906 MB**
> See: https://github.com/godotengine/godot/issues/100600#issuecomment-2557028345
Here it is **working normally**, spawning moving bullets.
Because the FPS drops, it spawns slower and doesn't leak memory.
**298 MB** static memory after 1000 `Area2D` spawned.
And 132k collision pairs
https://github.com/user-attachments/assets/cfc1854f-aad0-4e1f-a869-765d8605d369
---
And here it is **leaking memory**.
When I try to spawn non-moving bullets.
It spawns them very fast. But the memory goes up very fast.
**906 MB** static memory after 1000 `Area2D` spawned.
And 500k collision pairs
https://github.com/user-attachments/assets/8a5a6e4e-4087-443e-8502-776735114c51
I've also tested the same with Node2D and Character2D but they work fine.
This only happens with `Area2D`s
---
PS: It's strange that the FPS in the MRP tanks to 4 FPS so fast with only 500+ nodes, too.
I was getting a stable 40 frames with 2k bullets on my other project.
I thought maybe it was because I was spawning them there with coroutines, but the performance on the MRP stays at 4 FPS even after it finishes spawning the 1k bullets.
So something else might be wrong.
---
As a side-note, when the memory-leak example finishes spawning my computer starts screeching.
Which I'm guessing is my old HDD asking for help.
This also rarely happens here when windows starts indexing or reading tons of small files.
Not sure this is related but a fun tidbit.
### Steps to reproduce
MRP:
- Run the MRP project
- Look at the monitor tab static memory
Expected: NOT to increase so much; around 200-300 MB.
Result: static memory 900+ MB
Non-MRP:
- Spawn 5 Area2D's each frame
Expected: Normal static memory increase.
Result: static memory increase very fast.
### Minimal reproduction project (MRP)
[MRP_memory-leak-physics.zip](https://github.com/user-attachments/files/18196183/MRP_memory-leak-physics.zip)
| bug,topic:core,topic:2d | low | Major |
2,749,925,125 | vscode | Remove the extra steps when installing updates to VS Code | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Currently, updating Visual Studio Code involves the following steps:
1. Click the cog.
2. Click the "Install Update" button.
3. Enter admin credentials.
4. **Wait a few seconds.**
5. **Click the cog.**
6. **Click the "Update Now" button.**
7. Wait for the update to install.
I would like this to be simplified to the following steps:
1. Click the cog.
2. Click the "Install Update" button.
3. Enter admin credentials.
4. Wait for update to install.
The extra couple of clicks are a minor annoyance that adds to all the other minor annoyances throughout the day, which lead to eventual subsurface rage that has no time or space to dissipate over the short weekends, surrounded by screaming kids, before returning to the fray again on Monday. | bug,install-update | low | Minor |
2,749,947,261 | flutter | [Impeller] Image renders outside the viewBox when a `mix-blend-mode` other than normal is used on iOS | ### Steps to reproduce
1. Create an application
2. Paste code in sample
3. Run on iOS
### Expected results
When I launch with flutter 3.24.5:
<img width="360" alt="image" src="https://github.com/user-attachments/assets/486e00dc-e9c7-4729-8f8e-a08aa9f4bdab" />
### Actual results
When I launch with flutter 3.27.2:
<img width="360" alt="image" src="https://github.com/user-attachments/assets/e49657c0-aca9-4bb7-b7b9-540f3ca6c51e" />
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: SafeArea(
child: Column(
children: [
SvgPicture.string(svgDataWithSoftLight),
SvgPicture.string(svgDataWithSoftLightAfter),
],
),
),
),
);
}
}
const svgDataWithSoftLight = '''
<svg width="100" height="100" viewBox="0 0 100 100" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="100" height="100" x="0" y="0" fill="green" />
<g style="mix-blend-mode:soft-light">
<rect width="100" height="100" x="0" y="0" fill="yellow" />
</g>
<rect width="100" height="100" x="99" y="0" fill="red" />
</svg>
''';
const svgDataWithSoftLightAfter = '''
<svg width="100" height="100" viewBox="0 0 100 100" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="100" height="100" x="0" y="0" fill="green" />
<rect width="100" height="100" x="99" y="0" fill="red" />
<g style="mix-blend-mode:soft-light">
<rect width="100" height="100" x="0" y="0" fill="yellow" />
</g>
</svg>
''';
```
</details>
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs</summary>
https://pastebin.com/guQKY2qx
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale fr-FR)
• Flutter version 3.27.1 on channel stable at /Users/user/fvm/versions/stable
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (3 days ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/user/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.96.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (1 available)
• iPhone 16 Plus (mobile) • 129F45AF-B27B-4621-826A-2DD69D97C064 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-2 (simulator)
[✓] Network resources
• All expected network resources are available.
```
</details>
| package,has reproducible steps,team-engine,found in release: 3.27,p: flutter_svg,found in release: 3.28 | low | Minor |
2,749,947,590 | PowerToys | New feature request: Mouse jump | ### Description of the new feature / enhancement
Because of the issue described in "Scenario", I think it would be nice if there was a feature that would:
1. when a window receives focus with Alt+Tab, check if the mouse is already on that screen and, if not, move the mouse to the last saved position on that screen.
2. if there is no saved position, move the mouse to the middle of the screen of the window receiving focus.
3. remember/save the mouse position for each screen whenever the mouse is on a screen that I Alt+Tab away from (to a window on another screen).
It may need some experimentation to find the right behavior, but if done right I am hopeful that such a tool would prevent more frustration than it would cause. ;-) (Despite the smiley, I am very serious and hopeful regarding this possible feature.)
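The three rules above could be sketched as follows (a minimal illustration only; the class name, the focus-switch hook, and the screen identifiers are my assumptions, not an existing PowerToys API):

```python
class MouseJumper:
    """Sketch of the requested 'mouse jump' behavior on Alt+Tab."""

    def __init__(self):
        self.saved = {}  # screen id -> last (x, y) the mouse had on that screen

    def on_alt_tab(self, from_screen, to_screen, mouse_pos, target_center):
        """Return where the mouse should move when focus switches screens."""
        if from_screen == to_screen:
            # Mouse is already on the target screen: leave it alone.
            return mouse_pos
        # Rule 3: remember where the mouse was on the screen we leave.
        self.saved[from_screen] = mouse_pos
        # Rule 1: jump to the last saved position on the target screen,
        # Rule 2: or to its center if no position was saved yet.
        return self.saved.get(to_screen, target_center)
```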
### Scenario when this would be used?
I am working in a multi-monitor setup with 3, sometimes 4, monitors.
I often find myself quite annoyed and frustrated by the fact that when I switch the focus to a different window on a different screen (mostly with Alt+Tab), the mouse cursor is left behind way off on another screen.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Product-Mouse Utilities | low | Minor |
2,749,958,621 | PowerToys | Improvement suggestion for Find My Mouse (should also work by dbl-pressing the right Ctrl) | ### Description of the new feature / enhancement
Find My Mouse (see picture in German) should also work by double-pressing the right Ctrl.
So far it seems to work only by double-pressing the left Ctrl.

### Scenario when this would be used?
Whenever I want to use the Find My Mouse feature and my left hand is busy with something else.
Even though I am right handed and have run into the mentioned limitation already, the improvement could also be beneficial for left handed persons.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Product-Mouse Utilities | low | Minor |
2,749,963,476 | kubernetes | Don't copy whole response during response marshalling | ### What would you like to be added?
When experimenting with measuring memory usage for large LIST requests I noticed one thing that surprised me. It's expected that apiserver requires a lot of memory when listing from etcd. It needs to fetch the data, decode etcd, however what about listing from cache?
I was surprised when listing from cache still required gigabytes of data (10 concurrent list of 1.5GB data increased memory usage by 22GB). Why? apiserver has all the data it needs, we copy structure of data (e.g. converting between types), but data for that should be miniscule. Most data is stored should come from strings, which are immutable. This lead me to revisit old discussion I had with @mborsz and his experimental work on streaming lists from etcd https://github.com/kubernetes/kubernetes/compare/master...mborsz:kubernetes:streaming, he proposed to implement custom streaming encoder. I looked at current implementation of encoding:
* https://github.com/kubernetes/kubernetes/blob/29101e9774e318f502cbdf68080bafe7c5794124/staging/src/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go#L245-L246
* https://github.com/golang/go/blob/4f0561f9d354233787de7aa9eff8119a2d4cd5c6/src/encoding/json/stream.go#L223-L233
The built-in json library encoder still marshals the whole value and writes it at once. While this is fine for single objects that weigh up to 2MB, it's bad for LIST responses, which can be up to 2GB.
I did a PoC that confirmed my suspicions: https://github.com/kubernetes/kubernetes/compare/master...serathius:kubernetes:streaming-encoder-list. A simple hack over the encoder reduced the memory needed from 26GB to 4GB.
Proposal:
* Validate different options for streaming JSON and Proto encoder for LIST responses and enable it.
Options:
* Custom encoder for List objects based on reflections https://github.com/kubernetes/kubernetes/compare/master...serathius:kubernetes:streaming-encoder-list
* Define an interface for streaming encoding and generate code for all implementations
* Pick one of the available streaming-encoding libraries.
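The core idea behind the custom-streaming-encoder option can be shown in a few lines (Python here purely for illustration; the real encoder would live in the apiserver's Go serializers): instead of marshalling the whole list into one buffer, write the opening bracket, then each item, then the closing bracket, so peak allocation is per item rather than per response.

```python
import io
import json

def stream_encode_list(items, out):
    """Write a JSON array item by item instead of marshalling it whole.

    json.dumps(items) builds one buffer proportional to the entire
    response; here only a single item is materialised at a time.
    """
    out.write("[")
    for i, item in enumerate(items):
        if i:
            out.write(",")
        # Per-item allocation only; compact separators keep output canonical.
        out.write(json.dumps(item, separators=(",", ":")))
    out.write("]")

buf = io.StringIO()
stream_encode_list([{"name": "pod-0"}, {"name": "pod-1"}], buf)
encoded = buf.getvalue()
```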
Other thoughts:
* Could also be expanded to client decoders
* Similar to https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/3157-watch-list/README.md
### Why is this needed?
For LISTs served from the watch cache, it prevents allocating memory proportional to the size of the response. This makes memory usage more proportional to CPU usage, improving the cost estimations of APF, which cares only about memory.
For LISTs served from etcd there is an ongoing proposal to serve them from cache. | sig/api-machinery,kind/feature,triage/accepted | medium | Critical |