id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,699,458,792 | angular | Check if a required input is provided using `withComponentInputBinding` | ### Which @angular/* package(s) are relevant/related to the feature request?
compiler
### Description
When using `withComponentInputBinding` in ApplicationConfig, I can pass:
- query parameters
- path and matrix parameters
- static route data
- data from resolvers
to a component via inputs. But even if I define my input as required, it can be `undefined` when not provided, without any compilation error.
```ts
export abstract class ExampleComponent {
myData = input.required<MyData>();
}
```
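For context, here is a minimal sketch of the `ApplicationConfig` wiring I mean; the route and component below are illustrative placeholders, not taken from my actual app:
```ts
import { ApplicationConfig, Component, input } from '@angular/core';
import { provideRouter, withComponentInputBinding } from '@angular/router';

// Illustrative route component with a required signal input that the router
// is expected to populate from the matching path parameter.
@Component({ standalone: true, template: '{{ myData() }}' })
class ExampleRouteComponent {
  myData = input.required<string>();
}

export const appConfig: ApplicationConfig = {
  providers: [
    provideRouter(
      [{ path: 'example/:myData', component: ExampleRouteComponent }],
      // Binds path/query/matrix params, static data and resolver results to inputs.
      withComponentInputBinding(),
    ),
  ],
};
```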
### Proposed solution
It would be great if the required inputs could be checked during compilation, with an error thrown if one is missing, just like it works for components instantiated via their selector:
```
Required input 'myData' from component ExampleComponent must be specified.
```
I don't think query/matrix parameters could be checked, but are the others checkable?
### Alternatives considered
None, open to suggestions. | feature,area: router | low | Critical |
2,699,529,694 | flutter | Support all `Image` widget properties on `RawWebImage` | ### Use case
In https://github.com/flutter/flutter/pull/157755, an initial minimum viable product implementation of `WebImage` is added to Flutter. It supports some of the most commonly used `Image` properties, but not all of them. The missing properties are:
* `color`
* `opacity`
* `colorBlendMode`
* `repeat`
* `centerSlice`
* `invertColors`
* `isAntiAlias`
* `filterQuality`
These properties affect the rendering of the image. Since `RawWebImage` is supposed to be a drop-in replacement for `RawImage`, it should also support these properties.
### Proposal
Extend `RawWebImage` to support the missing properties. Some of the properties are easier to implement than others.
* `opacity`, `color` and `colorBlendMode` could probably be implemented by simply wrapping the `HTMLImageElement` in the appropriate `Opacity` and `ColorFiltered` widgets and letting the web engine handle setting the CSS properties on the platform view.
* `centerSlice` could maybe be implemented using the CSS `border-image-slice` property: https://developer.mozilla.org/en-US/docs/Web/CSS/border-image-slice
* `repeat` could maybe be implemented using the `background-repeat` CSS property: https://developer.mozilla.org/en-US/docs/Web/CSS/background-repeat
* `isAntiAlias` and `filterQuality` could maybe be approximated with the `image-rendering` CSS property: https://developer.mozilla.org/en-US/docs/Web/CSS/image-rendering | platform-web,e: web_canvaskit,P2,e: web_skwasm,team-web,triaged-web | low | Minor |
2,699,634,668 | go | x/tools/gopls: DidModifyFiles: "non-abs file path %q" bug in port.matches | ```
#!stacks
"bug.Reportf" && "cache.port.matches:+5"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
```go
func (p port) matches(path string, content []byte) bool {
ctxt := build.Default // make a copy
ctxt.UseAllFiles = false
path = filepath.Clean(path)
if !filepath.IsAbs(path) {
bug.Reportf("non-abs file path %q", path) // <---------
return false // fail closed
...
```
This stack `NaAAKw` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-11-26.json):
- `gopls/bug`
- [`golang.org/x/tools/gopls/internal/util/bug.report:+35`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/util/bug/bug.go;l=109)
- [`golang.org/x/tools/gopls/internal/util/bug.Reportf:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/util/bug/bug.go;l=54)
- [`golang.org/x/tools/gopls/internal/cache.port.matches:+5`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/cache/port.go;l=146)
- `golang.org/x/tools/gopls/internal/cache.matchingView[...]:+23`
- [`golang.org/x/tools/gopls/internal/cache.(*Session).viewOfLocked:+15`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/cache/session.go;l=464)
- [`golang.org/x/tools/gopls/internal/cache.(*Session).DidModifyFiles:+138`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/cache/session.go;l=891)
- [`golang.org/x/tools/gopls/internal/server.(*server).didModifyFiles:+36`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/server/text_synchronization.go;l=256)
- [`golang.org/x/tools/gopls/internal/server.(*server).DidOpen:+20`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/server/text_synchronization.go;l=114)
- [`golang.org/x/tools/gopls/internal/protocol.serverDispatch:+253`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/protocol/tsserver.go;l=423)
- [`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.ServerHandler.func3:+5`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/protocol/protocol.go;l=160)
- [`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.handshaker.func4:+52`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:gopls/internal/lsprpc/lsprpc.go;l=509)
- [`golang.org/x/tools/gopls/internal/protocol.Handlers.MustReplyHandler.func1:+2`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:internal/jsonrpc2/handler.go;l=35)
- [`golang.org/x/tools/gopls/internal/protocol.Handlers.AsyncHandler.func2.2:+3`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.2:internal/jsonrpc2/handler.go;l=104)
- `runtime.goexit:+0`
```
golang.org/x/tools/[email protected] go1.23.2 windows/amd64 vscode (2)
```
Dups: HAiIaQ TeKpaQ | NeedsInvestigation,gopls,Tools,gopls/telemetry-wins | low | Critical |
2,699,636,205 | vscode | submodule support in ui for vscode web / github.dev | vscode for web is useless when a repo has a submodule
<img src="https://github.com/user-attachments/assets/5bf2a954-8f13-4977-964d-e675bf922bf8" width="30%" />
The UI should have a menu option to pull and update submodules without needing terminal access (which github.dev doesn't provide). | feature-request,vscode.dev | low | Minor |
2,699,637,819 | ollama | Model Context Protocol (MCP) support | Model Context Protocol as the name suggests standardizes the external datasource interaction.
+ the fact that is completely open source opens up the path for faster collaboration/progress imo
[Official Github](https://github.com/modelcontextprotocol/)
[15-minute-walkthrough-yt](https://github.com/modelcontextprotocol/)
| feature request | medium | Critical |
2,699,645,517 | rust | Tracking Issue for `BTreeSet` entry APIs | Feature gate: `#![feature(btree_set_entry)]`
This is a tracking issue for `Entry` and entry-like methods on `BTreeSet`.
### Public API
```rust
impl<T, A: Allocator + Clone> BTreeSet<T, A> {
pub fn get_or_insert(&mut self, value: T) -> &T
where
T: Ord,
{...}
pub fn get_or_insert_with<Q: ?Sized, F>(&mut self, value: &Q, f: F) -> &T
where
T: Borrow<Q> + Ord,
Q: Ord,
F: FnOnce(&Q) -> T,
{...}
pub fn entry(&mut self, value: T) -> Entry<'_, T, A>
where
T: Ord,
{...}
}
pub enum Entry<'a, T, A: Allocator + Clone = Global> {
Occupied(OccupiedEntry<'a, T, A>),
Vacant(VacantEntry<'a, T, A>),
}
pub struct OccupiedEntry<'a, T, A: Allocator + Clone = Global> {...}
pub struct VacantEntry<'a, T, A: Allocator + Clone = Global> {...}
impl<T: Debug + Ord, A: Allocator + Clone> Debug for Entry<'_, T, A> {...}
impl<T: Debug + Ord, A: Allocator + Clone> Debug for OccupiedEntry<'_, T, A> {...}
impl<T: Debug + Ord, A: Allocator + Clone> Debug for VacantEntry<'_, T, A> {...}
impl<'a, T: Ord, A: Allocator + Clone> Entry<'a, T, A> {
pub fn insert(self) -> OccupiedEntry<'a, T, A> {...}
pub fn or_insert(self) {...}
pub fn get(&self) -> &T {...}
}
impl<'a, T: Ord, A: Allocator + Clone> OccupiedEntry<'a, T, A> {
pub fn get(&self) -> &T {...}
pub fn remove(self) -> T {...}
}
impl<'a, T: Ord, A: Allocator + Clone> VacantEntry<'a, T, A> {
pub fn get(&self) -> &T {...}
pub fn into_value(self) -> T {...}
pub fn insert(self) {...}
}
```
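For illustration (not part of the API listing above), a hedged usage sketch on nightly; the `Entry` import path is assumed to mirror `btree_map`:
```rust
#![feature(btree_set_entry)]

use std::collections::btree_set::{BTreeSet, Entry};

fn main() {
    let mut set = BTreeSet::new();
    set.insert("a");

    // Occupied/Vacant handling, analogous to BTreeMap::entry.
    match set.entry("b") {
        Entry::Occupied(occupied) => println!("already present: {}", occupied.get()),
        Entry::Vacant(vacant) => vacant.insert(),
    }

    // Returns a reference to the value, inserting it first if absent.
    assert_eq!(*set.get_or_insert("c"), "c");
}
```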
### Steps / History
- [x] Implementation: #133548
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
See also #60896 for `HashSet`.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Critical |
2,699,659,932 | pytorch | Re-evaluate `nonzero` chunks with CCCL 2.8 | ### 🚀 The feature, motivation and pitch
This PR introduced chunking:
https://github.com/pytorch/pytorch/pull/141592
Here is the native upstream support in CCCL 2.8:
https://github.com/NVIDIA/cccl/pull/2400#issuecomment-2494987280
/cc @ptrblck @msaroufim @eqy @ngimel @ezyang
### Alternatives
_No response_
### Additional context
_No response_ | module: cuda,triaged | low | Major |
2,699,683,157 | next.js | Combining root catchAll route + parallel nested route causes build failure | Example setup that causes failure:
./app/[...segment]/page.tsx
./app/someroute/page.tsx
./app/someroute/layout.tsx
./app/someroute/@someslot/subroute/page.tsx
### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/distracted-chatterjee-6vjhh9?workspaceId=6caaae1b-8862-4fe4-9ff9-fe868ceec1a7
### To Reproduce
1. Run pnpm build
2. The build will fail
### Current vs. Expected behavior
The app should build /[...segment], /someroute, and /someroute/subroute but instead the build fails with `TypeError: Cannot read properties of undefined (reading 'entryCSSFiles')`
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.4-canary.29 // Latest available version is detected (15.0.4-canary.29).
eslint-config-next: N/A
react: 19.0.0-rc-b01722d5-20241114
react-dom: 19.0.0-rc-b01722d5-20241114
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
_No response_ | bug,Parallel & Intercepting Routes | low | Critical |
2,699,705,191 | flutter | Exposing the iOS `FlutterTextInputView` to control keyboard directly | ### Use case
Searching for how to add a Done button to a number keyboard in Flutter turns up thousands of results and no solution...
Issues such as #12220 get closed with no resolution, or point you to https://pub.dev/packages/keyboard_actions, which, while popular, is not native and has its issues (functionality / animation / look & feel).
### Proposal
Since you can put almost anything above the keyboard in iOS, a pure Flutter solution is impossible. There will always be someone who finds a missing feature. So the only real option that covers everyone is to expose the `UIView` and add your own `UIToolbar` implementation.
I looked at the engine code and found `FlutterTextInputView` inside shell/platform/darwin/ios/framework/Source
If we could somehow expose this to the `AppDelegate` via a callback, in response to the user interacting with a widget that opens a keyboard, then we could set up the keyboard any way we'd like (apart from the values already settable from Flutter).
Flutter sets its values in `- (void)configureWithDictionary:(NSDictionary*)configuration`. If a callback providing `self` were raised from here, we could do additional modifications on the Swift side.
I am willing to look at this if this is an approach that is accepted. Although I have very little experience in Objective-C / Swift... | a: text input,c: new feature,platform-ios,engine,c: proposal,P3,team-ios,triaged-ios | low | Minor |
2,699,718,189 | flutter | [file_selector_android] Attempting to return null paths from `FileUtils.java` results in an `IllegalStateException` | `FileUtils.java` will return a null path in a couple of exception cases:
https://github.com/flutter/packages/blob/main/packages/file_selector/file_selector_android/android/src/main/java/dev/flutter/packages/file_selector_android/FileUtils.java#L139
The idea behind this approach was to provide an alternative to throwing errors in Java code, which would crash the app (this originated in `image_picker`; the two plugins share much of this code)
https://github.com/flutter/packages/pull/4004
But this approach doesn't actually prevent a crash, because the path is marked as not nullable. The path gets returned
https://github.com/flutter/packages/blob/main/packages/file_selector/file_selector_android/android/src/main/java/dev/flutter/packages/file_selector_android/FileSelectorApiImpl.java#L360
and then the builder will try to set that path on the object, at which point the generated Java object does the following check
https://github.com/flutter/packages/blob/main/packages/file_selector/file_selector_android/android/src/main/java/dev/flutter/packages/file_selector_android/GeneratedFileSelectorApi.java#L77
and throws an `IllegalStateException` if the path is null.
We should probably switch to surfacing exceptions in dart, following the pattern introduced in https://github.com/flutter/packages/pull/8184.
`image_picker` is likely also affected, though I haven't checked. | platform-android,package,P2,p: file_selector,team-android,triaged-android | low | Critical |
2,699,747,133 | react | [React 19] Dynamically importing a data fetching hook via `use()` leads to an error | ### Describe the bug
I've been trying to create a component which would encapsulate data loading logic, to have the most granular approach to using `<Suspense/>`, avoiding UI duplication with skeletons. My goal was to have _one component_ which would handle suspending, fallback UI, loading, polling, errors, etc, and be used in a _server component_ like so:
```tsx
<AsyncValue
query={useSomeQuery} // query wrapper fn or potentially a query wrapper fn name string
queryProps={{
itemId: "0001", // optional query props
}}
dataKey="name" // optional accessor to a value which `extends ReactNode` in case an object is returned
render={Value} // Some valid React component
/>
```
This, **afaik**, would result in everything but the suspended value being pre-rendered server-side, eliminating the need to care about large skeletons, duplicating layout components or what not. Thought that would be neat?
I have arrived at a working solution, but a hacky one, as it used `throw` to trigger `<Suspense />`. Then it struck me that we now have the `use()` hook to handle exactly what I thought was needed – keeping a component suspended while dynamically importing a hook. However, when I tried using it I saw:
`Error: Update hook called on initial render. This is likely a bug in React. Please file an issue.`
Even though it seems to function: the data fetch is fired on the server, the response is streamed to the client, and the hook then proceeds to poll and update the value.
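For reference, a minimal sketch of the pattern I mean (module path and names are illustrative, not the sandbox code):
```tsx
import { use } from "react";

// Start the dynamic import once at module scope so the same promise is reused
// across renders (re-creating it per render would suspend indefinitely).
const hookModule = import("./useSomeQuery"); // hypothetical module path

export function AsyncValue({ queryProps }: { queryProps: Record<string, unknown> }) {
  // use() suspends the component until the import resolves...
  const { useSomeQuery } = use(hookModule);
  // ...and the dynamically imported hook is then called during render.
  const data = useSomeQuery(queryProps);
  return <>{String(data)}</>;
}
```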
TLDR:
- Is dynamically importing hooks even viable, or does it ruin some logic under the hood?
- If it is viable, can it be done via the `use()` hook to trigger `<Suspense />` boundaries?
### Reproducible example
https://codesandbox.io/p/devbox/6xmt4y
### Expected behavior
It should work with `use()` just like it does without it.
### React version
19.0.0-rc-2d16326d-20240930
### Related issue
https://github.com/TanStack/query/issues/8362
| Resolution: Needs More Information,React 19 | low | Critical |
2,699,801,738 | yt-dlp | Stripchat livestream download ends abruptly | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Europe
### Provide a description that is worded well enough to be understood
For some time now, the download of the livestream has been interrupted, although the stream keeps running in the browser. At the same time, the livestream download of the same model on Chaturbate runs without interruption.
Sometimes the download stops after a few seconds, sometimes after a few minutes. The simultaneous download of the same model on Chaturbate continues without any problems.
The problem generally occurs with all models I tried.
Unfortunately I see no reason for the interruption in the logs. A different ISP also causes the same problem.
Thanks in advance.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--no-part', 'https://stripchat.com/kaydenwithpaul']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [4b5eec0aa] (zip)
[debug] Python 3.11.2 (CPython x86_64 64bit) - Linux-6.1.0-28-amd64-x86_64-with-glibc2.36 (OpenSSL 3.0.15 3 Sep 2024, glibc 2.36)
[debug] exe versions: ffmpeg 5.1.6-0 (setts), ffprobe 5.1.6-0, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.46.0, pyxattr-0.8.1, requests-2.28.1, sqlite3-3.40.1, urllib3-1.26.12, websockets-10.4
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[Stripchat] Extracting URL: https://stripchat.com/kaydenwithpaul
[Stripchat] kaydenwithpaul: Downloading webpage
[Stripchat] kaydenwithpaul: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: best/bestvideo+bestaudio
[info] kaydenwithpaul: Downloading 1 format(s): hls-6
[debug] Invoking ffmpeg downloader on "https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8"
[download] Destination: kaydenwithpaul 2024-11-26 19_35 [kaydenwithpaul].mp4
[debug] ffmpeg command line: ffmpeg -y -loglevel verbose -headers 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Sec-Fetch-Mode: navigate
' -i https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8 -c copy -f mpegts 'file:kaydenwithpaul 2024-11-26 19_35 [kaydenwithpaul].mp4'
ffmpeg version 5.1.6-0+deb12u1 Copyright (c) 2000-2024 the FFmpeg developers
built with gcc 12 (Debian 12.2.0-14)
configuration: --prefix=/usr --extra-version=0+deb12u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librist --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-sndio --enable-libjxl --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-libplacebo --enable-librav1e --enable-shared
libavutil 57. 28.100 / 57. 28.100
libavcodec 59. 37.100 / 59. 37.100
libavformat 59. 27.100 / 59. 27.100
libavdevice 59. 7.100 / 59. 7.100
libavfilter 8. 44.100 / 8. 44.100
libswscale 6. 7.100 / 6. 7.100
libswresample 4. 7.100 / 4. 7.100
libpostproc 56. 6.100 / 56. 6.100
[tcp @ 0x56326425adc0] Starting connection attempt to 195.181.175.21 port 443
[tcp @ 0x56326425adc0] Successfully connected to 195.181.175.21 port 443
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:25.864+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:27.870+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:29.854+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[tcp @ 0x563264562640] Starting connection attempt to 195.181.175.21 port 443
[tcp @ 0x563264562640] Successfully connected to 195.181.175.21 port 443
[AVIOContext @ 0x5632645b7240] Statistics: 1238 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2233_p03WU4bZhAAHRpv4_1732646125.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2233_p03WU4bZhAAHRpv4_1732646125.mp4' for reading
[tcp @ 0x563264563f80] Starting connection attempt to 195.181.170.2 port 443
[tcp @ 0x563264563f80] Successfully connected to 195.181.170.2 port 443
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2234_JF9hvoL4zSt1dQRg_1732646127.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2234_JF9hvoL4zSt1dQRg_1732646127.mp4' for reading
[tcp @ 0x563264533d00] Starting connection attempt to 195.181.175.21 port 443
[tcp @ 0x563264533d00] Successfully connected to 195.181.175.21 port 443
[h264 @ 0x5632645bbd40] Reinit context to 1920x1088, pix_fmt: yuv420p
Input #0, hls, from 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8':
Duration: N/A, start: 4468.133063, bitrate: 0 kb/s
Program 0
Metadata:
variant_bitrate : 0
Stream #0:0: Video: h264 (Main), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, left), 1920x1080 (1920x1088) [SAR 1:1 DAR 16:9], 0 kb/s, 60 fps, 60 tbr, 90k tbn (default)
Metadata:
variant_bitrate : 0
compatible_brands: iso5iso6mp41
major_brand : iso5
minor_version : 512
encoder : Lavf60.3.100
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 0 kb/s (default)
Metadata:
variant_bitrate : 0
[mpegts @ 0x5632645bcfc0] service 1 using PCR in pid=256, pcr_period=100ms
[mpegts @ 0x5632645bcfc0] muxrate VBR, sdt every 500 ms, pat/pmt every 100 ms
Output #0, mpegts, to 'file:kaydenwithpaul 2024-11-26 19_35 [kaydenwithpaul].mp4':
Metadata:
encoder : Lavf59.27.100
Stream #0:0: Video: h264 (Main), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, left), 1920x1080 (0x0) [SAR 1:1 DAR 16:9], q=2-31, 0 kb/s, 60 fps, 60 tbr, 90k tbn (default)
Metadata:
variant_bitrate : 0
compatible_brands: iso5iso6mp41
major_brand : iso5
minor_version : 512
encoder : Lavf60.3.100
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 0 kb/s (default)
Metadata:
variant_bitrate : 0
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
Automatically inserted bitstream filter 'h264_mp4toannexb'; args=''
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2235_P4cQGsBdid30fhMI_1732646129.mp4', offset 0, playlist 0
[https @ 0x56326426cd00] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2235_P4cQGsBdid30fhMI_1732646129.mp4' for reading
[tcp @ 0x5632645e7700] Starting connection attempt to 195.181.175.37 port 443
[tcp @ 0x5632645e7700] Successfully connected to 195.181.175.37 port 443
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:29.854+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:31.853+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:33.860+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x56326426cd00] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[AVIOContext @ 0x563264533440] Statistics: 3224876 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2236_dcxlPmBdLhquzXYK_1732646131.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2236_dcxlPmBdLhquzXYK_1732646131.mp4' for reading
[tcp @ 0x563264563f80] Starting connection attempt to 195.181.175.38 port 443
[tcp @ 0x563264563f80] Successfully connected to 195.181.175.38 port 443
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2237_bB0oTCwhCYSQGV09_1732646133.mp4', offset 0, playlist 0
[https @ 0x563264597180] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2237_bB0oTCwhCYSQGV09_1732646133.mp4' for reading
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:31.853+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:33.860+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:35.869+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x563264597180] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[AVIOContext @ 0x5632648ce4c0] Statistics: 3046260 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2238_vfeOWJiVS62MUkQi_1732646135.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2238_vfeOWJiVS62MUkQi_1732646135.mp4' for reading
[tcp @ 0x563264543c80] Starting connection attempt to 195.181.175.21 port 443
[tcp @ 0x563264543c80] Successfully connected to 195.181.175.21 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:33.860+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:35.869+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:37.852+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x563264a7cd80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[AVIOContext @ 0x563264d67940] Statistics: 1566064 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2239_N8wNJCO24O3hFnuA_1732646137.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2239_N8wNJCO24O3hFnuA_1732646137.mp4' for reading
[tcp @ 0x5632648cffc0] Starting connection attempt to 195.181.175.12 port 443
[tcp @ 0x5632648cffc0] Successfully connected to 195.181.175.12 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:35.869+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:37.852+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:39.853+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x563264667100] Cannot reuse HTTP connection for different host: b-hls-10.doppiocdn.live:-1 != b-hls-10.sacdnssedge.com:-1
[AVIOContext @ 0x563264999b00] Statistics: 1583539 bytes read, 0 seeks
[hls @ 0x563264256e80] keepalive request failed for 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' with error: 'Invalid argument' when opening url, retrying with new connection
[hls @ 0x563264256e80] Opening 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[tcp @ 0x563264543c80] Starting connection attempt to 195.181.175.13 port 443
[tcp @ 0x563264543c80] Successfully connected to 195.181.175.13 port 443
[AVIOContext @ 0x563264733000] Statistics: 1238 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_2240_pDMJdPSn9PxF73oh_1732646139.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_2240_pDMJdPSn9PxF73oh_1732646139.mp4' for reading
[tcp @ 0x5632648cffc0] Starting connection attempt to 195.181.170.2 port 443
[tcp @ 0x5632648cffc0] Successfully connected to 195.181.170.2 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:37.852+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:39.853+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:41.859+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x563264698900] Cannot reuse HTTP connection for different host: b-hls-10.sacdnssedge.com:-1 != b-hls-10.doppiocdn.live:-1
[AVIOContext @ 0x56326467c880] Statistics: 1645585 bytes read, 0 seeks
[hls @ 0x563264256e80] keepalive request failed for 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' with error: 'Invalid argument' when opening url, retrying with new connection
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[tcp @ 0x5632648cffc0] Starting connection attempt to 195.181.175.13 port 443
[tcp @ 0x5632648cffc0] Successfully connected to 195.181.175.13 port 443
[AVIOContext @ 0x563264605780] Statistics: 1238 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2241_MfriyGfdNvJ3kmMY_1732646141.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2241_MfriyGfdNvJ3kmMY_1732646141.mp4' for reading
[tcp @ 0x563264543c80] Starting connection attempt to 195.181.175.37 port 443
[tcp @ 0x563264543c80] Successfully connected to 195.181.175.37 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:39.853+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:41.859+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:43.870+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x56326475c780] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[AVIOContext @ 0x563264750680] Statistics: 1563847 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2242_53gz0jOX0N7y2kKu_1732646143.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2242_53gz0jOX0N7y2kKu_1732646143.mp4' for reading
[tcp @ 0x563264543c80] Starting connection attempt to 195.181.175.13 port 443
[tcp @ 0x563264543c80] Successfully connected to 195.181.175.13 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:41.859+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:43.870+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:45.856+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x5632648a4140] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[AVIOContext @ 0x5632646221c0] Statistics: 1666896 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2243_57GVSXvUejqnuLch_1732646145.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2243_57GVSXvUejqnuLch_1732646145.mp4' for reading
[tcp @ 0x563264622f00] Starting connection attempt to 195.181.175.13 port 443
[tcp @ 0x563264622f00] Successfully connected to 195.181.175.13 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:43.870+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:45.856+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:47.855+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x5632645b3300] Cannot reuse HTTP connection for different host: b-hls-10.doppiocdn.live:-1 != b-hls-10.sacdnssedge.com:-1
[AVIOContext @ 0x563264d4e3c0] Statistics: 1550924 bytes read, 0 seeks
[hls @ 0x563264256e80] keepalive request failed for 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' with error: 'Invalid argument' when opening url, retrying with new connection
[hls @ 0x563264256e80] Opening 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[tcp @ 0x5632648cffc0] Starting connection attempt to 195.181.170.2 port 443
[tcp @ 0x5632648cffc0] Successfully connected to 195.181.170.2 port 443
[AVIOContext @ 0x563264d4c0c0] Statistics: 1238 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_2244_yKxytumWlEakST2H_1732646147.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_2244_yKxytumWlEakST2H_1732646147.mp4' for reading
[tcp @ 0x563264d5a640] Starting connection attempt to 195.181.175.13 port 443
[tcp @ 0x563264d5a640] Successfully connected to 195.181.175.13 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:45.856+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:47.855+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:49.859+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x563264399c40] Cannot reuse HTTP connection for different host: b-hls-10.sacdnssedge.com:-1 != b-hls-10.doppiocdn.live:-1
[AVIOContext @ 0x56326486d640] Statistics: 1570025 bytes read, 0 seeks
[hls @ 0x563264256e80] keepalive request failed for 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' with error: 'Invalid argument' when opening url, retrying with new connection
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[tcp @ 0x5632648d2e80] Starting connection attempt to 195.181.175.38 port 443
[tcp @ 0x5632648d2e80] Successfully connected to 195.181.175.38 port 443
[AVIOContext @ 0x563264724080] Statistics: 1238 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2245_RymW4f2alsm8kVfj_1732646149.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2245_RymW4f2alsm8kVfj_1732646149.mp4' for reading
[tcp @ 0x56326486e800] Starting connection attempt to 195.181.170.3 port 443
[tcp @ 0x56326486e800] Successfully connected to 195.181.170.3 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:47.855+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:49.859+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:51.871+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x5632646e7a80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[AVIOContext @ 0x5632648bfac0] Statistics: 1627693 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2246_Vti5OjeaCtCLTl4o_1732646151.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2246_Vti5OjeaCtCLTl4o_1732646151.mp4' for reading
[tcp @ 0x5632648d2e80] Starting connection attempt to 195.181.175.38 port 443
[tcp @ 0x5632648d2e80] Successfully connected to 195.181.175.38 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:49.859+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:51.871+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:53.854+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x56326465d000] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[AVIOContext @ 0x563264d4b500] Statistics: 1466682 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2247_Lq5ZU5EgCcaVX2Ls_1732646153.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2247_Lq5ZU5EgCcaVX2Ls_1732646153.mp4' for reading
[tcp @ 0x5632648c0dc0] Starting connection attempt to 195.181.170.2 port 443
[tcp @ 0x5632648c0dc0] Successfully connected to 195.181.170.2 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:51.871+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:53.854+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:55.854+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x563264d39940] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[AVIOContext @ 0x56326484fe00] Statistics: 1663238 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2248_DCDp7IWh3YFbbJ1Z_1732646155.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2248_DCDp7IWh3YFbbJ1Z_1732646155.mp4' for reading
[tcp @ 0x56326484a400] Starting connection attempt to 195.181.175.13 port 443
[tcp @ 0x56326484a400] Successfully connected to 195.181.175.13 port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:55.854+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:57.867+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:59.867+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x563264724d00] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[AVIOContext @ 0x563264601700] Statistics: 1499266 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2249_UZbiIpbZD3vypfck_1732646157.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2249_UZbiIpbZD3vypfck_1732646157.mp4' for reading
[tcp @ 0x5632648d2e80] Starting connection attempt to 195.181.175.37 port 443
[tcp @ 0x5632648d2e80] Successfully connected to 195.181.175.37 port 443
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2250_4Wk3XdpATqDcYJW5_1732646159.mp4', offset 0, playlist 0
[https @ 0x56326426cd00] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60_2250_4Wk3XdpATqDcYJW5_1732646159.mp4' for reading
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563264560800] Found duplicated MOOV Atom. Skipped it
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:55.854+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:57.867+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:59.867+0000')
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:55.854+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:57.867+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:59.867+0000')
[https @ 0x5632645cce40] Opening 'https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8' for reading
[hls @ 0x563264256e80] Skip ('#EXT-X-VERSION:6')
[hls @ 0x563264256e80] Skip ('#EXT-X-INDEPENDENT-SEGMENTS')
[hls @ 0x563264256e80] Skip ('#EXT-X-DISCONTINUITY-SEQUENCE:2')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:57.867+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:35:59.867+0000')
[hls @ 0x563264256e80] Skip ('#EXT-X-PROGRAM-DATE-TIME:2024-11-26T18:36:01.852+0000')
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4', offset 0, playlist 0
[https @ 0x563264451340] Cannot reuse HTTP connection for different host: b-hls-10.doppiocdn.live:-1 != b-hls-10.sacdnssedge.com:-1
[AVIOContext @ 0x56326439bd00] Statistics: 3229334 bytes read, 0 seeks
[hls @ 0x563264256e80] keepalive request failed for 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' with error: 'Invalid argument' when opening url, retrying with new connection
[hls @ 0x563264256e80] Opening 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_init_9nVaeUBYSE2s5Ezt.mp4' for reading
[tcp @ 0x56326425a9c0] Starting connection attempt to 195.181.175.38 port 443
[tcp @ 0x56326425a9c0] Successfully connected to 195.181.175.38 port 443
[AVIOContext @ 0x5632643fa200] Statistics: 1238 bytes read, 0 seeks
[hls @ 0x563264256e80] HLS request for url 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_2251_FrtEMytvk6kkQoQv_1732646161.mp4', offset 0, playlist 0
[hls @ 0x563264256e80] Opening 'https://b-hls-10.sacdnssedge.com/hls/62336272/62336272_1080p60_2251_FrtEMytvk6kkQoQv_1732646161.mp4' for reading
[tcp @ 0x563264605000] Starting connection attempt to 195.181.175.21 port 443
[tcp @ 0x563264605000] Successfully connected to 195.181.175.21 port 443
No more output streams to write to, finishing.
frame= 2160 fps= 67 q=-1.0 Lsize= 28744kB time=00:00:35.99 bitrate=6540.9kbits/s speed=1.11x
video:26994kB audio:703kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.778836%
Input file #0 (https://b-hls-10.doppiocdn.live/hls/62336272/62336272_1080p60.m3u8):
Input stream #0:0 (video): 2160 packets read (27642127 bytes);
Input stream #0:1 (audio): 1687 packets read (719598 bytes);
Total: 3847 packets (28361725 bytes) demuxed
Output file #0 (file:kaydenwithpaul 2024-11-26 19_35 [kaydenwithpaul].mp4):
Output stream #0:0 (video): 2160 packets muxed (27642127 bytes);
Output stream #0:1 (audio): 1687 packets muxed (719598 bytes);
Total: 3847 packets (28361725 bytes) muxed
[AVIOContext @ 0x563264aa9780] Statistics: 29433468 bytes written, 0 seeks, 113 writeouts
[AVIOContext @ 0x56326442dd00] Statistics: 0 bytes read, 0 seeks
[AVIOContext @ 0x5632647ee140] Statistics: 1511582 bytes read, 0 seeks
[AVIOContext @ 0x5632648e0c40] Statistics: 11884 bytes read, 0 seeks
[AVIOContext @ 0x563264561f40] Statistics: 742 bytes read, 0 seeks
[download] 100% of 28.07MiB in 00:00:33 at 860.37KiB/s
[debug] ffprobe command line: ffprobe -hide_banner -show_format -show_streams -print_format json 'file:kaydenwithpaul 2024-11-26 19_35 [kaydenwithpaul].mp4'
[debug] ffmpeg command line: ffprobe -show_streams 'file:kaydenwithpaul 2024-11-26 19_35 [kaydenwithpaul].mp4'
[FixupM3u8] Fixing MPEG-TS in MP4 container of "kaydenwithpaul 2024-11-26 19_35 [kaydenwithpaul].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:kaydenwithpaul 2024-11-26 19_35 [kaydenwithpaul].mp4' -map 0 -dn -ignore_unknown -c copy -f mp4 -bsf:a aac_adtstoasc -movflags +faststart 'file:kaydenwithpaul 2024-11-26 19_35 [kaydenwithpaul].temp.mp4'
```
| NSFW,site-bug,triage | low | Critical |
2,699,824,334 | neovim | LSP: data properties will be removed if null | ### Problem
Neovim provides `dataSupport` for codeActions and publishDiagnostics
```
/**
* Whether code action supports the `data` property which is
* preserved between a `textDocument/codeAction` and a
* `codeAction/resolve` request.
*
* @since 3.16.0
*/
dataSupport?: boolean;
```
```
/**
* A data entry field that is preserved on a code action between
* a `textDocument/codeAction` and a `codeAction/resolve` request.
*
* @since 3.16.0
*/
data?: [LSPAny](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#lspAny);
}
```
In the example notice that `REALLY_IMPORTANT_PARAM` is missing from the resolve request.
Because we serialise by converting JSON `null` to Lua `nil`, when we decode and re-encode we lose properties in the data object
### Steps to reproduce using "nvim -u minimal_init.lua"
```lua
local requests = {
['initialize'] = function(_)
return {
--- @type lsp.ServerCapabilities
capabilities = {
codeActionProvider = {
resolveProvider = true,
codeActionKinds = { 'source' },
},
},
}
end,
['shutdown'] = function(_) end,
['textDocument/codeAction'] = function(_)
---@type lsp.CodeAction[]
local action = {
{
title = 'Example Action',
kind = 'source',
data = {
something = 'this',
REALLY_IMPORTANT_PARAM = vim.NIL,
something_else = 'that',
},
},
}
--- NOTE: Force a serialise/deserialise round-trip, as the in-memory LSP server doesn't go through RPC
local encoded = vim.json.encode(action)
local decoded = vim.json.decode(encoded, { luanil = { object = true } })
return decoded
end,
['codeAction/resolve'] = function(params)
print(vim.inspect(params))
return {}
retu
end,
}
local noops = {
['initialized'] = true,
}
local function server()
local srv = {}
local closing = false
function srv.request(method, params, handler)
coroutine.wrap(function()
if requests[method] then
local response = requests[method](params)
handler(nil, response)
elseif method == 'exit' then
closing = true
else
vim.notify('Unhandled request: ' .. method)
end
end)()
end
function srv.notify(method, _params)
coroutine.wrap(function()
if method == 'exit' then
closing = true
elseif not noops[method] then
vim.notify('Unhandled notification: ' .. method)
end
end)()
end
function srv.is_closing()
return closing
end
function srv.terminate()
closing = true
end
return srv
end
vim.lsp.start({
name = 'serde_nil',
cmd = server,
})
```
Perform a code action and see that the param is missing
### Expected behavior
Data properties should be preserved, since null is a valid LSPAny value:
```
/**
* The LSP any type
*
* @since 3.17.0
*/
export type LSPAny = LSPObject | LSPArray | string | integer | uinteger |
decimal | boolean | null;
```
### Nvim version (nvim -v)
0.11
### Language server name/version
Any
### Operating system/version
N/A
### Log file
_No response_ | bug,lsp | low | Minor |
2,699,874,716 | rust | Tracking issue for release notes of #116161: Stabilize `extended_varargs_abi_support` |
This issue tracks the release notes text for #116161.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [x] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Language
- [Support var-args for more calling conventions](https://github.com/rust-lang/rust/pull/116161)
This stabilizes usage of `...` variable args in extern declarations for `cdecl`, `system`, `aapcs`, `sysv64`, `win64`, and `efiapi` calling conventions.
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @Soveu, @cjgillot -- origin issue/PR authors and assignees for starting to draft text
| T-lang,T-compiler,relnotes,F-extended_varargs_abi_support,relnotes-tracking-issue | low | Minor |
2,699,876,909 | rust | Many applications on Windows (e.g. the Chocolatey wrapper) do not work when current_dir is a canonicalized path | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I found a "weird" issue with std::process::Command and canonicalized paths on Windows. Something like (which assumes [lsd](https://github.com/lsd-rs/lsd) is installed on the machine):
```rust
use std::path::Path;

let p1 = Path::new(".");
let p2 = p1.canonicalize().unwrap();
let e = std::process::Command::new("lsd").current_dir(&p1).spawn().unwrap().wait_with_output().unwrap();
println!("P1 = {e:?}");
let e = std::process::Command::new("lsd").current_dir(&p2).spawn().unwrap().wait_with_output().unwrap();
println!("P2 = {e:?}");
```
... works fine for the first execution, showing the current directory contents; but the second execution fails with
```
Unhandled Exception: System.ArgumentException: Illegal characters in path.
at System.Security.Permissions.FileIOPermission.EmulateFileIOPermissionChecks(String fullPath)
at System.IO.Directory.InternalGetCurrentDirectory(Boolean checkHost)
at shim.ShimProgram.Main(String[] args)
```
... which looks like it is caused by `.canonicalize()` prefixing the path with "\\\\?\\" -- which, as far as I can understand, is correct, telling Windows that the path must be taken as-is, without any processing.
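For illustration, a rough sketch of stripping the verbatim prefix before passing the directory along (this is just my assumption of a workaround; it only handles plain `\\?\` drive paths that are valid UTF-8, not `\\?\UNC\` paths):
```rust
use std::path::{Path, PathBuf};

// Sketch: strip the `\\?\` verbatim prefix so child processes that cannot
// handle it (like the Chocolatey shim) still receive a usable cwd.
fn strip_verbatim(p: &Path) -> PathBuf {
    match p.to_str().and_then(|s| s.strip_prefix(r"\\?\")) {
        Some(rest) => PathBuf::from(rest),
        None => p.to_path_buf(),
    }
}
```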
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
Also note that I'm cross-compiling from Linux to Windows, using "x86_64-pc-windows-gnu" as target triple.
| O-windows,C-bug,T-libs | low | Critical |
2,699,936,123 | vscode | Editor GPU: Canvas stretches on resize | Repro:
1. Open editor with GPU acceleration
2. Resize canvas, 🐛 notice that the canvas stretches until the debounced resize kicks in | bug,editor-gpu | low | Minor |
2,699,957,028 | PowerToys | Convert Mis-Typed Text Between Languages | ### Description of the new feature / enhancement
Often, I accidentally type text in Persian when I think my keyboard is set to English. For example, I intended to type:
**"Does Raspberry Pi 5 support ADC?".**
But because my keyboard was on Persian, it came out as:
**"يخثس ًشسحذثققه [ه ۵ سعححخقف ؤيژ؟".**
It would be incredibly helpful to have a tool that could automatically convert such mis-typed text back to the intended language. For example, by selecting incorrect text and pressing a combined key, the sentence could be converted to English or Persian. This feature would save a lot of time and reduce errors for users who frequently switch between different language keyboards.
### Scenario when this would be used?
This feature would be especially useful for users who frequently switch between different language keyboards. For instance, developers, multilingual individuals, or professionals working in diverse linguistic environments might often find themselves accidentally typing in the wrong language. A tool that can automatically convert mis-typed text back to the intended language would streamline their workflow, prevent errors, and save significant amounts of time. This would enhance productivity and reduce frustration for users who deal with multiple languages on a daily basis.
### Supporting information
_No response_ | Needs-Triage | low | Critical |
2,699,964,636 | deno | Streams: commit pull-into descriptors after filling from queue | In https://github.com/whatwg/streams/security/advisories/GHSA-p5g2-876g-95h9, we discovered that in Chromium, a user could run JavaScript code *synchronously* during `ReadableStreamFulfillReadIntoRequest` by patching `Object.prototype.then`, and use this gadget to break some invariants within `ReadableByteStreamControllerProcessPullIntoDescriptorsUsingQueue`. Fortunately, [Deno was unaffected](https://github.com/whatwg/streams/security/advisories/GHSA-p5g2-876g-95h9#advisory-comment-102576).
The Streams standard has been updated with a proper fix for this case. We now postpone all calls to `ReadableByteStreamControllerCommitPullIntoDescriptor` until *after* all pull-into descriptors have been filled up by `ReadableByteStreamControllerProcessPullIntoDescriptorsUsingQueue`. This way, we won't trigger any patched `then()` method until the stream is in a stable state.
* Specification change: https://github.com/whatwg/streams/pull/1326
* WPT tests: https://github.com/web-platform-tests/wpt/pull/48085 | web | low | Minor |
2,699,978,111 | excalidraw | Text fit to screen width | Hi, I would like to propose that text be able to adapt to the width of the screen. Currently, when writing a long text, only the last few words are displayed, while the initial text is out of view. This creates an extended text box that, in most cases, needs to be adjusted manually. This would be especially useful on vertical monitors or mobile phones.

| enhancement | low | Minor |
2,699,995,953 | pytorch | inductor and eager gives slightly different results for rms_norm, with `emulate_precision_casts=True` | repro below:
```python
import torch
torch._inductor.config.emulate_precision_casts = True
def f(x, scale):
y = x.float()
z = torch.rms_norm(y, (4096,), scale, 1e-05)
return z.to(dtype=torch.bfloat16)
torch.manual_seed(0)
x = torch.randn(2, 157, 4096, device='cuda:0', dtype=torch.bfloat16)
scale = torch.randn(4096, device='cuda:0', dtype=torch.bfloat16)
f_compiled = torch.compile(f)
out_eager = f(x, scale)
out = f_compiled(x, scale)
out_fp64 = torch.rms_norm(x.to(dtype=torch.float64), (4096,), scale.to(dtype=torch.float64), 1e-05)
print(torch.max(torch.abs(out_fp64 - out_eager)))
print(torch.max(torch.abs(out_fp64 - out)))
print(torch.max(torch.abs(out_eager - out)))
print(torch.allclose(out_eager, out))
```
I'm not sure if this is expected, but even with `emulate_precision_casts=True` this prints the below for me:
```
tensor(0.0311, device='cuda:0', dtype=torch.float64)
tensor(0.0311, device='cuda:0', dtype=torch.float64)
tensor(0.0078, device='cuda:0', dtype=torch.bfloat16)
```
so compile and eager are equally close to an fp64 comparison, but they are not the same.
cc @chauhang @penguinwu | triaged,oncall: pt2 | low | Minor |
2,700,011,009 | godot | Bottom panel: Shader Editor keeps popping up over Animation tab | ### Tested versions
- Reproducible in 4.4dev5
### System information
Godot v4.4.dev5 - Windows 10.0.19045 - Single-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6094) - AMD Ryzen 7 5800X 8-Core Processor (16 threads)
### Issue description
The UX of the bottom panel could greatly benefit from improvement. To keep this report contained to a single issue, here's something I've just run into:
when I want to edit an animation frame, I select the Sprite2D node to edit the keyframe - but then the animation panel switches over to the shader panel, so I have to switch back again before I can edit the keyframes.
https://github.com/user-attachments/assets/532f025d-314b-4e50-996f-60f99939654f
### Steps to reproduce
- make a Scene with both a Sprite2D, and an AnimationPlayer operating on that Sprite2D.
- add a Shader to the Sprite2D.
- create an animation in the AnimationPlayer.
- with the Animation panel open, try to select the sprite2D.
- contrary to what you'd expect, when you select the Sprite2D, the Shader Editor panel pops up, requiring you to switch to the Animation panel again before you can actually edit the keyframes.
(this seems to only happen after reopening the project?)
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/17940482/mrp.zip)
| bug,topic:editor,confirmed,regression | low | Major |
2,700,043,522 | angular | Support i18n boolean expression in template | ### Which @angular/* package(s) are relevant/related to the feature request?
localize
### Description
It is very common to translate the following:
```
<button>{{ active ? 'On' : 'Off' }}</button>
```
And the easiest way is to use the [select clause ICU expression](https://angular.dev/guide/i18n/prepare#mark-alternates-and-nested-expressions) like this:
```
<button i18n>{ active, select, true { On } false { Off } }</button>
```
but the documentation says:
> The select clause marks choices for alternate text based on your defined <ins>**string values**</ins>.
In my case, the `active` property is a boolean and I'm using true/false as if they were boolean selection categories, but they are actually interpreted as text.
Is this usage officially supported or is this a strange/hacky case that just happens to work because of boolean to string conversion?
### Proposed solution
Either:
1. Explicitly say in the documentation that using the select ICU expression with a boolean property is supported and provide an example.
- In that case, could the select clause be compile checked to make sure true and false categories, and only true and false, are specified?
2. Create a specific boolean ICU expression, for example: `{ active, bool, true { On } false { Off } } `
3. Add support for `$localize` and back quotes (`` ` ``) in template to be able to do: ``<button>{{ active ? $localize`On` : $localize`Off` }}</button>``
### Alternatives considered
We can define a method in the component and use it like that:
```
export class ExampleComponent {
active: boolean;
getActiveText(): string {
    return this.active ? $localize`On` : $localize`Off`;
}
}
```
```
<button>{{ getActiveText() }}</button>
```
But it will tend to complicate your component script with a lot of meaningless methods, and I feel that this use case could be simplified. | area: i18n | low | Minor |
2,700,049,861 | rust | Confusing error message when using Borrow'd keys in Hashmap | ### Code
```Rust
use std::borrow::Borrow;
use bumpalo::Bump;
use std::collections::HashSet;
#[derive(PartialEq, Eq, Hash)]
struct Board;
#[derive(PartialEq, Eq, Hash)]
struct TreeNode<'a> {
state: Board,
foo: &'a ()
}
impl Borrow<Board> for TreeNode<'_> {
fn borrow(&self) -> &Board {
&self.state
}
}
// Reduced for clarity
fn new_from_bump<'b>(state: Board, alloc: &'b Bump, cache: &HashSet<&'b TreeNode>) {
cache.contains(&state);
}
```
### Current output
```Shell
error[E0277]: the trait bound `&TreeNode<'_>: Borrow<Board>` is not satisfied
--> src/lib.rs:22:24
|
22 | cache.contains(&state);
| -------- ^^^^^^ the trait `Borrow<Board>` is not implemented for `&TreeNode<'_>`
| |
| required by a bound introduced by this call
|
note: required by a bound in `HashSet::<T, S>::contains`
--> /playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/collections/hash/set.rs:671:12
|
669 | pub fn contains<Q: ?Sized>(&self, value: &Q) -> bool
| -------- required by a bound in this associated function
670 | where
671 | T: Borrow<Q>,
| ^^^^^^^^^ required by this bound in `HashSet::<T, S>::contains`
help: consider removing the leading `&`-reference
|
22 - cache.contains(&state);
22 + cache.contains(state);
|
```
### Rationale and extra context
I'm not confident I'm interpreting this correctly, but I *think* this error message is misleading because the signature of `contains` implies that the `Borrow` implementation needs to be on the type `Q=Board`. Additionally, taking the suggestion of removing the & leads to an error because the parameter needs to be a `&Q` and there's no valid type for `Q` that makes `&Q` equal to `Board`.
Changing the `Borrow` impl to instead be `for &TreeNode` *does* work, but looking at the docs, this isn't consistent with the general concept of the trait or the other examples in stdlib so I feel like it's not the right answer. If it is the right answer, the provided error message doesn't do anything to point me in that direction.
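For reference, the change described above, i.e. replacing the `impl` from the snippet so that it targets the reference type actually stored in the set, looks roughly like this:
```rust
// Roughly the variant that "does work": since the HashSet's element type T is
// `&TreeNode`, the Borrow<Board> impl goes on the reference type instead.
impl Borrow<Board> for &TreeNode<'_> {
    fn borrow(&self) -> &Board {
        &self.state
    }
}
```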
### Rust Version
```Shell
❯ rustc --version --verbose
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-pc-windows-msvc
release: 1.81.0
LLVM version: 18.1.7
``` | A-diagnostics,T-compiler | low | Critical |
2,700,131,100 | ollama | Deepseek (various) 236b crashes on run | ### What is the issue?
Deepseek V2, V2.5, and V2-coder all crash with an OOM error when loading the 236b size. Other Deepseek variants may crash as well; these are all I've tested. Hardware is dual A6000's with 48GB each.
```
Error: llama runner process has terminated: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 882903040
llama_new_context_with_model: failed to allocate compute buffers
```
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
v0.4.5 | bug,needs more info | medium | Critical |
2,700,223,522 | ui | [feat]: Adapting new Tailwind CSS v4.0 Beta | ### Feature description
### Why adapting?
The public release of [Tailwind CSS v4.0 Beta](https://tailwindcss.com/docs/v4-beta#css-first-configuration) is out, and I am thrilled with the new performance enhancements and features. There are so many global CSS features I'm excited to use, such as:
- [3D Transform](https://tailwindcss.com/docs/v4-beta#3-d-transforms)
- [`field-sizing` utilities](https://tailwindcss.com/docs/v4-beta#field-sizing-utilities)
- [Linear Gradient](https://tailwindcss.com/docs/v4-beta#linear-gradient-angles)
### The speed improvements are also quite impressive

It still requires some configuration to achieve that performance.
### Hope you adopt this beta version
Maybe you could add an option to choose `tailwindcss` v4.0, or provide a way to upgrade to this version.
### Affected component/components
Tailwind CSS
### Additional Context
Check out the [Theo react video](https://www.youtube.com/watch?v=q55u3_Nj3Lw) for more insights.
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | medium | Major |
2,700,227,343 | electron | [Docs] Document `view.setLayout()` | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
There is an undocumented `view.setLayout()` function that seems to be used by some apps.
See https://github.com/electron/electron/pull/43069.
### Proposed Solution
Document it.
### Alternatives Considered
N/A
### Additional Information
_No response_ | enhancement :sparkles: | low | Minor |
2,700,298,351 | transformers | How to Log Training Loss at Step Zero in Hugging Face Trainer or SFT Trainer? | ### Feature request
log train loss on start
----
I’m using the Hugging Face `Trainer` (or `SFTTrainer`) for fine-tuning, and I want to log the training loss at step 0 (before any training steps are executed). I know there’s an `eval_on_start` option for evaluation, but I couldn't find a direct equivalent for training loss logging at the beginning of training.
Is there a way to log the initial training loss at step zero (before any updates) using `Trainer` or `SFTTrainer`? Ideally, I'd like something similar to `eval_on_start`.
Here’s what I’ve tried so far:
#### Solution 1: Custom Callback
I implemented a custom callback to log the training loss at the start of training:
```python
import wandb

from transformers import TrainerCallback
class TrainOnStartCallback(TrainerCallback):
def on_train_begin(self, args, state, control, logs=None, **kwargs):
# Log training loss at step 0
logs = logs or {}
logs["train/loss"] = None # Replace None with an initial value if available
logs["train/global_step"] = 0
self.log(logs)
def log(self, logs):
print(f"Logging at start: {logs}")
wandb.log(logs)
# Adding the callback to the Trainer
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
args=training_args,
optimizers=(optimizer, scheduler),
callbacks=[TrainOnStartCallback()],
)
```
This works but feels a bit overkill. It logs metrics at the start of training before any steps.
#### Solution 2: Manual Logging
Alternatively, I manually log the training loss before starting training:
```python
wandb.log({"train/loss": None, "train/global_step": 0})
trainer.train()
```
### Question:
Are there any built-in features in `Trainer` or `SFTTrainer` to log training loss at step zero? Or is a custom callback or manual logging the best solution here? If so, are there better ways to implement this functionality, something similar to `eval_on_start` but as a `train_on_start`?
cross: https://discuss.huggingface.co/t/how-to-log-training-loss-at-step-zero-in-hugging-face-trainer-or-sft-trainer/128188
### Motivation
Crucial sanity check
### Your contribution
yes, happy to implement this. | Feature request | low | Major |
2,700,318,676 | PowerToys | PowerToys fails to build on local machine | 
PowerToys fails to build on a local machine due to what seems like a WinAppSDK-generated file.
```
PRI175: 0x80073b0f - Processing Resources failed with error: Duplicate Entry.
PRI277: 0x80073b0f - Conflicting values for resource 'Microsoft.UI.Xaml/Resources/WarningSuitableWebView2NotFound'
```
For both, the project is `MeasureToolUI` and the file is `WINAPPSDKGENERATEPROJECTPRIFILE`.
This is in `Debug` build profile, with Visual Studio 2022 17.12.
| Issue-Bug,Needs-Triage | low | Critical |
2,700,346,368 | pytorch | DISABLED test_split_scan (__main__.MultiKernelTest) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_split_scan&suite=MultiKernelTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/33622700488).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_split_scan`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/inductor/test_multi_kernel.py", line 272, in test_split_scan
self.assertEqual(expect, actual)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4007, in assertEqual
raise error_metas.pop()[0].to_error(
AssertionError: Tensor-likes are not close!
Mismatched elements: 44 / 3717120 (0.0%)
Greatest absolute difference: 22.0 at index (3650050,) (up to 1e-05 allowed)
Greatest relative difference: 1.370539848721819e-06 at index (3407607,) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
python test/inductor/test_multi_kernel.py MultiKernelTest.test_split_scan
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_multi_kernel.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,700,384,898 | deno | Temporal: Deno got into panic when trying to get `era` of Non-ISO 8601 calendars | Version: Deno 2.1.1
OS: macOS Sonoma 14.7
In `deno eval --unstable-temporal`, following code will cause a panic:
```console
Deno 2.1.1
exit using ctrl+d, ctrl+c, or close()
> Temporal.PlainDate.from("2024-11-28[u-ca=japanese]").era
#
# Fatal error in , line 0
# unimplemented code
#
#
#
#FailureMessage Object: 0x16af23b88
==== C stack trace ===============================
0 deno 0x0000000106787ea4 v8::base::debug::StackTrace::StackTrace() + 24
1 deno 0x000000010678d7b8 v8::platform::(anonymous namespace)::PrintStackTrace() + 24
2 deno 0x0000000106784e5c V8_Fatal(char const*, ...) + 356
3 deno 0x0000000106c1bb4c v8::internal::JSTemporalCalendar::EraYear(v8::internal::Isolate*, v8::internal::Handle<v8::internal::JSTemporalCalendar>, v8::internal::
Handle<v8::internal::Object>) + 0
4 deno 0x000000010683bb1c v8::internal::Builtin_TemporalCalendarPrototypeEra(int, unsigned long*, v8::internal::Isolate*) + 124
5 deno 0x0000000107b44dd4 Builtins_CEntry_Return1_ArgvOnStack_BuiltinExit + 84
6 deno 0x0000000107aa666c Builtins_JSEntryTrampoline + 172
7 deno 0x0000000107aa6310 Builtins_JSEntry + 176
8 deno 0x00000001068e5040 v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) + 1588
9 deno 0x00000001068e49dc v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Obj
ect>, int, v8::internal::Handle<v8::internal::Object>*) + 120
10 deno 0x0000000106c0e998 v8::internal::temporal::CalendarEra(v8::internal::Isolate*, v8::internal::Handle<v8::internal::JSReceiver>, v8::internal::Handle<v8::int
ernal::JSReceiver>) + 204
11 deno 0x000000010683bdd8 v8::internal::Builtin_TemporalPlainDatePrototypeEra(int, unsigned long*, v8::internal::Isolate*) + 128
12 deno 0x0000000107b44dd4 Builtins_CEntry_Return1_ArgvOnStack_BuiltinExit + 84
13 deno 0x0000000107ab8258 Builtins_LoadIC_NoFeedback + 4056
14 deno 0x0000000107c20c78 Builtins_GetNamedPropertyHandler + 4728
15 deno 0x0000000107aa89f8 Builtins_InterpreterEntryTrampoline + 280
16 deno 0x0000000107aa666c Builtins_JSEntryTrampoline + 172
17 deno 0x0000000107aa6310 Builtins_JSEntry + 176
18 deno 0x00000001068e5040 v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) + 1588
19 deno 0x00000001068e564c v8::internal::Execution::CallScript(v8::internal::Isolate*, v8::internal::Handle<v8::internal::JSFunction>, v8::internal::Handle<v8::int
ernal::Object>, v8::internal::Handle<v8::internal::Object>) + 132
20 deno 0x000000010689e610 v8::internal::DebugEvaluate::Global(v8::internal::Isolate*, v8::internal::Handle<v8::internal::String>, v8::debug::EvaluateGlobalMode, v
8::internal::REPLMode) + 332
21 deno 0x00000001068a9e00 v8::debug::EvaluateGlobal(v8::Isolate*, v8::Local<v8::String>, v8::debug::EvaluateGlobalMode, bool) + 184
22 deno 0x0000000107346734 v8_inspector::V8RuntimeAgentImpl::evaluate(v8_inspector::String16 const&, v8_crdtp::detail::ValueMaybe<v8_inspector::String16>, v8_crdtp
::detail::ValueMaybe<bool>, v8_crdtp::detail::ValueMaybe<bool>, v8_crdtp::detail::ValueMaybe<int>, v8_crdtp::detail::ValueMaybe<bool>, v8_crdtp::detail::ValueMaybe<bool>, v8_crdtp::detail::ValueMaybe
<bool>, v8_crdtp::detail::ValueMaybe<bool>, v8_crdtp::detail::ValueMaybe<bool>, v8_crdtp::detail::ValueMaybe<double>, v8_crdtp::detail::ValueMaybe<bool>, v8_crdtp::detail::ValueMaybe<bool>, v8_crdtp:
:detail::ValueMaybe<bool>, v8_crdtp::detail::ValueMaybe<v8_inspector::String16>, v8_crdtp::detail::PtrMaybe<v8_inspector::protocol::Runtime::SerializationOptions>, std::__Cr::unique_ptr<v8_inspector:
:protocol::Runtime::Backend::EvaluateCallback, std::__Cr::default_delete<v8_inspector::protocol::Runtime::Backend::EvaluateCallback>>) + 940
23 deno 0x00000001072eae4c v8_inspector::protocol::Runtime::DomainDispatcherImpl::evaluate(v8_crdtp::Dispatchable const&) + 636
24 deno 0x0000000107364d28 v8_crdtp::UberDispatcher::DispatchResult::Run() + 40
25 deno 0x0000000107340ea8 v8_inspector::V8InspectorSessionImpl::dispatchProtocolMessage(v8_inspector::StringView) + 468
26 deno 0x00000001067839fc v8_inspector__V8InspectorSession__dispatchProtocolMessage + 44
27 deno 0x00000001054958c8 _ZN87_$LT$deno_core..inspector..InspectorSession$u20$as$u20$futures_core..stream..Stream$GT$9poll_next17h2d0ff313a9fa1493E + 240
28 deno 0x00000001054952d4 deno_core::inspector::JsRuntimeInspector::poll_sessions::h3cb79f0bc20450b5 + 880
29 deno 0x00000001055208c8 deno_core::runtime::jsruntime::JsRuntime::poll_event_loop::hdde3da0f976e8d60 + 404
30 deno 0x00000001051115fc _ZN9deno_core7runtime9jsruntime9JsRuntime22with_event_loop_future28_$u7b$$u7b$closure$u7d$$u7d$17h79435de154e5ccd6E + 116
31 deno 0x00000001052725b8 _ZN4deno5tools4repl7session11ReplSession28post_message_with_event_loop28_$u7b$$u7b$closure$u7d$$u7d$17hac66a658310b5a52E + 116
32 deno 0x0000000105275128 _ZN4deno5tools4repl7session11ReplSession19evaluate_expression28_$u7b$$u7b$closure$u7d$$u7d$17h8472b9cb10e78062E + 176
33 deno 0x0000000105274d2c _ZN4deno5tools4repl7session11ReplSession22evaluate_ts_expression28_$u7b$$u7b$closure$u7d$$u7d$17hd96f02ce35996413E + 2516
34 deno 0x0000000105273914 _ZN4deno5tools4repl7session11ReplSession34evaluate_line_with_object_wrapping28_$u7b$$u7b$closure$u7d$$u7d$17h6de43dd61f9cf9d4E + 284
35 deno 0x0000000105272850 _ZN4deno5tools4repl7session11ReplSession28evaluate_line_and_get_output28_$u7b$$u7b$closure$u7d$$u7d$17heb55cab8d8fd7e2eE + 156
36 deno 0x0000000105277c5c _ZN4deno5tools4repl3run28_$u7b$$u7b$closure$u7d$$u7d$17h01792d2d83e52cb2E + 10480
37 deno 0x00000001052aec58 _ZN4deno16spawn_subcommand28_$u7b$$u7b$closure$u7d$$u7d$17h283fbb92f4a3b6e1E + 192
38 deno 0x0000000104edab20 _ZN100_$LT$deno_unsync..tokio..task..MaskFutureAsSend$LT$F$GT$$u20$as$u20$core..future..future..Future$GT$4poll17h0f21cc1725a1b6cdE + 28
39 deno 0x000000010505cfac tokio::runtime::task::raw::poll::h82f097423786f2a6 + 88
40 deno 0x00000001052be3ec deno::main::h4da7c7f8eefca462 + 3632
41 deno 0x0000000104f7ee04 std::sys::backtrace::__rust_begin_short_backtrace::h0c730e3ff58a8863 + 12
42 deno 0x0000000105311410 main + 700
43 dyld 0x000000018eb2b154 start + 2476
``` | bug,upstream | low | Critical |
2,700,390,806 | pytorch | [RFC] Intel GPU distributed Backend integration in `torch-xpu-ops`and registeration in PyTorch | ### 🚀 The feature, motivation and pitch
# Motivation
This Request for Comments (RFC) document aims to propose and discuss Intel GPU distributed support in PyTorch. This initiative begins with integrating the Intel distributed backend (`XCCL`) into the PyTorch component `torch-xpu-ops`, and registering it in the PyTorch distributed Python package.
The RFC outlines a high-level design strategy for this integration. NOTE: the device name for Intel GPU in PyTorch is XPU. Therefore, `XCCL` represents the XPU collective communications library in this RFC.
# Design
**1. Intel GPU distributed Backend integration in PyTorch `torch-xpu-ops`**
In the current design, PyTorch distributed utilizes the c10d::ProcessGroup class as an interface to manage multiple communication backends (inherited from c10::Backend) and to provide collective APIs that can be dispatched based on device type and backend.
Regarding the per-backend implementation, c10d::ProcessGroupNCCL targets the CUDA device with the backend name “nccl”. Similarly, we would like to add c10d::ProcessGroupXCCL for the Intel GPU device with the new backend name “xccl”. We can visualize this design as below.

Regarding code structure, the XCCL backend source code will live in PyTorch [torch-xpu-ops](https://github.com/intel/torch-xpu-ops), and that code will be built into `libtorch_xpu.so`.

**2. Intel distributed Backend register in PyTorch distributed package**
All XPU components in PyTorch are on par with CUDA in PyTorch, as illustrated in the chart below. For example, `libtorch.so` supports both CUDA and XPU stream/device, and `libtorch_cuda.so` contains CUDA ATen and collective ops while `libtorch_xpu.so` contains ATen and collective ops for XPU.

Consequently, we expect the XCCL backend to handle Python binding and backend registration in the same way as the NCCL backend (a hypothetical usage sketch follows the list below).
1) Add the `ProcessGroupXCCL` Python module binding in [torch/csrc/distributed/c10d/init.cpp](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/init.cpp); that code will be built into `libtorch_python.so`.
2) Register `XCCL` name and XCCL backend in PyTorch Python Backend, including distributed [backend_type_map](https://github.com/pytorch/pytorch/blob/main/torch/distributed/distributed_c10d.py#L268), [backend_capability](https://github.com/pytorch/pytorch/blob/main/torch/distributed/distributed_c10d.py#L261) and [default_device_backend_map](https://github.com/pytorch/pytorch/blob/main/torch/distributed/distributed_c10d.py#L256).
3) Add `XCCL` name in native ProcessGroup [backend type list](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroup.hpp#L74).
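For illustration only, once registration is in place we expect end-user code to mirror today's NCCL usage; a hypothetical sketch (the backend/device strings follow the names proposed in this RFC and are not a final or confirmed API):
```python
import torch
import torch.distributed as dist

# Hypothetical sketch: "xccl" is the backend name proposed above for XPU.
# Assumes the usual env:// rendezvous variables (MASTER_ADDR, RANK, ...) are set.
dist.init_process_group(backend="xccl")  # or rely on the proposed default device/backend map
t = torch.ones(4, device="xpu")
dist.all_reduce(t)  # all_reduce is the first collective in the PR plan below
dist.destroy_process_group()
```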
# PR Plan
The code changes involve some parts of PyTorch. To be clear and concise, we will split those changes into 3 PRs for easy to review with below priority.
- [x] Backend ProcessGroupXCCL and collective `allreduce` as an entrypoint to `torch-xpu-ops`.
- [x] Backend ProcessGroupXCCL Python binding in PyTorch and register `XCCL` in PyTorch distributed Python package.
- [x] Remaining collectives to `torch-xpu-ops`.
### Alternatives
_No response_
### Additional context
_No response_
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Major |
2,700,415,877 | pytorch | "RuntimeError: Triton Error [CUDA]: device kernel image is invalid" when running generated Triton code | ### 🐛 Describe the bug
In https://github.com/pytorch/pytorch/issues/140395 I would like to run the generated Triton code for the program the user gave me. But I cannot: it has hard-coded what GPU architecture it is for, and I don't have exactly the same GPU. So when I run it, it fails with
```
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/wiggu.py", line 46, in <module>
triton_red_fused__native_batch_norm_legit_functional_native_batch_norm_backward_threshold_backward_0 = async_compile.triton('triton_', '''
File "/data/users/ezyang/b/pytorch/torch/_inductor/async_compile.py", line 235, in triton
kernel.precompile()
File "/data/users/ezyang/b/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 293, in precompile
compiled_binary, launcher = self._precompile_config(
File "/data/users/ezyang/b/pytorch/torch/_inductor/runtime/triton_heuristics.py", line 520, in _precompile_config
binary._init_handles()
File "/home/ezyang/local/b/pytorch-env/lib/python3.10/site-packages/triton/compiler/compiler.py", line 376, in _init_handles
self.module, self.function, self.n_regs, self.n_spills = driver.active.utils.load_binary(
RuntimeError: Triton Error [CUDA]: device kernel image is invalid
```
This makes me sad. It would be nice if the file did not actually bake in the architecture so it was portable across A100/H100/etc
### Versions
main
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @oulgen @davidberard98 | triaged,module: inductor | low | Critical |
2,700,455,206 | pytorch | How to specify the port for processes with rank > 1 in the Gloo communication backend? | In Pytorch, when performing distributed training using gloo as the communication backend, you only need to specify master_addr and master_port; other processes will actively connect and use random ports for initialization. May I ask if it is possible for other processes to perform initialization by specifying the port?
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Minor |
2,700,493,372 | flutter | Keyboard TextInputAction intermittently not changing when moving focus between fields | ### Steps to reproduce
1. Create two `TextField` widgets, the first with `TextInputAction.next`, the second with `TextInputAction.done`
2. Run the flutter app
3. Tap the first text field to focus it and bring up the android keyboard
4. Tap the "Next" button on the android keyboard to focus the second field
Tested on:
- Samsung S21 Ultra, Android 14
- Samsung Keyboard 5.8.20.7
- Flutter master 3.27.0-1.0.pre.660, and stable 3.24.4. Logs are from using Flutter master channel
- Affects both debug and release builds
### Expected results
The keyboard should display the "Done" keyboard action when the second field has been focussed after tapping the "Next" keyboard button on the first field.
### Actual results
About 60% of the time, the keyboard continues to display the "Next" action on the second field after tapping the "Next" keyboard button on the first field. When tapping "Next" on the second field, the correct button "Done" displays briefly while the keyboard is being dismissed.
I've included videos both of the issue occurring, and the roughly 40% of occasions when the issue does not occur and the code sample works as expected.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
title: 'Flutter Demo',
home: MyHomePage(),
);
}
}
class MyHomePage extends StatelessWidget {
const MyHomePage({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Input action not changing'),
),
body: const Padding(
padding: EdgeInsets.all(24),
child: Column(
mainAxisAlignment: MainAxisAlignment.start,
children: <Widget>[
TextField(
decoration: InputDecoration(
hintText: 'Field with next action',
),
textInputAction: TextInputAction.next,
),
TextField(
decoration: InputDecoration(
hintText: 'Field with done action',
),
textInputAction: TextInputAction.done,
),
],
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
Issue occurring (~60% of the time):
https://github.com/user-attachments/assets/aad4ec4d-4412-4d39-8dec-f7907e1f7810
Working as expected (~40% of the time):
https://github.com/user-attachments/assets/42087cee-9efb-4e6d-9ab6-fb365b0a0ee9
</details>
### Logs
<details open><summary>Logs</summary>
[logs.txt](https://github.com/user-attachments/files/17942008/logs.txt)
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel master, 3.27.0-1.0.pre.660, on Microsoft Windows [Version 10.0.22631.4460], locale en-AU)
• Flutter version 3.27.0-1.0.pre.660 on channel master at C:\Users\t\fvm\versions\master
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision fdc666158e (8 hours ago), 2024-11-27 13:48:41 -0500
• Engine revision ba112add5d
• Dart version 3.7.0 (build 3.7.0-188.0.dev)
• DevTools version 2.41.0-dev.2
[√] Windows Version (11 Pro 64-bit, 23H2, 2009)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\t\AppData\Local\Android\sdk
• Platform android-35, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
This is the JDK bundled with the latest Android Studio installation on this machine.
To manually set the JDK path, use: `flutter config --jdk-dir="path/to/jdk"`.
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.6.2)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.6.33723.286
• Windows 10 SDK version 10.0.22000.0
[√] Android Studio (version 2024.2)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[√] IntelliJ IDEA Community Edition (version 2021.2)
• IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2021.2.2
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
[√] VS Code (version 1.95.3)
• VS Code at C:\Users\t\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.100.0
[√] Connected device (4 available)
• SM G998B (mobile) • R5CR20A5RRA • android-arm64 • Android 14 (API 34)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4460]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.86
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.51
[√] Network resources
• All expected network resources are available.
```
</details>
| a: text input,platform-android,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27 | low | Critical |
2,700,500,190 | go | proposal: add a handful of acquire/release atomics (make the internal runtime versions public) | ### Proposal Details
I asked for this a [decade ago](https://github.com/golang/go/issues/35639), and I want to start this discussion up again.
I'm not saying Go should add the full gamut of relaxed, acquire, release, etc orderings available in other languages.
But I am saying it's useful to have some basic acquire and release loads and stores for 32bit, 64bit, and uintptr.
To see that this is useful one need look no further than the Go runtime which use exactly these.
uintptr:
internal/runtime/atomic.LoadAcquintptr
internal/runtime/atomic.StoreReluintptr
uint32:
internal/runtime/atomic.LoadAcq
internal/runtime/atomic.StoreRel
uint64:
internal/runtime/atomic.LoadAcq64
internal/runtime/atomic.StoreRel64
I would like us to make these 6 functions available in the atomics package, along with a LoadAcq and StoreRel on each of the atomic types. I can make a PR.
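To make the shape concrete, here is what user code has to write today with the sequentially-consistent operations, with the proposed acquire/release variants shown only in comments (the LoadAcq/StoreRel names are my assumption, mirroring the internal functions above, not an accepted API):
```go
package main

import "sync/atomic"

var ready atomic.Uint32
var payload int

func publish() {
	payload = 42
	ready.Store(1) // today: seq-cst; under this proposal: ready.StoreRel(1)
}

func consume() int {
	if ready.Load() == 1 { // today: seq-cst; under this proposal: ready.LoadAcq()
		return payload
	}
	return 0
}

func main() { publish(); _ = consume() }
```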
It's not that these are so advanced and so dangerous that only Go runtime authors should have them. We deserve advanced tools too. For a language with concurrency as a strength, it's lacking in some important primitives to develop performant concurrent data structures. And it's not that we can't have that in Go, it's already implemented, it just isn't public. | Proposal | low | Major |
2,700,517,648 | next.js | [codemod] Unused params Causes next build Failure After Async API Codemod Execution | ### Link to the code that reproduces this issue
https://github.com/CastaChick/next_codemod_example
### To Reproduce
1. Create new next app using `next@14` by `npx create-next-app@14` command.
2. Add new pages with/without unused params
`app/example-unused-params/[id]/page.tsx`
```tsx
interface ExampleUnusedParamsProps {
params: {
id: string;
}
}
// eslint-disable-next-line @typescript-eslint/no-unused-vars
export default function ExampleUnusedParams({params}: ExampleUnusedParamsProps) {
return (
<div>foo</div>
)
}
```
`app/example-used-params/[id]/page.tsx`
```tsx
interface ExampleUsedParamsProps {
params: {
id: string;
}
}
export default function ExampleUsedParams({params}: ExampleUsedParamsProps) {
console.log(params.id)
return (
<div>foo</div>
)
}
```
3. Run the migration command to `next@15` by running `npx @next/codemod@canary upgrade latest`
4. Then build the next app by running `npm run build` ← build failed
```
src/app/example-unused-params/[id]/page.tsx
Type error: Type 'ExampleUnusedParamsProps' does not satisfy the constraint 'PageProps'.
Types of property 'params' are incompatible.
Type '{ id: string; }' is missing the following properties from type 'Promise<any>': then, catch, finally, [Symbol.toStringTag]
```
### Current vs. Expected behavior
In Pull Request #71664, it appears that the transform is not executed when `params` is not accessed within the function. However, even if `params` is not used within the function, the build process (`next build`) will fail if the `params` type is not wrapped in a `Promise`.
To ensure developers can confidently build their projects after running `npx @next/codemod@canary next-async-request-api .`, I suggest either removing this behavior or ignoring unused, synchronously declared `params` during the build process. This adjustment would provide a smoother and more reliable development experience.
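For comparison, this is roughly the shape that `next build` accepts in Next.js 15 even when `params` stays unused (illustrative only, based on the error above):
```tsx
// Illustrative only: wrapping params in a Promise (what the codemod normally
// produces) satisfies the PageProps constraint even if params is never read.
interface ExampleUnusedParamsProps {
  params: Promise<{ id: string }>;
}

// eslint-disable-next-line @typescript-eslint/no-unused-vars
export default function ExampleUnusedParams({ params }: ExampleUnusedParamsProps) {
  return <div>foo</div>;
}
```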
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6030
Available memory (MB): 36864
Available CPU cores: 12
Binaries:
Node: 20.10.0
npm: 10.2.3
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: 15.0.3
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
_No response_ | bug,Output (export/standalone) | low | Critical |
2,700,547,889 | transformers | BatchEncoding.to throws away columns silently, thus no way to pass non-tensor columns such as String in Trainer metric computation | ### System Info
unrelated
### Who can help?
@muellerzr @SunMarc
(original tags, no longer valid)
@ArthurZucker
(re-tag because want to discuss patch release)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi thanks for the library! Consider this simple line:
```
import transformers

x = transformers.tokenization_utils_base.BatchEncoding({'a': ['x','y']})
x.to('cpu') # or cuda or whatever
```
The column `a` is then silently removed :(
This is annoying in the following scenario: For each of my training/eval sample, I have a string column that serves as a tag for it, and want to utilize it when computing metrics and losses.
Then it does not work. After some debugging, the root reason is that it gets silently removed in the `to` mentioned above.
It seems torch does not support a tensor of dtype `str`, thus it seems impossible to have data pass through.
### Expected behavior
(see above) | bug | low | Critical |
2,700,550,815 | tauri | [bug] How to display a window without taking focus in Linux | ### Describe the bug
I am trying to implement a feature where a pop-up floating window does not grab focus from the previously active application. For example, while I am renaming a file, a floating window is opened through a shortcut key; I should remain in the file-name editing state while the window is displayed on the top layer.
Currently, there is no API that can achieve this effect
### Reproduction
I tried the `request_user_attention` and `set_ignore_cursor_events` functions, and also tried not calling `set_focus`.
### Expected behavior
_No response_
### Full `tauri info` output
```text
$ pnpm tauri info INT ✘
> [email protected] tauri /home/witt/codes/open-source/EcoPaste
> tauri "info"
[✔] Environment
- OS: Manjaro 24.1.2 x86_64 (X64)
✔ webkit2gtk-4.1: 2.44.4
✔ rsvg2: 2.58.4
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (2024-05-07)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 23.0.0
- pnpm: 9.12.2
- yarn: 1.22.22
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-log 🦀: 2.0.2
- @tauri-apps/plugin-log : 2.0.0
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-dialog 🦀: 2.0.3
- @tauri-apps/plugin-dialog : 2.0.1
- tauri-plugin-process 🦀: 2.0.1
- @tauri-apps/plugin-process : 2.0.0
- tauri-plugin-single-instance 🦀: 2.0.1
- @tauri-apps/plugin-single-instance : not installed!
- tauri-plugin-updater 🦀: 2.0.2
- @tauri-apps/plugin-updater : 2.0.0
- tauri-plugin-global-shortcut 🦀: 2.0.1
- @tauri-apps/plugin-global-shortcut : 2.0.0
- tauri-plugin-fs 🦀: 2.0.3
- @tauri-apps/plugin-fs : 2.0.2
- tauri-plugin-sql 🦀: 2.0.2
- @tauri-apps/plugin-sql : 2.0.1
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
- tauri-plugin-autostart 🦀: 2.0.1
- @tauri-apps/plugin-autostart : 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,700,609,291 | ant-design | The antd 5.13.0 update causes the keyboard to collapse after every character typed into an Input in iOS 13.3.1 mobile browsers | ### Steps to reproduce
1. Create a new test project with umi4
2. Integrate antd 5.13.0 into the umi4 project
3. Write a simple page and add an Input component to it
4. Start the dev server and copy the local server address
5. Open that address in a browser on iOS 13.3.1
6. Tap the input box and type any character; after every character typed, the keyboard collapses
### What is expected?
In a browser on iOS 13.3.1, the Input should accept input normally; the keyboard should not collapse after each character is typed.
### What is actually happening?
The antd 5.13.0 update causes the keyboard to collapse after every character typed into an Input in iOS 13.3.1 mobile browsers.
| Environment | Info |
| --- | --- |
| antd | 5.13.0 |
| React | "react": "^18.2.0", |
| System | ios 13.3.1 |
| Browser | safari |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 📱Mobile Device | low | Major |
2,700,616,427 | pytorch | Incorrect error message on incorrect byte boundries on _scaled_mm on fp8 | ### 🐛 Describe the bug
Incorrect error message:
```python
import torch
tensor1 = torch.rand(2048,2048, device="cuda").to(torch.float8_e4m3fn)
tensor2 = torch.rand(2048*2048+1, device="cuda").to(torch.float8_e4m3fn)[1:].view(2048,2048).T
print(torch._scaled_mm(tensor1, tensor2, torch.tensor(1.0, device="cuda"), torch.tensor(1.0, device="cuda")))
```
Current error message:
```RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasLtMatmul with transpose_mat1 t transpose_mat2 n m 2048 n 2048 k 2048 mat1_ld 2048 mat2_ld 2048 result_ld 2048 computeType 68 scaleType 0```
correct error message should be something like:
```RuntimeError: Boundary Error: tensor.storage_offset must be divisible by 32 for tensor b```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Fedora Linux 39 (Workstation Edition) (x86_64)
GCC version: (GCC) 13.3.1 20240913 (Red Hat 13.3.1-3)
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.38
Python version: 3.12.7 (main, Oct 1 2024, 00:00:00) [GCC 13.3.1 20240913 (Red Hat 13.3.1-3)] (64-bit runtime)
Python platform: Linux-6.11.7-100.fc39.x86_64-x86_64-with-glibc2.38
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.7
/usr/lib64/libcudnn.so.9.5.1
/usr/lib64/libcudnn_adv.so.9.5.1
/usr/lib64/libcudnn_adv_infer.so.8.9.7
/usr/lib64/libcudnn_adv_train.so.8.9.7
/usr/lib64/libcudnn_cnn.so.9.5.1
/usr/lib64/libcudnn_cnn_infer.so.8.9.7
/usr/lib64/libcudnn_cnn_train.so.8.9.7
/usr/lib64/libcudnn_engines_precompiled.so.9.5.1
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib64/libcudnn_graph.so.9.5.1
/usr/lib64/libcudnn_heuristic.so.9.5.1
/usr/lib64/libcudnn_ops.so.9.5.1
/usr/lib64/libcudnn_ops_infer.so.8.9.7
/usr/lib64/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7940HS w/ Radeon 780M Graphics
CPU family: 25
Model: 116
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 61%
CPU max MHz: 5263.0000
CPU min MHz: 400.0000
BogoMIPS: 7985.44
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 8 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s):
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnxruntime==1.20.0
[pip3] pytorch-lightning==2.4.0
[pip3] rotary-embedding-torch==0.6.5
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[pip3] vector-quantize-pytorch==1.17.4
[conda] Could not collect
cc @yanbing-j @vkuzo @albanD @kadeng @penguinwu | triaged,module: float8 | low | Critical |
2,700,642,360 | svelte | tick() doesn't seem to work | ### Describe the bug
The code below is supposed to change the `clientHeight` of the `div` when the `button` is pressed and then `console.log` the original and the new height. According to the [svelte documentation](https://svelte.dev/docs/svelte/lifecycle-hooks#tick), you can "use `tick` to ensure that the UI is updated before continuing". But it doesn't work and still `console.log`s the original height. If you replace the `tick` function call in line 14 with my crude `customTick` function, it seems to work fine.
### Reproduction
```html
<script>
import { tick } from 'svelte';
const contents = ['Hello World', 'Hello<br />World'];
let currentContent = $state(0);
let clientHeight = $state();
function customTick() {
return new Promise((resolve, reject) => setTimeout(resolve, 1));
}
function changeContent() {
console.log(`Client Height Before Change: ${clientHeight}px`);
currentContent = (currentContent + 1) % 2;
tick().then(() => console.log(`Client Height After Change: ${clientHeight}px`)); // Doesn't work
}
</script>
<button onclick={changeContent}>Change Content</button>
<br /><br />
<div bind:clientHeight style="border: 1px solid red;">{@html contents[currentContent]}</div>
```
[Svelte Playground](https://svelte.dev/playground/4a8870c6f55e4271982626128dcb454a?version=5.2.10)
### Logs
_No response_
### System Info
```shell
I used the playground on svelte.dev
```
### Severity
annoyance | documentation | medium | Critical |
2,700,650,527 | vscode | Invalid outline in search editor | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.95.3
- OS Version: Windows 24H2 26100.2454
Steps to Reproduce:
1. Navigate to search panel
2. Open new search editor
3. Perform some search
4. Navigate to outline panel
5. Jump to some file item in outline panel
6. Open new tab with error `Unable to resolve resource search-editor:#0.38606787525588104`
| bug,outline | low | Critical |
2,700,674,092 | godot | Reordering node highlights it in scene tree but doesn't open it in inspector | ### Tested versions
- Reproducible in: 4.3.stable
### System information
macOS 15.1.1 - Godot v4.3.stable - Forward+ - M1 Pro
### Issue description
Reparenting a node in the scene tree highlights the node and opens it in the inspector. However, reordering a node in the scene tree doesn't open the node in the inspector even though it highlights the node.
Not only is this inconsistent, it also prevents you from then opening the node in the inspector by clicking it, because it is already highlighted and the click starts renaming it instead.
See the video below for a demo of 1. reparenting a node vs 2. reordering a node. Pay attention to the inspector on the right.
https://github.com/user-attachments/assets/9c43c96b-4faf-46a5-9099-b030e8673783
### Steps to reproduce
- Reorder a node in the scene view.
- Observe that the node is not opened in the inspector.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,usability | low | Minor |
2,700,779,715 | transformers | No use `no_sync` context manager when using gradient accumulation w/ deepspeed's zero stage 2 or 3 via `accelerate` | ### Feature request
`trainer.train` with deepspeed stage 2 or 3 via `accelerate` and gradient accumulation does not work as I expected. I suspect this is because deepspeed 0.16.0 introduces `no_sync` context manager https://github.com/microsoft/DeepSpeed/pull/6675.
For example, the error looks like
```
Traceback (most recent call last):
File "main.py", line 70, in <module>
main()
File "main.py", line 90, in main
trainer.train()
File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 2123, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 2480, in _inner_training_loop
with context():
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/accelerate/accelerator.py", line 973, in no_sync
with context():
File "/usr/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/deepspeed/runtime/engine.py", line 1995, in no_sync
assert not self.zero_optimization_partition_gradients(), \
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: no_sync context manager is incompatible with gradient partitioning logic of ZeRO stage 3
```
### Motivation
In my understanding, `trainer.train()` with deepspeed via HF `accelerate` and gradient accumulation uses its own forward implementation rather than `accelerate.accumulate`, defined at
https://github.com/huggingface/transformers/blob/052e652d6d53c2b26ffde87e039b723949a53493/src/transformers/trainer.py#L2474C75-L2482
So `no_sync` is always used, even if we pass
```python
accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': {'sync_each_batch': True}, 'use_configured_state': False}
```
to `TrainingArguments` to use a feature introduced by https://github.com/huggingface/transformers/issues/29425.
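For reference, a sketch of how that config would be passed via `TrainingArguments` (only `accelerator_config` and its `gradient_accumulation_kwargs` come from the discussion above; every other value is a placeholder):
```python
from transformers import TrainingArguments

# Placeholder values except accelerator_config; the DeepSpeed JSON path is assumed.
training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    deepspeed="ds_zero3_config.json",  # ZeRO stage 2 or 3 config file (placeholder name)
    accelerator_config={
        # Intended to force a gradient sync on every batch instead of using no_sync
        "gradient_accumulation_kwargs": {"sync_each_batch": True},
    },
)
```
Even with this config, the trainer still enters the `context()` shown in the traceback, which is why the assertion fires.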
### Your contribution
As suggested in https://github.com/huggingface/transformers/issues/29425#issuecomment-2505015569, adding a warning about these config args to the docs might be helpful, but ideally the behavior itself would be fixed; I do not know the right approach for that. | Feature request | low | Critical |
2,700,784,145 | next.js | Fast Refreshing dynamic component with CSS modules unloads CSS | ### Link to the code that reproduces this issue
https://github.com/edenstrom/nextjs-dynamic-css-modules-reproduction
### To Reproduce
1. Start the application in development mode with `npm run dev`
2. Open MyComponent.tsx
3. Make any change, e.g. change the text in the component
4. CSS module unloads and the red text disappears
> [!NOTE]
> The bug only happens _without_ using Turbopack.
> Activating Turbopack fixes the issue
>
> In our case we still need the legacy bundler in our project
### Current vs. Expected behavior
I expected the CSS to still exist after fast refreshing, but I observed it disappearing instead.
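For context, the setup in the reproduction is roughly the following (file and symbol names here are illustrative; the linked repository is the source of truth):
```tsx
// app/page.tsx — illustrative sketch of the reproduction
import dynamic from 'next/dynamic';

// MyComponent.tsx imports a CSS module, e.g. `import styles from './MyComponent.module.css'`.
// Because it is loaded via next/dynamic, editing it triggers a Fast Refresh during which the
// CSS module's styles are unloaded (webpack only; Turbopack behaves correctly).
const MyComponent = dynamic(() => import('./MyComponent'));

export default function Page() {
  return <MyComponent />;
}
```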
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 22.11.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.14.1
Relevant Packages:
next: 15.0.4-canary.30 // Latest available version is detected (15.0.4-canary.30).
eslint-config-next: N/A
react: 19.0.0-rc-b01722d5-20241114
react-dom: 19.0.0-rc-b01722d5-20241114
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Lazy Loading
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I've tested a lot of canary versions, and the bug was introduced in 15.0.0-canary.10.
It's not reproducible in 15.0.0-canary.9 or below. | bug,Lazy Loading,dynamicIO,CSS | low | Critical |
2,700,910,751 | flutter | The text inside textfield overlaps with scrollbar in web and desktop platforms. | ### Steps to reproduce
1. Create a TextFormField
2. Add the `maxLines: 2, minLines: 1` properties
3. Enter more than 2 lines of text; a scrollbar appears inside the TextFormField
### Expected results
The scrollbar should not overlap the text content of the TextFormField.
### Actual results
The scrollbar overlaps the text content of the TextFormField.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(const MyApp());
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
const appTitle = 'Form Validation Demo';
return MaterialApp(
title: appTitle,
home: Scaffold(
appBar: AppBar(
title: const Text(appTitle),
),
body: const MyCustomForm(),
),
);
}
}
// Create a Form widget.
class MyCustomForm extends StatefulWidget {
const MyCustomForm({super.key});
@override
MyCustomFormState createState() {
return MyCustomFormState();
}
}
// Create a corresponding State class.
// This class holds data related to the form.
class MyCustomFormState extends State<MyCustomForm> {
// Create a global key that uniquely identifies the Form widget
// and allows validation of the form.
//
// Note: This is a GlobalKey<FormState>,
// not a GlobalKey<MyCustomFormState>.
final _formKey = GlobalKey<FormState>();
@override
Widget build(BuildContext context) {
// Build a Form widget using the _formKey created above.
return Form(
key: _formKey,
child: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
TextFormField(
// The validator receives the text that the user has entered.
scrollPadding: const EdgeInsets.symmetric(vertical: 8.0,horizontal: 8.0),
maxLines: 2,
minLines: 1,
decoration: InputDecoration(
contentPadding: const EdgeInsets.symmetric(vertical: 8.0,horizontal: 20.0),
),
validator: (value) {
if (value == null || value.isEmpty) {
return 'Please enter some text';
}
return null;
},
),
Padding(
padding: const EdgeInsets.symmetric(vertical: 16),
child: ElevatedButton(
onPressed: () {
// Validate returns true if the form is valid, or false otherwise.
if (_formKey.currentState!.validate()) {
// If the form is valid, display a snackbar. In the real world,
// you'd often call a server or save the information in a database.
ScaffoldMessenger.of(context).showSnackBar(
const SnackBar(content: Text('Processing Data')),
);
}
},
child: const Text('Submit'),
),
),
],
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/dad13d51-f890-4cfe-82a9-f410b926873b
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.4, on macOS 14.7.1 23H222 darwin-arm64, locale
en-IN)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[✓] VS Code (version 1.95.3)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| a: text input,framework,f: material design,platform-web,a: desktop,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27 | low | Minor |
2,700,920,012 | vscode | Missing syntax highlighting for PHP `if-else` alternate syntax |
Type: <b>Bug</b>
While using the alternate syntax for the `if` statement in PHP, the `else` keyword is not highlighted the same way as the `if` keyword.

VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) Ultra 5 125H (18 x 2995)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.61GB (5.37GB free)|
|Process Argv|--crash-reporter-id 42ce4d94-f018-448f-be6f-b9071de35767|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (22)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-intelephense-client|bme|1.12.6
simple-react-snippets|bur|1.2.8
vscode-eslint|dba|3.0.10
intelli-php-vscode|DEV|0.12.15062
phptools-vscode|DEV|1.53.16379
profiler-php-vscode|DEV|1.53.16379
es7-react-js-snippets|dsz|4.4.3
prettier-vscode|esb|11.0.0
code-runner|for|0.12.2
copilot|Git|1.246.0
copilot-chat|Git|0.22.4
path-autocomplete|ion|1.25.0
remote-wsl|ms-|0.88.5
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
ts-file-path-support|ms-|1.0.0
vscode-react-native|msj|1.13.0
es7-react-js-snippets|rod|1.9.3
vscode-icons|vsc|12.9.0
vscode-wakatime|Wak|24.9.1
five-server|yan|0.3.1
(2 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
724cj586:31013169
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31185842
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
| bug,php,confirmation-pending | low | Critical |
2,700,923,139 | pytorch | torch.compile for division gives different numeric output vs eager mode division: torch.tensor/torch.tensor | ### 🐛 Describe the bug
How to reproduce:
```python
import torch
from decimal import Decimal
y = torch.tensor([7.0], device="cuda") # torch.tensor
x = 11.0 # const
@torch.compile
def compiled_divide_v1(x, y):
return x / y
@torch.compile
def compiled_divide_v2(x: torch.Tensor, y: torch.Tensor):
return (x / y.to(torch.float64)).to(torch.float32)
# torch.compile with "/"
Decimal(compiled_divide_v1(torch.tensor(x), y).item()) # 1.57142865657806396484375
# const/torch.tensor
Decimal((x / y).item()) # 1.57142865657806396484375
# torch.compile with "/" and casting to torch.float64
Decimal(compiled_divide_v2(x, y).item()) # 1.5714285373687744140625
# torch.div
Decimal(torch.div(x, y).item()) # 1.5714285373687744140625
# true_divide
Decimal(torch.true_divide(torch.tensor(x, device="cuda"), y).item()) # 1.5714285373687744140625
# torch.tensor/torch.tensor
Decimal((torch.tensor(x) / y).item()) # 1.5714285373687744140625
```
It is surprising that `torch.compile` with `/`, as implemented in `compiled_divide_v1`, returns numerically different output (differing beyond 1e-6) compared to `torch.tensor/torch.tensor`: 1.57142865657806396484375 vs 1.5714285373687744140625.
To make `compiled_divide_v1` return the same output as `torch.tensor/torch.tensor`, I need to use `torch.compile` with `/` plus a cast to `torch.float64`, as done in `compiled_divide_v2`.
I observe that the divisions `compiled_divide_v1` and `const/torch.tensor` do not use round-to-nearest division per the IEEE standard and return 1.57142865657806396484375, which differs from
the divisions `compiled_divide_v2`, `torch.div`, `torch.true_divide`, and `torch.tensor/torch.tensor`, which do use rounding to nearest per the IEEE standard, as done in [triton.language.div_rn](https://triton-lang.org/main/python-api/generated/triton.language.div_rn.html), and return 1.5714285373687744140625.
It is not clear whether all of the above is a bug or a feature, but at the very least it would be great to have proper documentation for `torch.true_divide`, `torch.div`, `torch.tensor/torch.tensor`, `const/torch.tensor`, and `torch.compile` with `/`. Currently the descriptions of these ops do not mention anything about IEEE float rounding.
For example, the descriptions of [torch.div](https://pytorch.org/docs/stable/generated/torch.div.html#torch.div) and [torch.true_divide](https://pytorch.org/docs/stable/generated/torch.true_divide.html) do not say anything about rounding to nearest per the IEEE standard. `const/torch.tensor` is not documented at all (yet it returns different output than `torch.div`).
A good documentation example is [triton.language.div_rn](https://triton-lang.org/main/python-api/generated/triton.language.div_rn.html), which states that it uses rounding to nearest per the IEEE standard.
### Versions
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX 3500 Ada Generation Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 2
CPU max MHz: 5200.0000
CPU min MHz: 400.0000
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.5.40
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] onnxruntime==1.18.0
[pip3] optree==0.13.0
[pip3] torch==2.3.1
[pip3] torchaudio==2.3.1
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | high priority,triaged,oncall: pt2,module: inductor | low | Critical |
2,700,956,874 | kubernetes | Kubelet takes more than 10 minutes to pull up the pod | ### What happened?
The kubelet takes more than 10 minutes to start the pod. After adding logs to localize the problem, I found that `dswp.podManager.GetPods()` in the `findAndAddNewPods()` method did not return the corresponding pod; I suspect there is a problem with obtaining a lock. This causes `waitForVolumeAttach(volumeToMount cache.VolumeToMount)` to start slowly.
`pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go`, `findAndAddNewPods()`
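For readers without the source at hand, the populator loop in question looks roughly like this (a paraphrased sketch, not the exact upstream code):
```go
// Paraphrase of desired_state_of_world_populator.go: if GetPods() does not yet
// return the newly scheduled pod, its volumes are not added to the desired state,
// so operationExecutor.VerifyControllerAttachedVolume only starts on a later loop.
func (dswp *desiredStateOfWorldPopulator) findAndAddNewPods() {
	for _, pod := range dswp.podManager.GetPods() {
		dswp.processPodVolumes(pod /*, ... */)
	}
}
```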
### What did you expect to happen?
After the pod is scheduled, `operationExecutor.VerifyControllerAttachedVolume` can start normally, without the long delay.
### How can we reproduce it (as minimally and precisely as possible)?
Resolve the issue where GetPods() cannot access all pods
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
v1.28.1
</details>
### Cloud provider
<details>
na
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/node,triage/needs-information,needs-triage | low | Major |
2,701,025,158 | react-native | Keyboard on iOS 18 changes from uppercase to lowercase when changing the text input field | ### Description
I encountered an issue where the keyboard changes from uppercase to lowercase when switching focus between two TextInput fields. Specifically, when focusing from Text Input Field A to Text Input Field B, and then back from Text Input Field B to Text Input Field A, the keyboard behavior unexpectedly switches between uppercase and lowercase.
This issue occurs on iOS 18.
### Steps to reproduce
1. Add two TextInput fields.
2. Focus on TextInput 1 and input some text (e.g., "T").
3. Focus on TextInput 2 and input some text (e.g., "B").
4. Focus back on TextInput 1.
5. Focus back on TextInput 2.
6. Observe that the keyboard switches from uppercase to lowercase and the first letter of the text remains uppercase.
```jsx
<View style={styles.mb4}>
  <TextInput
    accessibilityLabel="Text input field"
    style={{
      width: 100,
      height: 40, // fixed height for the input box
      borderWidth: 1, // visible border around the field
      backgroundColor: 'red', // background color to make the field easy to spot
    }}
  />
</View>
<View>
  <TextInput
    accessibilityLabel="Text input field"
    style={{
      width: 100,
      height: 40, // fixed height for the input box
      borderWidth: 1, // visible border around the field
      backgroundColor: 'red', // background color to make the field easy to spot
    }}
  />
</View>
```
https://github.com/user-attachments/assets/515136b9-bbd8-45a5-9112-db6cbce6d148
### React Native Version
0.76.3
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.0
CPU: (11) arm64 Apple M3 Pro
Memory: 244.06 MB / 18.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.15.1
path: ~/.nvm/versions/node/v20.15.1/bin/node
Yarn: Not Found
npm:
version: 10.7.0
path: ~/.nvm/versions/node/v20.15.1/bin/npm
Watchman:
version: 2024.09.23.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods: Not Found
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK:
API Levels:
- "31"
- "33"
- "34"
Build Tools:
- 34.0.0
- 35.0.0
System Images:
- android-34-ext12 | Google Play ARM 64 v8a
- android-34 | Intel x86_64 Atom
- android-34 | Google APIs ARM 64 v8a
- android-34 | Google APIs Intel x86_64 Atom
- android-35 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.19072.14.2412.12360217
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 3.3.4
path: /Users/lehuu/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://snack.expo.dev/pT5crg9nC5joocIBBOZGo
### Screenshots and Videos
https://github.com/user-attachments/assets/847035b3-39e6-4da6-9bb8-e110302533ba
| Platform: iOS,Issue: Author Provided Repro,API: Keyboard,Needs: Repro,Needs: Attention | low | Minor |
2,701,025,260 | vscode | The VS Code Server with Remote SSH extension or Code Server experiences a memory leak | Does this issue occur when all extensions are disabled?: No
- VS Code Version: VS Code with the Remote SSH extension, or Code Server
- OS Version: Connecting from Windows to a Linux server
Phenomenon
The VS Code Server with Remote SSH extension or Code Server experiences a memory leak issue
https://github.com/microsoft/vscode/blob/921f7ca0fc92817f6ab30ecc93850870cd0bd414/src/vs/base/parts/ipc/node/ipc.net.ts#L646 ,
The length of the array '_recordedInflateBytes' only grows over time and is not released unless the entry process is closed.
The snapshot information is as follows:



```
PersistentProtocol <- src\vs\base\parts\ipc\common\ipc.net.ts
_socket: WebSocketNodeSocket ->
_flowManager:WebSocketFlowManager ->
_zlibInflateStream: ZlibInflateStream ->
_recordedInflateBytes: VSBuffer[] -> The length of the array only grows with time (65411 items in heapsnap shot)
```
Steps to Reproduce:
1. In VSCode version 1.94.2 with Remote SSH extension ,
2. disable the 'Use Exec server' option to switch the VSCode server to a WebSocket connection type to reproduce the issue.
3. Wait for one week and check the entry process.
1. Code Server.
2. Wait for one week and check the entry process.
Thought:
The recording of inflated bytes results in a memory leak. Recording inflated bytes is not necessary if the socket will not be sent to the ExtensionHost process. The proposed solution is to stop sending the socket to the ExtensionHost process; as a result, the need to record inflated bytes is eliminated. A named pipe can be used to communicate with the ExtensionHost instead of passing the socket.
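A rough sketch of that direction (illustrative only, not the actual vscode source):
```ts
// Sketch: only keep a copy of inflated bytes when the raw socket may later be handed
// over to the extension host; otherwise the recorded buffer list grows for the lifetime
// of the connection, which is the leak observed above.
class InflateRecorderSketch {
    private readonly recordedInflateBytes: Buffer[] = [];

    constructor(private readonly recordInflateBytes: boolean) {}

    onInflatedChunk(chunk: Buffer): void {
        if (this.recordInflateBytes) {
            this.recordedInflateBytes.push(chunk);
        }
        // ...emit the chunk to consumers...
    }
}
```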
https://github.com/microsoft/vscode/blob/389bf6c37fb5c3e177abe762656ea8728c2cc3a4/src/vs/server/node/extensionHostConnection.ts#L128
The `_canSendSocket` property is always true for non-Windows platforms, and the named pipe is only used on Windows. Are there any problems or concerns with this approach? | bug,freeze-slow-crash-leak,remote,performance | low | Critical |
2,701,071,499 | ui | [bug]: Confusing error message | ### Describe the bug
I did this

Which gave this error message:

But I didn't use DialogTitle, so I spent a long time figuring out what's going on. It should say:
> Error: `SheetTitle` must be used within `Dialog` or `Sheet`
### Affected component/components
Sheet
### How to reproduce
1. Use `<SheetTitle>` without a `<Sheet>` parent
### Codesandbox/StackBlitz link
it's pretty easy to trigger this bug
### Logs
_No response_
### System Info
```bash
none, it's straightforward.
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,701,086,556 | ant-design | Fixed columns with children cannot be fixed to the right side | ### Reproduction link
https://codesandbox.io/p/sandbox/biao-tou-fen-zu-antd-5-22-2-forked-jx3y3f
### Steps to reproduce
Open the demo and scroll to the right; the columns that have children cannot be fixed.
### What is expected?
The grouped column should be fixed as a whole.
### What is actually happening?
It cannot be fixed.
| Environment | Info |
| --- | --- |
| antd | 5.22.2 |
| React | 18 |
| System | mac |
| Browser | chrome |
| 🐛 Bug,help wanted,Inactive | low | Minor |
2,701,093,707 | ollama | Installation not working on Fedora 41 Linux | ### What is the issue?
```
curl -fsSL https://ollama.com/install.sh | sh
> Installing ollama to /usr/local
> [sudo] password for bns:
> >>> Downloading Linux amd64 bundle
> ######################################################################## 100.0%
> >>> Creating ollama user...
> >>> Adding ollama user to render group...
> >>> Adding ollama user to video group...
> >>> Adding current user to ollama group...
> >>> Creating ollama systemd service...
> >>> Enabling and starting ollama service...
> Created symlink '/etc/systemd/system/default.target.wants/ollama.service' → '/etc/systemd/system/ollama.service'.
> >>> Installing NVIDIA repository...
> Unknown argument "--add-repo" for command "config-manager". Add "--help" for more information about the arguments.
```
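For context, Fedora 41 ships dnf5, which dropped `config-manager --add-repo` in favor of a subcommand. The equivalent call would be roughly the following (assumed syntax — please verify against `dnf5 config-manager --help`; the repo URL is a placeholder):
```shell
# dnf4-style invocation used by the install script today
sudo dnf config-manager --add-repo <nvidia-repo-url>

# dnf5-style equivalent (assumed)
sudo dnf config-manager addrepo --from-repofile=<nvidia-repo-url>
```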
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug | low | Major |
2,701,175,699 | electron | Electron fails to launch on Windows when null device is disabled | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10
### What arch are you using?
x64
### Last Known Working Electron version
28.3.3
### Expected Behavior
Electron launches normally.
### Actual Behavior
Electron crashes in node::InitializeOncePerProcess and fails to launch.

### Testcase Gist URL
_No response_
### Additional Information
I submitted this issue before with another account, but that account was suspended and my issue and pull request now return 404. I don't know what happened to the account and failed to get it back after a long struggle, so I am submitting this issue again with my new account.
It is rare but legal to disable the null device on Windows, and in that situation Electron always fails to launch because Node tries to reinitialize stdio with the null device.
To reproduce this issue, set Start = 4 (normally 1) in the registry key HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\Null

This issue was introduced in electron v29.0.0, when node was upgraded from v18.2.8 to v20.9.0 in this commit https://github.com/electron/electron/commit/9c94fd7afb4706c2d2228a455f0874a0370cfe1d

Since Node v19.0.0, PlatformInit is always invoked, whereas in older versions it was controlled by the kRunPlatformInit flag and was not invoked in Electron. https://github.com/nodejs/node/commit/b6971602564fc93c536ad469947536b487c968ea

In PlatformInit, stdio is reinitialized with nul device.
```
#ifdef _WIN32
if (!(flags & ProcessInitializationFlags::kNoStdioInitialization)) {
for (int fd = 0; fd <= 2; ++fd) {
auto handle = reinterpret_cast<HANDLE>(_get_osfhandle(fd));
if (handle == INVALID_HANDLE_VALUE ||
GetFileType(handle) == FILE_TYPE_UNKNOWN) {
// Ignore _close result. If it fails or not depends on used Windows
// version. We will just check _open result.
_close(fd);
if (fd != _open("nul", _O_RDWR)) ABORT();
}
}
}
#endif // _WIN32
```
However, in Windows GUI apps, _get_osfhandle will always fail because stdio is not associated with a stream by default.
https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/get-osfhandle?view=msvc-170
> When stdin, stdout, and stderr aren't associated with a stream (for example, in a Windows application without a console window), the file descriptor values for these streams are returned from [_fileno](https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/fileno?view=msvc-170) as the special value -2. Similarly, if you use a 0, 1, or 2 as the file descriptor parameter instead of the result of a call to _fileno, _get_osfhandle also returns the special value -2 when the file descriptor is not associated with a stream, and does not set errno. However, this is not a valid file handle value, and subsequent calls that attempt to use it are likely to fail.
So Node will always open the null device to replace stdio. Once the null device is disabled, Node fails to open it and aborts here.
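For illustration, opting out of that code path from an embedder would look roughly like this — a sketch against Node's embedder API using the flag discussed in the conclusion below; Electron's real initialization code differs:
```cpp
#include <string>
#include <vector>

#include "node.h"  // Node embedder API

// Sketch only: ask Node to skip the stdio/nul reinitialization performed in
// PlatformInit. Electron would combine this flag with the ones it already passes.
void InitializeNodeSkippingStdio(const std::vector<std::string>& args) {
  auto init_result = node::InitializeOncePerProcess(
      args, node::ProcessInitializationFlags::kNoStdioInitialization);
  (void)init_result;  // error handling omitted in this sketch
}
```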
PlatformInit was not invoked and the Node StdioInitialization behavior was not used in old Electron versions, which worked fine. So I think Electron doesn't depend on this behavior, and it is safe to add the flag node::ProcessInitializationFlags::kNoStdioInitialization when initializing Node in Electron to solve this issue. | platform/windows,bug :beetle:,has-repro-comment,33-x-y | low | Critical |
2,701,185,422 | vscode | Don't import XYZ.contribution-files from non contribution/entry-files | We discussed the side effects that contribution files have and that "normal" files inherit these side effects as soon as they import them. A couple of takeaways:
* contribution files shouldn't export anything
* contribution files shouldn't be imported by normal files
Let's look into lint rules for this | debt,engineering | low | Minor |
2,701,185,536 | flutter | [flutter_markdown] Allow specifying a custom image error builder | ### Use case
Developers that want to customize the layout of the error builder for images in markdown content.
### Proposal
In https://github.com/flutter/flutter/issues/158428 two problems were reported:
1) Invalid image urls caused crashes
2) there was no way to customize the error builder for the `Image` widget that resulted from parsing the markdown which had malformed image links (or errors during the loading of the image, per the functionality in Image.network's errorBuilder)
PR https://github.com/flutter/packages/pull/8058 addressed point 1, but point 2 was not yet implemented.
I propose we add an `imageErrorBuilder`, of type `ImageErrorWidgetBuilder?` next to https://github.com/flutter/packages/blob/main/packages/flutter_markdown/lib/src/widget.dart#L223
to fix the remaining work item. This error builder can be used in place of the default `kDefaultImageErrorWidgetBuilder`, when provided.
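Usage could then look something like this (hypothetical, since `imageErrorBuilder` does not exist yet; the rest uses the existing flutter_markdown API):
```dart
import 'package:flutter/material.dart';
import 'package:flutter_markdown/flutter_markdown.dart';

Widget buildMarkdown(String source) {
  return MarkdownBody(
    data: source,
    // Hypothetical parameter proposed above, mirroring Image.network's errorBuilder.
    imageErrorBuilder: (BuildContext context, Object error, StackTrace? stackTrace) {
      return const Icon(Icons.broken_image);
    },
  );
}
```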
We probably also want to wire the image error builder up to https://github.com/flutter/packages/blob/main/packages/flutter_markdown/lib/src/builder.dart#L590
and https://github.com/flutter/packages/blob/main/packages/flutter_markdown/lib/src/builder.dart#L607 | c: new feature,a: images,package,c: proposal,team-ecosystem,P2,p: flutter_markdown,triaged-ecosystem | low | Critical |
2,701,195,143 | pytorch | Test in `test/test_overrides.py` failed with AssertionError | ### 🐛 Describe the bug
When developing on PR #134826, I built PyTorch from source and ran the tests in `test/test_overrides.py`. I got the output below with an AssertionError:
```bash
$ python test/test_overrides.py
........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................F.
======================================================================
FAIL: test_warn_on_invalid_torch_function_tensor_subclass (__main__.TestTorchFunctionWarning)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/tywang/Projects/pytorch/torch/testing/_internal/common_utils.py", line 3099, in wrapper
method(*args, **kwargs)
File "/home/tywang/Projects/pytorch/test/test_overrides.py", line 1165, in test_warn_on_invalid_torch_function_tensor_subclass
torch.abs(b)
AssertionError: UserWarning not triggered
To execute this test, run the following from the base repo dir:
python test/test_overrides.py TestTorchFunctionWarning.test_warn_on_invalid_torch_function_tensor_subclass
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1466 tests in 0.400s
FAILED (failures=1)
```
But if I run that single failed test, it will pass:
```bash
$ python test/test_overrides.py TestTorchFunctionWarning.test_warn_on_invalid_torch_function_tensor_subclass
.
----------------------------------------------------------------------
Ran 1 test in 0.146s
OK
```
### Versions
$ python ./torch/utils/collect_env.py
Collecting environment information...
PyTorch version: 2.6.0a0+git0f84ba2
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Kali GNU/Linux Rolling (x86_64)
GCC version: (Debian 11.5.0-1) 11.5.0
Clang version: 16.0.6 (27+b1)
CMake version: version 3.31.1
Libc version: glibc-2.40
Python version: 3.9.20 | packaged by conda-forge | (main, Sep 30 2024, 17:49:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.10.9-amd64-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.216.01
cuDNN version: Probably one of the following:
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.9.3.0
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv.so.9.3.0
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn.so.9.3.0
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.3.0
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_graph.so.9.3.0
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.3.0
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12700K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 95%
CPU max MHz: 5000.0000
CPU min MHz: 800.0000
BogoMIPS: 7219.20
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (12 instances)
L1i cache: 512 KiB (12 instances)
L2 cache: 12 MiB (9 instances)
L3 cache: 25 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0a0+git0f84ba2
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 1.22.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0a0+git0f84ba2 dev_0 <develop> | triaged,module: testing | low | Critical |
2,701,263,715 | vscode | Git - Command `git.openFile` does not do the same thing as the "Open File" icon button on a file that has been committed in diff view | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96.0-insider
- OS Version: Ubuntu 24.04.1 LTS (6.8.0-49-generic)
Steps to Reproduce:
1. Setup a keybinding for the `git.openFile` command (in this case, it is `ctrl+alt+a`).
```json5
// keybindings.json
[
// ...
{
"key": "ctrl+alt+a",
"command": "git.openFile"
}
]
```
2. Open a project with a git repository.
3. Make changes on a file (or a new file).
4. Stage the file.
5. Select the staged file to show the diff view. This should be readonly.
6. Commit the changes.
7. Then, click on the "Open File" icon button.
- This should open the file (writable).
- The tooltip also shows the keybinding for this action (`ctrl+alt+a` as expected).
8. Going back to the previous tab (the diff view), use the keybinding to open the file.
- Unexpectedly, this opens a readonly editor even though "Open File" must be related to this keybinding because of the tooltip.
- Using the keybinding on this readonly editor doesn't seem to do anything.
Additionally, the "Git: Open File" command is not available in the Command Palette (`ctrl+shift+p`) for both the diff view and the readonly editor, which I assume is `git.openFile`.
https://github.com/user-attachments/assets/d2a8a67e-c204-43d1-9520-9931fad16614
As reference, the [GitLens](https://github.com/gitkraken/vscode-gitlens) extension has a `gitlens.openWorkingFile` command that opens writable file from this diff view, similar to the "Open File" icon button. | bug,git | low | Critical |
2,701,300,828 | node | fs.rmSync('速') crash without throw | ### Version
23.3.0
### Platform
```text
Windows 10
```
### Subsystem
_No response_
### What steps will reproduce the bug?
When calling `fs.rmSync('速')` on files whose name contains “速”, Node 23.3.0 crashes without throwing.
### How often does it reproduce? Is there a required condition?
Everytime
### What is the expected behavior? Why is that the expected behavior?
Delete the file normally.
### What do you see instead?
The program just crashed.
### Additional information
There are other special characters that can cause similar problems, such as “請”. This problem did not occur in previous versions. | confirmed-bug,fs,windows,good first issue | low | Critical |
2,701,591,261 | pytorch | torch._dynamo.exc.Unsupported: call_method UserDefinedObjectVariable(set) __contains__ | ### 🐛 Describe the bug
```
import functools
import torch
import torch.utils._device
old_device_constructors_ = torch.utils._device._device_constructors()
@functools.lru_cache(1)
def origin_device_constructors():
global old_device_constructors_
return old_device_constructors_
torch.utils._device._device_constructors = origin_device_constructors
@torch.compile(fullgraph=True)
def split(x):
return x.split(4, 0)
x = torch.zeros(12)
res = split(x)
with torch.device("cpu"):
cpu_res = split(x)
for res, cpu_res in zip(res, cpu_res):
print(torch.equal(res, cpu_res))
```
### Error logs
```
File "/workspace/test.py", line 23, in <module>
cpu_res = split(x)
File "/workspace/pytorch/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 1401, in __call__
return self._torchdynamo_orig_callable(
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 549, in __call__
return _compile(
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 982, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 712, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/workspace/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 747, in _compile_inner
out_code = transform_code_object(code, transform)
File "/workspace/pytorch/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 233, in _fn
return fn(*args, **kwargs)
File "/workspace/pytorch/torch/_dynamo/convert_frame.py", line 664, in transform
tracer.run()
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 2843, in run
super().run()
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 1034, in run
while self.step():
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 946, in step
self.dispatch_table[inst.opcode](self, inst)
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 641, in wrapper
return inner_fn(self, inst)
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 1638, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 879, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/workspace/pytorch/torch/_dynamo/variables/misc.py", line 1032, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
File "/workspace/pytorch/torch/_dynamo/variables/tensor.py", line 566, in call_method
return dispatch_torch_function(
File "/workspace/pytorch/torch/_dynamo/variables/torch_function.py", line 543, in dispatch_torch_function
res = tx.symbolic_torch_function_state.call_torch_function_mode(
File "/workspace/pytorch/torch/_dynamo/variables/torch_function.py", line 274, in call_torch_function_mode
return cur_mode.call_torch_function(tx, fn, types, args, kwargs)
File "/workspace/pytorch/torch/_dynamo/variables/torch_function.py", line 392, in call_torch_function
return call_torch_function(
File "/workspace/pytorch/torch/_dynamo/variables/torch_function.py", line 506, in call_torch_function
return tx.inline_user_function_return(torch_function_var, tf_args, {})
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 885, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 3047, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 3173, in inline_call_
tracer.run()
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 1034, in run
while self.step():
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 946, in step
self.dispatch_table[inst.opcode](self, inst)
File "/workspace/pytorch/torch/_dynamo/symbolic_convert.py", line 2139, in CONTAINS_OP
self.push(right.call_method(self, "__contains__", [left], {}))
File "/workspace/pytorch/torch/_dynamo/variables/user_defined.py", line 825, in call_method
return super().call_method(tx, name, args, kwargs)
File "/workspace/pytorch/torch/_dynamo/variables/base.py", line 424, in call_method
unimplemented(f"call_method {self} {name} {args} {kwargs}")
File "/workspace/pytorch/torch/_dynamo/exc.py", line 313, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: call_method UserDefinedObjectVariable(set) __contains__ [TorchInGraphFunctionVariable(<function Tensor.split at 0x7f7b19bdf0a0>)] {}
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0a0+e000cf0
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-16GB
GPU 1: Tesla V100-SXM2-16GB
GPU 2: Tesla V100-SXM2-16GB
GPU 3: Tesla V100-SXM2-16GB
Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 4
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 44 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvidia-cudnn-frontend==1.7.0
[pip3] nvidia-nccl-cu12==2.22.3
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.2
[pip3] optree==0.13.0
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.0.0+dedb7bdf3
[pip3] torch==2.5.0a0+e000cf0
[pip3] torch_tensorrt==2.5.0a0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.20.0a0
[conda] Could not collect
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,701,595,110 | electron | [Bug]: On MacOS, desktopCapturer make window with resizeable:false resizeable | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.1
### What operating system(s) are you using?
macOS
### Operating System Version
Sequoia 15.1.1
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
The window should not be resizable
### Actual Behavior
After 10 seconds, `desktopCapturer.getSources({ types: ['screen'] })` is called, and the window becomes resizable
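The gist linked below is the authoritative testcase; its shape is roughly this main-process sketch:
```js
// Main process sketch: a window created with resizable: false becomes resizable
// after desktopCapturer.getSources() runs (here after 10 seconds).
const { app, BrowserWindow, desktopCapturer } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 400, height: 300, resizable: false });
  win.loadFile('index.html'); // any content; the bug concerns the window itself

  setTimeout(async () => {
    await desktopCapturer.getSources({ types: ['screen'] });
    console.log('resizable after getSources:', win.isResizable()); // reported as true
  }, 10000);
});
```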
### Testcase Gist URL
https://gist.github.com/peernohell/301b616884c0ff02a77dda56b59aa8bd
### Additional Information
You can use the Electron Fiddle to reproduce the issue and use the latest Electron version.
| platform/macOS,bug :beetle:,status/confirmed,has-repro-gist,component/desktopcapturer,33-x-y | low | Critical |
2,701,612,152 | bitcoin | ARM Windows build and release | This was brought up recently:
* https://bitcointalk.org/index.php?topic=5517601.0
* https://groups.google.com/d/msgid/bitcoindev/e5b06aaa-1fe9-4c8f-a0ea-db10f8a7e48cn%40googlegroups.com | Windows,Build system,Upstream | low | Major |
2,701,646,125 | pytorch | Replace Miniconda with Miniforge as an option | Subtask of #138696: [CI/CD] Deprecating PyTorch’s official Anaconda channel.
cc @seemethere @malfet @pytorch/pytorch-dev-infra | module: ci,triaged,enhancement,better-engineering | low | Minor |
2,701,808,659 | react-native | Cocoapods-art Installation gives : Unable to find a specification for `SocketRocket (= 0.7.1)` depended upon by `React-Core` | ### Description
We build a package for native modules, and we import a native iOS SDK using the cocoapods-art plugin for JFrog Artifactory.
pod install fails with :
Unable to find a specification for `SocketRocket (= 0.7.1)` depended upon by `React-Core`
only when we use :
plugin 'cocoapods-art', :sources => [
'OUR SDK'
]
Before upgrading the application to RN ^0.73 we had:
Unable to find a specification for libevent (~> 2.1.12)
IMPORTANT:
We are able to work around the bug as follows:
1. remove dependency from package.json
2. rm package.json && node_modules && cd ios && pod deintegrate && rm podfile.lock && cd ..
3. npm i && cd ios && pod install && cd ..
4. npm i dependency
5. cd ios
6. put this code into the Podfile: `plugin 'cocoapods-art', :sources => ['OUR SDK']` (see the Podfile sketch after this list)
7. pod install
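For clarity, the Podfile portion from step 6 looks roughly like this (the repo name is a placeholder; explicitly re-adding the public CDN source is only a guess at a mitigation, not something verified here):
```ruby
# Podfile (sketch)
plugin 'cocoapods-art', :sources => [
  'OUR SDK' # placeholder for the internal Artifactory spec repo
]

# Guess: keep the default public spec source declared as well so pods such as
# SocketRocket (pulled in by React-Core) can still be resolved.
source 'https://cdn.cocoapods.org/'
```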
### Steps to reproduce
1. npm install (including dependency in package.json)
2. cd ios
3. pod install
### React Native Version
*
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.7.1
CPU: (11) arm64 Apple M3 Pro
Memory: 562.38 MB / 36.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 23.1.0
path: /opt/homebrew/bin/node
Yarn:
version: 1.22.22
path: /opt/homebrew/bin/yarn
npm:
version: 10.9.0
path: /opt/homebrew/bin/npm
Watchman:
version: 2024.11.04.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /opt/homebrew/lib/ruby/gems/3.3.0/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12550806
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /opt/homebrew/opt/openjdk@17/bin/javac
Ruby:
version: 3.3.6
path: /opt/homebrew/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0
wanted: 15.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: 0.76.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
Found 1 module for target `ReactNativeSampleAppNew`
Auto-linking React Native module for target `ReactNativeSampleAppNew`: OUR_SDK
Framework build type is static library
[Codegen] Adding script_phases to ReactCodegen.
[Codegen] Generating ./build/generated/ios/ReactCodegen.podspec.json
[Codegen] Analyzing ReactNativeSampleAppNew/package.json
[Codegen] Searching for codegen-enabled libraries in the app.
[Codegen] The "codegenConfig" field is not defined in package.json. Assuming there is nothing to generate at the app level.
[Codegen] Searching for codegen-enabled libraries in the project dependencies.
[Codegen] Found react-native
[Codegen] >>>>> Searching for codegen-enabled libraries in react-native.config.js
[Codegen] Processing FBReactNativeSpec
[Codegen] Searching for podspec in the project dependencies.
[Codegen] Processing rncore
[Codegen] Searching for podspec in the project dependencies.
[Codegen] Generating Native Code for FBReactNativeSpec - ios
[Codegen] Generated artifacts: ReactNativeSampleAppNew/ios/build/generated/ios
[Codegen - rncore] Skipping iOS code generation for rncore as it has been generated already.
[Codegen] Creating component provider.
[Codegen] Generated provider in: ReactNativeSampleAppNew/node_modules/react-native/React/Fabric
[Codegen] Done.
/Users/vladislav.grisko/.cocoapods/repos-art/OUR_SDK/.artpodrc
Analyzing dependencies
Fetching podspec for `DoubleConversion` from `../node_modules/react-native/third-party-podspecs/DoubleConversion.podspec`
Fetching podspec for `RCT-Folly` from `../node_modules/react-native/third-party-podspecs/RCT-Folly.podspec`
Fetching podspec for `boost` from `../node_modules/react-native/third-party-podspecs/boost.podspec`
Fetching podspec for `fmt` from `../node_modules/react-native/third-party-podspecs/fmt.podspec`
Fetching podspec for `glog` from `../node_modules/react-native/third-party-podspecs/glog.podspec`
Fetching podspec for `hermes-engine` from `../node_modules/react-native/sdks/hermes-engine/hermes-engine.podspec`
[Hermes] Using release tarball from URL: https://repo1.maven.org/maven2/com/facebook/react/react-native-artifacts/0.76.1/react-native-artifacts-0.76.1-hermes-ios-debug.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 23.3M 100 23.3M 0 0 10.0M 0 0:00:02 0:00:02 --:--:-- 10.0M
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16.3M 100 16.3M 0 0 13.4M 0 0:00:01 0:00:01 --:--:-- 13.4M
.cocoapods/repos-art/OUR_SDK/.artpodrc
[!] Unable to find a specification for `SocketRocket (= 0.7.1)` depended upon by `React-Core`
You have either:
* out-of-date source repos which you can update with `pod repo update` or with `pod install --repo-update`.
* mistyped the name or version.
* not added the source repo that hosts the Podspec to your Podfile.
```
### Reproducer
https://github.com/vladgrisko
### Screenshots and Videos
_No response_ | Needs: Repro,Needs: Attention,Needs: Version Info | low | Critical |
2,701,820,635 | flutter | Codelab word puzzle section 6 code sample does not have any import statements | The code for [lib/widgets/crossword_info_widget.dart](https://github.com/flutter/codelabs/blob/main/generate_crossword/step_07/lib/widgets/crossword_info_widget.dart)
is missing the required imports in the web page that describes what to do, so it does not compile. However, having copy/pasted the name of the file and checked the source, I see that the source itself is OK, so it's a display issue of some sort on the web page.
I added these imports locally to make it work:
```
import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';
import '../providers.dart';
import '../utils.dart';
import './ticker_builder.dart';
```
| team-codelabs,P1,triaged-codelabs | medium | Minor |
2,701,879,156 | TypeScript | Enable Type Checking for Custom File Extensions | ### 🔍 Search Terms
"xsjs", "xsjslib", "hana", "file extensions"
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Enhance TypeScript's tsconfig.json to support type checking for files with unsupported extensions, such as "**/*.xsjs" and "**/*.xsjslib", when specified explicitly in the "include" config option
### 📃 Motivating Example
Support for SAP HANA server-side javascript and other javascript-based languages using custom file extensions
### 💻 Use Cases
We are using TypeScript to check our SAP HANA server-side JavaScript files (.xsjs) and library files (.xsjslib).
To do so, we currently need to add at least two top-of-file comments in each file:
```javascript
///<reference path="./pathto/config.xsjs" />
//@ts-check
```
In config.xsjs we store all other triple-slash directives (like lib=es2016) to configure the TypeScript project.
It is really great that there's a way to enable type-checking on non-standard file extensions (at least in VSCode), but it would be great if you'd be able to configure an entire project which includes non-standard file extensions. | Suggestion,Awaiting More Feedback | low | Minor |
2,701,903,051 | godot | Resizing button is lagging in theme preview | ### Tested versions
Happens since 4.3 dev3, 4.4 dev7
### System information
Godot v4.4.dev5 - Windows 10.0.19045 - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 threads)
### Issue description
In the theme preview, the buttons change size when you hover the mouse over them.
The button size only changes if there is a StyleBoxFlat in the normal state, and the size change appears delayed.
https://github.com/user-attachments/assets/f61f31b2-f598-42a2-bc0c-2b3542ec7e55
### Steps to reproduce
1. Open MRP, create new Theme
2. Add StyleBoxFlat to normal
3. Hover over button in theme preview
4. Button is late
### Minimal reproduction project (MRP)
[gui.zip](https://github.com/user-attachments/files/17947487/gui.zip)
| bug,topic:editor,needs testing,topic:gui,performance | low | Major |
2,701,937,707 | deno | deno init --npm vite doesn't exit after "Done" until you press Enter | Very minor bug: After running `deno init --npm vite`, the Vite scaffolding finishes and says Done, but Deno doesn't exit until you press Enter. I waited a fair bit until I thought "What if I press a key?" and happened to pick Enter. (Just Space or something doesn't work.) May be related to #26403? `npm create vite` exits normally. (Thanks for all the Node compat stuff in Deno! Can't be easy.)
```
$ deno init --log-level=debug --npm vite
⚠️ Do you fully trust npm:create-vite package? Deno will invoke code from it with all permissions. Do you want to continue? [y/n]
> y
DEBUG RS - deno::args:587 - .npmrc found at: '/home/tjc/.npmrc'
DEBUG RS - deno::args:931 - Finished config loading.
DEBUG RS - deno::cache::cache_db:170 - Opening cache /home/tjc/.cache/deno/dep_analysis_cache_v2...
DEBUG RS - deno::cache::cache_db:170 - Opening cache /home/tjc/.cache/deno/node_analysis_cache_v2...
DEBUG RS - deno::npm::managed::resolution:282 - Running npm resolution.
DEBUG RS - deno_npm::resolution::graph:932 - <package-req> - Resolved create-vite@* to [email protected]
DEBUG RS - deno::npm::managed:351 - Resolved package folder of [email protected] to /home/tjc/.cache/deno/npm/registry.npmjs.org/create-vite/6.0.1
DEBUG RS - deno::js:10 - Deno isolate init with snapshots.
DEBUG RS - deno::worker:198 - main_module file:///home/tjc/.cache/deno/npm/registry.npmjs.org/create-vite/6.0.1/index.js
? Project name: › vite-projectDEBUG RS - deno_runtime::worker:743 - received module evaluate Ok(
(),
)
✔ Project name: … deno-vite
✔ Select a framework: › React
✔ Select a variant: › TypeScript
Scaffolding project in /home/tjc/temp/deno-vite...
Done. Now run:
cd deno-vite
deno install
deno run dev
```
*(Ignore the fact you see "Project name:" twice, that's only caused by `--log-level=debug`.)*
deno 2.1.1 (stable, release, x86_64-unknown-linux-gnu)
v8 13.0.245.12-rusty
typescript 5.6.2
| bug,node compat | low | Critical |
2,701,982,855 | electron | [Bug]On MacOS, small size transparent windows become opaque white on 4K external monitor | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.1
### What operating system(s) are you using?
macOS
### Operating System Version
MacOS 13.5.1
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
window should be transparent
like this

### Actual Behavior
Actually, the window has a white background instead of a transparent one.

### Testcase Gist URL
https://gist.github.com/e7611eda7db2ae9e7592ac1083486a22
### Additional Information
After testing, I found that the bug can be reproduced when the window width is less than 162px on my 4K screen.
Larger windows do not reproduce the issue.

| platform/macOS,bug :beetle:,has-repro-gist | low | Critical |
2,702,066,775 | next.js | Next.js 15: Same-Path Redirect in Server Action Temporarily Breaks page.tsx Display | ### Link to the code that reproduces this issue
https://github.com/y-hsgw/reproduction-next-app
### To Reproduce
1. `next dev`
2. Click the "go home" button
### Current vs. Expected behavior
As shown in the attached video, redirecting ( [redirect function](https://nextjs.org/docs/app/api-reference/functions/redirect) ) to the same path in Next.js 15 temporarily hides the content of page.tsx. This issue did not occur in Next.js 14. (You can confirm this by changing the version of Next.js in the attached repository to 14.2.18.)
https://github.com/user-attachments/assets/edf65b80-752d-4902-8471-73b1a4e62b77
I expect it to not temporarily disappear.
While I might be misunderstanding or missing something, I would appreciate it if you could look into this matter.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:05:14 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8103
Available memory (MB): 8192
Available CPU cores: 8
Binaries:
Node: 22.10.0
npm: 10.9.0
Yarn: N/A
pnpm: 8.2.0
Relevant Packages:
next: 15.0.4-canary.30 // Latest available version is detected (15.0.4-canary.30).
eslint-config-next: N/A
react: 19.0.0-rc-b01722d5-20241114
react-dom: 19.0.0-rc-b01722d5-20241114
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | Navigation,linear: next | low | Major |
2,702,175,178 | angular | Web worker + barrel files can give JIT errors | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
In a project I have a web worker that runs some functions that are imported using a barrel file (an `index.ts`). The compiled web worker seems to also include all other files the barrel exports, while they are not being used in the web worker. In this case, that barrel file also re-exports a service that injects the `HttpClient`. This results in the web worker failing with a `JIT compiler unavailable` error when running the compiled `dist` folder using `http-server` for example (the error does not occur when using `ng serve`).
The error vanishes when:
- the function used in the web worker is imported directly from the function file instead of using the barrel file; or
- the service does not inject the `HttpClient`.
I think the main problem here is that no tree-shaking is happening to the web worker?
### Please provide a link to a minimal reproduction of the bug
https://github.com/tijsmoree/webworker-jit-error
### Please provide the exception or error you saw
```true
Error: JIT compiler unavailable
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.0.2
Node: 20.16.0
Package Manager: pnpm 9.14.2
OS: linux x64
Angular: 19.0.1
... animations, common, compiler, compiler-cli, core
... platform-browser
Package Version
------------------------------------------------------
@angular-devkit/architect 0.1900.2 (cli-only)
@angular-devkit/core 19.0.2 (cli-only)
@angular-devkit/schematics 19.0.2 (cli-only)
@angular/build 19.0.2
@angular/cli 19.0.2
@schematics/angular 19.0.2 (cli-only)
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
_No response_ | area: packaging | low | Critical |
2,702,185,737 | TypeScript | Start getting types error after update to v5.7.2 | ### 🔎 Search Terms
5.7.2 types issue, LibraryManagedAttributes, componentType issue
### 🕗 Version & Regression Information
on v5.7.2 the issue exists
with my previous v5.2.2 it was working
### ⏯ Playground Link
_No response_
### 💻 Code
```ts
import { ROUTES } from '@/shared/constants'
import { AppStore, RootState, setupStore } from '@/store/store'
import { MsalProvider } from '@azure/msal-react'
import type { RenderOptions } from '@testing-library/react'
import { render } from '@testing-library/react'
import { MsalReactTester } from 'msal-react-tester'
import React, { PropsWithChildren, ReactNode } from 'react'
import { Provider } from 'react-redux'
import { MemoryRouter } from 'react-router-dom'
export const msalTester: MsalReactTester = new MsalReactTester()
// This type interface extends the default options for render from RTL, as well
// as allows the user to specify other things such as initialState, store.
export interface ExtendedRenderOptions extends Omit<RenderOptions, 'queries'> {
preloadedState?: Partial<RootState>
store?: AppStore
}
type ContextProviderProps = { children: ReactNode }
/**
* Renders a React component with a Redux store and an optional context provider for testing.
*
* This utility function helps in setting up a test environment for React components that depend on Redux store
* and optional context providers. It wraps the provided UI element with the Redux provider and an optional context provider.
*
* @template T - The type of the context provider props.
*
* @param {React.ReactElement} ui - The React element to render.
*
* @param {Object} options - The options for rendering the component.
*
* @param {Object} [options.preloadedState={}] - The initial state for the Redux store.
*
* @param {Store} [options.store=setupStore(preloadedState)] - The Redux store instance. If not provided, a new store is created with the preloaded state.
*
* @param {Object} [options.renderOptions={}] - Additional options to pass to the render function from @testing-library/react.
*
* @param {React.ComponentType<T & ContextProviderProps>} [ContextProvider] - An optional context provider component to wrap around the rendered element.
*
* @param {string[] | Partial<Location>[]} [memoryRouterProps=[{ pathname: '/' }]] - The initial entries for the MemoryRouter.
*
* @param {T} [contextProviderProps] - The props to pass to the context provider component.
*
* @returns {{ store: Store } & RenderResult} The Redux store and the result of the render function from @testing-library/react.
* @example
* // Import the function and dependencies
* import { renderWithProviders } from 'path/to/this/function';
* import MyComponent from 'path/to/MyComponent';
* import MyContextProvider from 'path/to/MyContextProvider';
*
* // Define initial state and context provider props
* const preloadedState = { someSlice: { key: 'value' } };
* const contextProps = { someContextValue: 'value' };
*
* // Use the utility to render the component with store and context
* const { getByText, store } = renderWithProviders(
* <MyComponent />,
* { preloadedState },
* MyContextProvider,
* [{ pathname: '/some-path' }],
* contextProps
* );
*/
const renderWithProviders = <T extends Record<string, unknown>, F extends Record<string, unknown>>(
ui: React.ReactElement,
{
preloadedState = {},
// Automatically create a store instance if no store was passed in
store = setupStore(preloadedState),
...renderOptions
}: ExtendedRenderOptions = {},
ContextProvider?: React.ComponentType<T & ContextProviderProps>,
memoryRouterProps: string[] | Partial<Location>[] = [{ pathname: ROUTES.HOME }],
contextProviderProps?: T,
FormProvider?: React.ComponentType<F & ContextProviderProps>,
formProviderProps?: F
) => {
/**
* A wrapper component that provides Redux store and optional context to the children.
*
* @param {PropsWithChildren<{}>} props - The props of the wrapper component.
* @returns {JSX.Element} The wrapped children with Redux and optional context providers.
*/
function Wrapper({ children }: PropsWithChildren<Record<string, unknown>>): JSX.Element {
let wrappedChildren = <>{children}</>
if (FormProvider) {
wrappedChildren = <FormProvider {...(formProviderProps as F)}>{wrappedChildren}</FormProvider>
}
if (ContextProvider) {
wrappedChildren = <ContextProvider {...(contextProviderProps as T)}>{wrappedChildren}</ContextProvider>
}
return (
<MsalProvider instance={msalTester.client}>
<MemoryRouter initialEntries={memoryRouterProps}>
<Provider store={store}>{wrappedChildren}</Provider>
</MemoryRouter>
</MsalProvider>
)
}
return { store, ...render(ui, { wrapper: Wrapper, ...renderOptions }) }
}
export { renderWithProviders }
```
### 🙁 Actual behavior
I get the following error on both FormProvider and ContextProvider:


### 🙂 Expected behavior
No error should appear, or there are new changes that were not mentioned in the release notes.
### Additional information about the issue
_No response_ | Needs More Info | low | Critical |
2,702,213,934 | neovim | Wrong ANSI colors in terminal | ### Problem
The terminal shows incorrect colors for ANSI escape values 37 (bright white foreground) and 40 (black background). The black background seems to always be treated as background, even if the particular color scheme uses a light color (here white). See the `$PSStyle` output below: on the left are the correct colors from a regular terminal, on the right are the values from a terminal running inside Neovim.

### Steps to reproduce
> $PSStyle
> nvim -c 'term pwsh -c "$PSStyle"'
### Expected behavior
Colors match
### Nvim version (nvim -v)
NVIM v0.9.5, v0.10.2, v0.11.0
### Vim (not Nvim) behaves the same?
Yes
### Operating system/version
Microsoft Windows Version 23H2 (OS Build 22631.4460)
### Terminal name/version
Windows Terminal 1.21.3231.0
### $TERM environment variable
xterm-256color
### Installation
winget install -e Neovim.Neovim --version 0.9.5 | bug,terminal | low | Minor |
2,702,290,413 | vscode | R&D Workspace features for ai | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Integrating AI capabilities into Visual Studio Code (VS Code) has become a phenomenal development, especially with tools like GitHub Copilot, which have recently introduced features that enhance workflows. These capabilities also come with significant considerations, since the frontier of software development raises emergent challenges.
In particular, integrating Test, Evaluation, Validation, and Verification (TEVV) methodologies together with Software Assurance (SwA) frameworks has proven to drive innovation at work. The issues I have identified are individually preferred but not yet adopted generally, so the proposal is not urgent in status.
As an ongoing resolution, I think advancing workspace actions is essential to enable the Copilot workflow to produce output according to project-specific requirements, or "needs". Those requirements are workspace analysis, workflow TEVV, and additionally a greater AI scope in software assurance.
| feature-request | low | Minor |
2,702,310,279 | react-native | requestAnimationFrame callback order is nondeterministic | ### Description
In my experience, the order of rAF callbacks is always deterministic on web. It [seems](https://stackoverflow.com/a/34905490) like it's deterministic per spec. In other words, this:
```js
requestAnimationFrame(() => {
console.log('1')
})
requestAnimationFrame(() => {
console.log('2')
})
requestAnimationFrame(() => {
console.log('3')
})
```
should produce
```
1
2
3
```
That *is* the case on the web, but it's not in React Native.
Instead, on React Native, it's seemingly non-deterministic
### Steps to reproduce
See this snack: https://snack.expo.dev/PsXPlo457DmfjCIEzRIsp?platform=ios
Expected:
```
1,2,3
```
Actual:
<img width="435" alt="Screenshot 2024-11-28 at 14 16 09" src="https://github.com/user-attachments/assets/82824338-b2d7-40e3-a495-fd72fd13da25">
<img width="411" alt="Screenshot 2024-11-28 at 14 16 31" src="https://github.com/user-attachments/assets/2bd7ed08-2908-420a-8757-4d9f818bcf07">
I believe this is the case for:
- Both platforms on 0.74.*
- Seemingly, only for Android on 0.76.* (maybe due to New Architecture being a default?)
### React Native Version
0.76.0
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6.1
CPU: (16) arm64 Apple M3 Max
Memory: 60.87 GB / 128.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.8.0
path: /opt/homebrew/bin/node
Yarn:
version: 1.22.22
path: /opt/homebrew/bin/yarn
npm:
version: 10.8.2
path: /opt/homebrew/bin/npm
Watchman:
version: 2024.09.09.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.1
wanted: 0.74.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://snack.expo.dev/vZ6_gSpj6iB6z0tpgDnsw?platform=android
### Screenshots and Videos
_No response_ | Bug | low | Major |
2,702,316,561 | godot | SpinBox seemingly randomly crashes the whole Editor | ### Tested versions
Reproducible in: v4.3.stable.official [77dcf97d8]
(didn't test more versions since it takes so much time to get results)
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 32.0.15.6094) - 13th Gen Intel(R) Core(TM) i5-13400 (16 Threads)
### Issue description
While I was testing my plugin's UI, the whole editor suddenly crashed. I was able to reproduce the crash multiple times, but it's very inconsistent.
I stripped away as much stuff from my plugin as possible and left the project as a minimal repro project below. (That's why the variable names and the labels have those not-so-generic texts.)
Honestly, I couldn't figure out what is truly causing the crash since it's so random and I can't get a crash log or anything. I tried launching the editor with --verbose but nothing gets printed when crashing.
Even more weirdly, I recorded my mouse inputs with software and played the whole thing back after crashing, and even this way I couldn't get a consistent result; it crashed a bit later.
I have left a video of me reproducing the crash, but it takes around 2-10 minutes to get one. It was really hard for me to test this since I was just guessing what might cause the crash, and I can't even be sure that a run which didn't crash after 10 minutes of trying wouldn't have crashed after 15 minutes.
All I can say is, doing the steps I list below on the minimal repro project WILL EVENTUALLY cause the editor to crash.
### Steps to reproduce
1. Click the `Add limb` button on the 3D toolbar.
2. Click the `Select tip bone` button
3. Click randomly around on the `SpinBox arrows` and maybe in the `input field` too for some time.
4. Click on the `Select` button.
5. Go back to step 2. and repeat a couple of times.
6. Click away from the PopupPanel and repeat the whole thing until the editor freezes and crashes at some point.
[editor_crash.zip](https://github.com/user-attachments/files/17949309/editor_crash.zip)
### Minimal reproduction project (MRP)
[plugin-bug-test.zip](https://github.com/user-attachments/files/17948584/plugin-bug-test.zip)
| bug,crash,topic:gui | low | Critical |
2,702,346,968 | godot | Compressed animation blend shape tracks don't preserve weights of zero | ### Tested versions
Reproducible in Godot v4.4.dev [bbc54692c].
### System information
Windows 10.0.19045. Compiled with MSVC.
### Issue description
When rendering a mesh with blend shapes, the vertex shader will skip any shapes with a weight close to zero. However, compressed animation blend shape tracks output a weight above this threshold even though the original animated weight was zero. This could cause a performance issue if an animation has a lot of zero weight blend shapes. It will also affect the visuals, although that will be hard to see as the weight is very close to zero.
The blend shape compression code is:
```cpp
// Animation::_compress_key
blend = (blend / float(Compression::BLEND_SHAPE_RANGE)) * 0.5 + 0.5;
values[0] = CLAMP(int32_t(blend * 65535.0), 0, 65535);
// Animation::_uncompress_blend_shape
float bsn = float(p_value.x) / 65535.0;
return (bsn * 2.0 - 1.0) * float(Compression::BLEND_SHAPE_RANGE);
```
A weight of 0.0 compresses to unorm 32767, which decompresses to -0.00012207. This is slightly bigger than the vertex shader's threshold of 0.0001.
```c
// Skeleton.glsl
for (uint i = 0; i < params.blend_shape_count; i++) {
float w = blend_shape_weights.data[i];
if (abs(w) > 0.0001) {
```
A potential fix is to change the 65535 scale to 65534 (https://github.com/greeble-dev/godot/commit/06502a0fbf778f1000e948261dbb3a20d4551b87). This means zero still compresses to unorm 32767, but that now decompresses to zero. The change would be backwards compatible with assets that were compressed before the change.
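To make the numbers concrete, here is a small standalone sketch of the compress/decompress round trip (not Godot code; it assumes `Compression::BLEND_SHAPE_RANGE` is 8, which matches the -0.00012207 value quoted above):
```cpp
#include <cstdint>
#include <cstdio>

// Assumed value; consistent with unorm 32767 decompressing to -0.00012207 above.
static const float BLEND_SHAPE_RANGE = 8.0f;

// Simplified version of the compress + uncompress path, parameterized on the scale.
static float round_trip(float weight, float scale) {
	float norm = (weight / BLEND_SHAPE_RANGE) * 0.5f + 0.5f; // a weight of 0.0 maps to 0.5
	int32_t packed = int32_t(norm * scale);                  // truncates towards zero
	float bsn = float(packed) / scale;
	return (bsn * 2.0f - 1.0f) * BLEND_SHAPE_RANGE;
}

int main() {
	printf("%.8f\n", round_trip(0.0f, 65535.0f)); // -0.00012207, above the shader's 0.0001 threshold
	printf("%.8f\n", round_trip(0.0f, 65534.0f)); //  0.00000000, correctly skipped by the shader
	return 0;
}
```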
I can make a PR with this change if desired.
Note that there's another potential issue with how blend shape tracks and other tracks are doing unorm rounding. I've filed that as https://github.com/godotengine/godot/issues/99796, and will roll both fixes into the same PR unless advised otherwise.
### Steps to reproduce
Launch the MRP project. The main scene has two cubes playing an animation with some zero weight blend shape tracks. Left cube = uncompressed animation, right cube = compressed. Initially there should be no visual difference.
_Repo option 1 (requires code changes)_: Apply change https://github.com/greeble-dev/godot/commit/107bbe62e444f8d517a96a8732d016838e52779e to skeleton.glsl, which will force any blend shape that passes the weight threshold to blend with a weight of one instead. This is just a hacky way to show if the vertex shader skipped the shape. Launch the project and note that the cube with compressed animation is now distorted while the uncompressed animation is unchanged.
_Repro option 2 (no code changes)_: Breakpoint `Animation::try_blend_shape_track_interpolate`, then in the editor select node `AnimCompressed/AnimationPlayer` to trigger the breakpoint. Note that the compressed path returns a weight of -0.00012207.
To test the potential fix, apply change https://github.com/greeble-dev/godot/commit/06502a0fbf778f1000e948261dbb3a20d4551b87. Launch the editor and select node `AnimCompressed/AnimationPlayer` to make it update. The cubes should now match, showing that the blend shape is correctly skipped.
### Minimal reproduction project (MRP)
[bug-blend-shape-repro.zip](https://github.com/user-attachments/files/17949065/bug-blend-shape-repro.zip)
| bug,topic:animation | low | Critical |
2,702,429,692 | rust | Docs & `rustdoc`: Mark traits that have indirect and synthetic impls | There's a lot of implicit stuff going on behind the scenes in Rust that's dictated by the very nature of the trait-based type system. However, that doesn't change the fact that hidden complexity makes it difficult to understand other people's code and write your own, let alone for newbies to learn the language.
One example of such complexity is the side effects of importing traits from other modules, when seemingly familiar types can suddenly acquire new methods and even [operational semantics](https://doc.rust-lang.org/1.82.0/reference/special-types-and-traits.html#operator-traits). But since we can declare the use of items without naming each of them individually, with the [asterisk wildcard syntax](https://doc.rust-lang.org/1.82.0/reference/items/use-declarations.html), these methods begin to look as though they came out of nowhere. And sometimes this approach is even preferable and officially recommended - see e.g. [`std::io::prelude`](https://doc.rust-lang.org/1.82.0/std/io/prelude/index.html).
This problem could be greatly mitigated if such traits were distinctly marked out on overview pages. This includes:
- Traits that have blanket implementations. The most prominent example here is [`Any`](https://doc.rust-lang.org/1.82.0/std/any/trait.Any.html).
- Traits that have implementations on foreign types, along with generic (i.e. [covered](https://doc.rust-lang.org/reference/glossary.html#uncovered-type)) ones. A good illustration of it is [`zerocopy::Unaligned`](https://docs.rs/zerocopy/0.8.11/zerocopy/trait.Unaligned.html#foreign-impls).
- [Auto traits](https://doc.rust-lang.org/1.82.0/reference/special-types-and-traits.html#auto-traits): as of 1.82.0, it's [`Send`](https://doc.rust-lang.org/1.82.0/core/marker/trait.Send.html), [`Sync`](https://doc.rust-lang.org/1.82.0/core/marker/trait.Sync.html), [`Unpin`](https://doc.rust-lang.org/1.82.0/core/marker/trait.Unpin.html), [`UnwindSafe`](https://doc.rust-lang.org/1.82.0/core/panic/unwind_safe/trait.UnwindSafe.html), [`RefUnwindSafe`](https://doc.rust-lang.org/1.82.0/core/panic/unwind_safe/trait.RefUnwindSafe.html), [`Freeze`](https://doc.rust-lang.org/1.82.0/std/marker/trait.Freeze.html) (unstable) and [`Sized`](https://doc.rust-lang.org/1.82.0/std/marker/trait.Sized.html).
They don't quite fit the stated criterion, but are close to it in spirit, so I think it would be nice to also set them apart from the general mass. | T-rustdoc,C-enhancement,A-rustdoc-ui,T-rustdoc-frontend | low | Minor |
2,702,436,581 | godot | Unnecessary precision loss in compressed animation tracks | ### Tested versions
Reproducible in Godot v4.4.dev [bbc54692c].
### System information
Windows 10.0.19045. Compiled with MSVC.
### Issue description
Compressed animation tracks are losing half of their precision due to the choice of rounding in unorm conversion. The visual impact is very small in most cases, but the fix is simple so I hope it's worth considering.
In `Animation::_compress_key`, the various conversions to unorm are done with `int32_t(float_value * 65535.0)`. This rounds towards zero, so an input of 0.999999 will be encoded as 65534 and decompressed to 0.999985 (difference: \~0.000014).
Changing the conversions to `int32_t((float_value * 65535.0) + 0.5f)` would make them round to nearest, doubling the precision. So an input of 0.999999 would be encoded as 65535 and decompressed to 1.0 (difference: \~0.000001). In case it's relevant, this is the conversion used by most graphics APIs (e.g. [DirectX](https://learn.microsoft.com/en-us/windows/win32/direct3d10/d3d10-graphics-programming-guide-resources-data-conversion#:~:text=Convert%20to%20integer.,directly%20to%20an%20integer)).
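To make the difference concrete, here is a small standalone sketch of both conversions (not Godot code; the input value matches the example above):
```cpp
#include <cstdint>
#include <cstdio>

int main() {
	const float value = 0.999999f;

	// Current conversion: truncation (rounds towards zero).
	int32_t truncated = int32_t(value * 65535.0f);        // 65534
	// Proposed conversion: round to nearest.
	int32_t rounded = int32_t((value * 65535.0f) + 0.5f); // 65535

	printf("truncated: %d -> %.6f\n", truncated, truncated / 65535.0f); // 0.999985
	printf("rounded:   %d -> %.6f\n", rounded, rounded / 65535.0f);     // 1.000000
	return 0;
}
```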
I can make a PR if desired. Note that I've also filed a separate bug that involves the same unorm code (https://github.com/godotengine/godot/issues/99794). I will roll both fixes into the same PR unless advised otherwise.
And some bonus notes:
- The unorm compress/uncompress triggers an unnecessary double <-> float conversion because the scaling constant is a double literal.
- ~~Changing it to float is probably a negligible performance improvement, but seems like a safe change that can be folded into the other changes.~~
- EDIT: On second thoughts, it's not quite clear if this is true when real_t is double.
- There's a few other cases of unorm conversions with rounding towards zero, including mesh vertex compression.
- I haven't tested these yet to confirm if they have the same precision loss.
- I can file these as separate issues if desired.
### Steps to reproduce
I don't have a good repro as the visual difference is very subtle.
In case it's useful, https://github.com/greeble-dev/godot/commit/30a5e9afbe27a87e3a033b8053fa889765ed369d is a hacky code change I used to test a few values with the original conversion and proposed fix. If the change is applied, reimporting any compressed animation should trigger this debug output:
```
scene\resources\animation.cpp:4776 - Original: 1.000000 quantizes to 1.000000 (unorm 65535)
scene\resources\animation.cpp:4782 - Updated: 1.000000 quantizes to 1.000000 (unorm 65535)
scene\resources\animation.cpp:4776 - Original: 0.999999 quantizes to 0.999985 (unorm 65534)
scene\resources\animation.cpp:4782 - Updated: 0.999999 quantizes to 1.000000 (unorm 65535)
scene\resources\animation.cpp:4776 - Original: 0.000014 quantizes to 0.000000 (unorm 0)
scene\resources\animation.cpp:4782 - Updated: 0.000014 quantizes to 0.000015 (unorm 1)
```
### Minimal reproduction project (MRP)
N/A | bug,topic:animation | low | Critical |
2,702,482,170 | godot | `Variant::iter_get` on empty `Array` crashes in release, but no errors reported with `DEBUG_ENABLED` | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Mobile) - dedicated NVIDIA GeForce GTX 980 Ti (NVIDIA; 32.0.15.6603) - 13th Gen Intel(R) Core(TM) i7-13700K (24 Threads)
### Issue description
Calling `Variant::iter_get` on an empty `Array` causes this crash, but only in builds without `DEBUG_ENABLED`. If `DEBUG_ENABLED`, no warning or error will be reported and an empty `Variant` will be returned.
```
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
Vulkan 1.3.289 - Forward Mobile - Using Device #0: NVIDIA - NVIDIA GeForce GTX 980 Ti
ERROR: FATAL: Index p_index = 0 is out of bounds (size() = 0).
at: get (./core/templates/cowdata.h:205)
```
This appears to be because of these lines:
https://github.com/godotengine/godot/blob/0eadbdb5d0709e4e557e52377fa075d3e2f0ad1f/core/variant/variant_setget.cpp#L1807-L1927
Because editor builds always have `DEBUG_ENABLED`, this crash will not reproduce with editor builds. Instead, this issue is more relevant to GDExtension or module projects, but I suspect it could also cause unexpected differences between debug templates and release templates.
### Solutions
At minimum, I would expect an error to be logged when `DEBUG_ENABLED`. If this causes too much error spam in the editor, which cannot be built without `DEBUG_ENABLED`, then this error could be silenced for editor builds.
Alternatively, these `DEBUG_ENABLED` segments could be changed to only apply to editor builds (changed to `TOOLS_ENABLED`), which would align debug behaviour with release behaviour. I have a preference towards this approach because I may not notice a rare error message during development, but then have the crash occur in a release build. This would leave me wondering why it never crashed during development, even though I tested it.
Whichever approach is taken, the important thing is that release build behaviour should be easy to reproduce or identify in a debug build. The current approach can result in a release mode crash when debug builds have no apparent issue.
### Other issues
From a quick glance at the source code, I suspect that this sort of pattern may be used elsewhere and other release-only crashes can occur because of the pattern.
### Steps to reproduce
Write the following code, say in a GDExtension using godot-cpp:
``` C++
Variant arrayAsVariant = Array();
Variant iterator;
bool iter_valid;
arrayAsVariant.iter_init(iterator, iter_valid);
Variant value = arrayAsVariant.iter_get(iterator, iter_valid);
```
Compile with `scons dev_build=yes` and note that there are no issues, warnings, or errors and `value` is `null`.
Compile with `scons target=template_release` and note the error and crash occur.
### Minimal reproduction project (MRP)
Here is an example GDExtension:
[release-only-gdextension-crash.zip](https://github.com/user-attachments/files/17949478/release-only-gdextension-crash.zip)
- Copy over godot-cpp 4.3 branch into godot-cpp folder
- Run both `scons dev_build=yes` and `scons target=template_release` to generate GDExtension DLL files
- Open the godot project in the `demo` folder
- Export both debug and release builds of the game
Here is my compiler info:
`Using SCons-detected MSVC version 14.3, arch x86_64`
### Other discussion
This issue replaces godotengine/godot-cpp/issues/1652. | bug,topic:core | low | Critical |
2,702,501,015 | transformers | Replace all torch.FloatTensor by torch.Tensor | ### Feature request
I think the `torch.FloatTensor` dtype should be replaced by `torch.Tensor` now. torch.FloatTensor is quite annoying as it triggers warnings most of the time and forces users to manually cast types to avoid such warnings.
It seems that FloatTensor is not really used anymore in Torch; torch.FloatTensor and torch.cuda.FloatTensor are still available to ensure backward compatibility.
### Motivation
Fix typing warnings
### Your contribution
Replace every occurrence of torch.FloatTensor with torch.Tensor, including docstrings. | Feature request | low | Minor |
2,702,582,226 | next.js | `Only plain objects can be passed from Client Components to Server Components` on route handler | ### Link to the code that reproduces this issue
https://github.com/tjwelde/next15-pinata-issue
### To Reproduce
1. `npm install`
2. `npm run dev`
3. open page (e.g. http://localhost:3000/)
4. everything still normal
5. reload page
6. see error in server logs
### Current vs. Expected behavior
Expected:
a. No error to be thrown
b. If there is an actual issue with the library used: since the error seems to stem from a route handler, I would've expected the error not to mention client and server components and/or to be more precise
Current:
Following error is logged:
```
Error: [ Server ] Only plain objects can be passed to Client Components from Server Components. Set objects are not supported.
{P: </>, b: ..., p: "", c: ["", ""], i: ..., f: ..., m: Set, G: ..., s: ..., S: ...}
^^^
at createUnhandledError (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/react-dev-overlay/internal/helpers/console-error.js:27:49)
at handleClientError (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/react-dev-overlay/internal/helpers/use-error-handler.js:44:56)
at console.error (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/globals/intercept-console-error.js:48:56)
at react-stack-bottom-frame (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js:2446:58)
at resolveConsoleEntry (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js:1961:9)
at processFullStringRow (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js:2095:11)
at processFullBinaryRow (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js:2059:7)
at progress (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js:2262:17)
```
### Provide environment information
```bash
Node.js v20.15.0
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.15.0
npm: 10.7.0
Yarn: 4.2.2
pnpm: N/A
Relevant Packages:
next: 15.0.4-canary.30 // Latest available version is detected (15.0.4-canary.30).
eslint-config-next: N/A
react: 19.0.0-rc-b01722d5-20241114
react-dom: 19.0.0-rc-b01722d5-20241114
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I tested my reproduction against multiple versions up to 14.0.0
14.1.0 still shows the error with a bit more details:
```
Warning: Only plain objects can be passed to Client Components from Server Components. Set objects are not supported.
<... buildId=... assetPrefix="" urlParts=... initialTree=... initialSeedData=... couldBeIntercepted=... initialHead=... globalErrorComponent=... missingSlots={Set}>
^^^^^
```
14.0.0 does not show the error
as background:
- I am trying to upgrade an existing project from [email protected] to [email protected]
- the `@pinata/sdk` package is deprecated and we will switch to the new `pinata-web3` package, which does not seem to have this issue in my early tests.
- Still, I think other libraries might also trigger an error like this, so I think it is worth investigating what is actually going on
- My main issue here is that it took a long time to debug, because the error said it would have something to do with the client/server component boundary, so I was looking at the wrong things for quite some time.
- Having a more precise error message would already help immensely
- Still not sure, if there is an actual error happening somewhere. Wrapping the affected call in a try..catch did not catch anything though | bug | low | Critical |
2,702,591,726 | electron | Request for official rpm package | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
Request for a proper Fedora rpm to be uploaded to the rpm repo following the guidelines in
https://docs.fedoraproject.org/en-US/packaging-guidelines/ and
https://docs.fedoraproject.org/en-US/packaging-guidelines/Node.js/
### Proposed Solution
.
### Alternatives Considered
have an option for source code in npm
### Additional Information
_No response_ | enhancement :sparkles: | low | Minor |
2,702,620,195 | node | WebAssembly source phase imports | TC39 proposal [source phase imports](https://github.com/tc39/proposal-source-phase-imports) reached to stage 3 and has been implemented in V8 (starting from M131).
```js
import source FooModule from "./foo.wasm";
FooModule instanceof WebAssembly.Module; // true
```
The feature requires Node.js integration to support WebAssembly source phase imports. TLDR, if a SourceTextModule imports a source-phase WebAssembly module, create a module source object with `v8::WasmModuleObject::Compile` and return the `WebAssembly.Module` object from `v8::Module::ResolveSourceCallback`, and `v8::HostImportModuleWithPhaseDynamicallyCallback`.
> See [design doc](https://docs.google.com/document/d/1yetUlxl_yHk0ooT-hg9ylJeQ6Q77_Ttm777tHbclwqg/edit?tab=t.0#heading=h.uzscwod1dpgu) for details.
WPT: https://github.com/web-platform-tests/wpt/blob/master/wasm/webapi/esm-integration/source-phase.tentative.html
/cc @guybedford | esm,wasm,web-standards | low | Minor |
2,702,646,258 | flutter | [go_router] Accessing GoRouterState in NavigationObserver after showMenu causes application to hang | ### Steps to reproduce
1. Setup GoRouter using a ShellRoute and add a NavigationObserver that uses ```GoRouterState.of(route.navigator!.context)```
2. add call to ```showMenu()``` in app
3. Line ```GoRouterState.of(route.navigator!.context)``` causes application to hang
I have noticed after setting up the sample code that you can experience the same error if it tries to do this on the root Route.
Note this doesn't hang or crash on version 14.2.0; I only noticed it after updating to 14.6.0. Previously there was an exception that we handled.
### Expected results
Exception occurs but application can recover.
### Actual results
Application hangs and doesn't recover
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
// GoRouter configuration
final _router = GoRouter(
routes: [
ShellRoute(
builder: (context, state, child) => child,
observers: [TestObserver()],
routes: [
GoRoute(
path: '/',
builder: (context, state) => const MyHomePage(title: ''),
),
],
),
],
);
class TestObserver extends NavigatorObserver {
@override
void didPush(Route<dynamic> route, Route<dynamic>? previousRoute) {
super.didPush(route, previousRoute);
// NOTE: removing this line will cause application to hang on startup
if (route.settings.name == '/') {
return;
}
var state = route.settings.arguments is GoRouterState ? route.settings.arguments as GoRouterState? : null;
if (state == null) {
if (route.navigator != null) {
try {
debugPrint(
'Attempting get GoRouterState - Route: ${route.settings.name}, ${route.navigator!.context.mounted}');
state = GoRouterState.of(route.navigator!.context);
} catch (e, st) {
debugPrint('Error: $e\n$st');
return;
}
}
}
}
}
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(routerConfig: _router);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(widget.title),
),
body: Center(
child: InkWell(
onTap: () => _showMenu(context),
child: const Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text(
'Tap Here',
),
],
),
),
),
floatingActionButton: FloatingActionButton(
onPressed: () => _showMenu(context),
tooltip: 'Increment',
child: const Icon(Icons.add),
),
);
}
void _showMenu(BuildContext context) {
showMenu(context: context, position: const RelativeRect.fromLTRB(0, 0, 0, 0), items: [
const PopupMenuItem<int>(
value: 0,
child: Text('First'),
),
const PopupMenuItem<int>(
value: 1,
child: Text('Second'),
),
]);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib\main.dart on Chrome in debug mode...
This app is linked to the debug service: ws://127.0.0.1:50798/PJe3qvrOcvY=/ws
Debug service listening on ws://127.0.0.1:50798/PJe3qvrOcvY=/ws
Connecting to VM Service at ws://127.0.0.1:50798/PJe3qvrOcvY=/ws
Connected to the VM Service.
Attempting get GoRouterState - Route: null, true
Application finished.
Exited (-1).
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.5, on Microsoft Windows [Version 10.0.22631.4460], locale en-GB)
• Flutter version 3.24.5 on channel stable at C:\sdks\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (2 weeks ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\pchar\AppData\Local\Android\Sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Build Tools 2019 16.11.42)
• Visual Studio at C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools
• Visual Studio Build Tools 2019 version 16.11.35425.106
• Windows 10 SDK version 10.0.19041.0
[√] Android Studio (version 2024.2)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[√] VS Code (version 1.95.3)
• VS Code at C:\Users\pchar\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.100.0
[√] Connected device (4 available)
• FP4 (mobile) • 5e864742 • android-arm64 • Android 13 (API 33)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4460]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.86
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.70
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| package,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.27 | low | Critical |
2,702,694,496 | PowerToys | clipboard didn't function well, when I come back to my pc's and the Screensaver are on then I can't open the second pc, and after that the mouse don't even comes back... | ### Microsoft PowerToys version
v0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders, FancyZones, Awake, General
### Steps to reproduce
x
### ✔️ Expected Behavior
x
### ❌ Actual Behavior
nothing
### Other Software
_No response_ | Issue-Bug,Product-Mouse Without Borders | low | Major |
2,702,734,393 | next.js | Next.js 14 `loading.tsx` bug | ### Link to the code that reproduces this issue
https://github.com/jelius-sama/loading-bug-nextjs-14.git
### To Reproduce
1. Start the dev server.
2. Do a hard refresh to make sure nothing is cached in the browser.
3. From the index route ("/"), navigate to the "art page" by clicking the link and observe the bordered section.
### Current vs. Expected behavior
1. Following the steps, the loading UI looks to be incorrect (look into the code, "src/app/[artist]/[art]/loading.tsx", for the actual UI versus what was rendered).
2. The loading UI should've been "Loading art page..."; instead we see "Loading artist page..."
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Home Single Language
Available memory (MB): 5990
Available CPU cores: 8
Binaries:
Node: 20.10.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.5 // An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
eslint-config-next: 14.2.5
output: N/A
⚠ An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Lazy Loading, Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
_No response_ | bug,Navigation,Lazy Loading | low | Critical |
2,702,803,384 | react | Blocking the event loop with spawnSync | To avoid blocking the event loop, **execSync** is better than **spawnSync**, which causes performance problems when running multiple concurrent tasks in the React build pipeline.
**Consider replacing `spawnSync` with `spawn` to prevent blocking**
```js
const sha = execSync('git rev-parse HEAD').slice(0, 8);
```
| React 19 | medium | Major |
2,702,806,080 | opencv | Integrate TEBLID efficient descriptor with ORB feature detector | ### Describe the feature and motivation
[TEBLID](https://docs.opencv.org/4.x/dd/dc1/classcv_1_1xfeatures2d_1_1TEBLID.html), developed by @iago-suarez, is a very efficient descriptor that is much better than [ORB](https://docs.opencv.org/4.x/db/d95/classcv_1_1ORB.html)'s current descriptor while using the same pixel-pair gray-level differences.
[TEBLID](https://docs.opencv.org/4.x/dd/dc1/classcv_1_1xfeatures2d_1_1TEBLID.html) selects pixel pairs by minimizing a modern triplet loss (see the [paper](https://docs.opencv.org/4.x/d0/de3/citelist.html#CITEREF_Suarez2021TEBLID)), while the ORB descriptor selects pixel pairs using simple correlation. The result is that in the [AKAZE example](https://github.com/opencv/opencv/blob/4.x/samples/cpp/tutorial_code/features2D/AKAZE_match.cpp), with 10000 keypoints detected by [ORB](https://docs.opencv.org/4.x/db/d95/classcv_1_1ORB.html), [TEBLID](https://docs.opencv.org/4.x/dd/dc1/classcv_1_1xfeatures2d_1_1TEBLID.html) obtains 621 inliers (75.2%) with 256 bits while the [ORB](https://docs.opencv.org/4.x/db/d95/classcv_1_1ORB.html) descriptor obtains only 493 inliers (63%).
In addition, the TEBLID implementation in OpenCV's xfeatures2d is parallelized and runs faster than the current ORB descriptor extraction.
With the [OpenCV 5 features2D reorganization](https://github.com/opencv/opencv/issues/24999) aiming for a very fast local feature and descriptor extractor, the TEBLID descriptor should be integrated into the ORB detector alongside the original BRIEF descriptor, or replace it completely.
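For reference, this is roughly how ORB keypoints are paired with TEBLID today, via two separate objects rather than one integrated extractor (a sketch based on the xfeatures2d API; the 1.0f scale factor suggested for ORB keypoints and the image path are assumptions):
```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

int main() {
    // Placeholder image path.
    cv::Mat img = cv::imread("graf1.png", cv::IMREAD_GRAYSCALE);

    // ORB is only used as the keypoint detector here.
    cv::Ptr<cv::ORB> detector = cv::ORB::create(10000);
    // TEBLID provides the binary descriptor (256 bits).
    cv::Ptr<cv::xfeatures2d::TEBLID> descriptor =
        cv::xfeatures2d::TEBLID::create(1.0f, cv::xfeatures2d::TEBLID::SIZE_256_BITS);

    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    detector->detect(img, keypoints);
    descriptor->compute(img, keypoints, descriptors);
    return 0;
}
```
An integrated version would let a single `detectAndCompute` call cover both steps.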
### Additional context
_No response_ | feature,category: features2d,GSoC | low | Major |
2,702,816,034 | PowerToys | Disabling Alt+Tab+1234567890 | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
To recreate: hold down Tab in WoW.exe and execute a macro (Alt+1).
This will open the Alt+Tab window even when it is disabled in PowerToys and the Registry, because Alt+Tab is disabled but Alt+Tab+1 is not, and a third modifier cannot be added through PowerToys.
### ✔️ Expected Behavior
I would like the Alt+Tab+Number functionality to cease entirely, at least when I am gaming
### ❌ Actual Behavior
Even when Alt+Tab is disabled through PowerToys and the registry, I still have a problem with Alt+Tab+1, Alt+Tab+2, Alt+Tab+3, etc. opening up the 1st, 2nd, 3rd, windows that would be shown when Alt+Tab is normally pressed.
### Other Software
I use Tab for targeting in gaming as well as the Alt modifier for 48 of my abilities on my Logitech G600.
Logitech G Hub V 2024.8.641856
WoW.exe Retail V 11.0.5.57689 | Issue-Bug,Product-Keyboard Shortcut Manager,Needs-Triage | low | Minor |
2,702,871,630 | rust | `no method named "fract" found for type "f64" in the current scope` when in `no_std` | I tried this code:
```rust
#![no_std]
fn get_fract(f: f64) -> f64 {
f.fract()
}
```
I expected to see this happen: Compiles successfully
Instead, this happened: Fails to compile with `E0599` "no method named `fract` found for type `f64` in the current scope"
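For context, a minimal `no_std` workaround sketch, assuming the `libm` crate is added as a dependency (an approximation of the same result, not the standard library's own implementation):

```rust
#![no_std]

// fract(x) is defined as x - trunc(x); libm provides trunc without std.
fn get_fract(f: f64) -> f64 {
    f - libm::trunc(f)
}
```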
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
| T-libs-api,T-libs,C-discussion | low | Major |
2,702,872,013 | rust | compiletest can explode due to cyclic auxiliaries | Right now, compiletest does not forbid auxiliaries from declaring each other as auxiliaries, creating a cycle in the auxiliary build graph. This means that, for example, aux A can request an aux build of B while aux B also requests an aux build of A.
This will eventually kaboom.
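For reference, a generic sketch of the kind of check that could reject such cycles, assuming compiletest can produce a map from each auxiliary to the auxiliaries it declares (names here are illustrative, not actual compiletest internals):

```rust
use std::collections::{HashMap, HashSet};

/// Returns true if the aux-build graph reachable from `root` contains a cycle.
fn has_cycle(root: &str, deps: &HashMap<String, Vec<String>>) -> bool {
    fn visit(
        node: &str,
        deps: &HashMap<String, Vec<String>>,
        in_stack: &mut HashSet<String>,
        done: &mut HashSet<String>,
    ) -> bool {
        if done.contains(node) {
            return false;
        }
        if !in_stack.insert(node.to_string()) {
            // `node` is already on the current DFS stack: we found a cycle.
            return true;
        }
        let cyclic = deps
            .get(node)
            .map_or(false, |children| children.iter().any(|c| visit(c, deps, in_stack, done)));
        in_stack.remove(node);
        done.insert(node.to_string());
        cyclic
    }
    visit(root, deps, &mut HashSet::new(), &mut HashSet::new())
}
```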
I can't find an existing issue for this, tried searching. | E-hard,T-bootstrap,C-bug,A-compiletest | low | Minor |
2,702,876,397 | deno | Mongodb Deno SRV connection not working. | Version: Deno 2.0.* / canary
I am using Windows and decided to switch from feathersjs to plain Deno, or Hono+Deno, with mongoose/mongodb.
I tried all possible options, including changing Deno, mongoose, and mongodb versions.
It works from my IP (sometimes), but it does not work from any other IP,
even though my Atlas cloud network access is set to 0.0.0.0.
The same URI connects fine with Hono+Bun, plain Bun, Node.js, or feathersjs.
After spending 3 days and trying all possible solutions, I gave up and switched to Bun for now. My project is getting delayed. I had high hopes for Deno.
**URI** : mongodb+srv://<user>:<password>@cluster.rhaiu.mongodb.net/testdb
Latest error is around :
`MongoDB connection failed: MongooseServerSelectionError: Server selection timed out after 5000 ms at _handleConnectionErrors (c:\home\git\...........) at NativeConnection.openUri (c:\home\git\...\node_modules\mongoose\lib\connection.js:860:11) at eventLoopTick (ext:core/01_core.js:214:9) at async connectDB (file:///C:/home/git/..../src/config/db.ts:76:5) at async startApp (file:///C:/home/git/..../main.ts:20:7) { message: "Server selection timed out after 5000 ms", reason: TopologyDescription { type: "ReplicaSetNoPrimary",`
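A possible workaround sketch (not verified): skip the SRV lookup entirely by using the standard, non-SRV connection string from Atlas. The hostnames and replica-set name below are placeholders that would need to be copied from the Atlas "standard connection string" option:

```ts
import mongoose from "npm:mongoose";

// Placeholders: copy the real shard hostnames and replicaSet name from Atlas.
const uri =
  "mongodb://<user>:<password>@cluster-shard-00-00.rhaiu.mongodb.net:27017," +
  "cluster-shard-00-01.rhaiu.mongodb.net:27017," +
  "cluster-shard-00-02.rhaiu.mongodb.net:27017/testdb" +
  "?ssl=true&replicaSet=<replica-set-name>&authSource=admin";

await mongoose.connect(uri, { serverSelectionTimeoutMS: 5000 });
console.log("connected");
```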
| bug,node compat | low | Critical |
2,702,876,993 | rust | compiletest: Automatically specify `-Zunstable-options` when the `edition` header is set to an unstable edition | During each new edition, it is a bit of a hassle to add/remove `-Zunstable-options` headers for the next edition. I think it would be helpful to automatically specify `-Zunstable-options` when an `edition` header specifies an unstable edition. This means there would roughly be one place to make the switch (compiletest itself).
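A rough sketch of the kind of logic this could be, with illustrative names (not actual compiletest code) and assuming 2021 is the latest stable edition at the time:

```rust
/// Compute the edition-related flags for a test, adding -Zunstable-options
/// automatically when the requested edition is not yet stable.
fn edition_flags(edition: &str) -> Vec<String> {
    // The one place to update when a new edition is stabilized.
    const LATEST_STABLE_EDITION: &str = "2021";

    let mut flags = vec![format!("--edition={edition}")];
    if edition > LATEST_STABLE_EDITION {
        flags.push("-Zunstable-options".to_string());
    }
    flags
}
```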
| A-testsuite,C-enhancement,T-compiler,T-bootstrap,A-compiletest,E-needs-investigation | low | Minor |
2,702,924,772 | ui | [bug]: Unhandled Runtime Error TypeError: Cannot destructure property 'getFieldState' of '(0 , {imported module [project]/nodemodules/.pnpm/[email protected]@18.3.1/nodemodules/react-hook-form/dist/index.esm.mjs [app-client] (ecmascript)}.useFormContext)(...)' as it is null. | ### Describe the bug

React does not recognize the `handleSubmit` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `handlesubmit` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
React does not recognize the `setValue` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `setvalue` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
React does not recognize the `getValues` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `getvalues` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
React does not recognize the `resetField` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `resetfield` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
React does not recognize the `clearErrors` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `clearerrors` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
React does not recognize the `setError` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `seterror` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
React does not recognize the `setFocus` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `setfocus` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
React does not recognize the `getFieldState` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `getfieldstate` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
React does not recognize the `formState` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `formstate` instead. If you accidentally passed it from a parent component, remove it from the DOM element.
Invalid values for props `trigger`, `register`, `watch`, `reset`, `unregister` on <form> tag. Either remove them from the element, or pass a string or number value to keep them in the DOM. For details, see https://react.dev/link/attribute-behavior
⨯ src/components/ui/form.tsx (47:11) @ useFormField
⨯ TypeError: Cannot destructure property 'getFieldState' of '(0 , __TURBOPACK__imported__module__$5b$project$5d2f$node_modules$2f2e$pnpm$2f$react$2d$hook$2d$form$40$7$2e$53$2e$2_react$40$18$2e$3$2e$1$2f$node_modules$2f$react$2d$hook$2d$form$2f$dist$2f$index$2e$esm$2e$mjs__$5b$app$2d$ssr$5d$__$28$ecmascript$29$__.useFormContext)(...)' as it is null.
at useFormField (./src/components/ui/form.tsx:47:11)
at FormLabel (./src/components/ui/form.tsx:93:33)
digest: "2630363313"
45 | const fieldContext = React.useContext(FormFieldContext);
46 | const itemContext = React.useContext(FormItemContext);
> 47 | const { getFieldState, formState } = useFormContext();
| ^
48 |
49 | const fieldState = getFieldState(fieldContext.name, formState);
50 |
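For comparison, the documented pattern that avoids a null `useFormContext()` is to spread `form` onto the `<Form>` provider and give the inner native `<form>` only `onSubmit`, rather than spreading the whole `form` object onto the DOM element (which would also explain the `handleSubmit`/`setValue` prop warnings above). A minimal sketch — the schema and field names are placeholders:

```tsx
"use client";

import { zodResolver } from "@hookform/resolvers/zod";
import { useForm } from "react-hook-form";
import { z } from "zod";
import { Button } from "@/components/ui/button";
import { Form, FormControl, FormField, FormItem, FormLabel, FormMessage } from "@/components/ui/form";
import { Input } from "@/components/ui/input";

const formSchema = z.object({ username: z.string().min(2) });

export function ProfileForm() {
  const form = useForm<z.infer<typeof formSchema>>({
    resolver: zodResolver(formSchema),
    defaultValues: { username: "" },
  });

  function onSubmit(values: z.infer<typeof formSchema>) {
    console.log(values);
  }

  return (
    <Form {...form}>
      {/* The spread goes on the <Form> provider; the native <form> only gets onSubmit. */}
      <form onSubmit={form.handleSubmit(onSubmit)} className="space-y-8">
        <FormField
          control={form.control}
          name="username"
          render={({ field }) => (
            <FormItem>
              <FormLabel>Username</FormLabel>
              <FormControl>
                <Input placeholder="username" {...field} />
              </FormControl>
              <FormMessage />
            </FormItem>
          )}
        />
        <Button type="submit">Submit</Button>
      </form>
    </Form>
  );
}
```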
### Affected component/components
Form
### How to reproduce
1. Create a new Next.js project (latest) and install shadcn
2. Add the form component
3. Run the development server
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
## Hardware Information:
- **Memory:** 8.0 GiB
- **Processor:** Intel® Core™ i5-10210U × 8
- **Graphics:** Intel® UHD Graphics (CML GT2)
- **Graphics 1:** NVIDIA GeForce MX250
- **Disk Capacity:** 512.1 GB
## Software Information:
- **Firmware Version:** 1.24.0
- **OS Name:** Ubuntu 24.04.1 LTS
- **OS Build:** (null)
- **OS Type:** 64-bit
- **GNOME Version:** 46
- **Windowing System:** X11
- **Kernel Version:** Linux 6.8.0-49-generic
Google Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,702,925,168 | godot | Tileset's physics layer polygon editor doesn't update tile preview when selecting another tile in expanded mode | ### Tested versions
- v4.3.stable.arch_linux, should be https://github.com/godotengine/godot/commit/77dcf97d82cbfe4e4615475fa52ca03da645dbd8
### System information
Godot v4.3.stable unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Fri, 22 Nov 2024 16:04:27 +0000 - X11 - Vulkan (Mobile) - integrated Intel(R) UHD Graphics 620 (WHL GT2) - Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz (8 Threads)
### Issue description
Issue:
The expanded collision polygon editor for a selected tile of a TileMapLayer does not update the content when selecting another Tile.
Expectation:
selecting another tile should either close the expanded view or update the tile in the still expanded polygon editor, I guess.
This might be related to https://github.com/godotengine/godot/issues/97669, but this exact issue does not happen for me.
### Steps to reproduce
1. Select a `TileMapLayer`
2. Open `TileSet` tab
3. Open `Select` tab
4. Select Tile in TileSet overview
5. Open polygon editor for collision polygon: `Physics` -> `Physics Layer 0` -> Polygon
6. Expand the editor: 
7. Select another tile in the overview
8. Preview in expanded Polygon editor does not update and show the newly selected Tile (left tree selected, right tree still shown):

9. Closing the expanded editor updates the preview (left tree selected, left tree shown):

### Minimal reproduction project (MRP)
No MRP | bug,topic:editor,topic:2d | low | Minor |
2,703,023,038 | ui | [bug]: CLI not abiding by -p / --path | ### Describe the bug
`npx shadcn@latest add button --path=/foo/bar/test-path` is using what's in my `components.json` aliases and not the `test-path`.
Seems like something may have broken, as the reply to this doesn't work: https://github.com/shadcn-ui/ui/issues/1803
### Affected component/components
All
### How to reproduce
Add a component with the `-p` or `--path` flags.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
MacOS 15.1.1
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,703,053,925 | node | monitorEventLoopDelay metrics are completely off especially the count | ### Version
v23.3.0
### Platform
```text
Darwin Rochs-MBP-3710.localdomain 23.4.0 Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:43 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6000 arm64
```
### Subsystem
perf_hooks
### What steps will reproduce the bug?
```js
const { monitorEventLoopDelay } = require('perf_hooks')
const h = monitorEventLoopDelay({ resolution: 1 })
h.enable()
setInterval(() => {
// Just to trigger an event loop iteration 10x per second.
}, 100)
setTimeout(() => {
console.log(h)
}, 1000)
```
### How often does it reproduce? Is there a required condition?
Every time.
### What is the expected behavior? Why is that the expected behavior?
The expected behaviour would be to get accurate metrics about event loop delay without impacting the event loop itself. In the above example, I would expect to see ~10 iterations.
### What do you see instead?
Using `monitorEventLoopDelay` alters the behaviour of the event loop, causing it to run more than necessary and preventing it from idling. This means that after 1 second with a resolution of `1`, you get ~1000 iterations in 1 second for an app that is mostly idle because of the added timer.
Another similar issue is that even the metrics themselves are wildly inaccurate as described in https://github.com/nodejs/node/issues/34661
### Additional information
It seems to me that this is a fundamental issue with the approach that was used to capture these metrics, and the function would need to be rewritten in order to provide accurate information.
One way to do this that we've been using successfully for years at this point is to collect timing information with 2 hooks on libuv around the non-IO part of the loop to isolate user code. While it's not 100% accurate, it's definitely way more accurate than the current approach and doesn't skew regardless of very low (0-10) or very high (thousands to millions) iterations. Here is a code snippet of the alternative approach: https://github.com/DataDog/dd-native-metrics-js/blob/4b326d5a1669e7a69c5c84a4d3e036163e60b9d5/src/metrics/EventLoop.hpp#L72-L96
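For readers who don't want to follow the link, here is a rough self-contained sketch of that idea (not the linked implementation itself), using libuv prepare/check hooks to time only the non-I/O part of each iteration:

```cpp
#include <uv.h>
#include <cstdint>
#include <cstdio>

static uv_prepare_t prepare_handle;
static uv_check_t check_handle;
static uint64_t usage_start = 0;

// Check handles run right after the loop returns from polling for I/O,
// i.e. just before user callbacks start executing.
static void on_check(uv_check_t*) {
  usage_start = uv_hrtime();
}

// Prepare handles run right before the loop blocks to poll for I/O,
// i.e. after user callbacks for this iteration have finished.
static void on_prepare(uv_prepare_t*) {
  if (usage_start != 0) {
    const uint64_t busy_ns = uv_hrtime() - usage_start;
    // A real implementation would record busy_ns in a histogram instead of printing.
    std::printf("loop busy for %llu ns\n", static_cast<unsigned long long>(busy_ns));
  }
}

int main() {
  uv_loop_t* loop = uv_default_loop();
  uv_check_init(loop, &check_handle);
  uv_check_start(&check_handle, on_check);
  uv_prepare_init(loop, &prepare_handle);
  uv_prepare_start(&prepare_handle, on_prepare);
  // In Node this would hook the existing loop; here we just run an empty one.
  uv_run(loop, UV_RUN_DEFAULT);
  return 0;
}
```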
I'm willing to open a PR to fix the issue by reworking the internals to monitor libuv directly instead of relying on a timer, but I first wanted to open this issue to see what others think, and if there may be better approaches or subtle potential issue with the proposed new approach. If it sounds good to everyone then I'll just go ahead with a PR. We've been waiting for a long time to get this out of the box, and thought that we could finally remove our native addon but this ended up not being the case, so I'm pretty motivated to help any way I can to fix this. | perf_hooks | low | Critical |
2,703,164,765 | tauri | [bug] Hot reload on android (angular) not working: [vite] failed to connect to websocket | ### Describe the bug
I want to build an app on Android using Tauri with Angular. Hot reload is not working, and I get the error message below when starting with `npm run tauri android dev`:
When making changes, they are detected and I see the logs:
```
Initial chunk files | Names | Raw size
main.js | main | 8.05 kB |
Application bundle generation complete. [0.772 seconds]
Page reload sent to client(s).
```
But changes are not reflected in the android emulator.
Also, when I use `npm run tauri dev` hot reloading works.
### Reproduction
- Create a project using create tauri app and select angular framework
`npm create tauri-app@latest`
select angular
- Initialize app
`cd tauri-app`
`npm install`
`npm run tauri android init`
- Start app in android emulator
`npm run tauri android dev`
- Change a file and notice that it is not reloaded
### Expected behavior
hot reload on android
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
✔ WebView2: 131.0.2903.70
✔ MSVC: Visual Studio Build Tools 2022
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 22.11.0
- npm: 10.9.0
[-] Packages
- tauri :crab:: 2.1.1
- tauri-build :crab:: 2.0.3
- wry :crab:: 0.47.2
- tao :crab:: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-shell :crab:: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-notification :crab:: git+https://github.com/tauri-apps/plugins-workspace?branch=v2#33e924574afde37003219fdd4d9085abaadf59b8 (2.0.1)
- @tauri-apps/plugin-notification : 2.0.0
- tauri-plugin-os :crab:: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
```
### Stack trace
```text
E Tauri/Console: File: http://tauri.localhost/@vite/client - Line 488 - Msg: [vite] failed to connect to websocket.
E Tauri/Console: your current setup:
E Tauri/Console: (browser) tauri.localhost/ <--[HTTP]--> localhost:1420/ (server)
E Tauri/Console: (browser) tauri.localhost:/ <--[WebSocket (failing)]--> localhost:1420/ (server)
E Tauri/Console: Check out your Vite / network configuration and https://vitejs.dev/config/server-options.html#server-hmr .
```
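For what it's worth, the Vite-based Tauri templates handle the mobile case by pointing HMR at the host machine via `TAURI_DEV_HOST`, roughly as below. The Angular CLI wraps Vite and may not expose these options directly, so treat this as a sketch of the idea rather than a confirmed fix for the Angular template:

```ts
// vite.config.ts (sketch, adapted from the Vite-based create-tauri-app templates)
import { defineConfig } from "vite";

const host = process.env.TAURI_DEV_HOST;

export default defineConfig({
  clearScreen: false,
  server: {
    port: 1420,
    strictPort: true,
    host: host || false,
    // Point the HMR websocket at the dev machine so the Android webview can reach it.
    hmr: host
      ? { protocol: "ws", host, port: 1421 }
      : undefined,
    watch: {
      ignored: ["**/src-tauri/**"],
    },
  },
});
```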
### Additional context
Not sure if this is relevant, but I had to modify package.json so that it even starts:
`"start": "ng serve --host 0.0.0.0",` instead of `"start": "ng serve",`. Besides that, it's a normal project created from `npm create tauri-app@latest` (or the powershell version, not sure anymore). | type: bug,status: needs triage | low | Critical |