id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,789,625,977 | pytorch | torch.nn.functional.scaled_dot_product_attention is_causal fails for kv-cache case (sequential and further parallel attention) | ### 🚀 The feature, motivation and pitch
**Behaviour found for torch version 2.2.2**
It would be great if scaled_dot_product_attention could be (easily) used for the case of sequential token generation when a kv-cache is present. However, currently when is_causal is set and a single query vector is put in, the function only attends to the earliest k and v, resulting in the same vector being produced repeatedly during sequential token generation.
More generally, in cases where further parallel attention is required - even when a kv-cache has already been generated - I found the correct attention matrix difficult to generate. The code I converged to is
`mask = torch.tril(torch.ones(w, w, dtype=torch.bool))[-h:, :]`
with
w = sequence length of KV-cache
h = sequence length of queries
since a lower-triangular attention mask is required which is all-true for the case of a single query vector.
Is this indeed the intended way to use scaled_dot_product_attention or am I doing something dumb?
### Alternatives
For `is_causal=True`, I propose to use the attention mask generated by the code above (plus some unsqueezing for broadcasting).
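A runnable sketch of this workaround (shapes here are example values; a boolean `(h, w)` mask broadcasts over batch and heads in SDPA, so the unsqueezing is optional):
```python
import torch
import torch.nn.functional as F

w, h, d = 16, 1, 64              # kv-cache length, query length, head dim (example values)
q = torch.randn(1, 1, h, d)      # the single new query token
k = torch.randn(1, 1, w, d)      # cached keys
v = torch.randn(1, 1, w, d)      # cached values

# Lower-triangular mask over the full cache, keeping only the last h rows;
# for h == 1 this row is all True, so the query attends to the whole cache.
mask = torch.tril(torch.ones(w, w, dtype=torch.bool))[-h:, :]
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
print(out.shape)  # torch.Size([1, 1, 1, 64])
```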
### Additional context
Behaviour found for torch version 2.2.2 | triaged,module: sdpa | low | Minor |
2,789,626,231 | kubernetes | kube-proxy --cleanup issues | 1. We don't actually document `kube-proxy --cleanup` anywhere.
2. It could probably do a _slightly_ better job than it actually does (eg, https://github.com/kubernetes/kubeadm/issues/3133#issuecomment-2592104802) | sig/network,triage/accepted | low | Minor |
2,789,629,529 | next.js | Client-side navigation error with rewrites and catch-all routes | ### Link to the code that reproduces this issue
repository: https://github.com/klaasman/nextjs-rewrite-catchall-conflict
preview deployment: https://nextjs-rewrite-catchall-conflict.vercel.app/
### To Reproduce
A routing precedence issue arises when URL rewrites conflict with catch-all routes during client-side navigation, but only when middleware exists AND the catch-all route has `fallback: false` AND the resource served by the proxied host returns a 2xx response.
To reproduce, either [clone the repository](https://github.com/klaasman/nextjs-rewrite-catchall-conflict) and run it, or open the preview deployment at https://nextjs-rewrite-catchall-conflict.vercel.app, or follow the steps below:
1. Create a Next.js project (using pages-router, didn't test with app router)
2. Add a `middleware.ts` file (can be an empty function).
3. Add a catch-all route file `pages/[...segments].tsx` where `getStaticPaths` is configured with `fallback: false`.
4. Set up a `next.config.js` rewrite rule to proxy incoming requests (using `fallback` rewrites)
5. Ensure the proxied endpoint (e.g., httpstat.us/200) returns a 2xx status code.
6. Navigate to `/200` using client-side navigation (via Link component).
### Current vs. Expected behavior
**Current behavior:**
During client-side navigation, the catch-all route takes precedence over the configured rewrite when the proxied endpoint returns a 2xx response. This results in incorrect routing behavior. (see [preview-deployment](https://nextjs-rewrite-catchall-conflict.vercel.app/))
**Expected behavior:**
The client-side navigation should follow the rewrite rule, correctly routing to the proxied endpoint regardless of the 2xx response. The catch-all route should not interfere with the rewrite when the endpoint returns a successful response.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 22.12.0
npm: 10.9.0
Yarn: 1.22.19
pnpm: 9.14.2
Relevant Packages:
next: 15.2.0-canary.11 // Latest available version is detected (15.2.0-canary.11).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware, Pages Router, Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed)
### Additional context
_No response_ | Middleware,Navigation,Pages Router | low | Critical |
2,789,700,016 | ollama | most powerful model with 4m context MiniMax-Text-01 | https://huggingface.co/MiniMaxAI/MiniMax-Text-01
https://x.com/MiniMax__AI/status/1879226391352549451 | model request | low | Major |
2,789,709,557 | vscode | Firefox on iOS: Cannot type in editor after changing focus | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
In Firefox on iOS (iPad Pro 12.9 Gen 4) I am unable to type in the editor or terminal windows (whichever I last used) after switching focus to another application and then back to Firefox. Clicking (touching) another pane and then returning, e.g. switching from the editor to the terminal and back, resolves the issue.
- VS Code Version: GitHub Codespaces
- OS Version: iOS 18
Steps to Reproduce:
1. Open a GitHub Codespace in Firefox on iOS.
2. Open an editor window to a file.
3. Change to another application.
4. Change back to Firefox.
5. No keyboard input works in the editor.
6. Select a different pane (e.g. explorer, terminal).
7. Select the editor.
8. The keyboard works normally.
**Additional Context**
This is not limited to GitHub Codespaces. The default in-place editor on github.com experiences the same issue. The issue occurs both with and without the Magic Keyboard attached.
Being on iOS there are no plugins in Firefox. Safe Browsing in FF is set to `Strict`.
| firefox,editor-edit-context | low | Critical |
2,789,714,297 | pytorch | Torch compile cache | ### 🐛 Describe the bug
Hi,
I'm setting the following values
TORCHINDUCTOR_FX_GRAPH_CACHE
TORCHINDUCTOR_CACHE_DIR
I see the cache folder gets populated with 3.8G of data.
I'm creating a tar archive to move the cache to another instance with the same H100 GPU, and untarring it there. But the compile time shows the cache has not been used.
If I set the variables on two instances that share the same network drive, compile on one, then run on the other, the compile time is still very high, as if the cache has not been taken into account.
What are the signatures of the cache elements? If I know better what triggers the cache retrieval, I might find a configuration where I can reuse the cache between instances.
Thanks for your help!
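For concreteness, a minimal sketch of the setup in use (the shared cache path is hypothetical; the variables must be set before the model is compiled):
```python
import os

os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"
os.environ["TORCHINDUCTOR_CACHE_DIR"] = "/mnt/shared/inductor-cache"  # hypothetical shared path

import torch

model = torch.nn.Linear(8, 8).cuda()
compiled = torch.compile(model)
compiled(torch.randn(4, 8, device="cuda"))  # first call compiles and populates the cache
```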
### Versions
torch @ https://download.pytorch.org/whl/nightly/cu124/torch-2.6.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
torchaudio @ https://download.pytorch.org/whl/nightly/cu124/torchaudio-2.5.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
torchvision @ https://download.pytorch.org/whl/nightly/cu124/torchvision-0.20.0.dev20240918%2Bcu124-cp311-cp311-linux_x86_64.whl
pytorch_triton @ https://download.pytorch.org/whl/nightly/pytorch_triton-3.1.0%2B5fe38ffd73-cp311-cp311-linux_x86_64.whl
cc @chauhang @penguinwu | triaged,oncall: pt2 | low | Critical |
2,789,742,411 | flutter | [ERROR:flutter/runtime/dart_vm_initializer.cc(40)] Unhandled Exception: PlatformException(channel-error, Unable to establish connection on channel: "dev.flutter.pigeon.shared_preferences_foundation.LegacyUserDefaultsApi.getAll"., null, null) | ### Steps to reproduce
[ERROR:flutter/runtime/dart_vm_initializer.cc(40)] Unhandled Exception: PlatformException(channel-error, Unable to establish connection on channel: "dev.flutter.pigeon.shared_preferences_foundation.LegacyUserDefaultsApi.getAll"., null, null)
### Expected results
`SharedPreferences.getInstance()` completes without throwing a PlatformException.
### Actual results
[ERROR:flutter/runtime/dart_vm_initializer.cc(40)] Unhandled Exception: PlatformException(channel-error, Unable to establish connection on channel: "dev.flutter.pigeon.shared_preferences_foundation.LegacyUserDefaultsApi.getAll"., null, null)
### Code sample
<details open><summary>Code sample</summary>
```dart
await SharedPreferences.getInstance();
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="1347" alt="Image" src="https://github.com/user-attachments/assets/413713cb-d5d0-4e9a-9faa-572efb9a4631" />
</details>
### Logs
<details open><summary>Logs</summary>
```console
h List all available interactive commands.
d Detach (terminate "flutter run" but leave application running).
c Clear the screen
q Quit (terminate the application on the device).
A Dart VM Service on 15 pro max 17.0 is available at: http://127.0.0.1:59532/h7t2NlTl1Ww=/
The Flutter DevTools debugger and profiler on 15 pro max 17.0 is available at: http://127.0.0.1:9101?uri=http://127.0.0.1:59532/h7t2NlTl1Ww=/
Performing hot restart...
Restarted application in 575ms.
[ERROR:flutter/runtime/dart_vm_initializer.cc(40)] Unhandled Exception: PlatformException(channel-error, Unable to establish connection on channel: "dev.flutter.pigeon.shared_preferences_foundation.LegacyUserDefaultsApi.getAll"., null, null)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
h List all available interactive commands.
d Detach (terminate "flutter run" but leave application running).
c Clear the screen
q Quit (terminate the application on the device).
A Dart VM Service on 15 pro max 17.0 is available at: http://127.0.0.1:59532/h7t2NlTl1Ww=/
The Flutter DevTools debugger and profiler on 15 pro max 17.0 is available at: http://127.0.0.1:9101?uri=http://127.0.0.1:59532/h7t2NlTl1Ww=/
Performing hot restart...
Restarted application in 575ms.
[ERROR:flutter/runtime/dart_vm_initializer.cc(40)] Unhandled Exception: PlatformException(channel-error, Unable to establish connection on channel: "dev.flutter.pigeon.shared_preferences_foundation.LegacyUserDefaultsApi.getAll"., null, null)
```
</details>
| waiting for customer response,in triage | low | Critical |
2,789,742,478 | terminal | BringWindowToTop doesn't set active tab | ### Description of the new feature
Currently, BringWindowToTop only brings the terminal window to the foreground; however, it does not activate the correct tab. In certain scenarios, the user's focus is not restored to the intended location if the active tab isn't the one that initiated the BringWindowToTop call.
Using the information from [this StackOverflow post](https://stackoverflow.com/a/59659421/3594197), I created a [functional example](https://github.com/zacuke/start-shim). The example demonstrates BringWindowToTop being used to ensure the user focus returns to the correct command line.
A current workaround for this limitation is to avoid using tabs altogether and stick with individual console windows instead.
As a side note, it would be nice if the built-in windows `start` command also had a `/refocus` option, so the functional example I created wouldn't be necessary. We could alias `start /wait /refocus myapp $*` to achieve this workflow behavior.
### Proposed technical implementation details
If BringWindowToTop() can't be hooked to automatically switch to the active tab, is there an alternative approach to programmatically identify the correct tab using some sort of ID or identifier, and then invoke a function to set that tab as active?
I see a possible way of doing it on line 940 in src/cascadia/TerminalControl/HwndTerminal.cpp
`void __stdcall TerminalSetFocus(void* terminal)`, but it doesn't seem to be exposed as a public API.
Which leads me to this
```
/// This class is only left public since xaml cannot work with internal classes.
/// </remarks>
public class TerminalContainer : HwndHost
{
...
private IntPtr TerminalContainer_MessageHook(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
{
if (hwnd == this.hwnd)
{
switch ((NativeMethods.WindowMessage)msg)
{
case NativeMethods.WindowMessage.WM_SETFOCUS:
NativeMethods.TerminalSetFocus(this.terminal);
```
I can imagine sending the terminal host process some kind of IPC message that triggers TerminalSetFocus, letting us call BringWindowToTop together with this additional trick to bring the correct tab up too. But ideally, the host process would detect BringWindowToTop() and also bring the tab to the top. | Issue-Feature,Product-Conpty,Area-Windowing | low | Minor |
2,789,781,950 | vscode | Dragging does not work in a VSCode extension WebView | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version:
Version: 1.96.3 (user setup)
Commit: 91fbdddc47bc9c09064bf7acf133d22631cbf083
Date: 2025-01-09T18:14:09.060Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
- OS Version: Windows 11 Professional x64 22631.4602
Steps to Reproduce:
1. Clone https://github.com/wchengk09/vscode-webview-dragging-does-not-work
2. Open it in VSCode
3. Press `F5` to run the extension
I want to create a WebView that lets users add files by dragging them from the explorer into the WebView. But when I drag a file, VSCode opens it in a new editor, and the drag event is never caught.
I also tried dragging files from the Windows explorer, and the result is the same.
I even tried pressing `shift` while dragging the file, but I still cannot catch the event.
I found some similar issues, such as #182449 and #218626, but none of them helped me.
Here is a video:
https://github.com/user-attachments/assets/acaf2051-4162-42bc-b7c6-d3101df17f31 | bug,webview | low | Critical |
2,789,801,882 | neovim | Capability to show diagnostics as markdown | ### Problem
I was working on a pretty TypeScript error-message formatter for the diagnostics float. While it's somewhat better than the default output, it would be nice to have markdown syntax highlighting for diagnostics as well.
I did some digging and found out that diagnostics use the same function Neovim uses for LSP `hover`, except that `syntax=plaintext` is hard-coded here:
https://github.com/s1n7ax/neovim/blob/a78eddd54112033eea0212865efd2f75cc59fc93/runtime/lua/vim/diagnostic.lua?plain=1#L1977
## Before

## After

I would like to implement this feature. My plan is to add a `syntax` field to `vim.diagnostic.Opts.Float`; possible values would be either `markdown` or `plaintext`. Additionally, we might need to disable the default highlights that are added line by line:
https://github.com/s1n7ax/neovim/blob/a78eddd54112033eea0212865efd2f75cc59fc93/runtime/lua/vim/diagnostic.lua?plain=1#L1986
### Expected behavior
When `syntax` is set to `markdown`, markdown syntax highlighting should be used for diagnostics | enhancement,lsp,diagnostic | low | Critical |
2,789,812,889 | PowerToys | Image resizer incorrectly calculates missing dimension in fit mode | ### Microsoft PowerToys version
0.87.1
### Installation method
WinGet
### Running as admin
No
### Area(s) with issue?
Image Resizer
### Steps to reproduce
Take this image as input.

The dimensions are 2040 x 1530 pixels.
Now resize it.

### ✔️ Expected Behavior
The resulting dimensions should be 1000 pixels wide by whatever height is necessary to keep the aspect ratio, i.e. 1000 x 750 pixels for this 2040 x 1530 source.
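For reference, the expected fit-mode arithmetic (a sketch of the assumed behaviour, not PowerToys source code):
```python
src_w, src_h = 2040, 1530
target_w = 1000
target_h = round(target_w * src_h / src_w)  # 750, preserving the 4:3 aspect ratio
print(target_w, target_h)  # 1000 750
```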
### ❌ Actual Behavior
Here is the resulting image.

The dimensions are 853 x 640 pixels.
### Other Software
None. | Issue-Bug,Needs-Triage | low | Minor |
2,789,819,980 | opencv | threshold API : query threshold only, and masks | ### Describe the feature and motivation
Currently (OpenCV 4.11), `cv::threshold()` supports auto-threshold binarizations OTSU and TRIANGLE and returns the computed threshold value.
It's a pity that there is no API to only query that auto-threshold without actually performing the thresholding.
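For illustration, the current workaround in the Python bindings is to run the full thresholding and keep only the returned value:
```python
import cv2
import numpy as np

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in 8-bit grayscale image
# The Otsu threshold is the first return value, but the thresholded image
# must be computed (and is discarded here) just to obtain it.
otsu_t, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(otsu_t)
```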
To avoid adding an API for that and to not duplicate any code, a very easy fix would be to add some flag ~`cv::THRESH_DISABLE`~ `cv::THRESH_DRYRUN` specifically for that purpose.
However, `cv::threshold()` also has a big limitation: it does not support masking. I would like to add such a `cv::thresholdWithMask()` (or a `cv::threshold()` overload with an additional mask parameter). But if this new API is created, ~`cv::THRESH_DISABLE`~ `cv::THRESH_DRYRUN` could be irrelevant, replaced by a boolean parameter of the new function.
- would a PR `cv::thresholdWithMask()` (or a `cv::threshold()` overload) be accepted, or is there a reason why thresholding does not support masking?
- which is better, `cv::thresholdWithMask()` or a `cv::threshold()` overload?
- is the flag ~`cv::THRESH_DISABLE`~ `cv::THRESH_DRYRUN` a better idea than some `cv::getThresholdAutoValue()`?
- is the flag ~`cv::THRESH_DISABLE`~ `cv::THRESH_DRYRUN` a better idea than some specific boolean input parameter?
| feature,category: imgproc | low | Minor |
2,789,828,631 | yt-dlp | Cannot download on Patreon with --cookies-from-browser | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
France
### Provide a description that is worded well enough to be understood
Hi everyone, and thank you for the software,
For the last few weeks, I have not been able to download from Patreon; it used to work.
I have updated to the latest stable version:
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead (a sketch of the equivalent API call follows this checklist)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
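For reference, a rough Python-API equivalent of the failing CLI invocation below (a sketch; `verbose` and `cookiesfrombrowser` are documented `YoutubeDL` options):
```python
from yt_dlp import YoutubeDL

opts = {
    "verbose": True,
    # (browser, profile, keyring, container) - only the browser is given here
    "cookiesfrombrowser": ("firefox", None, None, None),
}
with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.patreon.com/posts/four-ways-to-118220732"])
```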
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--cookies-from-browser', 'firefox', 'https://www.patreon.com/posts/four-ways-to-118220732']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [dade5e35c] (zip)
[debug] Python 3.12.8 (CPython x86_64 64bit) - Linux-6.12.6-100.fc40.x86_64-x86_64-with-glibc2.39 (OpenSSL 3.2.2 4 Jun 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.2 (setts), ffprobe 6.1.2
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.47.0, requests-2.31.0, sqlite3-3.45.1, urllib3-1.26.20, websockets-12.0
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "/home/kevin/.mozilla/firefox/meicwcva.default-release-3-1699360557147/cookies.sqlite"
Extracted 602 cookies from firefox
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[patreon] Extracting URL: https://www.patreon.com/posts/four-ways-to-118220732
[patreon] 118220732: Downloading API JSON
ERROR: [patreon] 118220732: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>)
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/extractor/patreon.py", line 281, in _real_extract
post = self._call_api(
^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/extractor/patreon.py", line 43, in _call_api
return self._download_json(
^^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/extractor/common.py", line 1152, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/extractor/common.py", line 1112, in download_handle
res = self._download_webpage_handle(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/extractor/common.py", line 962, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/extractor/common.py", line 911, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/networking/_urllib.py", line 398, in _send
res = opener.open(urllib_req, timeout=self._calculate_timeout(request))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.12/urllib/request.py", line 521, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.12/urllib/request.py", line 630, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.12/urllib/request.py", line 559, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.12/urllib/request.py", line 492, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/usr/lib64/python3.12/urllib/request.py", line 639, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/extractor/common.py", line 898, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query, extensions))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/YoutubeDL.py", line 4175, in urlopen
return self._request_director.send(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
^^^^^^^^^^^^^^^^^^^
File "/mnt/rip/Rip/./yt-dlp/yt_dlp/networking/_urllib.py", line 403, in _send
raise HTTPError(UrllibResponseAdapter(e.fp), redirect_loop='redirect error' in str(e)) from e
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
```
| account-needed,site-bug,triage,can-share-account | low | Critical |
2,789,830,741 | pytorch | Region check for in-place read and write does not always work | ### 🐛 Describe the bug
Hello!
If I try to read and write from and to the same locations along the first axis of a tensor, I get a RuntimeError, which is expected:
```python
>>> arr = torch.arange(9).reshape(3, 3)
>>> arr[1:, :] = arr[:-1, :]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[25], line 1
----> 1 arr[1:, :] = arr[:-1, :]
2 arr
RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation.
```
However, if I do the same on the second axis, there is no error, but (for me) unexpected behaviour:
```python
>>> arr = torch.arange(9).reshape(3, 3)
>>> arr[:, 1:] = arr[:, :-1]
>>> arr
tensor([[0, 0, 0],
[3, 3, 3],
[6, 6, 6]])
```
The expected behaviour would be what happens on the GPU (and also for numpy arrays):
```python
>>> arr = torch.arange(9).reshape(3, 3).cuda()
>>> arr[:, 1:] = arr[:, :-1]
>>> arr
tensor([[0, 0, 1],
[3, 3, 4],
[6, 6, 7]], device='cuda:0')
```
I'm not sure if this is actually a bug or just something to be aware of, but I would at least expect CPU and GPU operations to behave the same. How does the check that produces the RuntimeError in the first case work? Is it too expensive to make it work for arbitrary slices?
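For completeness, the workaround suggested by the error message itself, cloning the source before the in-place write, gives the expected result on CPU as well:
```python
>>> arr = torch.arange(9).reshape(3, 3)
>>> arr[:, 1:] = arr[:, :-1].clone()  # clone() breaks the aliasing between source and destination
>>> arr
tensor([[0, 0, 1],
        [3, 3, 4],
        [6, 6, 7]])
```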
Thanks!
### Versions
Tested torch versions up to 2.4.0 | triaged,module: partial aliasing | low | Critical |
2,789,836,169 | deno | Support Custom Signal Listeners or a Global `afterAll` Hook in `deno test` | ## Description
It appears that when defining tests with `Deno.test()` and running them using `deno test`, adding custom signal listeners (e.g. `Deno.addSignalListener`) does not work as expected. The `deno test` runner seems to ignore these listeners, as the signals are only handled internally within the test runner's Rust implementation.
In my use case, I am building a test wrapper that awaits a global asynchronous teardown function during a final test step. While this works well during normal test execution, I want the teardown function to be triggered even when the test suite exits unexpectedly, such as when a `SIGINT` or `SIGTERM` signal is received.
Here is a simplified example:
```ts
Deno.addSignalListener("SIGINT", () => {
console.log("Initiating cleanup before exiting...");
// Perform teardown actions, then exit
});
// Calls Deno.test() internally and registers teardown logic within a test-step
await runTestSuiteAndRegisterTests();
```
However, when running the test suite with `deno test`, the signal listener is never triggered. Instead, the `deno test` runner uses its own signal handler and outputs the following when interrupted:
```
SIGINT The following tests were pending:
run mytopleveltest => ./packages/mypackage/src/runner.ts:42:42
```
This behavior prevents custom signal listeners from being used to handle cleanup tasks or shutdown hooks effectively.
## Feature Request
To make `deno test` more flexible and customizable for advanced test scenarios, I propose one of the following enhancements:
1. **Support for Custom Signal Listeners**:
- Allow users to register their own signal listeners (`Deno.addSignalListener`) in addition to the test runner's default handlers.
- This would enable users to handle signals like `SIGINT` or `SIGTERM` and execute custom logic, such as cleanup functions, before the process exits.
2. **A Global `afterAll` Hook**:
- Introduce a global `afterAll` hook that is executed after all tests complete, regardless of how the test suite terminates (e.g., normal exit or signal interruption).
- The `afterAll` hook would allow users to register teardown logic that ensures proper cleanup before the process exits.
- the hook should allow passing in synchronous functions as well as asynchronous functions that will be awaited
## Expected Behavior
- If a `Deno.addSignalListener` is registered, the test runner should invoke the user's listener(s) in addition to its own internal signal handling.
- Alternatively, a global `afterAll` hook would provide a standardized way to execute cleanup logic at the end of the test suite, regardless of how the suite terminates.
## Use Case
This feature would enable advanced test suite behaviors, such as:
- Cleaning up global resources (e.g. databases, external services, or temporary files) when the suite exits.
- Ensuring all test lifecycle hooks (setup, execution, teardown) are handled gracefully, even in interrupted scenarios.
## Current Behavior
Currently, `deno test`:
- Ignores user-defined signal listeners and only uses its internal Rust signal handling.
- Outputs pending tests on signal interruption but does not allow for custom cleanup logic to run.
## Conclusion
Adding support for custom signal listeners or a global `afterAll` hook would greatly enhance the flexibility and usability of the `deno test` runner, especially for users with complex test suite requirements.
Thank you for considering this feature request! I'm happy to provide further details or examples if needed.
| testing | low | Minor |
2,789,866,244 | rust | `-C split-debuginfo={off,packed,unpacked}` is (effectively) untested on windows-msvc and windows-gnu (well, windows generally) | `tests/run-make/split-debuginfo` has this:
https://github.com/rust-lang/rust/blob/2776bdfe423c9fdfcd6313d678f0852ea26f1309/tests/run-make/split-debuginfo/Makefile#L34-L38
Which lumps windows-msvc and windows-gnu together, and doesn't actually test anything on Windows at all.
Noticed while working on porting `split-debuginfo` to rmake.rs.
If this is tested somewhere else, well, it's clearly not in this run-make test. | A-testsuite,A-debuginfo,E-hard,T-compiler,O-windows-gnu,O-windows-msvc,C-bug,A-run-make | low | Critical |
2,789,885,203 | PowerToys | Exclusion list for 'Move newly created windows to their last known zone' | ### Description of the new feature / enhancement
I generally love the 'Move newly created windows to their last known zone' feature, but it breaks Edge's own attempt to place its windows on the monitors they were last on. After Edge starts, it first positions its windows correctly, and then PowerToys moves them all to a single zone.
I wouldn't like to exclude Edge completely, as I like the snap feature, which works well.
### Scenario when this would be used?
Described above.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,789,934,113 | kubernetes | persistentVolumeReclaimPolicy: Recycle is said deprecated but still used in PV example | Recycle reclaim policy is deprecated:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#recycle
https://github.com/kubernetes/kubernetes/pull/59063
https://groups.google.com/g/kubernetes-dev/c/uexugCza84I
The primary example showcasing a PV uses `persistentVolumeReclaimPolicy: Recycle`:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes | sig/docs,needs-triage | low | Minor |
2,789,947,937 | transformers | Add support for MiniMax-Text-01 and MiniMax-VL-01 from MiniMaxAI | ### Model description
MiniMaxAI has just released two new models, one for text generation and one for vision-language tasks. While the code and weights have been made publicly available, the code requires significant formatting and cleaning to align with the standards of the Hugging Face Transformers library. The models are:
**MiniMax-Text-01**
MiniMax-Text-01 is a powerful language model with 456 billion total parameters, of which 45.9 billion are activated per token. To better unlock the long context capabilities of the model, MiniMax-Text-01 adopts a hybrid architecture that combines Lightning Attention, Softmax Attention and Mixture-of-Experts (MoE). Leveraging advanced parallel strategies and innovative compute-communication overlap methods, such as Linear Attention Sequence Parallelism Plus (LASP+), varlen ring attention, Expert Tensor Parallel (ETP), etc., MiniMax-Text-01's training context length is extended to 1 million tokens, and it can handle a context of up to 4 million tokens during the inference. On various academic benchmarks, MiniMax-Text-01 also demonstrates the performance of a top-tier model.
**MiniMax-VL-01**
It adopts the "ViT-MLP-LLM" framework, which is a commonly used technique in the field of multimodal large language models. The model is initialized and trained with three key parts: a 303-million-parameter Vision Transformer (ViT) for visual encoding, a randomly initialized two-layer MLP projector for image adaptation, and the MiniMax-Text-01 as the base LLM. MiniMax-VL-01 has a notable dynamic resolution feature. Input images are resized per a pre-set grid, with resolutions from 336×336 to 2016×2016, keeping a 336×336 thumbnail. The resized images are split into non-overlapping patches of the same size. These patches and the thumbnail are encoded separately and then combined for a full image representation. The training data for MiniMax-VL-01 consists of caption, description, and instruction data. The Vision Transformer (ViT) is trained on 694 million image-caption pairs from scratch. Across four distinct stages of the training pipeline, a total of 512 billion tokens are processed, leveraging this vast amount of data to endow the model with strong capabilities. Finally, MiniMax-VL-01 has reached top-level performance on multimodal leaderboards, demonstrating its edge and dependability in complex multimodal tasks.
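Until native support lands, a hypothetical way to try the released weights is through the Hub's remote-code path; the snippet below is an assumption (model IDs from the links further down), not taken from the released code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MiniMaxAI/MiniMax-Text-01"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```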
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
- Research Paper: https://arxiv.org/abs/2501.08313
- Authors: [MiniMax](https://arxiv.org/search/cs?searchtype=author&query=MiniMax), [Aonian Li](https://arxiv.org/search/cs?searchtype=author&query=Li,+A), [Bangwei Gong](https://arxiv.org/search/cs?searchtype=author&query=Gong,+B), et al.
- Implementation
- [MiniMaxAI/MiniMax-Text-01](https://huggingface.co/MiniMaxAI/MiniMax-Text-01/tree/main)
- [MiniMaxAI/MiniMax-VL-01](https://huggingface.co/MiniMaxAI/MiniMax-VL-01/tree/main)
- Models Weights
- [MiniMaxAI/MiniMax-Text-01](https://huggingface.co/MiniMaxAI/MiniMax-Text-01)
- [MiniMaxAI/MiniMax-VL-01](https://huggingface.co/MiniMaxAI/MiniMax-VL-01) | New model | low | Major |
2,789,974,579 | TypeScript | Accessing protected computed property does not produce a compiler error | ### ๐ Search Terms
protected computed no error bug
### ๐ Version & Regression Information
This is the behavior in every version I tried (v3.3.3333, v4.0.5, v5.7.3), and I reviewed the FAQ for entries about **Common "Bugs" That Aren't Bugs**
### โฏ Playground Link
https://tsplay.dev/wjdn7m
### ๐ป Code
```ts
declare class Foo {
protected readonly baz = 42
}
declare class Bar {
readonly foo: Foo
}
declare const bar: Bar
// @ts-expect-error
console.log(bar.foo["baz"].toFixed(2))
```
### 🙁 Actual behavior
- (without `// @ts-expect-error`): No errors
- (with `// @ts-expect-error`): Compiler error `Unused '@ts-expect-error' directive`
### 🙂 Expected behavior
- (without `// @ts-expect-error`): Compiler error `Property [โฆ] is protected and only accessible within class 'Foo' and its subclasses`
- (with `// @ts-expect-error`): No errors
### Additional information about the issue
_No response_ | Suggestion,Awaiting More Feedback | low | Critical |
2,789,992,508 | tensorflow | Windows libtensorflow size increased 4x with 2.17 | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.17+
### Custom code
No
### OS platform and distribution
Windows x86_64
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Libtensorflow.dll for windows was 238MB with 2.16.2, is 909MB with 2.17.0, and is 931MB with 2.18.0.
At the same time the linux versions haven't changed significantly.
I would not expect the tensorflow binaries on windows to be over twice the size of linux, nor for the size to increase so much without any notice in the release notes.
I've tried to look through the bazel configs but there's nothing obvious to me which would cause this!
### 2.16.2
versions_2.16.2_libtensorflow-cpu-windows-x86_64 238MB
versions_2.16.2_libtensorflow-cpu-linux-x86_64 422MB
### 2.17.0
versions_2.17.0_libtensorflow-cpu-windows-x86_64 909MB
versions_2.17.0_libtensorflow-cpu-linux-x86_64 412MB
### 2.18.0
versions_2.18.0_libtensorflow-cpu-windows-x86_64 931MB
### Standalone code to reproduce the issue
```shell
Libtensorflow is provided by google via the GCS buckets documented here https://www.tensorflow.org/install/lang_c
```
### Relevant log output
```shell
``` | type:build/install,subtype:windows,2.17 | medium | Critical |
2,789,993,956 | flutter | [flutter_adaptive_scaffold] allow users to specify navigation rail padding with AdaptiveScaffold | ### Use case
The current implementation of AdapativeScaffold supplies the default 8 dp padding which is undesirable in some layouts, such as when there is an `AppBar` in the body of a given page. I would like to have control over the padding and be able to override it manually.
<img width="202" alt="Image" src="https://github.com/user-attachments/assets/4ace5aab-8146-43d5-a379-980f3078e837" />
### Proposal
Include a `navigationRailPadding` property in the constructor for `AdaptiveScaffold` that allows you to override the default padding for the navigation rail. | framework,package,c: proposal,team-ecosystem,P2,p: flutter_adaptive_scaffold,triaged-ecosystem | low | Minor |
2,790,025,830 | tensorflow | TensorRT ( C++ ) inference strange behavior on Jetson AGX Xavier | I developed 2 distinct models, for 2 use cases, to analyzed some vibration patterns: one of them when system is turn on and second when system is shut down (so there are no any vibration detected )
The entire training process uses TensorFlow 2.7.0 (an auto encoder in python) to create .h5 models, which are converted to .onnx models files and then to .engine files for the Jetson platform (Jetson AGX Xavier CUDA ).
Jetson AGX Xavier specs:
cuda: 11.4.315
cuDNN: 8.6.0
tensorRT: 8.5.2.2
jetpack: 5.1.3
python3 -c "import tensorflow as tf; print('TensorFlow version:', tf.__version__)"
TensorFlow version: 2.11.0
Autoencoder training script in Python (sample):
```
input_img = tf.keras.layers.Input(shape=(2000, lines))
# Encoder
x = tf.keras.layers.Conv1D(12, 128, padding='same')(input_img)
x = tf.keras.layers.MaxPooling1D(4)(x) # Downsample: 2000 -> 500
x = tf.keras.layers.Conv1D(12, 64, padding='same')(x)
x = tf.keras.layers.MaxPooling1D(2)(x) # Downsample: 500 -> 250
x = tf.keras.layers.Conv1D(12, 16, padding='same')(x)
x = tf.keras.layers.MaxPooling1D(2)(x) # Downsample: 250 -> 125
# Bottleneck
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(self.__config['MODEL']['ENCODED_STATE_SIZE'])(x)
# Decoder
x = tf.keras.layers.Dense(125 * 12)(x) # Expand to match last encoder feature size
x = tf.keras.layers.Reshape((125, 12))(x)
x = tf.keras.layers.UpSampling1D(2)(x) # Upsample: 125 -> 250
x = tf.keras.layers.Conv1D(12, 16, padding='same')(x)
x = tf.keras.layers.UpSampling1D(2)(x) # Upsample: 250 -> 500
x = tf.keras.layers.Conv1D(12, 64, padding='same')(x)
x = tf.keras.layers.UpSampling1D(4)(x) # Upsample: 500 -> 2000
x = tf.keras.layers.Conv1D(lines, 128, padding='same')(x) # Correct Final Layer
# Model definition
self.__model = tf.keras.models.Model(input_img, x)
```
It doesn't matter which model I use: the inference result values are exactly the SAME, as if the neural network learned nothing...
You can see below 2 comparative charts with the inference values

Don't assume that the data might be corrupted; I have collected enough data to train for both cases and I've checked its validity.
The confusing part is that inference works in Python, using TensorFlow 2.7.0 with GPU on Ubuntu Focal x86_64: there I do see different values between the 2 charts.
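To narrow down whether the conversion or the TensorRT/C++ side is at fault, a minimal parity check between the Keras model and the exported ONNX model could look like this (file names are hypothetical; assumes onnxruntime is installed):
```python
import numpy as np
import onnxruntime as ort
import tensorflow as tf

keras_model = tf.keras.models.load_model("model.h5")  # hypothetical path
sess = ort.InferenceSession("model.onnx")             # hypothetical path

x = np.random.rand(1, 2000, 3).astype(np.float32)     # same shape as the FFT input
keras_out = keras_model.predict(x)
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]
print("max abs diff:", np.max(np.abs(keras_out - onnx_out)))  # tiny if the conversion is faithful
```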
On the Jetson, I've made a Python script to convert the .h5 model file into .onnx and then into .engine format:
```
import tf2onnx
import tensorflow as tf
import argparse
import subprocess
def convert_h5_to_onnx(h5_model_path, onnx_model_path):
print("Converting .h5 model to ONNX...")
model = tf.keras.models.load_model(h5_model_path)
model_proto, _ = tf2onnx.convert.from_keras(model, opset=13)
with open(onnx_model_path, "wb") as f:
f.write(model_proto.SerializeToString())
print(f"ONNX model saved at {onnx_model_path}")
def convert_onnx_to_trt(onnx_model_path, engine_model_path, trt_precision_mode):
print("Converting ONNX model to TensorRT Engine...")
fp_precision_flag = '--fp16' if trt_precision_mode.upper() == 'FP16' else ''
trtexec_path = "/usr/src/tensorrt/bin/trtexec"
command = f"{trtexec_path} --onnx={onnx_model_path} --saveEngine={engine_model_path} {fp_precision_flag}"
process = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if process.returncode != 0:
print(f"Error in converting to TensorRT engine:\n{process.stderr.decode('utf-8')}")
else:
print(f"TensorRT engine saved at {engine_model_path}")
# Main
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Convert a .h5 model to ONNX and TensorRT engine format")
parser.add_argument("--h5_model_path", type=str, required=True, help="Path to the .h5 model file")
parser.add_argument("--onnx_model_path", type=str, required=True, help="Path to save the converted ONNX model")
parser.add_argument("--engine_model_path", type=str, required=True, help="Path to save the converted TensorRT engine")
parser.add_argument("--trt_precision_mode", type=str, choices=['FP32', 'FP16'], default="FP16", help="Precision mode for TensorRT engine (FP32 or FP16)")
args = parser.parse_args()
convert_h5_to_onnx(args.h5_model_path, args.onnx_model_path)
convert_onnx_to_trt(args.onnx_model_path, args.engine_model_path, args.trt_precision_mode)
```
"RunInference" is my C/C++ inference function using TensorRT ( as input data , I used FFT s of the raw values )
```
void RunInference(ICudaEngine* engine, IExecutionContext* context, int input_index, int output_index, kiss_fft_cpx* x_fft, kiss_fft_cpx* y_fft, kiss_fft_cpx* z_fft, float* predicted_output, int g_code, const char* clientName) {
int batchSize = 1;
int input_size = batchSize * 2000 * 3 * sizeof(float); // [1, 2000, 3]
int output_size = batchSize * 3 * sizeof(float); // [1, 3]
// Prepare normalized input data and set DC component to zero
float input_data[2000 * 3];
const int MN = 4000;
for (int i = 0; i < 2000; i++) {
input_data[i * 3 + 0] = sqrt(x_fft[i].r * x_fft[i].r + x_fft[i].i * x_fft[i].i) / MN;
input_data[i * 3 + 1] = sqrt(y_fft[i].r * y_fft[i].r + y_fft[i].i * y_fft[i].i) / MN;
input_data[i * 3 + 2] = sqrt(z_fft[i].r * z_fft[i].r + z_fft[i].i * z_fft[i].i) / MN;
}
// Set DC component to zero
input_data[0] = 0; // X-axis
input_data[1] = 0; // Y-axis
input_data[2] = 0; // Z-axis
////Allocate GPU buffers for input and output
void* buffers[2];
write_log(LOG_DEBUG, "RunInference for '%s' - input_index = %d, output_index = %d", clientName, input_index, output_index);
if (cudaMalloc(&buffers[input_index], input_size) != cudaSuccess) {
write_log(LOG_ERROR, "RunInference for '%s' - Failed to allocate memory for input buffer", clientName);
return;
}
if (cudaMalloc(&buffers[output_index], output_size) != cudaSuccess) {
write_log(LOG_ERROR, "RunInference for '%s' - Failed to allocate memory for output buffer", clientName);
cudaFree(buffers[input_index]);
return;
}
if (cudaMemset(buffers[input_index], 0, input_size) != cudaSuccess) {
write_log(LOG_ERROR, "RunInference for '%s' - Failed to memset input buffer to zero", clientName);
return;
}
if (cudaMemset(buffers[output_index], 0, output_size) != cudaSuccess) {
write_log(LOG_ERROR, "RunInference for '%s' - Failed to memset output buffer to zero", clientName);
return;
}
///////////////////
// Copy the input data to the GPU
cudaMemcpy(buffers[input_index], input_data, input_size, cudaMemcpyHostToDevice);
// Launch inference
cudaStream_t stream;
cudaStreamCreate(&stream);
context->enqueueV2(buffers, stream, nullptr);
cudaStreamSynchronize(stream);
// Copy the output data from GPU to CPU
cudaMemcpy(predicted_output, buffers[output_index], output_size, cudaMemcpyDeviceToHost);
// Free GPU memory
cudaFree(buffers[input_index]);
cudaFree(buffers[output_index]);
cudaStreamDestroy(stream);
}
```
This is how I load a model in the app and how I call the inference function:
```
IRuntime* runtime = createInferRuntime(gLogger);
if (!runtime) {
write_log(LOG_ERROR, "client_handler: Failed to create runtime for client %s", client.ClientName);
return (void*)-1;
}
std::vector<char> engine_data = loadEngine(client.ModelPath, client.ClientName);
ICudaEngine* engine = runtime->deserializeCudaEngine(engine_data.data(), engine_data.size(), nullptr);
if (!engine) {
write_log(LOG_ERROR, "client_handler: Failed to create engine for thread %s", client.ClientName);
return (void*)-1;
}
IExecutionContext* context = engine->createExecutionContext();
if (!context) {
write_log(LOG_ERROR, "client_handler: Failed to create execution context for thread %s", client.ClientName);
engine->destroy();
return (void*)-1;
}
int input_index = engine->getBindingIndex(client.ModelInputBindingName) ;//get from config file
int output_index = engine->getBindingIndex(client.ModelOutputBindingName); //get from config file
RunInference(engine, context, input_index, output_index, x_fft, y_fft, z_fft, predicted_output, client.G_code, client.ClientName);
// Synchronize the GPU to ensure all operations are completed
cudaDeviceSynchronize();
// Check for CUDA errors after synchronization
cudaError_t err = cudaGetLastError();
if (err != cudaSuccess) {
write_log(LOG_ERROR, "CUDA error after synchronization in thread '%s': %s", client.ClientName, cudaGetErrorString(err));
} else {
write_log(LOG_INFO, "GPU synchronized successfully for thread '%s'", client.ClientName);
}
context->destroy();
engine->destroy();
runtime->destroy();
```
I want to point out that the vibrations are detected by the application, but I don't understand why the range of values doesn't change depending on which of the two trained models is used. I suspect the problem might be with the model conversion or with the inference process/function in TensorRT using C/C++.
Do you have any suggestions?
| stat:awaiting response,TF 2.11 | medium | Critical |
2,790,039,794 | PowerToys | (Mouse) back/forward button support | ### Description of the new feature / enhancement
Ability to use the back/forward buttons (e.g. the mouse side buttons) in the PowerToys app
### Scenario when this would be used?
Switching back to the dashboard from a module's settings page when the back button is clicked
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,790,055,265 | rust | Tracking issue for release notes of #135536: Add more impls of PartialEq and PartialOrd for strings |
This issue tracks the release notes text for #135536.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [x] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Libraries
- [Add more impls of `PartialEq` and `PartialOrd` for strings and different levels of references](https://github.com/rust-lang/rust/pull/135536). These impls make more cases of comparisons work, and provide more robustness against future inference issues when adding more impls in the future.
````
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @joshtriplett -- origin issue/PR authors and assignees for starting to draft text | T-libs-api,relnotes,needs-triage,I-release-nominated,relnotes-tracking-issue | low | Minor |
2,790,057,913 | neovim | reframe :help "Run with g==" hints as codelenses | # Problem
Currently the "Run with ..." pseudo-codelenses in `:help` buffers don't have an interface for enabling/disabling them.
# Proposal
_Originally posted by @mfussenegger in https://github.com/neovim/neovim/issues/31947#issuecomment-2592420466_
> Conceptually the "Run with ..." virtual-text is similar to either LSP inlay hints or codelens, both can have a executable command. (we currently don't support that for inlay-hints, but for codelens we have `vim.lsp.codelens.run`. Some servers provide exactly this type of code lens for test cases (Run Test | Debug Test)
>
> This has me wonder if we either:
>
> - Could have a vim.lsp.server for vim help files, and provide the info as codelens and make it executable via `codelens.run()`
> - Have a codelens abstraction sitting between virtual-text/lines and LSP
>
>
> This would give users a more consistent interface and user experience in that if it looks like a code-lens it acts like a code lens. Same keymaps, same options to enable/disable.
| enhancement,plugin,runtime,lsp | low | Critical |
2,790,066,491 | flutter | can't move/reuse widget in different place of widget tree. | ### Steps to reproduce
I saved a widget tree for reuse, like the following:
```dart
class StageState extends State<Stage> implements Show<Stage> {
int idx = 0;
Key key = unique_key();
Widget win = primary_ui();
@override
void show(int n) {
idx = n;
setState(() {});
}
Widget build(BuildContext context) {
if (idx == 0) {
var stack = Stack(key: key, children: <Widget>[
ExcludeFocus(child: IgnorePointer(ignoring: true, child: win)),
Align(child: login_view())
]);
return Expanded(
key: key,
child: stack,
);
} else {
return Expanded(key: key, child: win);
}
}
}
```
But Flutter doesn't allow me to do so.
### Expected results
I expect to be able to reuse the widget, because the whole widget is saved and the widget is normal from the user's point of view; the internal state conflict should be resolved by the framework.
### Actual results
```console
The following assertion was thrown building Expanded(flex: 1):
'package:flutter/src/widgets/framework.dart': Failed assertion: line 5730 pos 12: 'state._element ==
null': is not true.
Either the assertion indicates an error in the framework itself, or we should provide substantially
more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new?template=2_bug.yml
The relevant error-causing widget was:
Expanded Expanded:file:///home/zylthinking/code/flt/lib/stage.dart:80:7
When the exception was thrown, this was the stack:
#2 new StatefulElement (package:flutter/src/widgets/framework.dart:5730:12)
#3 StatefulWidget.createElement (package:flutter/src/widgets/framework.dart:777:38)
... Normal element mounting (4 frames)
#7 Element.inflateWidget (package:flutter/src/widgets/framework.dart:4480:16)
#8 MultiChildRenderObjectElement.inflateWidget (package:flutter/src/widgets/framework.dart:7049:36)
#9 MultiChildRenderObjectElement.mount (package:flutter/src/widgets/framework.dart:7061:32)
... Normal element mounting (7 frames)
#16 Element.inflateWidget (package:flutter/src/widgets/framework.dart:4480:16)
#17 MultiChildRenderObjectElement.inflateWidget (package:flutter/src/widgets/framework.dart:7049:36)
#18 MultiChildRenderObjectElement.mount (package:flutter/src/widgets/framework.dart:7061:32)
... Normal element mounting (13 frames)
#31 Element.inflateWidget (package:flutter/src/widgets/framework.dart:4480:16)
#32 MultiChildRenderObjectElement.inflateWidget (package:flutter/src/widgets/framework.dart:7049:36)
#33 MultiChildRenderObjectElement.mount (package:flutter/src/widgets/framework.dart:7061:32)
#34 Element.inflateWidget (package:flutter/src/widgets/framework.dart:4480:16)
#35 Element.updateChild (package:flutter/src/widgets/framework.dart:3957:20)
#36 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5656:16)
#37 Element.rebuild (package:flutter/src/widgets/framework.dart:5347:7)
#38 ProxyElement.update (package:flutter/src/widgets/framework.dart:5960:5)
#39 Element.updateChild (package:flutter/src/widgets/framework.dart:3941:15)
#40 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5656:16)
#41 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5794:11)
#42 Element.rebuild (package:flutter/src/widgets/framework.dart:5347:7)
#43 BuildScope._tryRebuild (package:flutter/src/widgets/framework.dart:2694:15)
#44 BuildScope._flushDirtyElements (package:flutter/src/widgets/framework.dart:2753:11)
#45 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:3048:18)
#46 WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:1176:21)
#47 RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:475:5)
#48 SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:1397:15)
#49 SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:1318:9)
#50 SchedulerBinding._handleDrawFrame (package:flutter/src/scheduler/binding.dart:1176:5)
#51 _invoke (dart:ui/hooks.dart:312:13)
#52 PlatformDispatcher._drawFrame (dart:ui/platform_dispatcher.dart:427:5)
#53 _drawFrame (dart:ui/hooks.dart:283:31)
(elided 2 frames from class _AssertionError)
```
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| waiting for customer response,in triage | low | Critical |
2,790,067,302 | vscode | Vscode crash on arch linux after saving file | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.96.3-1
- OS Version: Arch linux 6.12.8-arch1-1
Steps to Reproduce:
1. Open vscode
2. Open 2 files
3. Change some line and save the file
4. Crash
Logs:
[log.txt](https://github.com/user-attachments/files/18426266/log.txt)
| info-needed | low | Critical |
2,790,074,914 | tauri | [bug] window.outer_position() always zero on Linux | ### Describe the bug
I'm trying to debug why the window state plugin is restoring size but not position, and have discovered that both `window.outer_position()` and `window.inner_position()` return `(0,0)` on Ubuntu 24.04.
### Reproduction
Here's the basic debug code I have in `run()`
```rust
// Save window state on exit
match event {
RunEvent::WindowEvent{
event: WindowEvent::CloseRequested{..} | WindowEvent::Moved(_),
label,
..
} => {
let windows = app_handle.webview_windows();
let window = windows.get(&label).unwrap();
debug!("INNER SIZE {:?}", window.inner_size());
debug!("OUTER POSITION {:?}", window.outer_position());
if let Err(e) = app_handle.save_window_state(StateFlags::all()) {
warn!("Failed to save window state {e:?}");
};
},
_ => {}
};
```
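For extra context while debugging, it may help to also log the scale factor and the detected monitor next to the calls above; a small sketch using the standard window APIs (assuming they behave the same on this setup):
```rust
// Sketch: log extra context alongside the position calls above.
// Whether these report sane values may hint at a Wayland limitation.
debug!("SCALE FACTOR {:?}", window.scale_factor());
debug!("CURRENT MONITOR {:?}", window.current_monitor());
```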
### Expected behavior
Linux should be able to get window positions
### Full `tauri info` output
```text
[✔] Environment
- OS: Ubuntu 24.4.0 x86_64 (X64) (ubuntu on wayland)
✔ webkit2gtk-4.1: 2.46.5
✔ rsvg2: 2.58.0
✔ rustc: 1.84.0 (9fc6b4312 2025-01-07)
✔ cargo: 1.84.0 (66221abde 2024-11-19)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 18.19.1
- npm: 9.2.0
[-] Packages
- tauri 🦀: 2.2.0
- tauri-build 🦀: 2.0.4
- wry 🦀: 0.48.0
- tao 🦀: 0.31.1
- @tauri-apps/api: 2.0.2 (outdated, latest: 2.2.0)
- @tauri-apps/cli: 2.2.2 (outdated, latest: 2.2.4)
[-] Plugins
- tauri-plugin-log 🦀: 2.2.0
- @tauri-apps/plugin-log: 2.0.0 (outdated, latest: 2.2.0)
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs: 2.0.0 (outdated, latest: 2.2.0)
- tauri-plugin-window-state 🦀: 2.2.0
- @tauri-apps/plugin-window-state: not installed!
- tauri-plugin-single-instance 🦀: 2.2.0
- @tauri-apps/plugin-single-instance: not installed!
- tauri-plugin-os 🦀: 2.2.0
- @tauri-apps/plugin-os: 2.0.0 (outdated, latest: 2.2.0)
- tauri-plugin-shell 🦀: 2.2.0
- @tauri-apps/plugin-shell: 2.0.0 (outdated, latest: 2.2.0)
- tauri-plugin-updater 🦀: 2.3.0
- @tauri-apps/plugin-updater: not installed!
- tauri-plugin-dialog 🦀: 2.2.0
- @tauri-apps/plugin-dialog: 2.0.0 (outdated, latest: 2.2.0)
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
```
### Stack trace
_No response_
### Additional context
ChatGPT suggests this could have something to do with Wayland's coordinate system 🤷🏼‍♂️ | type: documentation | low | Critical |
2,790,109,834 | tensorflow | How can I compile TensorFlowLite for Swift without Bitcode? | Hello!
I would like to use TensorFlowLite Swift without Bitcode, as Apple has discontinued the use of Bitcode. I am using version 2.17.0 available on CocoaPods, but the binaries already come with Bitcode. How can I resolve this?
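A commonly suggested workaround (a sketch; the framework path below is hypothetical and depends on where CocoaPods places the binary) is to strip the bitcode from the prebuilt framework with Xcode's `bitcode_strip` tool after `pod install`:
```sh
# Hypothetical path: adjust to the actual framework location under Pods/.
xcrun bitcode_strip -r Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC \
  -o Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC
```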
And does TensorFlowLite Swift have a version available that does not require Rosetta to run on ARM architectures?
Thank you | stat:awaiting response,type:support,comp:lite,iOS | medium | Minor |
2,790,113,787 | rust | Compiler error while compiling the k256 library | The compiler error is caused by the lines `self.0.sign_digest(Digest::new_with_prefix(digest_bytes))` and `self.0.sign_digest(Digest::new())` (the latter commented out in the provided code for testing).
### Code
```Rust
use k256::ecdsa::{Signature, SigningKey, signature::{DigestSigner, DigestVerifier}};
use sha3::{Digest, Keccak256};
struct SignatureECDSA(SigningKey);
impl SignatureECDSA {
pub fn generate() -> Self {
Self(SigningKey::random(&mut rand::thread_rng()))
}
pub fn new(signing_key: SigningKey) -> Self {
Self(signing_key)
}
pub fn sign(&self, digest_bytes: &[u8; 32]) -> Signature {
// self.0.sign_digest(Digest::new())
self.0.sign_digest(Digest::new_with_prefix(digest_bytes))
}
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: aarch64-apple-darwin
release: 1.84.0
LLVM version: 19.1.5
```
### Error output
```
thread 'rustc' panicked at compiler/rustc_trait_selection/src/error_reporting/traits/fulfillment_errors.rs:1824:44:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: 0x10e8b50c8 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hadba1856081fe8dc
1: 0x10bae7760 - core::fmt::write::h5358bd20891469bc
2: 0x10e8a9370 - std::io::Write::write_fmt::hbf0611cc5d72cc91
3: 0x10e8b4f88 - std::sys::backtrace::BacktraceLock::print::he2302a8c253c9a13
4: 0x10e8b7464 - std::panicking::default_hook::{{closure}}::hec1f77a77d7e7ffc
5: 0x10e8b72ac - std::panicking::default_hook::hdd59ab537dd27efb
6: 0x10c6f99a4 - <alloc[2c4d29f23d41489e]::boxed::Box<rustc_driver_impl[fde3e58afcc15f53]::install_ice_hook::{closure#0}> as core[9d7f355b91206121]::ops::function::Fn<(&dyn for<'a, 'b> core[9d7f355b91206121]::ops::function::Fn<(&'a std[73d933f036ca7723]::panic::PanicHookInfo<'b>,), Output = ()> + core[9d7f355b91206121]::marker::Sync + core[9d7f355b91206121]::marker::Send, &std[73d933f036ca7723]::panic::PanicHookInfo)>>::call
7: 0x10e8b7d30 - std::panicking::rust_panic_with_hook::h533a16f5f89e4278
8: 0x10e8b7944 - std::panicking::begin_panic_handler::{{closure}}::h168c3a4362c8e4df
9: 0x10e8b5570 - std::sys::backtrace::__rust_end_short_backtrace::h601e3529ca2053df
10: 0x10e8b7630 - _rust_begin_unwind
11: 0x110f9a66c - core::panicking::panic_fmt::ha0f8363f677e0181
12: 0x110f9a6dc - core::panicking::panic::hdb1c1abf01ff1978
13: 0x110f9a604 - core::option::unwrap_failed::hb903c8fd63cd2e84
14: 0x10e41966c - <rustc_trait_selection[89a651589df3a14e]::error_reporting::TypeErrCtxt>::report_similar_impl_candidates
15: 0x10e43d8d8 - <rustc_trait_selection[89a651589df3a14e]::error_reporting::TypeErrCtxt>::report_fulfillment_errors
16: 0x10cc7150c - <rustc_hir_typeck[986b14b3a50cff68]::fn_ctxt::FnCtxt>::report_ambiguity_errors
17: 0x10ce2f990 - rustc_hir_typeck[986b14b3a50cff68]::typeck
18: 0x10ddf1910 - rustc_query_impl[145af9e7c4635083]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[145af9e7c4635083]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[e034b0937dcee594]::query::erase::Erased<[u8; 8usize]>>
19: 0x10de4ff1c - <rustc_query_impl[145af9e7c4635083]::query_impl::typeck::dynamic_query::{closure#2} as core[9d7f355b91206121]::ops::function::FnOnce<(rustc_middle[e034b0937dcee594]::ty::context::TyCtxt, rustc_span[23ddc3a9082bdf6f]::def_id::LocalDefId)>>::call_once
20: 0x10dd94afc - rustc_query_system[2ae06c999199ab2d]::query::plumbing::try_execute_query::<rustc_query_impl[145af9e7c4635083]::DynamicConfig<rustc_data_structures[8a142a31ce6323d3]::vec_cache::VecCache<rustc_span[23ddc3a9082bdf6f]::def_id::LocalDefId, rustc_middle[e034b0937dcee594]::query::erase::Erased<[u8; 8usize]>, rustc_query_system[2ae06c999199ab2d]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[145af9e7c4635083]::plumbing::QueryCtxt, true>
21: 0x10df25d84 - rustc_query_impl[145af9e7c4635083]::query_impl::typeck::get_query_incr::__rust_end_short_backtrace
22: 0x10ca14820 - <rustc_middle[e034b0937dcee594]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[c404e5c07f76cdb8]::check_crate::{closure#4}>::{closure#0}
23: 0x10c9fe2d0 - <rustc_data_structures[8a142a31ce6323d3]::sync::parallel::ParallelGuard>::run::<(), rustc_data_structures[8a142a31ce6323d3]::sync::parallel::par_for_each_in<&rustc_span[23ddc3a9082bdf6f]::def_id::LocalDefId, &[rustc_span[23ddc3a9082bdf6f]::def_id::LocalDefId], <rustc_middle[e034b0937dcee594]::hir::map::Map>::par_body_owners<rustc_hir_analysis[c404e5c07f76cdb8]::check_crate::{closure#4}>::{closure#0}>::{closure#0}::{closure#1}::{closure#0}>
24: 0x10ca65d34 - rustc_hir_analysis[c404e5c07f76cdb8]::check_crate
25: 0x10d090718 - rustc_interface[11fadb382dc0a35f]::passes::analysis
26: 0x10ddf19b0 - rustc_query_impl[145af9e7c4635083]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[145af9e7c4635083]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[e034b0937dcee594]::query::erase::Erased<[u8; 1usize]>>
27: 0x10de503c4 - <rustc_query_impl[145af9e7c4635083]::query_impl::analysis::dynamic_query::{closure#2} as core[9d7f355b91206121]::ops::function::FnOnce<(rustc_middle[e034b0937dcee594]::ty::context::TyCtxt, ())>>::call_once
28: 0x10dd00068 - rustc_query_system[2ae06c999199ab2d]::query::plumbing::try_execute_query::<rustc_query_impl[145af9e7c4635083]::DynamicConfig<rustc_query_system[2ae06c999199ab2d]::query::caches::SingleCache<rustc_middle[e034b0937dcee594]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[145af9e7c4635083]::plumbing::QueryCtxt, true>
29: 0x10df13d40 - rustc_query_impl[145af9e7c4635083]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
30: 0x10c71bb08 - <rustc_middle[e034b0937dcee594]::ty::context::GlobalCtxt>::enter::<rustc_driver_impl[fde3e58afcc15f53]::run_compiler::{closure#0}::{closure#1}::{closure#6}, core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>
31: 0x10c6b77f4 - <rustc_interface[11fadb382dc0a35f]::interface::Compiler>::enter::<rustc_driver_impl[fde3e58afcc15f53]::run_compiler::{closure#0}::{closure#1}, core[9d7f355b91206121]::result::Result<core[9d7f355b91206121]::option::Option<rustc_interface[11fadb382dc0a35f]::queries::Linker>, rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>
32: 0x10c6eca24 - rustc_span[23ddc3a9082bdf6f]::create_session_globals_then::<core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>, rustc_interface[11fadb382dc0a35f]::util::run_in_thread_with_globals<rustc_interface[11fadb382dc0a35f]::util::run_in_thread_pool_with_globals<rustc_interface[11fadb382dc0a35f]::interface::run_compiler<core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>, rustc_driver_impl[fde3e58afcc15f53]::run_compiler::{closure#0}>::{closure#1}, core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>::{closure#0}, core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}>
33: 0x10c6e1c50 - std[73d933f036ca7723]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[11fadb382dc0a35f]::util::run_in_thread_with_globals<rustc_interface[11fadb382dc0a35f]::util::run_in_thread_pool_with_globals<rustc_interface[11fadb382dc0a35f]::interface::run_compiler<core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>, rustc_driver_impl[fde3e58afcc15f53]::run_compiler::{closure#0}>::{closure#1}, core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>::{closure#0}, core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>
34: 0x10c6e5050 - <<std[73d933f036ca7723]::thread::Builder>::spawn_unchecked_<rustc_interface[11fadb382dc0a35f]::util::run_in_thread_with_globals<rustc_interface[11fadb382dc0a35f]::util::run_in_thread_pool_with_globals<rustc_interface[11fadb382dc0a35f]::interface::run_compiler<core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>, rustc_driver_impl[fde3e58afcc15f53]::run_compiler::{closure#0}>::{closure#1}, core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>::{closure#0}, core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[9d7f355b91206121]::result::Result<(), rustc_span[23ddc3a9082bdf6f]::ErrorGuaranteed>>::{closure#1} as core[9d7f355b91206121]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
35: 0x10e8c1df0 - std::sys::pal::unix::thread::Thread::new::thread_start::ha1530855ce6ee203
36: 0x1970ac2e4 - __pthread_deallocate
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.84.0 (9fc6b4312 2025-01-07) running on aarch64-apple-darwin
note: compiler flags: --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [typeck] type-checking `cryptography::ecdsa::<impl at core/src/cryptography/ecdsa.rs:6:1: 6:20>::sign`
#1 [analysis] running analysis passes on this crate
end of query stack
warning: `pod-core` (lib) generated 2 warnings (run `cargo fix --lib -p pod-core` to apply 2 suggestions)
error: could not compile `pod-core` (lib); 2 warnings emitted
Caused by:
process didn't exit successfully: `/Users/justinaszaliaduonis/.rustup/toolchains/stable-aarch64-apple-darwin/bin/rustc --crate-name pod_core --edition=2021 core/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=176 --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values())' -C metadata=5dfa7fc097c045d7 -C extra-filename=-5dfa7fc097c045d7 --out-dir /Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps -C incremental=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/incremental -L dependency=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps --extern alloy_consensus=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_consensus-575f2a9d311403a6.rmeta --extern alloy_network=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_network-aed7f31b4108e763.rmeta --extern alloy_primitives=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_primitives-da31089a68d4daa1.rmeta --extern alloy_rpc_types=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_rpc_types-4bab661dd9076f22.rmeta --extern alloy_signer=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_signer-ff3e5ef0d362441c.rmeta --extern alloy_signer_local=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_signer_local-fd08fbd5b331cc96.rmeta --extern alloy_sol_types=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_sol_types-676c006ac9658dd0.rmeta --extern anyhow=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libanyhow-d3b278b560b571e1.rmeta --extern async_trait=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libasync_trait-66317e4e1eeea339.dylib --extern backoff=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbackoff-6ddb2a2229dc28fd.rmeta --extern bincode=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbincode-4440fd910e2e7173.rmeta --extern bitvec=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbitvec-be496b0820bccfde.rmeta --extern bls_signatures=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbls_signatures-37a0cf3cfafa8253.rmeta --extern bytes=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbytes-1c7495fed026ffe0.rmeta --extern colored=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libcolored-59cd1b7b79d6571a.rmeta --extern const_format=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libconst_format-1cd57abea4a99a5b.rmeta --extern ecdsa=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libecdsa-c69e5b95dc948354.rmeta --extern ssz=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libssz-422185e560af5475.rmeta --extern fern=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libfern-cf5b1308e4b3af0c.rmeta --extern futures=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libfutures-b3d9a5aeb30388c1.rmeta --extern hex=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libhex-fbd2faa49cb32b1e.rmeta --extern humantime=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libhumantime-509cc6eee85d09b1.rmeta --extern 
itertools=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libitertools-f870de18c4a1367b.rmeta --extern jsonrpc_core=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libjsonrpc_core-803238207c194eec.rmeta --extern jsonrpc_http_server=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libjsonrpc_http_server-543e3a3d1dfd60ed.rmeta --extern k256=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libk256-d361e2837e756c80.rmeta --extern log=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liblog-772705a0b3e0ca9f.rmeta --extern rand=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/librand-ba188ab9ae3b1888.rmeta --extern rand_core=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/librand_core-d435e3cd28922f58.rmeta --extern rocksdb=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/librocksdb-4fcf09b511b9fad9.rmeta --extern serde=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libserde-1a42ab868aa043c4.rmeta --extern serde_hex=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libserde_hex-fba37cffb63ebba3.rmeta --extern serde_json=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libserde_json-ffc6474a53afc502.rmeta --extern sha3=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libsha3-052f69e42719b731.rmeta --extern subtle=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libsubtle-aab122e52ebab310.rmeta --extern sylow=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libsylow-5c9faa6001270801.rmeta --extern tokio=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libtokio-7121a589b609e181.rmeta --extern tokio_stream=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libtokio_stream-7ce4f25a2d11acc6.rmeta --extern tokio_util=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libtokio_util-7d4599cbd6e3d102.rmeta --extern toml=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libtoml-c9134db262540f71.rmeta -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/librocksdb-sys-35b1810c411157e3/out -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/librocksdb-sys-35b1810c411157e3/out -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/bzip2-sys-bdc90ecee50f6211/out/lib -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/libz-sys-cbe9e5a544ab1b8e/out/lib -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/libz-sys-cbe9e5a544ab1b8e/out/lib -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/lz4-sys-1fb1cd34b7227341/out -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/zstd-sys-474b81cc591b6cf4/out` (exit status: 101)
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at compiler/rustc_trait_selection/src/error_reporting/traits/fulfillment_errors.rs:1824:44:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: _rust_begin_unwind
1: core::panicking::panic_fmt
2: core::panicking::panic
3: core::option::unwrap_failed
4: <rustc_trait_selection::error_reporting::TypeErrCtxt>::report_similar_impl_candidates
5: <rustc_trait_selection::error_reporting::TypeErrCtxt>::report_fulfillment_errors
6: <rustc_hir_typeck::fn_ctxt::FnCtxt>::report_ambiguity_errors
7: rustc_hir_typeck::typeck
[... omitted 2 frames ...]
8: <rustc_middle::hir::map::Map>::par_body_owners::<rustc_hir_analysis::check_crate::{closure#4}>::{closure#0}
9: <rustc_data_structures::sync::parallel::ParallelGuard>::run::<(), rustc_data_structures::sync::parallel::par_for_each_in<&rustc_span::def_id::LocalDefId, &[rustc_span::def_id::LocalDefId], <rustc_middle::hir::map::Map>::par_body_owners<rustc_hir_analysis::check_crate::{closure#4}>::{closure#0}>::{closure#0}::{closure#1}::{closure#0}>
10: rustc_hir_analysis::check_crate
11: rustc_interface::passes::analysis
[... omitted 2 frames ...]
12: <rustc_middle::ty::context::GlobalCtxt>::enter::<rustc_driver_impl::run_compiler::{closure#0}::{closure#1}::{closure#6}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
13: <rustc_interface::interface::Compiler>::enter::<rustc_driver_impl::run_compiler::{closure#0}::{closure#1}, core::result::Result<core::option::Option<rustc_interface::queries::Linker>, rustc_span::ErrorGuaranteed>>
14: rustc_span::create_session_globals_then::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}>
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.84.0 (9fc6b4312 2025-01-07) running on aarch64-apple-darwin
note: compiler flags: --crate-type lib -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [typeck] type-checking `cryptography::ecdsa::<impl at core/src/cryptography/ecdsa.rs:6:1: 6:20>::sign`
#1 [analysis] running analysis passes on this crate
end of query stack
warning: `pod-core` (lib) generated 2 warnings (run `cargo fix --lib -p pod-core` to apply 2 suggestions)
error: could not compile `pod-core` (lib); 2 warnings emitted
Caused by:
process didn't exit successfully: `/Users/justinaszaliaduonis/.rustup/toolchains/stable-aarch64-apple-darwin/bin/rustc --crate-name pod_core --edition=2021 core/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=176 --crate-type lib --emit=dep-info,metadata,link -C embed-bitcode=no -C debuginfo=2 -C split-debuginfo=unpacked --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values())' -C metadata=5dfa7fc097c045d7 -C extra-filename=-5dfa7fc097c045d7 --out-dir /Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps -C incremental=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/incremental -L dependency=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps --extern alloy_consensus=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_consensus-575f2a9d311403a6.rmeta --extern alloy_network=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_network-aed7f31b4108e763.rmeta --extern alloy_primitives=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_primitives-da31089a68d4daa1.rmeta --extern alloy_rpc_types=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_rpc_types-4bab661dd9076f22.rmeta --extern alloy_signer=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_signer-ff3e5ef0d362441c.rmeta --extern alloy_signer_local=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_signer_local-fd08fbd5b331cc96.rmeta --extern alloy_sol_types=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liballoy_sol_types-676c006ac9658dd0.rmeta --extern anyhow=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libanyhow-d3b278b560b571e1.rmeta --extern async_trait=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libasync_trait-66317e4e1eeea339.dylib --extern backoff=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbackoff-6ddb2a2229dc28fd.rmeta --extern bincode=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbincode-4440fd910e2e7173.rmeta --extern bitvec=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbitvec-be496b0820bccfde.rmeta --extern bls_signatures=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbls_signatures-37a0cf3cfafa8253.rmeta --extern bytes=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libbytes-1c7495fed026ffe0.rmeta --extern colored=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libcolored-59cd1b7b79d6571a.rmeta --extern const_format=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libconst_format-1cd57abea4a99a5b.rmeta --extern ecdsa=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libecdsa-c69e5b95dc948354.rmeta --extern ssz=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libssz-422185e560af5475.rmeta --extern fern=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libfern-cf5b1308e4b3af0c.rmeta --extern futures=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libfutures-b3d9a5aeb30388c1.rmeta --extern hex=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libhex-fbd2faa49cb32b1e.rmeta --extern humantime=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libhumantime-509cc6eee85d09b1.rmeta --extern 
itertools=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libitertools-f870de18c4a1367b.rmeta --extern jsonrpc_core=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libjsonrpc_core-803238207c194eec.rmeta --extern jsonrpc_http_server=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libjsonrpc_http_server-543e3a3d1dfd60ed.rmeta --extern k256=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libk256-d361e2837e756c80.rmeta --extern log=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/liblog-772705a0b3e0ca9f.rmeta --extern rand=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/librand-ba188ab9ae3b1888.rmeta --extern rand_core=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/librand_core-d435e3cd28922f58.rmeta --extern rocksdb=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/librocksdb-4fcf09b511b9fad9.rmeta --extern serde=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libserde-1a42ab868aa043c4.rmeta --extern serde_hex=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libserde_hex-fba37cffb63ebba3.rmeta --extern serde_json=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libserde_json-ffc6474a53afc502.rmeta --extern sha3=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libsha3-052f69e42719b731.rmeta --extern subtle=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libsubtle-aab122e52ebab310.rmeta --extern sylow=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libsylow-5c9faa6001270801.rmeta --extern tokio=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libtokio-7121a589b609e181.rmeta --extern tokio_stream=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libtokio_stream-7ce4f25a2d11acc6.rmeta --extern tokio_util=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libtokio_util-7d4599cbd6e3d102.rmeta --extern toml=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/deps/libtoml-c9134db262540f71.rmeta -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/librocksdb-sys-35b1810c411157e3/out -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/librocksdb-sys-35b1810c411157e3/out -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/bzip2-sys-bdc90ecee50f6211/out/lib -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/libz-sys-cbe9e5a544ab1b8e/out/lib -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/libz-sys-cbe9e5a544ab1b8e/out/lib -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/lz4-sys-1fb1cd34b7227341/out -L native=/Users/justinaszaliaduonis/Desktop/POD/pod-core/target/debug/build/zstd-sys-474b81cc591b6cf4/out` (exit status: 101)
```
</p>
</details>
| I-ICE,T-compiler,C-bug,needs-triage | low | Critical |
2,790,126,754 | godot | Dragging Editor to another screen breaks editor | ### Tested versions
Godot v4.3.stable.official [77dcf97d8]
Related issue:
#17699
### System information
MacOS 14.6.1 - Processor: 2.3 GHz 8-Core Intel Core i9 - Graphics: AMD Radeon Pro 5500M 8GB
### Issue description
When the Godot editor is open on macOS and dragged to another screen, it completely breaks. I can resize the window, but nothing inside it adjusts and I cannot click on anything. I like to keep my code in nvim on my main screen and the editor on the smaller screen. In this case I am using my iPad Pro as a second screen, so both are high-resolution displays, but with different resolutions.
Mac Display: 16-inch (3072 x 1920)
iPad Pro Display: 12.9-inch (2732 x 2048)
<img width="1290" alt="Image" src="https://github.com/user-attachments/assets/d4308c63-bc25-4ed5-bf36-85a96b491c23" />
### Steps to reproduce
Open a project and drag the editor to another screen. It will be broken: you cannot click on anything, although resizing and dragging the window still work. If you drag it back to the original screen and resize, it will work again.
### Minimal reproduction project (MRP)
No project needed | bug,topic:editor,crash | low | Critical |
2,790,169,407 | transformers | Mismatch between `_convert_token_to_id_with_added_voc` and `encode` for Llama-3.2 tokenizer | ### System Info
- `transformers` version: 4.47.1
- Platform: macOS-13.6.3-arm64-arm-64bit
- Python version: 3.12.0
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.5.1
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: N/A
### Who can help?
@ArthurZucker and @itazap - thanks!
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Simple reproducible example:
```{python}
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
print(tokenizer._convert_token_to_id_with_added_voc("\n") is None)
print(tokenizer.encode("\n"))
print(repr(tokenizer.decode(198)))
```
should give
```{python}
True
[128000, 198] # 128000 is just the added beginning-of-text token
'\n'
```
With `encode` and `decode` we see that `\n` has id 198 and both directions work, but `tokenizer._convert_token_to_id_with_added_voc("\n")` returns `None`.
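For what it's worth, this looks related to the byte-level vocabulary: the raw vocab stores `\n` under a byte-level alias rather than the literal character, so the direct lookup misses it. A sketch (I have not verified the exact alias for this checkpoint, but `Ċ` is the usual byte-level form of `\n`):
```python
# Sketch: byte-level BPE stores "\n" under a byte-level alias.
print(tokenizer.encode("\n", add_special_tokens=False))  # [198]
print(tokenizer.convert_tokens_to_ids("Ċ"))  # 198 if "Ċ" is the alias of "\n"
```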
Thanks in advance!
### Expected behavior
`_convert_token_to_id_with_added_voc` should ideally process tokens the same way as `encode` and `decode` | Usage,bug | low | Minor |
2,790,203,893 | transformers | AttributeError in automatic_speech_recognition.py when return_segments and return_timestamps are both True | ### System Info
- `transformers` version: 4.48.0
- Platform: macOS-15.2-arm64-arm-64bit
- Python version: 3.12.3
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. With Whisper long-form automatic speech recognition, set return_segments=True and return_timestamps=True.
2. AttributeError: 'dict' object has no attribute 'dtype' in `automatic_speech_recognition.py:postprocess`
Line 579 in postprocess, `if self.framework == "pt" and outputs[key].dtype in (torch.bfloat16, torch.float16):`, expects outputs['tokens'] to be a tensor. When return_segments is True, outputs['tokens'] is not a tensor but a dict with keys ['sequences', 'segments']. The block of code at line 526 accounts for this case when return_timestamps = "word", but not when return_timestamps = True.
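As a temporary sketch of a workaround (mirroring what the return_timestamps = "word" branch already does), the dict can be unwrapped before the dtype check runs:
```python
# Sketch: normalize the generate() output before the dtype check sees it.
tokens = outputs["tokens"]
if isinstance(tokens, dict):  # return_segments=True case
    outputs["segments"] = tokens["segments"]
    outputs["tokens"] = tokens["sequences"]
```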
### Expected behavior
I would expect the code in around 526 to do some format changing similar to what it does when return_timestamps = "word". For what it's worth, setting return_timestamps to "word" is not a usable work-around because that causes output_attentions to get set to True, which is incompatible with SPDA | bug | low | Critical |
2,790,209,100 | pytorch | FUNC_INLINELIST doesn't exist | Probably just an obsolete comment: https://github.com/pytorch/pytorch/blob/7c52c97a65f58e1de2967509ab732e20f468dae8/torch/_dynamo/trace_rules.py#L3176
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | triaged,oncall: pt2,module: dynamo | low | Minor |
2,790,221,284 | langchain | Question: children's key is overridden by the parent's same key | Regarding the loop `for cls in [None, *self.__class__.mro()]:`,
the values of the same key in children are overridden by the parent's value for that key. Is this intentionally designed, or is it a bug? I don't think it's reasonable.
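A minimal standalone sketch of the pattern (no LangChain imports; it only illustrates the iteration order, and the names are hypothetical):
```python
class Base:
    config = {"key": "parent"}

class Child(Base):
    config = {"key": "child"}

merged = {}
# mro() walks child -> parent, so a plain dict.update() lets the
# parent's value overwrite the child's value collected earlier.
for cls in [None, *Child().__class__.mro()]:
    if cls is not None:
        merged.update(getattr(cls, "config", {}))
print(merged)  # {'key': 'parent'} -- the child's value is shadowed
```
So whichever class appears later in the walk wins, and the child's value is silently shadowed. | 🤖:bug | low | Critical |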
2,790,226,489 | vscode | Duplicate + icon for both "stage line(s)" and "add comment" is not ideal | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
As you can see, when in diff mode, both the "stage line(s)" and "add comment" buttons are represented by a plus icon. This can be confusing and leads to frustration when you click the wrong button a couple of times a day. Could you maybe change the comment button to a speech bubble icon or the like?
 | triage-needed,stale | low | Minor |
2,790,247,484 | deno | Warning Message for Lifecycle Scripts Only Appears When Running `deno task` | When I ran `deno install --allow-scripts`, I didn't see any warning messages. However, when I ran `deno task setup`, which runs the same command, I received a warning about npm lifecycle scripts not being executed.
### Steps to Reproduce
1. Create a `deno.json` file with the following content:
```json
{
"lock": false,
"nodeModulesDir": "auto",
"tasks": {
"setup": "deno install --allow-scripts",
},
"imports": {
"astro": "npm:[email protected]",
}
}
```
2. Run `deno install --allow-scripts` in terminal. (No warning appeared.)
3. Delete the `node_modules` directory.
4. Run `deno task setup`. (A warning appeared.)
```ansi
qz@localhost:/tmp/astro-deno> deno task setup
Warning The following packages contained npm lifecycle scripts (preinstall/install/postinstall) that were not executed:
┃ npm:[email protected]
┃
┠─ This may cause the packages to not work correctly.
┖─ To run lifecycle scripts, use the `--allow-scripts` flag with `deno install`:
deno install --allow-scripts=npm:[email protected]
Task setup deno install --allow-scripts
```
### Expected Behavior
I expected to see no warning for both `deno install --allow-scripts` and `deno task setup`, since the `--allow-scripts` flag was used.
### Actual Behavior
No warning appeared during `deno install`, but a warning appeared during `deno task setup`.
### Environment
Version: Deno 2.1.5 (stable, release, x86_64-unknown-linux-gnu)
v8 13.0.245.12-rusty
typescript 5.6.2
| needs investigation,task runner | low | Minor |
2,790,248,843 | rust | Integer `to_string` is slow | On my machine, the following code runs in 3 seconds on the stable compiler with a release build:
```rust
// (EDIT: made it compile)
fn main() {
for i in 0..100000000u32 {
let s = i.to_string();
assert!(s.len() > 0);
}
}
```
whereas the C++ counterpart runs in 1.2 seconds with `-O2`:
```cpp
#include <string>
#include <cassert>
int main() {
for(unsigned int i=0; i<100000000; i++){
std::string s = std::to_string(i);
assert(s.size() > 0);
}
}
```
I've found that most of the time loss comes from passing `&str` through a formatter instead of directly `memcpy`ing; replacing `to_string` with [_fmt](https://doc.rust-lang.org/src/core/fmt/num.rs.html#241) plus `.to_owned()` at the end speeds it up to around 1.6s.
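For illustration, here is a sketch of the direct-buffer approach (roughly what `_fmt` does, minus the formatter plumbing; decimal `u32` only):
```rust
// Sketch: format a u32 straight into a stack buffer, then copy once.
fn u32_to_string(mut n: u32) -> String {
    let mut buf = [0u8; 10]; // u32::MAX has at most 10 decimal digits
    let mut i = buf.len();
    loop {
        i -= 1;
        buf[i] = b'0' + (n % 10) as u8;
        n /= 10;
        if n == 0 {
            break;
        }
    }
    std::str::from_utf8(&buf[i..]).unwrap().to_owned()
}
```
I have not benchmarked this exact sketch, but it avoids the formatter round-trip that appears to dominate the profile. | A-fmt,C-optimization | low | Major |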
2,790,254,149 | go | cmd/go: go get fails against GerritHub server | ### Go version
go version devel go1.24-368a9ec998 Tue Jan 14 14:54:07 2025 -0800 linux/arm64
### Output of `go env` in your module/workspace:
```shell
AR='ar'
CC='gcc'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_ENABLED='1'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
CXX='g++'
GCCGO='gccgo'
GO111MODULE=''
GOARCH='arm64'
GOARM64='v8.0'
GOAUTH='netrc'
GOBIN=''
GOCACHE='/home/myitcv/.cache/go-build'
GOCACHEPROG=''
GODEBUG=''
GOENV='/no-home/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFIPS140='off'
GOFLAGS=''
GOGCCFLAGS='-fPIC -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=$WORK/.tmp/go-build1451863026=/tmp/go-build -gno-record-gcc-switches'
GOHOSTARCH='arm64'
GOHOSTOS='linux'
GOINSECURE=''
GOMOD='/dev/null'
GOMODCACHE='/home/myitcv/gostuff/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/myitcv/gostuff'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/myitcv/dev/go'
GOSUMDB='sum.golang.org'
GOTELEMETRY='local'
GOTELEMETRYDIR='/no-home/.config/go/telemetry'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/myitcv/dev/go/pkg/tool/linux_arm64'
GOVCS=''
GOVERSION='devel go1.24-368a9ec998 Tue Jan 14 14:54:07 2025 -0800'
GOWORK=''
PKG_CONFIG='pkg-config'
```
### What did you do?
Testscript repro:
```
env GOPRIVATE=cuelang.org/go
go mod init mod.example
go get cuelang.org/go@master
```
### What did you see happen?
```
> env GOPRIVATE=cuelang.org/go
> go mod init mod.example
[stderr]
go: creating new go.mod: module mod.example
> go get cuelang.org/go@master
[stderr]
go: cuelang.org/go@master: git fetch --unshallow -f origin in /home/myitcv/gostuff/pkg/mod/cache/vcs/d82383d43199d57840995f1c0a94e81eee5ed02e43dbba4468223292497673e2: exit status 1:
error: Could not read b5e1647ec470060133fd6f7f1913fd1c65f5f75c
fatal: Failed to traverse parents of commit 74a0c9d01e05b13cb15fa77371bbfb4461eccdff
error: remote did not send all necessary objects
[exit status 1]
FAIL: /tmp/testscript1710667514/repro.txtar/script.txtar:3: unexpected go command failure
```
### What did you expect to see?
Passing test.
Also relevant:
```
$ git --version
git version 2.48.1.2.g757161efcc
```
Raising this on the back of seeing #71261 and the fix from @rsc which appears to be somewhat related.
Note that the server here is GerritHub.io.
Per https://issues.gerritcodereview.com/issues/384756627 it might be a server issue. But per #71261 I guess it might be a client expectation issue. | NeedsInvestigation,GoCommand,BugReport | low | Critical |
2,790,257,436 | kubernetes | Updating a secret and then upgrading the pod sometimes yields the old secret value | ### What happened?
We mount a secret into a directory in the pod, and the pod's startup script reads the secret's value. Our program updates the secret and then upgrades the pod. Sometimes the new pod reads the old value of the secret; only after a container restart does it read the new value. We use WatchChangeDetectionStrategy, so this looks like a problem with the kubelet cache update.
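For reference, here is a minimal manifest sketch of the setup described above (all names are hypothetical):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical
spec:
  containers:
    - name: app
      image: busybox
      # The startup command reads the mounted secret value.
      command: ["sh", "-c", "cat /etc/config/secret-key && sleep 3600"]
      volumeMounts:
        - name: secret-vol
          mountPath: /etc/config
  volumes:
    - name: secret-vol
      secret:
        secretName: demo-secret   # hypothetical
```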
### What did you expect to happen?
The newly created pod should immediately pick up the secret cache update in the kubelet.
### How can we reproduce it (as minimally and precisely as possible)?
The probability of the problem is very low; we have only encountered it twice in total. I suspect that limiting the CPU resources of the apiserver process and triggering a large number of pods (on the same node, all using the same secret) to be rebuilt may increase the probability of this problem. I am trying to reproduce it this way.
### Anything else we need to know?
I suspect the problem comes from the _AddReference_ and _DeleteReference_ methods in pkg/kubelet/util/manager/watch_based_manager.go. I think the secrets used by new pods should be watched with a newly created list-watch listener instead of reusing the ones already created. Judging from the implementation, this situation can occur when multiple pods on the same node use the same secret.
### Kubernetes version
<details>
1.25.3
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,priority/backlog,area/kubelet,sig/node,triage/accepted | low | Major |
2,790,258,344 | go | proposal: cmd/go: allow setting runtime godebugs in go.mod / go:debug directives | ### Proposal Details
Now `asyncpreemptoff=1` cannot be specified by `//go:debug`:
```
examples\test\main.go:1:1: invalid //go:debug: unknown //go:debug setting "asyncpreemptoff"
```
However, there are many situations where disabling async preemption is required as a workaround to make applications work correctly:
* https://github.com/golang/go/issues/36981
* https://github.com/golang/go/issues/48059
* https://github.com/golang/go/issues/57442
* https://github.com/golang/go/issues/64781
* https://github.com/golang/go/issues/68485
* https://github.com/golang/go/issues/71242
* https://github.com/hashicorp/terraform/issues/27350
I propose allowing `asyncpreemptoff=1` to be specified via `//go:debug` directives and the `godebug` section in go.mod.
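Concretely, the proposed usage would look like this (a sketch mirroring the form of existing `//go:debug` settings):
```go
//go:debug asyncpreemptoff=1

package main

func main() {
	// Runs as if GODEBUG=asyncpreemptoff=1 had been set at startup.
}
```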
Note that it is still possible to specify it via `-ldflags="-X=runtime.godebugDefault=asyncpreemptoff=1"`, but this is pretty hacky. | Proposal,ToolProposal | low | Critical |
2,790,260,166 | transformers | Regression - Phi3 has graph breaks in 4.48 but not in 4.47.1 | ### System Info
- `transformers` version: 4.48.0
- Platform: Linux-6.8.0-48
- Python version: 3.12.3
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.2
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA RTX 6000 Ada Generation
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM
cfg = AutoConfig.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
cfg.num_hidden_layers = 2
with torch.device("cuda"):
m = AutoModelForCausalLM.from_config(cfg)
def backend(gm, sample_args):
# gm.print_readable()
print("SUBGRAPH")
return gm
m.model = torch.compile(m.model, backend=backend)
input_ids = torch.randint(0, 100, (1, 4096), device="cuda")
m(input_ids)
```
With 4.48 we see 4 subgraphs, while with the previous 4.47.1 we see only 1 subgraph.
Running with `TORCH_LOGS="graph_breaks"` prints
```python
V0115 16:09:58.933000 510381 torch/_dynamo/symbolic_convert.py:444] [1/0] [__graph_breaks] Graph break (details suppressed) in user code at /usr/local/lib/python3.12/dist-packages/transformers/models/phi3/modeling_phi3.py:386
V0115 16:09:58.933000 510381 torch/_dynamo/symbolic_convert.py:444] [1/0] [__graph_breaks] Reason: Unsupported: Dynamic control flow is not supported at the moment. Please use functorch.experimental.control_flow.cond to explicitly capture the control flow. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#cond-operands
V0115 16:09:58.945000 510381 torch/_dynamo/symbolic_convert.py:444] [2/0] [__graph_breaks] Graph break (details suppressed) in user code at /usr/local/lib/python3.12/dist-packages/transformers/models/phi3/modeling_phi3.py:386
V0115 16:09:58.945000 510381 torch/_dynamo/symbolic_convert.py:444] [2/0] [__graph_breaks] Reason: Data-dependent jump
```
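For what it's worth, `torch._dynamo.explain` should report the same counts without a custom backend (a sketch; run it on a fresh, uncompiled copy of the model):
```python
import torch._dynamo as dynamo

# Sketch: count subgraphs and break reasons in one shot.
explanation = dynamo.explain(m.model)(input_ids)
print(explanation.graph_count)    # expected: 4 on 4.48.0, 1 on 4.47.1
print(explanation.break_reasons)
```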
### Expected behavior
Ideally there should be a single subgraph, like before. | bug | low | Critical |
2,790,291,794 | flutter | Misconfigured android engine test "Invalid shard: android_engine_tests" | I am seeing failures that look like misconfiguration.
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8725700029924606593/+/u/run_test.dart_for_android_engine_tests_shard_and_subshard_None/stdout
```
Invalid shard: android_engine_tests
The available shards are: add_to_app_life_cycle_tests, build_tests, framework_coverage, framework_tests, tool_tests, web_tool_tests, tool_integration_tests, android_preview_tool_integration_tests, android_java11_tool_integration_tests, tool_host_cross_arch_tests, web_tests, web_canvaskit_tests, web_skwasm_tests, web_long_running_tests, flutter_driver_android, flutter_plugins, skp_generator, customer_testing, analyze, fuchsia_precache, snippets, docs, verify_binaries_codesigned, test_harness_tests
```
https://chat.google.com/room/AAAAK4LG52w/aYB-Xd3HXrc/aYB-Xd3HXrc?cls=10
| platform-android,t: flutter driver,team-infra,P1,android-testing | medium | Critical |
2,790,292,902 | electron | Backdrop filters no longer function/apply between View's | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
34.0.0
### What operating system(s) are you using?
Other Linux
### Operating System Version
6.12.8
### What arch are you using?
x64
### Last Known Working Electron version
34.0.0-beta.3
### Expected Behavior
If 2 views or more are on top of each other, and the top one has a backdrop filter or similar, then the view below it should be taken into account, as shown in a screenshot from pre-v34.0.0:

### Actual Behavior
The backdrop filter does not take the view below it into account, as shown in a screenshot post-v34.0.0 (happens on latest nightly as well):

### Testcase Gist URL
https://gist.github.com/KiruPoruno/7ba1d4ba3fbf2f4f1878a335801f75f7
### Additional Information
From my testing, this started happening after v34.0.0-beta.3; I didn't test nightly versions for more specificity, but I hope that info helps. | platform/windows,platform/linux,bug :beetle:,has-repro-gist,34-x-y | low | Critical |
2,790,302,622 | deno | Support sending OTEL data over a unix socket or printing it to stdout | It would be nice to be able to send the collected OpenTelemetry data to either STDOUT or a unix socket. This might be helpful for local development, or in contexts where it is useful to skip the networking stack. | otel | low | Minor |
2,790,332,270 | pytorch | FlexAttention errors with certain functions and half precision in score_mod | ### ๐ Describe the bug
Using certain functions in `score_mod` as part of FlexAttention errors when using float16 or bfloat16. This is on nightly; to reproduce:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
flex_attention = torch.compile(flex_attention, dynamic=False)
q = torch.randn((1, 1, 128, 16), dtype=torch.float16, device="cuda")
k = torch.randn((1, 1, 128, 16), dtype=torch.float16, device="cuda")
v = torch.randn((1, 1, 128, 16), dtype=torch.float16, device="cuda")
mass = torch.ones((1), dtype=torch.float16, device="cuda")
def score_mod(score, b, h, q_idx, kv_idx):
return score + torch.log(mass[0])
out = flex_attention(q, k, v, score_mod=score_mod) # fails
```
Using `torch.log(mass[0].to(torch.float32))` succeeds.
I believe it's because the lowering from `torch.log` to Triton isn't converting to `tl.float32` before the log call, which Triton needs (and hence the same error occurs when using some other operations, like `sin`, `cos`, etc), since the error contains:
```
ValueError: Expected dtype ['fp32', 'fp64'] but got fp16
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 50:11:
# ~~~~~~~~~~~~~~~~~~~ Apply score modification ~~~~~~~~~~~~~~~~~~~
if CHECK_BLOCK_BOUNDARY:
# If this is the last block of a non divisible seqlen, we still need to load [BLOCK_M, BLOCK_N] elements,
# which is larger than the actual number of elements. To avoid access memory out of bound,
# we need to mask out the elements that are out of Q_LEN & KV_LEN.
m = offs_m % Q_LEN
n = offs_n % KV_LEN
else:
m = offs_m
n = offs_n
tmp0 = tl_math.log(tl.load(in_ptr8 + 0))
```
I did have a look into it, and while there is a decorator on log here: https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/triton.py#L1218, the arguments provided from the lowering process are just the string `'tl.load(in_ptr8 + 0)'` rather than a CSEVariable and hence don't get upcast:
https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/triton.py#L763
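Until the lowering handles the upcast, a workaround sketch is to keep the transcendental op out of the fp16 path entirely, for example by precomputing it in fp32 (the inline `.to(torch.float32)` variant above succeeds, and this should behave the same):
```python
# Sketch: precompute the log in fp32 so no fp16 operand reaches tl_math.log.
log_mass = torch.log(mass.to(torch.float32))

def score_mod(score, b, h, q_idx, kv_idx):
    return score + log_mass[0]

out = flex_attention(q, k, v, score_mod=score_mod)
```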
<details>
<summary>Full error</summary>
```
InductorError: SubprocException: An exception occurred in a subprocess:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/triton/language/core.py", line 35, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/triton/language/math.py", line 26, in check
raise ValueError(f"Expected dtype {dtypes} but got {arg.type.scalar.name}")
ValueError: Expected dtype ['fp32', 'fp64'] but got fp16
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 50:11:
# ~~~~~~~~~~~~~~~~~~~ Apply score modification ~~~~~~~~~~~~~~~~~~~
if CHECK_BLOCK_BOUNDARY:
# If this is the last block of a non divisible seqlen, we still need to load [BLOCK_M, BLOCK_N] elements,
# which is larger than the actual number of elements. To avoid access memory out of bound,
# we need to mask out the elements that are out of Q_LEN & KV_LEN.
m = offs_m % Q_LEN
n = offs_n % KV_LEN
else:
m = offs_m
n = offs_n
tmp0 = tl_math.log(tl.load(in_ptr8 + 0))
^
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 44:28:
SPARSE_KV_MULTIPLE: tl.constexpr = (SPARSE_KV_BLOCK_SIZE // BLOCK_N)
RCP_LN2: tl.constexpr = 1.44269504
if PRESCALE_QK:
q = (q * SM_SCALE * RCP_LN2).to(MATMUL_PRECISION)
# loop over k, v and update accumulator until block_n_end
for start_n in range(block_n_start, block_n_end):
if IS_DIVISIBLE:
acc, l_i, m_i = forward_block_mn(
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/compile_worker/subproc_pool.py", line 337, in do_job
result = job()
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/runtime/compile_tasks.py", line 74, in _worker_compile_triton
load_kernel().precompile(warm_cache_only=True)
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/runtime/triton_heuristics.py", line 262, in precompile
compiled_binary, launcher = self._precompile_config(
File "/usr/local/lib/python3.10/dist-packages/torch/_inductor/runtime/triton_heuristics.py", line 449, in _precompile_config
binary = triton.compile(*compile_args, **compile_kwargs)
File "/usr/local/lib/python3.10/dist-packages/triton/compiler/compiler.py", line 273, in compile
module = src.make_ir(options, codegen_fns, module_map, context)
File "/usr/local/lib/python3.10/dist-packages/triton/compiler/compiler.py", line 100, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,
triton.compiler.errors.CompilationError: at 158:20:
)
V_block_ptr = tl.make_block_ptr(
base=V,
shape=(KV_LEN, V_HEAD_DIM),
strides=(stride_vn, stride_vk),
offsets=(kv_start, 0),
block_shape=(BLOCK_N, V_HEAD_DIM),
order=(1, 0)
)
offs_n = kv_start + tl.arange(0, BLOCK_N)
acc, l_i, m_i = forward_inner(
^
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
</details>
### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250115+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.10
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250115+cu124
[pip3] torchaudio==2.5.1+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu121
[conda] Could not collect
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | triaged,oncall: pt2,module: flex attention | low | Critical |
2,790,332,401 | vscode | failure |
Type: <b>Bug</b>
create a new profile
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i7-1255U (12 x 2611)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.64GB (4.75GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (24)</summary>
Extension|Author (truncated)|Version
---|---|---
ng-template|Ang|19.0.3
vscode-eslint|dba|3.0.10
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
svg|joc|1.5.4
vscode-language-pack-es|MS-|1.96.2024121109
sqltools|mtx|0.28.3
reload|nat|0.0.7
java|red|1.38.0
vscode-xml|red|0.27.2
errorlens|use|3.22.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-boot-dev-pack|vmw|0.2.1
vscode-spring-boot|vmw|1.59.0
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
vscode-spring-boot-dashboard|vsc|0.14.0
vscode-spring-initializr|vsc|0.11.2
console-ninja|Wal|1.0.377
(1 theme extensions excluded)
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,790,356,929 | terminal | Cannot unbind key through null id | ### Windows Terminal version
1.21.3231.0
### Windows build number
10.0.26100.2894
### Other Software
_No response_
### Steps to reproduce
I am trying to unbind ctrl-h.
I am following the steps from https://learn.microsoft.com/en-us/windows/terminal/customize-settings/actions#unbind-keys-disable-keybindings
In this case, I copy the section about setting the id to null:
```
{
"id" : null, "keys" : ["ctrl+h"]
}
```
I put this in the settings.json file.
Here is what that part of my file looks like now:
```
{
"$help": "https://aka.ms/terminal-documentation",
"$schema": "https://aka.ms/terminal-profiles-schema",
"actions":
[
{
"command": "unbound",
"keys": "ctrl+v"
},
{
"command": "unbound",
"keys": "ctrl+c"
},
{
"command":
{
"action": "copy",
"singleLine": false
},
"id": "User.copy.644BA8F2"
},
{
"command": "paste",
"id": "User.paste",
"keys": "ctrl+shift+v"
},
{
"command":
{
"action": "splitPane",
"split": "auto",
"splitMode": "duplicate"
},
"id": "User.splitPane.A6751878",
"keys": "alt+shift+d"
},
{
"command": "find",
"id": "User.find",
"keys": "ctrl+shift+f"
},
{
"id" : null, "keys" : ["ctrl+h"]
}
],
"copyFormatting": "none",
"copyOnSelect": false,
"defaultProfile": "{d8e96812-b789-5068-a5ae-10b2fb53e95f}",
"newTabMenu":
[
{
"type": "remainingProfiles"
}
],
"profiles":
{
"defaults":
{
"cursorShape": "filledBox",
"experimental.rightClickContextMenu": true,
"font":
{
"size": 10
},
"showMarksOnScrollbar": true
},
"list":
[
{
"commandline": "%SystemRoot%\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
"guid": "{61c54bbd-c2c6-5271-96e7-009a87ff44bf}",
"hidden": false,
"name": "Windows PowerShell"
},
{
"commandline": "%SystemRoot%\\System32\\cmd.exe",
"guid": "{0caa0dad-35be-5f56-a8ff-afceeeaa6101}",
"hidden": false,
"name": "Command Prompt"
},
{
"guid": "{b453ae62-4e3d-5e58-b989-0a998ec441b8}",
"hidden": false,
"name": "Azure Cloud Shell",
"source": "Windows.Terminal.Azure"
},
{
"colorScheme": "Tango Dark",
"cursorShape": "filledBox",
"font":
{
"size": 11
},
"guid": "{d8e96812-b789-5068-a5ae-10b2fb53e95f}",
"hidden": false,
"name": "Ubuntu 24.04.1 LTS",
"source": "CanonicalGroupLimited.Ubuntu24.04LTS_79rhkp1fndgsc"
},
{
"guid": "{963ff2f7-6aed-5ce3-9d91-90d99571f53a}",
"hidden": true,
"name": "Ubuntu-24.04",
"source": "Windows.Terminal.Wsl"
}
]
},
"schemes": [],
"themes": []
}
```
### Expected Behavior
I expect it to at least load the settings file. Ideally, it should unbind ctrl-h from WSL and the command prompt.
Since I am copying directly from the windows documents, I expect this to work.
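As a point of comparison, the older `"command": "unbound"` form (which my own file already uses for ctrl+v and ctrl+c above) parses without complaint, so presumably this would work as a stopgap:
```
{
    "command": "unbound",
    "keys": "ctrl+h"
}
```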
### Actual Behavior
I get an error:

As you can see, it does not like the `null` and says it expects a string. | Issue-Bug,Area-Settings,Product-Terminal | low | Critical |
2,790,364,778 | neovim | boolean in vim.lsp.Config to determine if a language server should be enabled. | ### Problem
I'd like to disable some language servers based on what the path / cwd is. For example, I don't want to run [harper-ls](https://github.com/Automattic/harper) if the path is outside of my development paths. eg: Don't run on public cloned trees.
I was thinking of something like an `enabled = function(...):boolean` field or similar. I'm aware that I can put this logic around where I call `vim.lsp.enable()`, but having it in the LS definition table feels cleaner. A rough sketch of what I mean is below.
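The `enabled` key here is hypothetical (it does not exist today); `harper_ls` and the path check are just my use case:

```lua
vim.lsp.config('harper_ls', {
  -- hypothetical field: only start this server for buffers under my dev paths
  enabled = function(bufnr)
    local path = vim.api.nvim_buf_get_name(bufnr)
    return vim.startswith(path, vim.fs.normalize('~/dev'))
  end,
})
```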
I can create a PR if this is a feature that would be accepted. Thanks.
Related: https://github.com/neovim/neovim/issues/31762
### Expected behavior
Have the ability to enable/disable a language server from the `vim.lsp.Config` table. | enhancement,lsp | low | Major |
2,790,379,899 | svelte | Wrong "Unused CSS selector" in slotted content | ### Describe the bug
CSS `:has` selector is incorrectly reported as unused in slotted content.
### Reproduction
[https://svelte.dev/playground/f4f152811664414b927a42289c904a52?version=5.18.0](https://svelte.dev/playground/f4f152811664414b927a42289c904a52?version=5.18.0)
### Logs
```shell
```
### System Info
```shell
System:
OS: Windows 11 10.0.22631
CPU: (16) x64 13th Gen Intel(R) Core(TM) i5-1350P
Memory: 15.85 GB / 31.69 GB
Binaries:
Node: 20.11.0 - C:\Program Files\nodejs\node.EXE
npm: 10.2.5 - C:\Program Files\nodejs\npm.CMD
pnpm: 9.15.2 - ~\AppData\Local\pnpm\pnpm.CMD
Browsers:
Internet Explorer: 11.0.22621.3527
npmPackages:
svelte: ^5.18.0 => 5.18.0
```
### Severity
blocking an upgrade | css | low | Critical |
2,790,423,672 | pytorch | [dynamo] Model `__dict__` with `ConstDictVariable` rather than `GetAttrVariable` | This tracks (1) from https://github.com/pytorch/pytorch/pull/144419#pullrequestreview-2541259169.
It'll lead to removal of duplicated logic for dictionary object handling below, and make it easier to reason about `__dict__` in general.
https://github.com/pytorch/pytorch/blob/d85ae4be734cfd53f5b893240894381ac65fe8b4/torch/_dynamo/variables/misc.py#L1027-L1074
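A toy case that exercises `__dict__` during tracing (illustrative only, not taken from the linked code); under the proposal, the membership test and lookup become plain dict operations:

```python
import torch

class Cfg:
    pass

cfg = Cfg()
cfg.scale = 2.0

@torch.compile(backend="eager")
def f(x):
    # With ConstDictVariable, these are ordinary dict ops instead of
    # special-cased GetAttrVariable method handling.
    if "scale" in cfg.__dict__:
        return x * cfg.__dict__["scale"]
    return x

print(f(torch.ones(2)))  # tensor([2., 2.])
```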
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Minor |
2,790,425,283 | pytorch | [dynamo] Support mutation on type objects | This tracks (2) from https://github.com/pytorch/pytorch/pull/144419#issuecomment-2583533712.
Repro:
```python
import torch

class Foo:  # class definition added so the repro runs standalone
    pass

@torch.compile(backend="eager", fullgraph=True)
def f(x):
Foo.a = 1
return x + 1
f(torch.ones(1))
# File ".../torch/_dynamo/symbolic_convert.py", line 1843, in STORE_ATTR
# BuiltinVariable(setattr).call_function(
# File ".../torch/_dynamo/variables/builtin.py", line 1003, in call_function
# return handler(tx, args, kwargs)
# ^^^^^^^^^^^^^^^^^^^^^^^^^
# File ".../torch/_dynamo/variables/builtin.py", line 845, in builtin_dispatch
# unimplemented(error_msg)
# File ".../torch/_dynamo/exc.py", line 356, in unimplemented
# raise Unsupported(msg, case_name=case_name)
#torch._dynamo.exc.Unsupported: builtin: setattr [<class 'torch._dynamo.variables.user_defined.UserDefinedClassVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>] False
```
We _might_ also want to support mutation on `__dict__` object as a result, although that could be subsumed by #144873.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,790,430,483 | vscode | Terminal suggest: support absolute paths well on Windows | Some test cases:
- `cd |` should show `C:\`, `D:\`, etc., whatever drives are available
- `cd C:\|` should show dirs in `C:\` with the same prefix (e.g. `C:\Windows`)
- `cd :|` should show all drives (should get for free with first test case) | feature-request,windows,terminal-suggest | low | Minor |
2,790,433,050 | tensorflow | Failed to load native TensorFlow Lite methods | Hi,
I'm trying to use tensorflow lite version 2.15.0 to run my tflite model and I'm getting an error when initializing the interpreter.
I'm adding the TensorFlow Lite library in the Gradle file:
`implementation('org.tensorflow:tensorflow-lite') { version { strictly("2.15.0") } }`
Code:
```
private var isLibraryLoaded = false
private fun initInterpreter(): Interpreter? {
val tfliteOptions = Interpreter.Options()
tfliteOptions.setNumThreads(2)
if (!isLibraryLoaded) {
System.loadLibrary("tensorflowlite_jni")
ARLog.d("OmniSenseMLDepthImageProcessor","Tensor flow lite Library load successful")
isLibraryLoaded = true
}
return try {
Log.d("InterpreterDelegate", "Thread id getInterpreter == ${Thread.currentThread().name}")
val interpreter = org.tensorflow.lite.Interpreter(loadModelFile(resolveModelFilePath()), tfliteOptions)
Log.d("InterpreterDelegate", "Interpreter initialized successfully")
return interpreter
} catch (e: Exception) {
Log.e("InterpreterDelegate", "Error: Could not initialize $tfLiteModel interpreter!: ${e.message}")
e.printStackTrace()
null
}
}
```
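For reference, the `loadModelFile` helper referenced above is a standard memory-mapped load; my exact helper isn't shown, so treat this as an approximation:

```
import java.io.File
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

private fun loadModelFile(path: String): MappedByteBuffer {
    FileInputStream(File(path)).use { input ->
        val channel = input.channel
        // Map the .tflite file read-only so the Interpreter can use it directly
        return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
    }
}
```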
TFLite version: 2.15.0
Device: Samsung S20
Error:
```
E FATAL EXCEPTION: pool-140-thread-1
Process: com.amazon.mShop.android.shopping, PID: 14755
java.lang.UnsatisfiedLinkError: Failed to load native TensorFlow Lite methods. Check that the correct native libraries are present, and, if using a custom native library, have been properly loaded via System.loadLibrary():
java.lang.UnsatisfiedLinkError: dlopen failed: library "libtensorflowlite_jni_gms_client.so" not found
at org.tensorflow.lite.TensorFlowLite.init(TensorFlowLite.java:137)
at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:62)
at org.tensorflow.lite.NativeInterpreterWrapperExperimental.<init>(NativeInterpreterWrapperExperimental.java:36)
at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:232)
at com.a9.fez.tflite.TFLiteInterpreterDelegate.initInterpreter(TFLiteInterpreterDelegate.kt:50)
at com.a9.fez.tflite.TFLiteInterpreterDelegate.getValue(TFLiteInterpreterDelegate.kt:29)
Caused by: java.lang.UnsatisfiedLinkError: No implementation found for void org.tensorflow.lite.TensorFlowLite.nativeDoNothing() (tried Java_org_tensorflow_lite_TensorFlowLite_nativeDoNothing and Java_org_tensorflow_lite_TensorFlowLite_nativeDoNothing__) - is the library loaded, e.g. System.loadLibrary?
at org.tensorflow.lite.TensorFlowLite.nativeDoNothing(Native Method)
at org.tensorflow.lite.TensorFlowLite.init(TensorFlowLite.java:132)
... 14 more
```
| stat:awaiting response,type:support,comp:lite,TF 2.15 | medium | Critical |
2,790,443,660 | pytorch | [Monitoring] Display on HUD the information about runners that failed to be created (which cause jobs to queue) | ## Context
When job queuing for a significant period of time, it'll usually be for one of the following reasons:
- The desired machine is out of stock. We'll retry creating that instance until it becomes available
- There's a bug preventing that runner type from coming online, or perhaps even being provisioned
- Some other AWS issue that prevented the runner from being provisioned
## The Ask
This has two parts: Data Export and Visualization
### Data Export
Update the autoscaler lambdas to export the following data to ClickHouse (a possible row shape is sketched after this list):
- When instances are provisioned successfully
- When instances fail to get provisioned, along with their error codes
- Number of instances currently being retried
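A minimal sketch of what each exported row could look like; every field name here is an assumption rather than an existing schema:

```python
# Hypothetical ClickHouse row emitted by the autoscaler lambdas.
provision_event = {
    "timestamp": "2025-01-15T00:00:00Z",
    "runner_type": "linux.4xlarge",  # illustrative runner label
    "fleet": "meta",                 # "meta" or "lf"
    "status": "failed",              # "provisioned" | "failed" | "retrying"
    "error_code": "InsufficientInstanceCapacity",
    "retry_count": 3,
}
```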
### Visualization
Add a new charts to HUD to show the number of runners of each type that have been provisioned, the number that failed (along with the reason), and the number currently waiting to be retried.
This could end up looking similar to the internal charts we have at https://fburl.com/unidash/z3wfjdwv.
Why do we need new charts if similar data is already available internally? Because the internal charts cannot capture stats about the LF fleet, and we want to be able to track service health across both the Meta and LF fleets.
cc @seemethere @malfet @pytorch/pytorch-dev-infra | module: ci,triaged | low | Critical |
2,790,444,887 | tensorflow | target //tensorflow/compiler/mlir/lite:tensorflow_lite_quantize fail to build | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.19
### Custom code
No
### OS platform and distribution
Linux Debian 6.1.119-1 (2024-11-22) x86_64 x86_64 x86_64 GNU/Linux
### Mobile device
_No response_
### Python version
3.10
### Bazel version
bazel 6.5.0
### GCC/compiler version
gcc version 13.1.0
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
### Background
I followed the guidance here to run unit tests locally, on a Google Cloud Compute Engine instance:
[Running unit tests](https://github.com/tensorflow/tensorflow/blob/master/CONTRIBUTING.md#running-unit-tests)
I run the following command in the tensorflow/build:2.19-python3.10 Docker image:
docker run -it -v $PWD:/tmp -w /tmp tensorflow/build:2.19-python3.10 bash -c "bazel build --experimental_action_cache_store_output_metadata --disk_cache=~/.cache/bazel --jobs=3 --config=linux //tensorflow/compiler/mlir/lite:tensorflow_lite_quantize"
### The build failed and the error message is:
```
/usr/include/c++/13/bits/unique_ptr.h:1070:30: error: call of overloaded 'DefaultQuantParamsPass(const mlir::TFL::DefaultQuantParamsPassOptions&)' is ambiguous
1070 | { return unique_ptr<_Tp>(new _Tp(std::forward<_Args>(__args)...)); }
|
```
### The full debug message is:
```
ERROR: /tmp/tensorflow/compiler/mlir/lite/BUILD:1295:11: Compiling tensorflow/compiler/mlir/lite/transforms/default_quant_params.cc failed: (Exit 1): gcc failed: error executing command (from target //tensorflow/compiler/mlir/lite:tensorflow_lite_quantize) /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 235 arguments skipped)
In file included from /usr/include/c++/13/memory:78,
from tensorflow/compiler/mlir/lite/transforms/default_quant_params.cc:16:
/usr/include/c++/13/bits/unique_ptr.h: In instantiation of 'std::__detail::__unique_ptr_t<_Tp> std::make_unique(_Args&& ...) [with _Tp = mlir::TFL::{anonymous}::DefaultQuantParamsPass; _Args = {const mlir::TFL::DefaultQuantParamsPassOptions&}; __detail::__unique_ptr_t<_Tp> = __detail::__unique_ptr_tmlir::TFL::{anonymous}::DefaultQuantParamsPass]':
tensorflow/compiler/mlir/lite/transforms/default_quant_params.cc:249:50: required from here
/usr/include/c++/13/bits/unique_ptr.h:1070:30: error: call of overloaded 'DefaultQuantParamsPass(const mlir::TFL::DefaultQuantParamsPassOptions&)' is ambiguous
1070 | { return unique_ptr<_Tp>(new _Tp(std::forward<_Args>(__args)...)); }
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from tensorflow/compiler/mlir/lite/transforms/default_quant_params.cc:52:
bazel-out/k8-opt/bin/tensorflow/compiler/mlir/lite/transforms/passes.h.inc:162:3: note: candidate: 'mlir::TFL::{anonymous}::impl::DefaultQuantParamsPassBase::DefaultQuantParamsPassBase(mlir::TFL::DefaultQuantParamsPassOptions) [with DerivedT = mlir::TFL::{anonymous}::DefaultQuantParamsPass]'
162 | DefaultQuantParamsPassBase(DefaultQuantParamsPassOptions options) : DefaultQuantParamsPassBase() {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
tensorflow/compiler/mlir/lite/transforms/default_quant_params.cc:57:37: note: inherited here
57 | using DefaultQuantParamsPassBase::DefaultQuantParamsPassBase;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
tensorflow/compiler/mlir/lite/transforms/default_quant_params.cc:66:12: note: candidate: 'mlir::TFL::{anonymous}::DefaultQuantParamsPass::DefaultQuantParamsPass(const mlir::TFL::DefaultQuantParamsPassOptions&)'
66 | explicit DefaultQuantParamsPass(
| ^~~~~~~~~~~~~~~~~~~~~~
tensorflow/compiler/mlir/lite/transforms/default_quant_params.cc:54:7: note: candidate: 'mlir::TFL::{anonymous}::DefaultQuantParamsPass::DefaultQuantParamsPass(const mlir::TFL::{anonymous}::DefaultQuantParamsPass&)'
54 | class DefaultQuantParamsPass
| ^~~~~~~~~~~~~~~~~~~~~~
tensorflow/compiler/mlir/lite/transforms/default_quant_params.cc:54:7: note: candidate: 'mlir::TFL::{anonymous}::DefaultQuantParamsPass::DefaultQuantParamsPass(mlir::TFL::{anonymous}::DefaultQuantParamsPass&&)' (deleted)
Target //tensorflow/compiler/mlir/lite:tensorflow_lite_quantize failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 407.917s, Critical Path: 206.23s
INFO: 43 processes: 7 internal, 36 local.
FAILED: Build did NOT complete successfully
```
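For what it's worth, the ambiguity reduces to a small standalone C++ case (class names only mimic the generated pass code; compiling this intentionally reproduces the same error):

```cpp
#include <memory>

struct Options {};

template <typename DerivedT>
struct PassBase {
  PassBase() = default;
  PassBase(Options options) {}  // generated base ctor taking options by value
};

struct Pass : PassBase<Pass> {
  using PassBase<Pass>::PassBase;   // inherits PassBase(Options)
  explicit Pass(const Options &) {} // hand-written ctor, as in the .cc file
};

int main() {
  Options opts;
  auto p = std::make_unique<Pass>(opts); // error: ambiguous overload, as above
}
```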
### Standalone code to reproduce the issue
```shell
# Run inside the tensorflow/build:2.19-python3.10 Docker image:
docker run -it -v $PWD:/tmp -w /tmp tensorflow/build:2.19-python3.10 bash -c "bazel build --experimental_action_cache_store_output_metadata --disk_cache=~/.cache/bazel --jobs=3 --config=linux //tensorflow/compiler/mlir/lite:tensorflow_lite_quantize"
```
### Relevant log output
```shell
``` | type:build/install,comp:lite,TF 2.18 | low | Critical |
2,790,449,454 | kubernetes | Document why kernel tunable `kernel/panic` needs to be set to 10 | ### What would you like to be added?
Running kubelet on a system without this tunable is unsupported and causes kubelet to terminate. It's fine to set this and be done with it, but I can't figure out why?
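For reference, the setting in question:
```
# the value kubelet expects; without it kubelet refuses to run
sysctl -w kernel.panic=10
```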
Looking through the code/github, it seems that this pull request from @brendandburns first added this sysctl and requirement:
https://github.com/kubernetes/kubernetes/pull/17202/
[Refactor an interface for style](https://github.com/kubernetes/kubernetes/pull/17202/commits/fb576f30c8381aa30067e261c5af37adbcdfc3df)
Nothing in the commit or PR seems to indicate why this was added, and I can't find anything in the project since then that explains why it is enforced. Is there something relevant to `kubelet` that happened in that timeframe?
### Why is this needed?
Hard requirement of specific kernel configuration, but nothing apparent in the project seems to explain why. | sig/node,kind/feature,sig/docs,needs-triage | low | Minor |
2,790,459,222 | rust | Type inference failure: Unable to infer closure parameter type | I tried this code:
```rust
fn test_map_err() {
let mut m = None;
let a: i32;
let c = m.map_or(a, |v| v.min(a));
}
```
I expected to see this happen:
The compiler should be able to infer the type of the closure parameter `v` and compile the code successfully.
Instead, this happened:
The compiler failed to infer the type and produced the following error:

### Meta
> I have already tried the nightly compiler and observed the same behavior
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
| C-discussion | low | Critical |
2,790,478,990 | flutter | [Impeller] error running Emulator binary 35.3.11.0 on Linux host. | ### Steps to reproduce
I upgraded Flutter and everything else (Dart, Android tools and images, IDE, IDE plugins, newly built emulator AVDs, etc.) from a 4-month-old working setup.
Now builds fail (even on a newly created project) with:
> `E/flutter ( 3868): [ERROR:flutter/impeller/renderer/backend/vulkan/allocator_vk.cc(522)] Break on 'ImpellerValidationBreak' to inspect point of failure: Unable to allocate a device buffer: ErrorFeatureNotPresent`
### Expected results
Run the newly created application on the emulator.
### Actual results
I can see the application open (white screen, Flutter logo in the center) and then it immediately closes.
### Code sample
<details open><summary>Code sample</summary>
(literally the sample code from a new project)
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// TRY THIS: Try running your application with "flutter run". You'll see
// the application has a purple toolbar. Then, without quitting the app,
// try changing the seedColor in the colorScheme below to Colors.green
// and then invoke "hot reload" (save your changes or press the "hot
// reload" button in a Flutter-supported IDE, or press "r" if you used
// the command line to start the app).
//
// Notice that the counter didn't reset back to zero; the application
// state is not lost during the reload. To reset the state, use hot
// restart instead.
//
// This works for code too, not just values: Most code changes can be
// tested with just a hot reload.
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
_counter++;
});
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// TRY THIS: Try changing the color here to a specific color (to
// Colors.amber, perhaps?) and trigger a hot reload to see the AppBar
// change color while the other colors stay the same.
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
//
// TRY THIS: Invoke "debug painting" (choose the "Toggle Debug Paint"
// action in the IDE, or press "p" in the console), to see the
// wireframe for each widget.
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/4b2dd36e-2df3-46a6-8fcc-97b8a76f331b
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/main.dart on sdk gphone16k x86 64 in debug mode...
Running Gradle task 'assembleDebug'...
โ Built build/app/outputs/flutter-apk/app-debug.apk
Installing build/app/outputs/flutter-apk/app-debug.apk...
I/flutter ( 3868): [IMPORTANT:flutter/shell/platform/android/android_context_vk_impeller.cc(60)] Using the Impeller rendering backend (Vulkan).
E/flutter ( 3868): [ERROR:flutter/impeller/renderer/backend/vulkan/allocator_vk.cc(522)] Break on 'ImpellerValidationBreak' to inspect point of failure: Unable to allocate a device buffer: ErrorFeatureNotPresent
F/flutter ( 3868): [FATAL:flutter/impeller/core/host_buffer.cc(33)] Check failed: device_buffer. Failed to allocate device buffer.
Error connecting to the service protocol: failed to connect to http://127.0.0.1:34567/GVyrjALKCqY=/ DartDevelopmentServiceException: WebSocketChannelException: HttpException: Connection closed before full header was received, uri = http://127.0.0.1:34567/GVyrjALKCqY=/ws
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
$ flutter doctor -v
[โ] Flutter (Channel stable, 3.27.2, on Arch Linux 6.12.9-arch1-1, locale en_US.UTF-8)
Flutter version 3.27.2 on channel stable at /opt/flutter
Upstream repository https://github.com/flutter/flutter.git
Framework revision 68415ad1d9 (2 days ago), 2025-01-13 10:22:03 -0800
Engine revision e672b006cb
Dart version 3.6.1
DevTools version 2.40.2
[โ] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
Android SDK at /home/gcb/Android/Sdk
Platform android-35, build-tools 35.0.0
Java binary at: /home/gcb/JAVA_HOME/bin/java
Java version OpenJDK Runtime Environment (build 21.0.5+11)
All Android licenses accepted.
[โ] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[โ] Linux toolchain - develop for Linux desktop
clang version 19.1.6
โ CMake is required for Linux development.
It is likely available from your distribution (e.g.: apt install cmake), or can be downloaded
from https://cmake.org/download/
โ ninja is required for Linux development.
It is likely available from your distribution (e.g.: apt install ninja-build), or can be
downloaded from https://github.com/ninja-build/ninja/releases
pkg-config version 2.3.0
[!] Android Studio (not installed)
Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
[โ] IntelliJ IDEA Community Edition (version 2024.3)
IntelliJ at /usr/share/idea
Flutter plugin version 83.0.4
Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
[โ] Connected device (2 available)
sdk gphone16k x86 64 (mobile) emulator-5554 android-x64 Android 15 (API 35) (emulator)
Linux (desktop) linux linux-x64 Arch Linux 6.12.9-arch1-1
[โ] Network resources
All expected network resources are available
```
(The doctor check for the IntelliJ plugin appears to be broken; this machine does have the Dart plugin.)
</details>
| P3,e: impeller,team-engine,triaged-engine | low | Critical |
2,790,479,205 | vscode | Terminal suggest: Handle paths on git bash | Git Bash uses special paths that map to Windows ones and then strictly uses the `/` separator:

We should support these well. | feature-request,windows,terminal-shell-git-bash | low | Minor |
2,790,482,192 | rust | Single use lifetimes lint suggests using unstable feature on stable | ### Code
[playground link](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=c63ad1a23efb14d71f23d0255132631c)
```Rust
#[allow(dead_code)]
#[warn(single_use_lifetimes)]
fn foo<'a>(_items: impl IntoIterator<Item = &'a i64>) -> Vec<f64> {
vec![]
}
```
### Current output
```Shell
warning: lifetime parameter `'a` only used once
--> src/lib.rs:3:8
|
3 | fn foo<'a>(_items: impl IntoIterator<Item = &'a i64>) -> Vec<f64> {
| ^^ this lifetime... -- ...is used only here
|
note: the lint level is defined here
--> src/lib.rs:2:8
|
2 | #[warn(single_use_lifetimes)]
| ^^^^^^^^^^^^^^^^^^^^
help: elide the single-use lifetime
|
3 - fn foo<'a>(_items: impl IntoIterator<Item = &'a i64>) -> Vec<f64> {
3 + fn foo(_items: impl IntoIterator<Item = &i64>) -> Vec<f64> {
```
### Desired output
```Shell
No warning
```
### Rationale and extra context
The feature required to elide the single use lifetime is unstable
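Concretely, the suggested rewrite does not compile on stable; as far as I can tell it needs the unstable `anonymous_lifetime_in_impl_trait` gate:

```rust
// The lint's suggested form; rejected by stable rustc because of the
// anonymous lifetime in `impl Trait`.
fn foo(_items: impl IntoIterator<Item = &i64>) -> Vec<f64> {
    vec![]
}
```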
### Rust Version
```Shell
$ rustc --version --verbose
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
and
```
$ rustc --version --verbose
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
| A-lints,A-diagnostics,T-compiler,L-false-positive | low | Critical |
2,790,503,850 | next.js | Parallel routes do not apply individual `loading.tsx` when used with nested routes | ### Link to the code that reproduces this issue
https://codesandbox.io/p/github/jrhackett/app-router-test/main?import=true
### To Reproduce
1. Start application in CodeSandbox
2. Click on "Broken subpath example" link
3. Note that the loading state for the "Slot" is showing "Loading Dashboard..." and not "Loading Slot..."
### Current vs. Expected behavior
The linked application in CodeSandbox has an example application with a route for `/broken/sub` that has a layout defined to display a parallel route called `@slot` and the `children` of the layout. The slot and the nested path both have their own `loading.tsx` files. The file structure is:
<img width="122" alt="Image" src="https://github.com/user-attachments/assets/09800db8-14c0-4fdb-8b62-fdf6e98ff12c" />
### **Current behavior**
The nested path's `loading.tsx` gets applied as expected but the slot's does not. The slot, instead, gets the parent directory's (`dashboard`) `loading.tsx` file.
<img width="542" alt="Image" src="https://github.com/user-attachments/assets/cf8ae6a4-be78-4536-a4df-ecee97b170cc" />
### **Expected behavior**
The loading state for `@slot` should display `Loading Slot...` instead of `Loading Dashboard...`.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.12.1
npm: 10.5.0
Yarn: 1.22.19
pnpm: 8.15.6
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local), Vercel (Deployed), next start (local), next build (local)
### Additional context
I've also tested this against `[email protected]` and the behavior is the same. | Parallel & Intercepting Routes | low | Critical |
2,790,505,616 | tauri | [bug] Generated Kotlin Code Returns Nullable Strings in WryActivity.kt (Tauri 2.0) | ### Describe the bug
Hi Tauri Team,
While using Tauri 2.0 for Android development, the generated WryActivity.kt file produces a compile-time error:
```text
Return type mismatch: expected 'kotlin.String', actual 'kotlin.String?'
```
Specifically, the getAppUrl() and getAppAssetPath() methods return 'kotlin.String?' instead of 'kotlin.String', causing the Android build to fail. Previously, in Tauri 1 we could set "nullableReturnTypes": false in tauri.conf.json to fix this, but Tauri 2.0 no longer supports that configuration.
### Reproduction
Create a Tauri 2.0 project with Android support
Run tauri android init and tauri android dev
Observe the Kotlin compile error in WryActivity.kt
### Expected behavior
The generated methods should match their expected return types (non-null String) or provide a config to control nullability.
### Full `tauri info` output
```text
> [email protected] tauri C:\Users\19542\Documents\GitHub\Kiosk-Tizen-V2\odin
> tauri "info"
WARNING: no lock files found, defaulting to npm
[โ] Environment
- OS: Windows 10.0.26100 x86_64 (X64)
โ WebView2: 131.0.2903.112
โ MSVC: Visual Studio Build Tools 2019
โ rustc: 1.84.0 (9fc6b4312 2025-01-07)
โ cargo: 1.84.0 (66221abde 2024-11-19)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.11.0
- pnpm: 8.15.9
- npm: 10.4.0
[-] Packages
- tauri ๐ฆ: 2.2.2
- tauri-build ๐ฆ: 2.0.5
- wry ๐ฆ: 0.48.1
- tao ๐ฆ: 0.31.1
- @tauri-apps/api ๎: 2.0.1 (outdated, latest: 2.2.0)
- @tauri-apps/cli ๎: 2.0.0-rc.18 (outdated, latest: 2.2.4)
[-] Plugins
- tauri-plugin-shell ๐ฆ: 2.2.0
- @tauri-apps/plugin-shell ๎: 2.2.0
- tauri-plugin-localhost ๐ฆ: 2.2.0
- @tauri-apps/plugin-localhost ๎: not installed!
- tauri-plugin-fs ๐ฆ: 2.2.0
- @tauri-apps/plugin-fs ๎: 2.2.0
- tauri-plugin-http ๐ฆ: 2.2.0
- @tauri-apps/plugin-http ๎: 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:3000/
- framework: Svelte
- bundler: Vite
```
### Stack trace
```text
PS C:\Users\19542\Documents\GitHub\Kiosk-Tizen-V2\odin\src-tauri> pnpm tauri android dev
> [email protected] tauri C:\Users\19542\Documents\GitHub\Kiosk-Tizen-V2\odin
> tauri "android" "dev"
Info Detected connected device: Pixel_9_API_35 (sdk_gphone64_x86_64) with target "x86_64-linux-android"
Running BeforeDevCommand (`pnpm -w dev`)
> [email protected] dev C:\Users\19542\Documents\GitHub\Kiosk-Tizen-V2
> pnpm -C tizen-spa dev --config vite-web.config.js --mode mineremote --host
> [email protected] dev C:\Users\19542\Documents\GitHub\Kiosk-Tizen-V2\tizen-spa
> vite "--config" "vite-web.config.js" "--mode" "mineremote" "--host"
VITE v4.4.9 ready in 1558 ms
โ Local: http://localhost:3000/
โ Network: http://10.0.0.150:3000/
warning: `odin` (lib) generated 23 warnings (run `cargo fix --lib -p odin` to apply 15 suggestions)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 6.53s
Info symlinking lib "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\target\\x86_64-linux-android\\debug\\libnext_gen_android_lib.so" in jniLibs dir "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android\\app/src/main/jniLibs/x86_64"
Info "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\target\\x86_64-linux-android\\debug\\libnext_gen_android_lib.so" requires shared lib "libandroid.so"
Info "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\target\\x86_64-linux-android\\debug\\libnext_gen_android_lib.so" requires shared lib "libdl.so"
Info "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\target\\x86_64-linux-android\\debug\\libnext_gen_android_lib.so" requires shared lib "liblog.so"
Info "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\target\\x86_64-linux-android\\debug\\libnext_gen_android_lib.so" requires shared lib "libm.so"
Info "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\target\\x86_64-linux-android\\debug\\libnext_gen_android_lib.so" requires shared lib "libc.so"
Info symlink at "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android\\app/src/main/jniLibs/arm64-v8a\\libnext_gen_android_lib.so" points to "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\target\\aarch64-linux-android\\debug\\libnext_gen_android_lib.so"
Info symlink at "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android\\app/src/main/jniLibs/x86_64\\libnext_gen_android_lib.so" points to "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\target\\x86_64-linux-android\\debug\\libnext_gen_android_lib.so"
> [email protected] tauri C:\Users\19542\Documents\GitHub\Kiosk-Tizen-V2\odin
> tauri "android" "android-studio-script" "--target" "x86_64"
e: file:///C:/Users/19542/Documents/GitHub/Kiosk-Tizen-V2/odin/src-tauri/gen/android/app/src/main/java/com/grubbrr/kiosk/generated/WryActivity.kt:43:24 Return type mismatch: expected 'kotlin.String', actual 'kotlin.String?'.
e: file:///C:/Users/19542/Documents/GitHub/Kiosk-Tizen-V2/odin/src-tauri/gen/android/app/src/main/java/com/grubbrr/kiosk/generated/WryActivity.kt:51:24 Return type mismatch: expected 'kotlin.String', actual 'kotlin.String?'.
<==========---> 80% EXECUTING [1s]
> IDLE Info Forwarding port 3000 with adb
Info tcp:3000 already forwarded to Pixel_9_API_35
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:compileX86_64DebugKotlin'.
> A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
> Compilation error. See log for more details
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
BUILD FAILED in 9s
โELIFECYCLEโ Command failed with exit code 4294967295.
โELIFECYCLEโ Command failed with exit code 4294967295.
Failed to assemble APK: command ["C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android"] exited with code 1: command ["C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android"] exited with code 1
Error Failed to assemble APK: command ["C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android"] exited with code 1: command ["C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\19542\\Documents\\GitHub\\Kiosk-Tizen-V2\\odin\\src-tauri\\gen/android"] exited with code 1
โELIFECYCLEโ Command failed with exit code 1.
```
### Additional context
_No response_ | type: bug,status: needs triage,platform: Android | low | Critical |
2,790,532,890 | pytorch | FlexAttention Compilation Uses Non-Standard Invocation Of Inductor Ops | ### ๐ Describe the bug
`Modification Wrapper` uses a non-standard way of invoking inductor operators.
In https://github.com/pytorch/pytorch/blob/d065e8a9de7d6b91bd18286bf45e5094f1278f9f/torch/_inductor/select_algorithm.py#L623-L634
it passes string arguments to `subgraph.data.inner_fn(())` instead of `CSEVariable`s. This makes the typing incorrect throughout codegen and prevents relying on the properties of `CSEVariable`. I recently added tracking of dtypes to every intermediary in inductor codegen and enabled tests in opinfos. I would like to rely on them in codegen because it enables the following (a toy illustration of the mismatch appears after the list):
- [Deletion of 7 ops from the inductor opset](https://github.com/pytorch/pytorch/blob/069419569d01c168952dc80bcc61bcb81a2bf3de/torch/_inductor/ops_handler.py#L719-L744)
- [Some codegen cleanups](https://github.com/pytorch/pytorch/blob/069419569d01c168952dc80bcc61bcb81a2bf3de/torch/_inductor/codegen/triton.py#L1374)
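A toy illustration of why raw-string arguments break dtype reasoning (plain Python stand-ins, not inductor's real classes):

```python
from dataclasses import dataclass

@dataclass
class CSEVar:  # stand-in for inductor's CSEVariable
    name: str
    dtype: str  # inductor tracks a real torch.dtype here

def result_dtype(x):
    # dtype propagation can only rely on metadata when given a CSEVar:
    return x.dtype if isinstance(x, CSEVar) else "<unknown: got plain str>"

print(result_dtype(CSEVar("tmp0", "float32")))  # normal codegen path
print(result_dtype("tmp0"))                     # flex-attention wrapper path today
```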
Dtype tracking is also being used today for both MTIA for low-precision, and prologue fusion low-precision (neither of which interaction with flex attention today).
I suspect this is also related to this error: https://github.com/pytorch/pytorch/issues/144869
When this is fixed we should be able to remove this special casing https://github.com/pytorch/pytorch/blob/069419569d01c168952dc80bcc61bcb81a2bf3de/torch/_inductor/dtype_propagation.py#L69.
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
### Versions
on master | triaged,oncall: pt2,module: inductor,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,790,556,856 | godot | Incorrect behaviour using `draw_list_begin_for_screen` | ### Tested versions
- v4.4.dev.custom_build [1fa1f5c75]
- v4.4.dev7.official [46c8f8c5c]
### System information
Godot v4.4.dev (1fa1f5c75) - macOS Sequoia (15.2.0) - Multi-window, 2 monitors - Metal (Forward+) - integrated Apple M1 Max (Apple7) - Apple M1 Max (10 threads)
### Issue description
The `draw_list_begin_for_screen` API renders to the previous frame buffer. The symptoms are most visible when targeting Metal, as the previous frame buffer is unavailable, so the logs are filled with errors.
The issue is
1. The script calls `draw_list_begin_for_screen`, which finds the frame buffer for the screen:
https://github.com/godotengine/godot/blob/62ea2f76b46ff825ecf4f497aa0a2eaac6c88da9/servers/rendering/rendering_device.cpp#L4210
2. When Godot blits the render targets to the screen, it calls `screen_prepare_for_drawing`:
https://github.com/godotengine/godot/blob/e88e30c273fe7e8eb2e5eab13098132efdebc0f3/servers/rendering/renderer_rd/renderer_compositor_rd.cpp#L39-L40
which happens _after_ any calls made by GDScript, and this call erases that frame buffer to acquire the next:
https://github.com/godotengine/godot/blob/62ea2f76b46ff825ecf4f497aa0a2eaac6c88da9/servers/rendering/rendering_device.cpp#L4094-L4095
So the call to `draw_list_begin_for_screen` in GDScript is targeting the prior, invalid frame buffer.
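A minimal sketch of the affected call pattern from a script (condensed from the linked example projects; pipeline setup omitted):

```gdscript
func _render() -> void:
    # The draw list below targets the framebuffer that
    # screen_prepare_for_drawing() later invalidates when acquiring the next.
    var rd := RenderingServer.get_rendering_device()
    var draw_list := rd.draw_list_begin_for_screen()
    # ... bind pipeline and record draw commands ...
    rd.draw_list_end()
```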
### Steps to reproduce
Run one of the example projects, which uses the `draw_list_begin_for_screen` API.
### Minimal reproduction project (MRP)
- https://github.com/thimenesup/GodotIndirectDrawExample
- https://github.com/thimenesup/GodotBufferAddressExample | bug,topic:rendering | low | Critical |
2,790,565,456 | godot | Light behaves unexpectedly when using a screen-space material | ### Tested versions
- Reproducible on v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - EndeavourOS #1 SMP PREEMPT_DYNAMIC Fri, 27 Dec 2024 14:24:37 +0000 - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 7800 XT (RADV NAVI32) - AMD Ryzen 5 3600 6-Core Processor (12 Threads)
### Issue description
When assigning the albedo as the screen color for a mesh through the "Next Pass" or "Material Overlay" properties, it causes light to behave weirdly with the Forward+ renderer. In the case of regular lights, it seems to intensify the attenuation of the light while darkening non-lit parts (it's a bit hard to tell exactly what happens). For negative lights, it causes sufficiently bright spots to "wrap around" into being fully lit (white) rather than dark.
Worth noting is that with the Compatibility renderer, the issue is not present for negative lights (regular lights still exhibit very similar "intensifying").
No shader, regular lights:

Shader present, regular lights:

No shader, negative lights:

Shader present, negative lights:

While the shader code is in the MRP, I'll paste it here since it's so tiny:
```glsl
shader_type spatial;
uniform sampler2D SCREEN_TEXTURE: hint_screen_texture, filter_linear;
void fragment() {
vec3 screen_color = textureLod(SCREEN_TEXTURE, SCREEN_UV, 0.0).rgb;
ALBEDO = screen_color;
}
```
### Steps to reproduce
1. Add a shader that assigns the screen color to albedo as the Material Overlay or Next pass of any mesh
2. Add some lights
3. Observe unexpected light behavior
### Minimal reproduction project (MRP)
[light-screen-texture-demo.zip](https://github.com/user-attachments/files/18428830/light-screen-texture-demo.zip) | bug,topic:rendering,confirmed | low | Minor |
2,790,573,055 | PowerToys | dashboard keeps popping up | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
I started PowerToys manually. After a while, the full PowerToys Dashboard window pops up on my screen. I close it with the X button; after a while it pops up again. Rinse and repeat.
I might have interacted with the Quick Access and More panels beforehand.
### โ๏ธ Expected Behavior
dashboard should never pop up
### โ Actual Behavior
dashboard keeps popping up
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,790,582,524 | next.js | Unexpected Behavior using "after" in middleware | ### Link to the code that reproduces this issue
https://github.com/willwill96/nextjs-after-function-repro
### To Reproduce
1. Start the application
2. Fetch `http://localhost:3000` through preferred method
3. Observe Logs for the following
- Middleware Timing - Start {time}
- Middleware Timing - End {time}
- Page Timing - Start {time}
- Page Timing - End {time}
### Current vs. Expected behavior
Current Behavior:
`after` blocks within middleware functions are executed before page render begins
Expected Behavior:
[From the docs](https://nextjs.org/docs/app/api-reference/functions/after)
> after allows you to schedule work to be executed after a response (or prerender) is finished.
Based on this, I expect `after` blocks within middleware functions to be executed after page render is finished.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
Available memory (MB): 47749
Available CPU cores: 16
Binaries:
Node: 22.12.0
npm: 10.9.0
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), next build (local)
### Additional context
Example Application Output:
```
Middleware Timing - Start 14041.842787
Middleware Timing - End 14042.483035
Page Timing - Start 14060.135459
GET / 200 in 27ms
Page Timing - End 14069.832975
```
Relevant code:
- [src/middleware.ts](https://github.com/willwill96/nextjs-after-function-repro/blob/master/src/middleware.ts)
```ts
import { after, NextRequest, NextResponse } from "next/server";
export async function middleware(req: NextRequest) {
if (req.url === 'http://localhost:3000/') {
console.log("Middleware Timing - Start", performance.now())
after(()=>{
console.log("Middleware Timing - End", performance.now())
})
}
return NextResponse.next()
}
```
- [src/app/page.tsx](https://github.com/willwill96/nextjs-after-function-repro/blob/master/src/app/page.tsx#L5-L8)
```tsx
import { after } from "next/server";
export default function Home() {
console.log("Page Timing - Start", performance.now())
after(()=>{
console.log("Page Timing - End", performance.now())
})
return (
<div className.....
);
}
```
| Middleware | low | Major |
2,790,589,350 | flutter | Cupertino Sheet should bounce slightly when overdragged upwards | On a native iOS sheet, when you drag it above it's max height, it will move up slightly. Then on gesture release it will snap back in to place. Currently in Flutter it will only drag downwards, then stop at max height when dragged back up again.
Native:
https://github.com/user-attachments/assets/745c0c46-7a28-41bb-9265-1b92353a0f9b | a: fidelity,f: cupertino,P2,team-design,triaged-design | low | Minor |
2,790,600,707 | flutter | Cupertino Sheet should have drag to dismiss and nested scrolling work together | On native iOS, when you have a sheet open with scrollable content, if you start scrolling, it will scroll normally. But then when you drag downwards, once you reach the top of the scrollable content, the sheet starts it's drag to dismiss gesture. Then if you drag back up again, once the the sheet reaches it's original height, the nested scroll starts back up again.
https://github.com/user-attachments/assets/72082b7b-414e-4093-a02a-27b8d2188dd3
In Flutter currently we can either give the nested scroll or the drag to dismiss gesture priority. And we can't switch back and forth between either animations on one drag event. | a: fidelity,f: scrolling,f: cupertino,P2,team-design,triaged-design | low | Minor |
2,790,618,232 | flutter | Change SystemUiOverlayStyle gradually | On native iOS, there is the ability to change the brightness of the system overlay gradually, set to an animation. For example, this happens while the sheet widget is opening. The system overlay (with the time, battery indicator, etc) at the top of the screen changes gradually from dark to light text.
https://github.com/user-attachments/assets/2372e736-75ae-4b6e-8a7f-5b43c874e472
Currently in the Flutter framework we are only able to change it suddenly with the `SystemChrome.setSystemUIOverlayStyle` API. | c: new feature,a: fidelity,f: cupertino,P2,team-ios,triaged-ios | low | Minor |
2,790,632,088 | kubernetes | Remove the MD5 hash function for FIPS compliance | ### What would you like to be added?
For now, there seems to be hardcoded usage of MD5 in the [source code](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/endpoints/util.go#L157), which is not FIPS compliant, and there is no configurable way to avoid it by selecting another hash function such as SHA-256. When running Kubernetes built with FIPS-compliant Go, a `panic: openssl: unsupported hash function: 2` error is the result.
It would be great if the default hash function could be changed to a FIPS-compliant one such as SHA-256; making it configurable would be even better. A sketch of the substitution is below.
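A self-contained sketch of the requested change; `deepHashObject` below is a simplified stand-in for `k8s.io/kubernetes/pkg/util/hash.DeepHashObject`, and the injection point is an assumption, not the actual k8s code:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"hash"
)

// hashEndpoints sketches the requested change: the hash constructor is
// injected (SHA-256 here) instead of MD5 being hardcoded as in the linked
// util.go.
func hashEndpoints(newHasher func() hash.Hash, obj any) string {
	h := newHasher()
	deepHashObject(h, obj)
	return fmt.Sprintf("%x", h.Sum(nil))
}

func deepHashObject(h hash.Hash, obj any) {
	fmt.Fprintf(h, "%v", obj) // the real helper does a deep, stable hash
}

func main() {
	fmt.Println(hashEndpoints(sha256.New, map[string]string{"svc": "a"}))
}
```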
### Why is this needed?
This matters to people who need FIPS-compliant K8S clusters, and there is no obvious workaround at the moment. | sig/network,kind/feature,triage/accepted | low | Critical |
2,790,647,291 | flutter | [a11y] hotkey widget should have correct semantics | When using https://api.flutter.dev/flutter/widgets/Shortcuts-class.html, the loading spinner should have following role
| OS | role |
|--------|--------|
| web |- |
| ios | - |
| macos | - |
| windows | ROLE_SYSTEM_HOTKEYFIELD |
| android | - | | P3,team-accessibility,triaged-accessibility | low | Minor |
2,790,657,942 | flutter | [a11y][two_dimensional_scrollables] Tree view is missing semantics role | https://github.com/flutter/packages/tree/main/packages/two_dimensional_scrollables/lib/src/tree_view
For the tree:
| OS | role |
|--------|--------|
| web | tree |
| ios | - |
| macos | NSAccessibilityOutlineRole |
| windows | ROLE_SYSTEM_OUTLINE |
| android | - |
For a tree item:
| OS | role |
|--------|--------|
| web | treeitem |
| ios | - |
| macos | NSAccessibilityRowRole, NSAccessibilityOutlineRowSubrole |
| windows | ROLE_SYSTEM_OUTLINEBUTTON or ROLE_SYSTEM_OUTLINEITEM |
| android | - | | P3,team-accessibility,triaged-accessibility | low | Minor |
2,790,671,585 | PowerToys | Preview not working | ### Microsoft PowerToys version
latest
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
Try to preview (Peek) a WAV file.
### โ๏ธ Expected Behavior
Peek works for WAV files.
### โ Actual Behavior
Peek does not work for WAV files.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,790,700,049 | godot | Scene lighting is bugged on integrated graphics | ### Tested versions
Reproducible in:
- v4.2.2.stable.official [15073afe3]
- v4.3.stable.official [77dcf97d8]
- v4.4.dev7.mono.official [46c8f8c5c]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - integrated Intel(R) Iris(R) Xe Graphics (Intel Corporation; 31.0.101.5186) - 12th Gen Intel(R) Core(TM) i7-1255U (12 Threads)
### Issue description
I am working on a 3D game with a team and noticed the lighting does not work properly, but only on my computer. Meshes are way too bright and "glowy", and performance is noticeably reduced. My other team members had no lighting issues on their systems, which included:
- M4 Mac Mini Base Model
- Xubuntu 24.04.1 LTS
OpenGL API 4.6 Mesa 24.0.9-0ubuntu0.1 - Compatibility - Using Device: AMD - AMD Radeon Vega 3 Graphics (radeonsi, raven2, LLVM 17.0.6, DRM 3.57, 6.8.0-51-generic)
AMD Ryzen 3 3200U with Radeon Vega Mobile Gfx
How the lighting looks on the 2 systems mentioned above (and what it should look like):

The bugged lighting on my system:

The issue seems to be related to SDFGI, since the lighting looks normal after disabling it:

I'm aware that SDFGI is not recommended for integrated graphics, but the extra glow coming from meshes looks unintended and more than just a performance issue.
I've also included more information about my system and hardware here:
https://gist.github.com/donodj/18ff377a020a7e0708b39156b8c34a0d
### Steps to reproduce
1. Create a new godot project on a computer with Intel(R) Iris(R) Xe Graphics (or possibly any integrated GPU) and make a new scene
2. Add any MeshInstance3D with the material.tres material from the MRP
3. Add the WorldEnvironment and DirectionalLight3D from the bugged_environment.tscn scene in the MRP
4. The mesh should now appear "glowy" from the lighting in the scene
### Minimal reproduction project (MRP)
[godot-lighting-bug-main.zip](https://github.com/user-attachments/files/18428768/godot-lighting-bug-main.zip)
Contains a scene showing the bugged lighting and a scene with Godot's default lighting (which has no problems) | bug,topic:rendering,topic:3d | low | Critical |
2,790,703,156 | PowerToys | Mouse Without Borders broken? | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Not sure how to reproduce this, but my systems that only yesterday used to connect just fine on Mouse Without Borders now seem unable to do so. They're still on the same LAN, and even when I "Refresh connections", all that I see is yellow, orange and blue borders around the machine representations under Device layout.
[Tried to attach PowerToysReport_2025-01-15-11-41-03.zip, but the upload keeps failing.]
### ✔️ Expected Behavior
The systems should continue connecting as they did before.
### ❌ Actual Behavior
They don't connect anymore.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,790,755,538 | godot | source_color not working properly | ### Tested versions
tested in 4.3 and 4.4 dev builds
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 960 (NVIDIA; 32.0.15.6636) - Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz (4 Threads)
### Issue description
Instantiating a new `ShaderMaterial` causes the default values of uniforms marked `source_color` to skip the sRGB-to-linear transform.
- If the same scene is saved and reloaded, the correct color is rendered.
- If the uniform parameter is set in the inspector or by `set_shader_parameter()`, the correct color is rendered.
Workaround: manually set the uniform to its own default after instantiating.
```gdscript
# Re-apply the shader's own declared default so the sRGB-to-linear conversion runs.
material.set_shader_parameter(p_name, RenderingServer.shader_get_parameter_default(material.shader.get_rid(), p_name))
```
### Steps to reproduce
- Create a MeshInstance3D of a plane
- Assign a new ShaderMaterial
- Assign a shader that displays a uniform vec3 of source_color
```glsl
shader_type spatial;
render_mode unshaded;

uniform vec3 test_color: source_color = vec3(0.2, 0.5, 0.2);

void fragment() {
    ALBEDO = test_color;
}
```
Note that the color looks washed out.
Save the scene and reload, it now looks correct.
### Minimal reproduction project (MRP)

[source_color_bug.tscn.zip](https://github.com/user-attachments/files/18429610/source_color_bug.tscn.zip) | bug,needs testing,topic:shaders | low | Critical |
2,790,770,595 | flutter | [Proposal] TabBar - allow disabled tabs | ### Use case
It would be nice to be able to disable individual tabs in a `TabBar`.
### Proposal
Add a way to mark individual tabs as disabled (non-interactive and visually de-emphasized). | c: new feature,framework,f: material design,c: proposal,P2,team-design,triaged-design | low | Minor |
2,790,776,665 | flutter | TabBar - spacing | ### Use case
The current `labelPadding` adds padding around each label, but it does not allow setting spacing between tabs.
Also, `labelPadding` is part of the tappable area; it would be nice for the spacing to be non-tappable.
### Proposal
A `spacing` parameter on `TabBar`, like the one `Row` gained in Flutter 3.27. | waiting for customer response,in triage | low | Minor |
2,790,866,668 | pytorch | TIMM cudagraphs_freezing inference regression | https://hud.pytorch.org/benchmark/timm_models/inductor_with_cudagraphs_freezing?dashboard=torchinductor&startTime=Mon,%2016%20Dec%202024%2020:49:27%20GMT&stopTime=Wed,%2015%20Jan%202025%2020:49:27%20GMT&granularity=day&mode=inference&model=lcnet_050&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=main&lCommit=1dab79470dbecef79ba4c7d4308d8a181091e58e&rBranch=main&rCommit=297ce776363cc4802fa74d210fced2b4128960d5
This model used to pass but started failing with an accuracy issue sometime in the last year.
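To make the dashboard configuration concrete, here is a minimal local repro sketch; it assumes `timm` is installed and a CUDA GPU is available, and the batch size and input shape are illustrative rather than the harness's exact values:
```python
import torch
import torch._inductor.config as inductor_config
import timm

# Inference, bfloat16, inductor with cudagraphs + freezing -- the dashboard config.
model = timm.create_model("lcnet_050", pretrained=False).cuda().eval()
x = torch.randn(8, 3, 224, 224, device="cuda")  # illustrative batch/shape

inductor_config.freezing = True                          # the *_freezing variant
compiled = torch.compile(model, mode="reduce-overhead")  # enables cudagraphs

with torch.no_grad(), torch.autocast("cuda", dtype=torch.bfloat16):
    ref = model(x)
    out = compiled(x)

# An accuracy regression shows up as a large max abs difference here.
print((ref.float() - out.float()).abs().max())
```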
cc @ezyang @gchanan @kadeng @msaroufim @mcarilli @eellison @penguinwu @BoyuanFeng @chauhang | high priority,triaged,module: cuda graphs,oncall: pt2,pt2-pass-rate-regression | low | Minor |
2,790,938,801 | deno | JSDoc inline import with Deno LSP does not find or apply types from npm package | Version: Deno 2.1.3-1 on Arch Linux, kernel 6.6.63-1-lts
When using the Deno LSP (tested in both VSCode with Deno for VSCode v3.43.2 and neovim with coc-deno 3.15.0), JSDoc imports from npm packages are not working.
In the same project before initializing the Deno LSP, everything is found and functional with tsserver.
Steps to reproduce, in a new folder:
```bash
deno init
deno add npm:pg
deno add npm:@types/pg
touch index.js
```
This results in this deno.json:
```json
{
"tasks": {
"dev": "deno run --watch main.ts"
},
"imports": {
"@std/assert": "jsr:@std/assert@1",
"@types/pg": "npm:@types/pg@^8.11.10",
"pg": "npm:pg@^8.13.1"
}
}
```
In index.js:
```javascript
/**
* @param {import('pg').}
*/
```
Removing the `.` after `import('pg')` and typing it again causes tsserver to suggest all types from @types/pg; with the Deno LSP, nothing happens.
The same exact thing can be seen with
```javascript
/**
* @param {import('pg').Pool} pool
*/
const example = (pool) => {
  pool // Remove and type pool to see the intellisense
}
```
With the Deno LSP, the type for pool is:
```
(parameter) pool: any
@param - pool
```
With tsserver, the type is:
```
(parameter) pool: Pool
@param pool
```
---
# Deno Language Server Status
## Workspace Settings
```json
{
"enable": true,
"disablePaths": [],
"enablePaths": null,
"cache": null,
"cacheOnSave": true,
"certificateStores": null,
"config": null,
"importMap": null,
"codeLens": {
"implementations": false,
"references": false,
"referencesAllFunctions": false,
"test": true
},
"internalDebug": false,
"internalInspect": false,
"logFile": false,
"lint": true,
"documentPreloadLimit": 1000,
"suggest": {
"imports": {
"autoDiscover": true,
"hosts": {
"https://deno.land": true
}
}
},
"testing": {
"args": [
"--allow-all",
"--no-check"
]
},
"tlsCertificate": null,
"unsafelyIgnoreCertificateErrors": null,
"unstable": [],
"javascript": {
"inlayHints": {
"parameterNames": {
"enabled": "none",
"suppressWhenArgumentMatchesName": true
},
"parameterTypes": {
"enabled": false
},
"variableTypes": {
"enabled": false,
"suppressWhenTypeMatchesName": true
},
"propertyDeclarationTypes": {
"enabled": false
},
"functionLikeReturnTypes": {
"enabled": false
},
"enumMemberValues": {
"enabled": false
}
},
"preferences": {
"importModuleSpecifier": "shortest",
"jsxAttributeCompletionStyle": "auto",
"autoImportFileExcludePatterns": [],
"useAliasesForRenames": true,
"quoteStyle": "auto",
"preferTypeOnlyAutoImports": false
},
"suggest": {
"completeFunctionCalls": true,
"includeAutomaticOptionalChainCompletions": true,
"includeCompletionsForImportStatements": true,
"names": true,
"paths": true,
"autoImports": true,
"enabled": true,
"classMemberSnippets": {
"enabled": true
},
"objectLiteralMethodSnippets": {
"enabled": true
}
},
"updateImportsOnFileMove": {
"enabled": "prompt"
}
},
"typescript": {
"inlayHints": {
"parameterNames": {
"enabled": "none",
"suppressWhenArgumentMatchesName": true
},
"parameterTypes": {
"enabled": false
},
"variableTypes": {
"enabled": false,
"suppressWhenTypeMatchesName": true
},
"propertyDeclarationTypes": {
"enabled": false
},
"functionLikeReturnTypes": {
"enabled": false
},
"enumMemberValues": {
"enabled": false
}
},
"preferences": {
"importModuleSpecifier": "shortest",
"jsxAttributeCompletionStyle": "auto",
"autoImportFileExcludePatterns": [],
"useAliasesForRenames": true,
"quoteStyle": "auto",
"preferTypeOnlyAutoImports": false
},
"suggest": {
"completeFunctionCalls": true,
"includeAutomaticOptionalChainCompletions": true,
"includeCompletionsForImportStatements": true,
"names": true,
"paths": true,
"autoImports": true,
"enabled": true,
"classMemberSnippets": {
"enabled": true
},
"objectLiteralMethodSnippets": {
"enabled": true
}
},
"updateImportsOnFileMove": {
"enabled": "prompt"
}
}
}
```
## Workspace Details
- <details><summary>Documents in memory: 39</summary>
- file:///home/myuser/Desktop/denoexample/.vim/coc-settings.json
- file:///home/myuser/Desktop/denoexample/.vscode/settings.json
- file:///home/myuser/Desktop/denoexample/deno.json
- file:///home/myuser/Desktop/denoexample/index.js
- file:///home/myuser/Desktop/denoexample/main.ts
- file:///home/myuser/Desktop/denoexample/main_test.ts
- https://jsr.io/@std/assert/1.0.10/almost_equals.ts
- https://jsr.io/@std/assert/1.0.10/array_includes.ts
- https://jsr.io/@std/assert/1.0.10/assert.ts
- https://jsr.io/@std/assert/1.0.10/assertion_error.ts
- https://jsr.io/@std/assert/1.0.10/equal.ts
- https://jsr.io/@std/assert/1.0.10/equals.ts
- https://jsr.io/@std/assert/1.0.10/exists.ts
- https://jsr.io/@std/assert/1.0.10/fail.ts
- https://jsr.io/@std/assert/1.0.10/false.ts
- https://jsr.io/@std/assert/1.0.10/greater.ts
- https://jsr.io/@std/assert/1.0.10/greater_or_equal.ts
- https://jsr.io/@std/assert/1.0.10/instance_of.ts
- https://jsr.io/@std/assert/1.0.10/is_error.ts
- https://jsr.io/@std/assert/1.0.10/less.ts
- https://jsr.io/@std/assert/1.0.10/less_or_equal.ts
- https://jsr.io/@std/assert/1.0.10/match.ts
- https://jsr.io/@std/assert/1.0.10/mod.ts
- https://jsr.io/@std/assert/1.0.10/not_equals.ts
- https://jsr.io/@std/assert/1.0.10/not_instance_of.ts
- https://jsr.io/@std/assert/1.0.10/not_match.ts
- https://jsr.io/@std/assert/1.0.10/not_strict_equals.ts
- https://jsr.io/@std/assert/1.0.10/object_match.ts
- https://jsr.io/@std/assert/1.0.10/rejects.ts
- https://jsr.io/@std/assert/1.0.10/strict_equals.ts
- https://jsr.io/@std/assert/1.0.10/string_includes.ts
- https://jsr.io/@std/assert/1.0.10/throws.ts
- https://jsr.io/@std/assert/1.0.10/unimplemented.ts
- https://jsr.io/@std/assert/1.0.10/unreachable.ts
- https://jsr.io/@std/internal/1.0.5/build_message.ts
- https://jsr.io/@std/internal/1.0.5/diff.ts
- https://jsr.io/@std/internal/1.0.5/diff_str.ts
- https://jsr.io/@std/internal/1.0.5/format.ts
- https://jsr.io/@std/internal/1.0.5/styles.ts
</details>
- <details><summary>Performance measures: 169</summary>
- lsp.update_diagnostics_ts (210.96ms)
- tsc.host.$getDiagnostics (210.764ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.002ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0ms)
- tsc.op.op_is_node_file (0.001ms)
- tsc.op.op_is_node_file (0.012ms)
- tsc.op.op_load (0.016ms)
- tsc.op.op_resolve (0.014ms)
- tsc.op.op_load (0.012ms)
- tsc.op.op_resolve (0.016ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.013ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.013ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.023ms)
- tsc.op.op_load (0.017ms)
- tsc.op.op_resolve (0.013ms)
- tsc.op.op_load (0.015ms)
- tsc.op.op_resolve (0.065ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.019ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.027ms)
- tsc.op.op_load (0.015ms)
- tsc.op.op_resolve (0.034ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.015ms)
- tsc.op.op_load (0.012ms)
- tsc.op.op_resolve (0.015ms)
- tsc.op.op_load (0.011ms)
- tsc.op.op_resolve (0.028ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.034ms)
- tsc.op.op_load (0.012ms)
- tsc.op.op_resolve (0.023ms)
- tsc.op.op_load (0.02ms)
- tsc.op.op_resolve (0.025ms)
- tsc.op.op_load (0.014ms)
- tsc.op.op_resolve (0.035ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.013ms)
- tsc.op.op_load (0.012ms)
- tsc.op.op_resolve (0.025ms)
- tsc.op.op_load (0.012ms)
- tsc.op.op_resolve (0.035ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.014ms)
- tsc.op.op_load (0.013ms)
- tsc.op.op_resolve (0.015ms)
- tsc.op.op_load (0.015ms)
- tsc.op.op_resolve (0.033ms)
- tsc.op.op_load (0.017ms)
- tsc.op.op_resolve (0.046ms)
- tsc.op.op_load (0.021ms)
- tsc.op.op_load (0.023ms)
- tsc.op.op_load (0.014ms)
- tsc.op.op_resolve (0.344ms)
- tsc.op.op_load (0.021ms)
- tsc.op.op_resolve (1.347ms)
- tsc.op.op_load (0.022ms)
- tsc.op.op_load (0.039ms)
- tsc.op.op_load (0.016ms)
- tsc.op.op_resolve (0.287ms)
- tsc.op.op_load (0.021ms)
- tsc.op.op_load (0.012ms)
- tsc.op.op_resolve (0.025ms)
- tsc.op.op_load (0.025ms)
- tsc.op.op_resolve (5.378ms)
- tsc.op.op_load (0.016ms)
- tsc.op.op_resolve (1.668ms)
- tsc.op.op_load (0.028ms)
- tsc.op.op_resolve (0.17ms)
- tsc.op.op_load (0.02ms)
- tsc.op.op_script_names (0.026ms)
- lsp.update_diagnostics_lint (0.6ms)
- lsp.update_diagnostics_deps (0.197ms)
- lsp.did_open (0.604ms)
- lsp.update_cache (0.001ms)
- lsp.update_global_cache (1.718ms)
- lsp.initialize (63.442ms)
- tsc.request.$getAssets (6.621ms)
- tsc.host.$getAssets (5.017ms)
- tsc.request.$getSupportedCodeFixes (54.092ms)
- tsc.host.$getSupportedCodeFixes (0.318ms)
</details>
## Performance (last 3 000 entries)
|Name|Count|Duration|
|---|---|---|
|lsp.did_open|1|0.604ms|
|lsp.initialize|1|63.442ms|
|lsp.update_cache|1|0.001ms|
|lsp.update_diagnostics_deps|1|0.197ms|
|lsp.update_diagnostics_lint|1|0.6ms|
|lsp.update_diagnostics_ts|1|210.96ms|
|lsp.update_global_cache|1|1.718ms|
|tsc.host.$getAssets|1|5.017ms|
|tsc.host.$getDiagnostics|1|210.764ms|
|tsc.host.$getSupportedCodeFixes|1|0.318ms|
|tsc.op.op_is_node_file|88|0.001ms|
|tsc.op.op_load|37|0.016ms|
|tsc.op.op_resolve|31|0.316ms|
|tsc.op.op_script_names|1|0.026ms|
|tsc.request.$getAssets|1|6.621ms|
|tsc.request.$getSupportedCodeFixes|1|54.092ms|
## Performance (total)
|Name|Count|Duration|
|---|---|---|
|lsp.did_open|1|0.604ms|
|lsp.initialize|1|63.442ms|
|lsp.update_cache|1|0.001ms|
|lsp.update_diagnostics_deps|1|0.197ms|
|lsp.update_diagnostics_lint|1|0.600ms|
|lsp.update_diagnostics_ts|1|210.960ms|
|lsp.update_global_cache|1|1.718ms|
|lsp.virtual_text_document|1|0.000ms|
|tsc.host.$getAssets|1|5.017ms|
|tsc.host.$getDiagnostics|1|210.764ms|
|tsc.host.$getSupportedCodeFixes|1|0.318ms|
|tsc.op.op_is_node_file|88|0.039ms|
|tsc.op.op_load|37|0.608ms|
|tsc.op.op_resolve|31|9.812ms|
|tsc.op.op_script_names|1|0.026ms|
|tsc.request.$getAssets|1|6.621ms|
|tsc.request.$getSupportedCodeFixes|1|54.092ms|
| needs investigation,lsp | low | Critical |
2,790,950,274 | go | x/tools/gopls: record telemetry for which settings are used | We want to use telemetry to answer questions about which settings are being customized by our users. This signal can be useful in a number of ways, for example to prioritize improvements to various optional features, suggest that certain configuration can be deprecated, or indicate that certain optional features are not working well (if they are frequently disabled).
A few considerations:
- For some settings, we just want to record whether they are used (e.g. "buildFlags").
- For other settings, we want to bucket the actual setting values (e.g. "staticcheck:true")
- For yet other settings, there are logical sub-settings which may themselves have logical bucketing (e.g. "analyses/deprecated:false").
- Some settings can be set in multiple ways (usually in a transitional period when an old setting name is deprecated).
Therefore, this instrumentation will necessarily be customized to each individual setting.
The following schema seems pretty natural: `gopls/setting/<setting>[/<subsetting>][:buckets]`
For example, given the following configuration
```json
{
"buildFlags": ["-tags=mytag"],
"staticcheck": true,
"analyses": {
"deprecated": false,
}
}
```
we'd record the following counts:
```
gopls/setting/buildFlags
gopls/setting/staticcheck:true
gopls/setting/analyses/deprecated:false
```
Notably, we only record the settings that are actually explicitly set by the user. We don't record the effective values of all settings. I briefly considered recording the effective values of each setting, but that is a lossy projection: it loses information about which settings are actually being explicitly set, and we may want to see when users are customizing a setting, _even if they are customizing it to the default value_.
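For illustration, a small Python sketch of the derivation described above; the per-setting policy table (`RECORD_VALUE`) is hypothetical, since the real instrumentation would be hand-written per setting:
```python
# Hypothetical sketch: derive telemetry counter names from explicitly set settings.
RECORD_VALUE = {"staticcheck", "analyses"}  # settings whose values are bucketed

def counters(settings, prefix="gopls/setting"):
    out = []
    for name, value in settings.items():
        if name not in RECORD_VALUE:
            out.append(f"{prefix}/{name}")  # record presence only
        elif isinstance(value, dict):       # logical sub-settings
            out.extend(f"{prefix}/{name}/{sub}:{str(v).lower()}"
                       for sub, v in value.items())
        else:
            out.append(f"{prefix}/{name}:{str(value).lower()}")
    return out

print(counters({
    "buildFlags": ["-tags=mytag"],
    "staticcheck": True,
    "analyses": {"deprecated": False},
}))
# ['gopls/setting/buildFlags', 'gopls/setting/staticcheck:true',
#  'gopls/setting/analyses/deprecated:false']
```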
This issue tracks only the addition of this instrumentation. Separate telemetry proposals must be filed for any collection of this data from users who have opted in to telemetry.
CC @adonovan | gopls,Tools | low | Minor |
2,790,973,837 | pytorch | Ways the HUD compilers dashboard could be better | I got here because I'm trying to answer the question of "which compiler benchmarks regressed in the past year?" I've spent a couple of hours on the HUD dashboard page, and I still haven't figured this out yet. Here's some of the gripes that I ran into while trying to answer this question.
1) The page seems to refresh itself every couple of minutes. This disrupts the train of thought. Also, I am not sure if the settings change when it refreshes.
2) The passrate chart and the graphs don't have all of the data. In particular, the passrate chart doesn't contain the max_autotune configs. I don't know how to actually click into the max_autotune data.

3) https://github.com/pytorch/test-infra/issues/6173
4) There's one passrate chart but there are 3 passrate graphs. Scrolling between the graphs is kind of annoying.
5) The graphs have so many series that some of them are hidden. Might be nicer to increase the height?

6) It's not clear to me how to hack on these charts. With our internal tools (like Scuba and Unidash), it is easy (and well known) how to look up information.
Hypothesis: if we fed the data to internal sources and used internal tooling as the UX, we would be more productive than trying to roll our own UX.
cc @ZainRizvi @kit1980 @huydhn @clee2000 | triaged,enhancement,module: devx | low | Minor |
2,790,993,484 | pytorch | TorchBench mobilenet_v2 cudagraphs_freezing inference regression | https://hud.pytorch.org/benchmark/torchbench/inductor_with_cudagraphs_freezing?dashboard=torchinductor&startTime=Fri,%2019%20Jul%202024%2020:38:32%20GMT&stopTime=Wed,%2015%20Jan%202025%2021:38:32%20GMT&granularity=week&mode=inference&model=mobilenet_v2&dtype=bfloat16&deviceName=cuda%20(a100)&lBranch=main&lCommit=2ed4d65af0a1993c0df7b081f4088d0f3614283e&rBranch=main&rCommit=a8319698b3ba7c858fa3e4f3aac88d3fe9dc00d1
Regressed sometime in August
cc @ezyang @gchanan @kadeng @msaroufim @mcarilli @eellison @penguinwu @BoyuanFeng @chauhang | high priority,triaged,module: cuda graphs,oncall: pt2,pt2-pass-rate-regression | low | Minor |
2,791,002,020 | go | x/tools/gopls: show underlying type on hover over alias | ### gopls version
golang.org/x/tools/gopls v0.17.1
### go env
```shell
n/a
```
### What did you do?
Note: this is a feature request
Given
```go
// This is a dog.
type Dog struct {
Name string `json:"name"`
Age int `json:"age"`
}
type Hound = Dog
// A Puppy is a Dog with an age less than 2.
type Puppy = Dog
```
I hovered over `Hound` and `Puppy`
### What did you see happen?
The hover message reads
```go
type Hound = Dog
```
and respectively
```go
type Puppy = Dog
A Puppy is a Dog with an age less than 2.
```
### What did you expect to see?
```
type Hound = Dog
This is a dog.
type Dog struct {
Name string `json:"name"`
Age int `json:"age"`
}
```
and respectively
```
type Puppy = Dog
A Puppy is a Dog with an age less than 2.
type Dog struct {
Name string `json:"name"`
Age int `json:"age"`
}
```
I like the semantics that docstrings are inherited if, and only if, the alias provides no docstring of its own.
### Editor and settings
n/a
### Logs
n/a | help wanted,FeatureRequest,gopls,Tools | low | Major |
2,791,016,854 | pytorch | TIMM Training cudagraphs poolformer_m36 regression | This model used to pass but now fails with "eager_two_runs_differ"; this probably just needs some tolerance adjustments.
https://hud.pytorch.org/benchmark/timm_models/inductor_with_cudagraphs?dashboard=torchinductor&startTime=Fri,%2019%20Jul%202024%2020:48:05%20GMT&stopTime=Wed,%2015%20Jan%202025%2021:48:05%20GMT&granularity=week&mode=training&model=poolformer_m36&dtype=amp&deviceName=cuda%20(a100)&lBranch=main&lCommit=1dab79470dbecef79ba4c7d4308d8a181091e58e&rBranch=main&rCommit=a8319698b3ba7c858fa3e4f3aac88d3fe9dc00d1
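Loosely, the status corresponds to a pre-check like the following sketch; all names, shapes, and tolerances here are illustrative, not the benchmark harness's actual code:
```python
import torch
import torch.nn as nn

# Sketch of what "eager_two_runs_differ" means: the harness runs the eager
# model twice on the same inputs to establish a nondeterminism baseline
# before any compiled run is judged.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).cuda().train()
x = torch.randn(4, 3, 32, 32, device="cuda")

with torch.autocast("cuda"):  # amp, matching the dashboard configuration
    out1 = model(x)
    out2 = model(x)

# If this prints False, the model is reported as "eager_two_runs_differ";
# loosening rtol/atol is the "tolerance adjustment" mentioned above.
print(torch.allclose(out1.float(), out2.float(), rtol=1e-3, atol=1e-3))
```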
cc @ezyang @gchanan @kadeng @msaroufim @mcarilli @eellison @penguinwu @BoyuanFeng @chauhang | high priority,triaged,module: cuda graphs,oncall: pt2,pt2-pass-rate-regression | low | Minor |
2,791,020,695 | pytorch | DISABLED test_mismatched_global_state (__main__.GraphRegionTrackerTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mismatched_global_state&suite=GraphRegionTrackerTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35666846912).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mismatched_global_state`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_graph_region_tracker.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | module: rocm,triaged,module: flaky-tests,skipped | low | Critical |
2,791,020,723 | pytorch | DISABLED test_recompile_on_global_state_change_dynamic_shapes (__main__.DynamicShapesMiscTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_recompile_on_global_state_change_dynamic_shapes&suite=DynamicShapesMiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35670761806).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_recompile_on_global_state_change_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_misc.py", line 7855, in test_recompile_on_global_state_change
assert read_state() == new_state
AssertionError
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | module: rocm,triaged,module: flaky-tests,skipped | low | Critical |
2,791,020,811 | pytorch | DISABLED test_recompile_on_global_state_change (__main__.MiscTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_recompile_on_global_state_change&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35668591743).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 18 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_recompile_on_global_state_change`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_misc.py", line 7855, in test_recompile_on_global_state_change
assert read_state() == new_state
AssertionError
```
</details>
Test file path: `dynamo/test_misc.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | module: rocm,triaged,module: flaky-tests,skipped | low | Critical |
2,791,020,866 | pytorch | DISABLED test_re_export_preserve_handle (__main__.TestNumericDebugger) | Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_re_export_preserve_handle&suite=TestNumericDebugger&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35666010341).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_re_export_preserve_handle`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
cc @clee2000 @wdvr @malfet @albanD | triaged,module: flaky-tests,module: macos,skipped | low | Critical |
2,791,020,926 | pytorch | DISABLED test_compile_forward_clone_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_clone_cuda_float32&suite=TestNestedTensorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35670326638).
Over the past 3 hours, it has been determined flaky in 17 workflow(s) with 0 failures and 17 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_clone_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: flaky-tests,module: nestedtensor,skipped,module: unknown | low | Critical |
2,791,021,309 | pytorch | DISABLED test_compile_forward_chunk_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_chunk_cuda_float32&suite=TestNestedTensorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35667819694).
Over the past 3 hours, it has been determined flaky in 26 workflow(s) with 2 failures and 26 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_chunk_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: flaky-tests,module: nestedtensor,skipped,module: unknown | low | Critical |
2,791,021,375 | pytorch | DISABLED test_compile_forward_select_cpu_float32 (__main__.TestNestedTensorOpInfoCPU) | Platforms: asan, linux, mac, macos, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_select_cpu_float32&suite=TestNestedTensorOpInfoCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35668072388).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 0 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_select_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @clee2000 @wdvr | triaged,module: flaky-tests,module: nestedtensor,skipped,module: unknown | low | Critical |
2,791,021,454 | pytorch | DISABLED test_channel_group_quantization (__main__.TestQuantizePT2EAffineQuantization) | Platforms: asan, linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_channel_group_quantization&suite=TestQuantizePT2EAffineQuantization&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35666010341).
Over the past 3 hours, it has been determined flaky in 13 workflow(s) with 26 failures and 13 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_channel_group_quantization`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/quantization/pt2e/test_quantize_pt2e.py", line 2487, in test_channel_group_quantization
from torch.ao.quantization.pt2e._affine_quantization import (
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12793413718/lib/python3.9/site-packages/torch/ao/quantization/pt2e/_affine_quantization.py", line 189, in <module>
register_custom_op = _register_custom_op(quant_lib)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12793413718/lib/python3.9/site-packages/torch/ao/quantization/pt2e/_affine_quantization.py", line 161, in _register_custom_op
from torch._inductor.decomposition import register_decomposition
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12793413718/lib/python3.9/site-packages/torch/_inductor/decomposition.py", line 98, in <module>
decompositions = {**core_aten_decompositions(), **inductor_decompositions}
TypeError: 'CustomDecompTable' object is not a mapping
To execute this test, run the following from the base repo dir:
python test/quantization/pt2e/test_quantize_pt2e.py TestQuantizePT2EAffineQuantization.test_channel_group_quantization
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr | oncall: quantization,module: flaky-tests,skipped,module: unknown | low | Critical |
2,791,021,533 | pytorch | DISABLED test_pt2_traceable_aot_eager_cpu_float8_e4m3fn (__main__.TestFloat8DtypeCPUOnlyCPU) | Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pt2_traceable_aot_eager_cpu_float8_e4m3fn&suite=TestFloat8DtypeCPUOnlyCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35666300791).
Over the past 3 hours, it has been determined flaky in 24 workflow(s) with 48 failures and 24 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pt2_traceable_aot_eager_cpu_float8_e4m3fn`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr | oncall: quantization,module: flaky-tests,skipped,module: unknown | low | Critical |
2,791,021,623 | pytorch | DISABLED test_compile_forward_chunk_cpu_float32 (__main__.TestNestedTensorOpInfoCPU) | Platforms: asan, linux, mac, macos, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_chunk_cpu_float32&suite=TestNestedTensorOpInfoCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35668603559).
Over the past 3 hours, it has been determined flaky in 58 workflow(s) with 0 failures and 58 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_chunk_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: flaky-tests,module: nestedtensor,skipped,module: unknown | low | Critical |
2,791,039,981 | go | proposal: testing: store test artifacts | This is an offshoot of #43936.
Some tests produce output files which the user may want to examine. For example, a test might produce files which are compared to some reference. If the comparison fails, the user will want to examine the generated files. Or a test might produce a packet capture or similar trace which can be used to debug failures.
We call these output files "test artifacts".
This is a proposal to add support for storing test artifacts to the testing package.
We add a new method to testing.TB:
```
package testing
// OutputDir returns a directory for the test to store output files in.
// When the -outputdir flag is provided, this directory will be located
// under that directory. Otherwise, OutputDir returns a temporary directory
// which is removed after the test completes.
//
// Each test or subtest has a unique artifact directory.
// Repeated calls to OutputDir in the same test or subtest return the same directory.
// Subtest outputs are not located under the parent test's output directory.
func (t *testing.T) OutputDir() string
```
The -outputdir flag already exists, and is currently used to specify a location to put output files from profiling. We're adding an additional meaning to it here: It's now where all your saved test outputs go.
When -outputdir is specified, the first call to OutputDir in a test or subtest will emit a line to the test output consisting of "=== ARTIFACTS ", the test name, and the test artifact directory, separated by spaces:
```
=== ARTIFACTS TestName/subtest_name /path/to/root/artifact/dir/TestName__subtest_name
```
When -json is specified, this will appear as an entry with an Action of "artifacts", the usual Time, Package, and Test keys, and a "Path" key containing the artifact directory:
```
{"Time":"2025-01-15T13:39:27.75235-08:00","Action":"artifacts","Package":"path","Test":"TestName","Path":"/path/to/root/artifact/dir/TestName"}
```
That's the proposal.
A few points on the design:
* I'm reusing the existing -outputdir flag, on the theory that output files from profiling are just another test artifact. If we don't like that reuse, then we could add a new -artifactdir flag and rename TB.OutputDir to TB.ArtifactDir for consistency.
* The test output uses the word "ARTIFACTS" because the JSON already has "output" events.
* TB.OutputDir returns a directory, same as TB.TempDir. This seems simpler than asking the testing package to copy files around.
* TB.OutputDir returns a directory even if we aren't saving artifacts so test behavior doesn't change depending on the presence or absence of the -outputdir flag.
In simple interactive use, users can pass -outputdir to store test artifacts when debugging a failing test.
Test frameworks that collect artifacts can arrange to pass -outputdir to the tests they run and collect any artifacts after the fact.
As a concrete use case, within Google our testing infrastructure sets an environment variable to the location of a directory. Tests can write files into this directory, and those files will be stored and associated with the test run. If we implement this proposal, we can arrange for the test infrastructure to also pass this directory as an -outputdir flag, and any test using TB.OutputDir will automatically use the right location. | Proposal,LibraryProposal | medium | Critical |
2,791,045,826 | pytorch | torch.export fails for whisper tiny | ### 🐛 Describe the bug
Trying to export the Whisper model. I get an error when running with `strict=True`.
The model exports when I use `strict=False`.
Is this a valid Dynamo related issue which is addressed by non-strict mode?
```
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset
# load model and processor
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
input_features = torch.randn(1, 80, 3000)
attention_mask = torch.ones(1, 3000)
decoder_input_ids = torch.tensor([[1, 1, 1 , 1]]) * model.config.decoder_start_token_id
model.eval()
exported_program: torch.export.ExportedProgram = torch.export.export(model, args=(input_features, attention_mask, decoder_input_ids,), strict=True)
```
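For reference, a sketch of the non-strict export mentioned above, which succeeds with the same inputs (`strict=False` traces at the Python level instead of through Dynamo):
```python
exported_program = torch.export.export(
    model,
    args=(input_features, attention_mask, decoder_input_ids),
    strict=False,  # skip Dynamo bytecode tracing
)
print(exported_program.graph_signature)  # sanity check on the exported program
```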
Error logs from the `strict=True` run:
```
File "/home/agunapal/export_games/asr_1.py", line 16, in <module>
exported_program: torch.export.ExportedProgram= torch.export.export(model, args=(input_features, attention_mask, decoder_input_ids,), strict=True)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/exported_program.py", line 122, in wrapper
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1957, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1251, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1279, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 660, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1539, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1395, in __call__
return self._torchdynamo_orig_callable(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 545, in __call__
return _compile(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1027, in _compile
raise InternalTorchDynamoError(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 977, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 706, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 741, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 229, in _fn
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 658, in transform
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2912, in run
super().run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1828, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1828, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1828, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 547, in inner
if truth_fn(mod):
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/cache_utils.py", line 406, in __len__
return len(self.key_cache)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1935, in __getattr__
raise AttributeError(
torch._dynamo.exc.InternalTorchDynamoError: AttributeError: 'DynamicCache' object has no attribute 'key_cache'

from user code:
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1767, in forward
outputs = self.model(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1634, in forward
decoder_outputs = self.decoder(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1324, in forward
layer_outputs = decoder_layer(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 732, in forward
hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 520, in forward
if is_cross_attention and past_key_value and is_updated:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
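
For readers skimming the trace: the failure fires when dynamo evaluates the truthiness of the cache object (`if is_cross_attention and past_key_value and is_updated:` at `modeling_whisper.py:520`), which dispatches to `DynamicCache.__len__` (`return len(self.key_cache)`) and, per the trace, falls through to `torch.nn.Module.__getattr__`, which raises. The sketch below is a hypothetical reduction of that shape — `ToyCache` is a stand-in, not the real `DynamicCache` (which does set `key_cache` in `__init__`); the actual trigger presumably involves how dynamo reconstructs the cache during tracing:

```python
import torch

class ToyCache(torch.nn.Module):
    """Stand-in for transformers' DynamicCache (hypothetical reduction)."""

    def __len__(self):
        # 'key_cache' was never set on this instance, so the attribute lookup
        # falls through to nn.Module.__getattr__, which raises AttributeError
        # -- the same error surfaced in the dynamo trace above.
        return len(self.key_cache)

def decoder_step(cache, x):
    # bool(cache) calls __len__, mirroring the `past_key_value` truthiness
    # check at modeling_whisper.py:520.
    if cache:
        return x + 1
    return x

compiled = torch.compile(decoder_step, backend="eager")
# compiled(ToyCache(), torch.ones(1))  # raises AttributeError: 'ToyCache'
#                                      # object has no attribute 'key_cache'
```
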
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241112+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn.so.9.1.0
/usr/lib64/libcudnn_adv.so.9.1.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn.so.9.1.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_engines_precompiled.so.9.1.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib64/libcudnn_graph.so.9.1.0
/usr/lib64/libcudnn_heuristic.so.9.1.0
/usr/lib64/libcudnn_ops.so.9.1.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241112+cu121
[pip3] torchaudio==2.5.0.dev20241112+cu121
[pip3] torchvision==0.20.0.dev20241112+cu121
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241112+cu121 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241112+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241112+cu121 pypi_0 pypi
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,module: dynamo,oncall: export | low | Critical |