id (int64, 393k to 2.82B) | repo (stringclasses, 68 values) | title (stringlengths, 1 to 936) | body (stringlengths, 0 to 256k) | labels (stringlengths, 2 to 508) | priority (stringclasses, 3 values) | severity (stringclasses, 3 values) |
---|---|---|---|---|---|---|
2,717,239,959 | puppeteer | "[screenshot.spec] Screenshots ElementHandle.screenshot should run in parallel with page.close()" is flaky with Firefox | See https://github.com/puppeteer/puppeteer/actions/runs/12157249591/job/33902654440#step:11:807
```
29) Screenshots
ElementHandle.screenshot
should run in parallel with page.close():
ProtocolError: Protocol error (browsingContext.create): unknown error TypeError: webProgress.browsingContext.currentWindowGlobal is null _sendCommandToBrowsingContext@chrome://remote/content/shared/messagehandler/transports/RootTransport.sys.mjs:132:9
at Callback.<instance_members_initializer> (packages\puppeteer-core\src\common\CallbackRegistry.ts:125:12)
at new Callback (packages\puppeteer-core\src\common\CallbackRegistry.ts:130:3)
at CallbackRegistry.create (packages\puppeteer-core\src\common\CallbackRegistry.ts:28:22)
at BidiConnection.send (packages\puppeteer-core\src\bidi\Connection.ts:105:28)
at Session.send (packages\puppeteer-core\src\bidi\core\Session.ts:111:34)
at Session.<anonymous> (packages\puppeteer-core\src\util\decorators.ts:63:21)
at UserContext.createBrowsingContext (packages\puppeteer-core\src\bidi\core\UserContext.ts:146:29)
at UserContext.<anonymous> (packages\puppeteer-core\src\util\decorators.ts:63:21)
at BidiBrowserContext.newPage (packages\puppeteer-core\src\bidi\BrowserContext.ts:189:44)
at Context.<anonymous> (test\src\screenshot.spec.ts:441:21)
```
In Nightly, it is flaky in the same test but with a different error message:
```
ProtocolError: Protocol error (browsingContext.create): no such frame DiscardedBrowsingContextError: BrowsingContext does no longer exist RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
MessageHandlerError@chrome://remote/content/shared/messagehandler/Errors.sys.mjs:14:5
DiscardedBrowsingContextError@chrome://remote/content/shared/messagehandler/Errors.sys.mjs:76:5
waitForCurrentWindowGlobal@chrome://remote/content/shared/messagehandler/transports/BrowsingContextUtils.sys.mjs:132:11
at Callback.<instance_members_initializer> (packages\puppeteer-core\src\common\CallbackRegistry.ts:125:12)
at new Callback (packages\puppeteer-core\src\common\CallbackRegistry.ts:130:3)
at CallbackRegistry.create (packages\puppeteer-core\src\common\CallbackRegistry.ts:28:22)
at BidiConnection.send (packages\puppeteer-core\src\bidi\Connection.ts:105:28)
at Session.send (packages\puppeteer-core\src\bidi\core\Session.ts:111:34)
at Session.<anonymous> (packages\puppeteer-core\src\util\decorators.ts:63:21)
at UserContext.createBrowsingContext (packages\puppeteer-core\src\bidi\core\UserContext.ts:146:29)
at UserContext.<anonymous> (packages\puppeteer-core\src\util\decorators.ts:63:21)
at BidiBrowserContext.newPage (packages\puppeteer-core\src\bidi\BrowserContext.ts:189:44)
at Context.<anonymous> (test\src\screenshot.spec.ts:441:21)
``` | confirmed,P3,firefox | low | Critical |
2,717,279,307 | deno | `deno clean -e` / `deno clean --prod` | It would be great to add a mode to `deno clean` where it only cleans _some_ of the `DENO_DIR` / `node_modules` folder. I am imagining a `deno clean -e` that has the same behaviour as `deno clean && rm -rf node_modules && deno install -e`, and a `deno clean --prod` which has the same behaviour as `deno clean && rm -rf node_modules && deno install --prod`. Unlike the mentioned alternatives, this would not require re-downloading and untarring a bunch of packages however (this is slow!).
This would be very useful for Docker setups:
```Dockerfile
FROM denoland/deno
COPY . .
# install dependencies (including dev), to perform the build
RUN deno install
RUN deno task build
# remove all the dev dependencies
RUN deno clean -e main.ts # or RUN deno clean --prod
# warmup code cache
RUN timeout 2s deno run --cached-only -A main.ts || true
ENTRYPOINT ["deno", "run", "--cached-only", "-A", "main.ts"]
```
Right now this requires a multi-step build, which is really annoying because it adds significant overhead: dependencies need to be installed twice. | cli,suggestion,install | low | Major |
2,717,497,086 | storybook | [Investigation]: Support React 19 | ### Describe the bug
React 19 has been released. We should investigate all the changes that are needed to fully support it.
- Make sure that React 19 is supported in all packages that have it as peer dependency
- [ ] @storybook/ember
- [x] @storybook/addon-links
- [x] @storybook/blocks
- [x] @storybook/react
- [x] @storybook/react-vite
- [x] @storybook/react-webpack5
- [x] @storybook/react-dom-shim
- [x] @storybook/react-webpack
- [x] @storybook/experimental-nextjs-vite
- [x] @storybook/react-native-web-vite
- [x] @storybook/nextjs
- [x] @storybook/icons
- [x] Replace react-confetti (dep of @storybook/addon-onboarding) with another one that supports React 19 https://github.com/storybookjs/storybook/pull/30098
- [x] Update [usage of act](https://bsky.app/profile/phry.dev/post/3lchyxhe2ts2b) in places [like this](https://github.com/storybookjs/storybook/blob/ceb8387e24f9b401bd3fc693dbb0f5b220a389d0/code/renderers/react/src/portable-stories.tsx#L66) https://github.com/storybookjs/storybook/pull/30037
#### To users:
You should be able to use Storybook with React 19, but if you found any inconsistency or unsupported feature, please let us know in this issue. | bug,dependencies,react | high | Critical |
2,717,518,369 | svelte | transition:slide does not apply overflow:hidden in Safari (Svelte 5 only) | ### Describe the bug
After migrating to Svelte 5, all my "slide" transitions overflow their inner content in Safari.
https://github.com/user-attachments/assets/08dd4e87-b7cf-467b-a16d-aab52b540535
In the code for the slide transition, there is an 'overflow: hidden' css that's supposed to be applied (and it does in all browsers when using Svelte 4, and in all browsers except Safari when using Svelte 5).
https://github.com/sveltejs/svelte/blob/4c4f18b24c644f7e17cd9fea7fde777f3324e206/packages/svelte/src/transition/index.js#L126
I can finish my upgrade by adding `overflow-hidden` to all the places I use this transition at (and test them manually), but I feel like this should have just worked without any changes (and didn't want to risk new UI issues).
### Reproduction
https://svelte.dev/playground/2320dc5a6c8a4ef7aeba080f88b32208?version=5.5.3
### Logs
_No response_
### System Info
```shell
System:
OS: macOS 15.1.1
CPU: (8) arm64 Apple M1 Pro
Memory: 1.05 GB / 16.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.18.1 - ~/.nvm/versions/node/v20.18.1/bin/node
npm: 10.9.1 - ~/.nvm/versions/node/v20.18.1/bin/npm
pnpm: 9.14.4 - ~/Library/pnpm/pnpm
bun: 1.1.38 - /opt/homebrew/bin/bun
Watchman: 2024.12.02.00 - /opt/homebrew/bin/watchman
Browsers:
Chrome: 131.0.6778.86
Edge: 131.0.2903.70
Safari: 18.1.1
npmPackages:
svelte: 5.5.3 => 5.5.3
```
### Severity
annoyance | transition/animation | low | Critical |
2,717,529,274 | go | all: remote file server I/O flakiness with "Bad fid" errors on plan9 | ```
#!watchflakes
post <- builder ~ `plan9-` && `: Bad fid`
```
Intermittent `Bad fid` errors are observed especially on plan9-arm performing I/O to a remote file server. The most common context is demand-loading pages from a binary hosted on the LUCI swarming front-end (linux) machine:
```
2024-11-08 14:46 x_tools-go1.23-plan9-arm tools@8a0e08fb release-branch.go1.23@c390a1c2 x/tools/go/callgraph/rta.TestRTA/testdata/multipkgs.txtar ([log](https://ci.chromium.org/b/8731827708724902417))
=== RUN TestRTA/testdata/multipkgs.txtar
testfiles.go:141: err: exit status: 'go 65004: 1': stderr: go: error obtaining buildID for go tool asm: exit status: 'asm 65114: i/o error in demand load accessing /home/swarming/.swarming/w/ir/x/w/goroot/pkg/tool/plan9_arm/asm: Bad fid'
--- FAIL: TestRTA/testdata/multipkgs.txtar (3.30s)
```
But errors can occur in other contexts, for example reading an object file from the build cache (also on the LUCI front-end):
```
2024-11-19 18:04 go1.23-plan9-arm release-branch.go1.23@777f43ab cmd/link.TestCheckLinkname/badlinkname.go ([log](https://ci.chromium.org/b/8730806350947619521))
=== RUN TestCheckLinkname/badlinkname.go
=== PAUSE TestCheckLinkname/badlinkname.go
=== CONT TestCheckLinkname/badlinkname.go
link_test.go:1454: build failed unexpectedly: exit status: 'go 74201: 1':
# command-line-arguments
link: cannot read object file:read /home/swarming/.swarming/w/ir/x/w/gocache/69/6932a64211f8ea57b303c16f21c046f130e2ca4053d7f9f2fa8ca6a5a87ac3be-d: Bad fid
--- FAIL: TestCheckLinkname/badlinkname.go (17.96s)
```
The fault is almost certainly not a go problem, but is within the plan9 file server: in this case, the variant of `exportsrv` built in to the `drawterm` command.
| OS-Plan9,NeedsInvestigation | low | Critical |
2,717,660,035 | kubernetes | cpumanager:staticpolicy:smtalign: pod admission failed after kubelet restart | ### What happened?
When kubelet is configured with the `static` CPUManager policy + the `full-pcpus-only` option, the pod does not get admitted after a kubelet restart, and I'm getting the following error in the kubelet logs:
```
Dec 04 12:42:25 kubenswrapper[2410667]: I1204 12:42:25.173086 2410667 kubelet.go:2320] "Pod admission denied" podUID="4355ee04-54b0-4755-b23e-d05ce12e54c1" pod="my-app-namespace/app-deployment" reason="SMTAlignmentError" message="SMT Alignment Error: not enough free physical CPUs: available physical CPUs = 1, requested CPUs = 2, CPUs per core = 1"
```
After manually deleting and recreating the pod, it gets admitted, so the claim that there are not enough free physical CPUs is wrong.
Pod spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2024-12-04T12:50:20Z"
name: app-deployment
namespace: my-app-namespace
resourceVersion: "11109192"
uid: 16737fd6-45e1-4371-b10e-3eff1377c224
spec:
containers:
- args:
- while true; do sleep 10000; done;
command:
- /bin/sh
- -c
image: quay.io/jitesoft/alpine
imagePullPolicy: Always
name: app-container2
resources:
limits:
cpu: "2"
memory: 100Mi
requests:
cpu: "2"
memory: 100Mi
qosClass: Guaranteed
```
### What did you expect to happen?
After a kubelet restart the pod should be readmitted.
### How can we reproduce it (as minimally and precisely as possible)?
On a node with 4 CPUs:
`1` reserved (`reservedSystemCPUs`)
`3` allocatable
1. Configure CPUManager to `static` and `full-pcpus-only` option to `true`
2. Create a Guaranteed QoS class pod requesting 2 exclusive CPUs:
``` yaml
kind: Pod
...
spec:
containers:
...
resources:
limits:
cpu: "2"
memory: 100Mi
requests:
cpu: "2"
memory: 100Mi
```
3. wait for pod to start
4. restart kubelet
5. Pod will fail with SMTAlignment error
NOTE:
It's possible to reproduce the issue on a system with any number of CPUs.
A general formula would be:
On a node with `N` CPUs:
`1` reserved
`N-1` allocatable
Create a Guaranteed QoS class pod requesting `N / 2` exclusive CPUs:
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.29.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.2
WARNING: version difference between client (1.29) and server (1.31) exceeds the supported minor version skew of +/-1
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/node,triage/accepted | low | Critical |
2,717,707,718 | vscode | Rendered text has thinner lines and colors are off | Testing #234762
| Without Gpu accel | With Gpu accel |
|----------|----------|
| (screenshot) | (screenshot) |
My editor settings:
```
"editor.codeLensFontFamily": "\"Jetbrains Mono\"",
"editor.experimental.asyncTokenization": true,
"editor.fontFamily": "JetBrains Mono, Menlo, Monaco, 'Courier New', monospace",
"editor.fontLigatures": true,
"editor.fontSize": 16,
"editor.formatOnSave": true,
"editor.inlayHints.enabled": "offUnlessPressed",
"editor.inlayHints.fontSize": 14,
"editor.insertSpaces": false,
"editor.largeFileOptimizations": false,
"editor.lightbulb.enabled": "off",
"editor.renderWhitespace": "none",
"editor.smartSelect.selectSubwords": true,
"editor.tokenColorCustomizations": {
"[Default Light+]": {
"textMateRules": [
{
"scope": "entity.other.attribute-name",
"settings": {
"foreground": "#001080"
}
}
]
}
},
"editor.trimAutoWhitespace": false,
"editor.unicodeHighlight.ambiguousCharacters": false,
"editor.unicodeHighlight.invisibleCharacters": false,
"editor.unicodeHighlight.nonBasicASCII": false,
```
Version: 1.96.0-insider
Commit: 64e869f7ddf58d76097dc5ab7f36515044304b25
Date: 2024-12-04T05:03:54.771Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0 | bug,macos,editor-gpu | low | Minor |
2,717,768,878 | vscode | VSCode for GitHub: search results incomplete |
- VS Code Version: VSCode for GitHub
- OS Version: current GitHub version, Chrome version: 131.0.6778.87 (Official Build) (64-bit)
This issue is very similar to this one: https://github.com/microsoft/vscode/issues/116213
Steps to Reproduce:
1. Open a PR on GitHub.
2. Go to Files changed, press '.' on keyboard.
3. VSCode for GitHub is launched, open a changed file in this PR.
4. Open a changed file, select some entity that does appear in other files (function name, class, etc)
5. Press Ctrl-Shift-F, and search for said entity.
6. Only occurrences in the currently opened file are displayed.
7. Open some other file that mentions that entity.
8. Search again.
9. Now Search results show occurrences in two files, but not other ones, which do mention this entity.
10. Close one of the files, search again.
11. Now Search results show occurrences only in one file.
Expected Behavior: The search returns results for all instances of the string, regardless if the file is open or not.
| bug,info-needed,search | low | Critical |
2,717,813,983 | PowerToys | FancyZones causing issues when used with Firefox (the browser window decreases in size until it's just a small line when scrolling/moving mouse) | ### Microsoft PowerToys version
0.86.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
I'm using the latest version of Firefox (133.0 64-bit). Have created zones with fancyzones and it was working fine, but recently a strange issue is now consistently happening. When I pin a Firefox window in one of the zones, the right side of the window moves to the left and the bottom side moves up until there is basically nothing left. It looks like it's getting smaller on its own, but it's actually any interaction you have with the window (scrolling, clicking etc.). It keeps getting smaller until it's basically gone. While it's getting smaller, I can move the window, maximize it by moving it to the top of the screen, but it just keeps getting smaller again. The window is basically rendered useless when the issue starts.
### โ๏ธ Expected Behavior
Firefox window should just pin in the selected fancyzone and not get smaller when I'm scrolling/clicking inside the window.
### โ Actual Behavior
This isn't happening to any other application I'm using, including Chrome and Windows Explorer. This is also not happening when I use the built-in zone feature (hover on maximize button in a window and select a zone). It's only happening when using fancyzones. Sometimes it doesn't happen after pinning a window in a zone. When I do it multiple times, it does happen. The chance of it happening is about 30% based on my testing. It also doesn't matter which zone I'm using. When I move a Firefox tab outside of the window (creating a second/separate window), the new browser window is fine... until I snap it to a fancyzone.
Have searched quite a bit and can't find anything online where people are experiencing this and I don't think it's something I'm supposed to share as a bug with the Firefox developers. Important to note, the window isn't "resizing" as I can see that the minimize/maximize/close buttons are not moving to fit in the window. When the window gets smaller, the entire interface including the content is simply being cut off. Have closed all other applications to make sure something else isn't influencing this.
### Other Software
_No response_ | Issue-Bug,Product-FancyZones,Needs-Triage | low | Critical |
2,717,872,964 | neovim | Node selection behavior changed after recent Treesitter update in Neovim | This issue was originally reported on the nvim-treesitter repository. I was advised to open it here and attempt a bisect to identify the cause.
After a recent update to Neovim (commit 865ba42e0401043836bca567b4add164c5c46e6f), the behavior of Treesitter in R scripts has changed. Specifically, when placing the cursor at the beginning of an R script, a different node is now being selected compared to previous versions. This appears to be a regression introduced by the aforementioned commit.
Please let me know if you need additional details or steps to reproduce.
For this R code
```r
x <- 1L
y <- 2L
z <- 3L
```
The corresponding tree is
```
(program ; [0, 0] - [3, 0]
(binary_operator ; [0, 0] - [0, 7]
lhs: (identifier) ; [0, 0] - [0, 1]
rhs: (integer)) ; [0, 5] - [0, 7]
(binary_operator ; [1, 0] - [1, 7]
lhs: (identifier) ; [1, 0] - [1, 1]
rhs: (integer)) ; [1, 5] - [1, 7]
(binary_operator ; [2, 0] - [2, 7]
lhs: (identifier) ; [2, 0] - [2, 1]
rhs: (integer))) ; [2, 5] - [2, 7]
```
with nvim 0.11.0 (this commit https://github.com/neovim/neovim/commit/3d1e6c56f08f420c0d91ffbee888c05b20806a5e), when the cursor is placed on `x`, the node on line 3 of the tree is selected

However, on nvim 0.11.0 (this commit https://github.com/neovim/neovim/commit/865ba42e0401043836bca567b4add164c5c46e6f) the selected node is `program`

### Steps to reproduce
1. I am using bob (https://github.com/MordechaiHadad/bob) to switch between the two commits:
```bash
bob use 865ba42e0401043836bca567b4add164c5c46e6f
bob use 3d1e6c56f08f420c0d91ffbee888c05b20806a5e
```
2. Create an R file with this code *at the top of the file*:
```r
x <- 1L
y <- 2L
z <- 3L
```
3. Call `:InspectTree`.
4. Place the cursor on `x` (first line of the R code).
5. Compare which node is selected between 3d1e6c56f08f420c0d91ffbee888c05b20806a5e and 865ba42e0401043836bca567b4add164c5c46e6f.
### Expected behavior
Having the same node selected between the two commits.
### Nvim version (nvim -v)
NVIM v0.11.0-dev-865ba42e
### Vim (not Nvim) behaves the same?
I have not tried on vim.
### Operating system/version
Linux Mint 22
### Terminal name/version
kitty 0.37.0 created by Kovid Goyal
### $TERM environment variable
xterm-kitty
### Installation
bob installer | status:blocked-external,bug-regression,has:bisected,treesitter | low | Minor |
2,717,877,771 | PowerToys | black empty screen while using fancyzones | ### Microsoft PowerToys version
0.86.0
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
1. enter fancy zones editor
2. customize a new one (3 zones)
3. enable it and use it for a while (about 5 minutes)
4. hold and press Ctrl + Windows + Shift + B
And then the black screen appeared. I tried everything; nothing worked but hitting the power button (to turn my laptop off). The screen kept being black and empty, and it only displayed something when I turned my laptop on!
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Product-FancyZones,Needs-Triage | low | Minor |
2,717,896,964 | flutter | Allow setting render resolution / DPI | ### Use case
In games, it is common to limit the resolution of the output as a tradeoff between quality and performance. Games need to run smooth even when the user has a 5K screen, so they simply render to a lower-than-native resolution.
I think this might get more and more relevant to Flutter because of the following trends:
1. Steadily increasing resolutions of screens[^1]
3. Higher and higher table-stakes for complexity of UI (e.g. todays UIs have a lot more frosted glass and always-on animations than 5 or 10 years ago)
4. Gaming itself being a promising target for Flutter (where screens are generally even more complex than in apps)
5. Flutter now targeting desktop (incl. desktop web) where a screen can easily have 8-14 megapixels
6. Moore's law slowing down[^2] or even reaching its limits (especially through memory wall)
[^1]: https://gs.statcounter.com/screen-resolution-stats/mobile/worldwide/#monthly-200903-202412
[^2]: https://www.zdnet.com/article/as-moores-law-slows-high-end-applications-may-feel-the-effect-mit-scientist-warns/
### Proposal
It would be great to have the option of switching the app (or game) to a specified resolution or limiting its DPI. A common gaming approach is to only allow this after restart (i.e. user changes resolution --> game asks to be restarted for changes to take effect). This is not ideal but better than nothing.
This is one of the things that should be _possible_ but don't need to be _easy_. The vast majority of Flutter developers won't need this. So, for example, I can imagine the embedder selecting a resolution from an environment variable and, if not present, doing its usual thing (i.e. using the highest possible DPI).
| c: new feature,engine,c: proposal,P3,team-engine,triaged-engine | low | Major |
2,717,899,625 | langchain | ContextualCompressionRetriever._get_relevant_documents() returns a list of _DocumentWithState instead of a list of Document | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_openai import AzureOpenAIEmbeddings
from langchain_chroma import Chroma
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.schema.document import Document
from langchain.storage.file_system import LocalFileStore
from langchain_community.document_transformers.embeddings_redundant_filter import EmbeddingsRedundantFilter
from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
from langchain.retrievers.document_compressors.base import DocumentCompressorPipeline
from langchain.storage import create_kv_docstore
from uuid import uuid4
embedder = AzureOpenAIEmbeddings(model='text-embedding-3-large')
vectorstore = Chroma(collection_name="docs",
                     embedding_function=embedder,
                     persist_directory="data/vector_db/")
retriever = MultiVectorRetriever(vectorstore=vectorstore,
                                 docstore=create_kv_docstore(LocalFileStore("data/retriever_data/")),
                                 id_key='doc_id')
compression_retriever = ContextualCompressionRetriever(
    base_compressor=DocumentCompressorPipeline(transformers=[
        EmbeddingsRedundantFilter(embeddings=embedder,
                                  similarity_threshold=0.999)
    ]),
    base_retriever=retriever)
documents = '''list of documents to embed and store in the vectorstore'''
doc_ids = [str(uuid4()) for _ in documents]
docs = [
Document(page_content=s, metadata={'doc_id': doc_ids[i]})
for i, s in enumerate(documents)
]
retriever.base_retriever.vectorstore.add_documents(docs)
retrieved_docs = compression_retriever.invoke('''query''')
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
According to LangChain's documentation, `retrieved_docs` should be a list of `Document` objects.
But it happens to be a list of `_DocumentWithState` objects, which is similar but includes the embedded representations of the documents.
In my case, this is a problem because the embedded vectors are big, and passing them to an LLM in the generation phase of a RAG application is not ideal.
The problem originates in the `EmbeddingsRedundantFilter.transform_documents()` method, which returns:
`return [stateful_documents[i] for i in sorted(included_idxs)]`
which are then forwarded to the retriever output.
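Until the filter strips its internal state, a minimal workaround (reusing `retrieved_docs` from the example above) is to rebuild plain `Document` objects before the generation step, so the prompts don't carry the embedding vectors:
```python
from langchain.schema.document import Document

# _DocumentWithState subclasses Document, so page_content and metadata are available
plain_docs = [
    Document(page_content=doc.page_content, metadata=doc.metadata)
    for doc in retrieved_docs
]
```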
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:50:58) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.129
> langchain_chroma: 0.1.4
> langchain_huggingface: 0.1.0
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: 4.0.3
> chromadb: 0.5.13
> dataclasses-json: 0.6.7
> fastapi: 0.115.2
> httpx: 0.27.0
> huggingface-hub: 0.25.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.51.2
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.1
> requests: 2.32.3
> sentence-transformers: 3.2.0
> SQLAlchemy: 2.0.34
> tenacity: 8.2.3
> tiktoken: 0.8.0
> tokenizers: 0.20.1
> transformers: 4.45.2
> typing-extensions: 4.12.2 | ๐ค:bug | low | Critical |
2,717,936,051 | flutter | Allow limiting FPS | ### Use case
I have noticed that Unity automatically limits games to 30 FPS on mobile devices.[^5] The developer can override this but it's a pretty well-hidden setting and I think most don't.
This is a trade off I can completely understand since 30 FPS is probably more than enough for most mobile games, and it saves battery and reduces jank (itโs much harder to miss a 33ms frame than it is to miss a 16ms frame).
There is no such possibility in Flutter. Flutter tries to render at the native framerate. (Not sure what the story with 120Hz screens is, to be honest, but I'd expect Flutter to try to match the refresh rate of the screen.)
This problem is related to https://github.com/flutter/flutter/issues/159796 (which deals with limiting resolution/DPI rather than framerate). To pull a relevant quote from that issue:
> I think this might get more and more relevant to Flutter because of the following trends:
>
> 1. Steadily increasing resolutions of screens[^1]
> 3. Higher and higher table-stakes for complexity of UI (e.g. todays UIs have a lot more frosted glass and always-on animations than 5 or 10 years ago)
> 4. Gaming itself being a promising target for Flutter (where screens are generally even more complex than in apps)
> 5. Flutter now targeting desktop (incl. desktop web) where a screen can easily have 8-14 megapixels
> 6. Moore's law slowing down[^2] or even reaching its limits (especially through memory wall)
[^1]: https://gs.statcounter.com/screen-resolution-stats/mobile/worldwide/#monthly-200903-202412
[^2]: https://www.zdnet.com/article/as-moores-law-slows-high-end-applications-may-feel-the-effect-mit-scientist-warns/
[^5]: https://filiph.net/text/benchmarking-flutter-flame-unity-godot.html
### Proposal
To enable more complex apps, and _especially_ games, to render without jank, I suggest there's a way to tell the embedder to render at a framerate lower than the native one.
I realize this seems like a crutch and in a perfect world there would be no need for such a setting but again, Unity (currently the most successful game engine for mobile) is doing 30 FPS _by default_.
// cc @bdero | c: new feature,engine,c: proposal,P3,team-engine,triaged-engine | low | Major |
2,717,944,027 | angular | AOT builds with TypeScript 5.6 or later are slower in Angular 19 | ### Command
serve
### Is this a regression?
- [x] Yes, this behavior used to work in the previous version
### The previous version in which this bug was not present was
18.0.1
### Description
After upgrading my project from Angular 18 to Angular 19 using ng update, I've noticed a significant performance degradation with ng serve. Previously, rebuilds were much faster, but now they take up to 10 seconds or more.


### Minimal Reproduction
https://github.com/Oussemasahbeni/saha-meter-frontend
I made the repo public until the issue is resolved; it's a closed-source project for my company
Please check the clean-up branch
### Exception or Error
```text
```
### Your Environment
```text
Angular CLI: 19.0.2
Node: 22.11.0
Package Manager: npm 10.9.0
OS: win32 x64
Angular: 19.0.1
... animations, cdk, common, compiler, compiler-cli, core, forms
... material, material-luxon-adapter, platform-browser
... platform-browser-dynamic, router, service-worker
... youtube-player
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.2
@angular-devkit/build-angular 19.0.2
@angular-devkit/core 19.0.2
@angular-devkit/schematics 19.0.2
@angular/build 19.0.2
@angular/cli 19.0.2
@schematics/angular 19.0.2
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else relevant?
I managed to make it a little bit faster by disabling aot in development
```
"development": {
"optimization": false,
"extractLicenses": false,
"aot":false,
"sourceMap": true
}
```
but a lot of people told me that this is not a solution but a workaround, and that it can hurt the project in the long term | area: performance,regression,area: compiler | medium | Critical |
2,717,950,370 | vscode | typescript and javascript language features extension loading forever |
Type: <b>Bug</b>
This is happening a lot lately. After working for a while on my TypeScript Next.js project, Intellisense is taking forever to do whatever. "Loading Intellisense Status" is not ending.
VS Code version: Code 1.95.3 (Universal) (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Darwin arm64 24.1.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M4 (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|5, 4, 3|
|Memory (System)|32.00GB (0.04GB free)|
|Process Argv|. --crash-reporter-id 35286e07-1ff0-4f8f-8d92-418d938ced52|
|Screen Reader|yes|
|VM|0%|
</details><details><summary>Extensions (24)</summary>
Extension|Author (truncated)|Version
---|---|---
typescript-language-features|vsc|1.88.1
codewhisperer-for-command-line-companion|ama|1.5.0
vim-cheatsheet|And|0.0.1
spellright|ban|3.0.140
vscode-tailwindcss|bra|0.12.15
bracket-select|chu|2.0.2
vscode-markdownlint|Dav|0.57.0
vscode-eslint|dba|3.0.10
javascript-ejs-support|Dig|1.3.3
es7-react-js-snippets|dsz|4.4.3
gitlens|eam|16.0.4
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
auto-rename-tag|for|0.1.10
copilot|Git|1.246.0
copilot-chat|Git|0.22.4
prettier-sql-vscode|inf|1.6.0
workspace-cacheclean|Mam|0.0.2
remote-containers|ms-|0.388.0
prisma|Pri|6.0.0
vscode-icons|vsc|12.9.0
vscode-todo-highlight|way|1.0.5
pretty-ts-errors|Yoa|0.6.1
markdown-all-in-one|yzh|3.6.2
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
pythonvspyt551cf:31179979
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,717,986,606 | pytorch | Device check missing in torch.linalg.solve_triangular leading to hard crash | ### ๐ Describe the bug
It seems there is a device check missing in `torch.linalg.solve_triangular`.
When I run
```
import torch
sq_shape = (3, 3)
device = torch.device('mps:0')
A = torch.normal(torch.zeros(sq_shape), torch.ones(sq_shape)).to(device=device)
eye = torch.eye(A.shape[0], device=torch.device('cpu'))
torch.linalg.solve_triangular(A, eye, upper=True)
```
it crashes with the following error.
```
Process finished with exit code 139 (interrupted by signal 11:SIGSEGV)
```
Running all on `cpu` or `mps` works fine.
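Until the missing device check raises a proper Python-level error, the crash can be sidestepped by keeping both operands on the same device (a minimal adaptation of the repro above):
```python
import torch

sq_shape = (3, 3)
device = torch.device('mps:0')
A = torch.normal(torch.zeros(sq_shape), torch.ones(sq_shape)).to(device=device)

# Creating the right-hand side on the same device as A avoids the hard crash
eye = torch.eye(A.shape[0], device=device)
torch.linalg.solve_triangular(A, eye, upper=True)
```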
### Versions
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.6 (main, Sep 6 2024, 19:03:47) [Clang 15.0.0 (clang-1500.3.9.4)] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.1
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.20.1
[conda] libtorch 2.4.0 cpu_generic_hf1facdc_0 conda-forge
[conda] nomkl 1.0 h5ca1d4c_0 conda-forge
[conda] numpy 1.26.4 py310hd45542a_0 conda-forge
[conda] pytorch 2.4.0 cpu_generic_py310hb190f2a_0 conda-forge
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | module: crash,good first issue,triaged,module: linear algebra | low | Critical |
2,718,003,344 | vscode | Paste mode button looks different to other inline buttons | The blue background is very heavy and it's white on blue compared to the normal buttons that show up (yellow/blue foreground lightbulbs on transparent)

It also features a shadow which the light bulbs don't:


| ux,polish,under-discussion | low | Minor |
2,718,009,665 | tensorflow | TPU does not support TensorFlow 2.18 and 2.17.1 | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.18 and tf. 2.17.1
### Custom code
No
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
`import tensorflow as tf` results in `segmentation fault (core dumped)`
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:bug,comp:tpus,TF 2.18 | low | Critical |
2,718,056,857 | tensorflow | Are checkpoints broken in >= 2.16? | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.16, 2.17
### Custom code
No
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The example given in https://www.tensorflow.org/guide/checkpoint does not seem to work as expected in 2.16 and 2.17, while working fine in 2.15. After restoring and restarting the training process, it starts training from the very beginning.
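For reference, the pattern from that guide which appears to regress looks roughly like this (a minimal sketch, not the exact contents of the Colab below): on a second run it should restore and continue, but per this report training starts from the very beginning on 2.16/2.17.
```python
import tensorflow as tf

net = tf.keras.layers.Dense(1)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), net=net)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)

ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
    print(f"Restored from {manager.latest_checkpoint}")  # expected on a re-run
else:
    print("Initializing from scratch.")

ckpt.step.assign_add(1)  # stand-in for real training progress
manager.save()
```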
### Standalone code to reproduce the issue
```shell
https://colab.research.google.com/drive/1n76Mu5BhdBJBSXc7cXYJMr0lMDER2JRa?usp=sharing
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:bug,comp:apis,2.17 | medium | Critical |
2,718,098,355 | angular | The `provideAnimations()` return type is `Provider` instead of `EnvironmentProviders` | ### Which @angular/* package(s) are the source of the bug?
platform-browser
### Is this a regression?
No
### Description
The `provideAnimations()` return type is `Provider`, which makes it possible to add it to the `providers: []` of a standalone component. That is never correct, as the multiple provisions could lead to multiple instances of the created injectables, which leads to random errors at runtime.
For example we had a case where `@if` was adding but never removing the created elements...
The `provideAnimationsAsync()` already returns the correct `EnvironmentProviders` type, which prevents this misuse
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.0.2
Node: 22.11.0
Package Manager: npm 10.9.0
OS: linux x64
Angular: 19.0.1
... animations, cdk, common, compiler, compiler-cli, core, forms
... material, platform-browser, platform-browser-dynamic, router
Package Version
------------------------------------------------------
@angular-devkit/architect 0.1900.2
@angular-devkit/core 19.0.2
@angular-devkit/schematics 19.0.2
@angular/build 19.0.2
@angular/cli 19.0.2
@schematics/angular 19.0.2
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
_No response_ | area: animations,state: has PR,P3,bug | low | Critical |
2,718,135,510 | vscode | Support an executable based recommendation by executable prefix name | MongoDB compass does not seem to be recommended for me on my machine.
Code pointer https://github.com/microsoft/vscode-distro/blob/main/mixin/insider/product.json#L875
MacOS
1. Install https://www.mongodb.com/products/tools/compass
2. Open VS Code Insiders
3. No recommendation
| feature-request,extensions,extension-recommendations | low | Major |
2,718,208,153 | tensorflow | Potential Remote Code Execution (RCE) Vulnerability in Custom Layers Handling | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
Kali Linux 2024.1
### Python version
Python 3.11.9
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
CUDA: 11.2
### GPU model and memory
NVIDIA GTX 1080
### Current behavior?
Currently, TensorFlow (primarily through Keras) allows users to create custom layers that can contain malicious code. When a model containing these custom layers is loaded or run, the malicious code in the layer can be executed without any restrictions or filtering. This opens up potential exploits because the executed code could potentially damage the system or steal sensitive data. For example, in this report, the saved model contains a MaliciousLayer layer that calls the os.system() system command to execute malicious code when the model is loaded, which could result in corruption or unauthorized data acquisition.
I hope TensorFlow can provide protection that limits or filters the execution of malicious code in registered custom layers. For example, by blocking malicious functions such as os.system() or prohibiting the use of code that can execute external commands when the model is loaded. In addition, the use of custom layers that contain unwanted external code should be protected by restrictions or sandboxing. This will ensure that the loaded model cannot inject malicious code into the user's system, thereby increasing the security of using TensorFlow models.
The main issue here is that TensorFlow (in this case Keras) allows users to define custom layers that can execute malicious code when the model is loaded and used. This is a potential security hole that can be exploited if malicious code is injected into the model.
The challenge is the ability to include custom layers that execute malicious code. TensorFlow/Keras does not filter or restrict the use of malicious code in registered custom layers, which allows exploits like this.
So, the vulnerability is in the implementation of custom layers in TensorFlow/Keras
Who can exploit the vulnerability?
An attacker or malicious individual who gains access to a TensorFlow model saved in the .keras format and has knowledge of how to create malicious custom layers can exploit this vulnerability. Typically, this would involve actors with an understanding of TensorFlow programming or machine learning who can inject harmful code into the model to be executed when the model is loaded.
What do they gain from it?
An attacker who successfully exploits this vulnerability can execute malicious code on the victim's system when the model is loaded. This allows them to steal sensitive personal data, such as login credentials, financial data, or other private information. Additionally, they can leverage remote code execution to gain control of the system or cause further damage.
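Independent of any change in TensorFlow itself, one mitigation available to users today is to inspect a `.keras` archive before loading it: the format is a zip file whose `config.json` lists the classes the model will try to instantiate. A minimal sketch follows; the exact JSON keys may differ between Keras versions, so treat the lookup path as an assumption.
```python
import json
import zipfile

# The .keras format is a zip archive containing config.json, which describes the model
with zipfile.ZipFile('kontol13.keras') as archive:
    config = json.loads(archive.read('config.json'))

# For a Sequential model the layer list is expected under config['config']['layers']
layers = config.get('config', {}).get('layers', [])
print([layer.get('class_name') for layer in layers])
# A 'MaliciousLayer' entry here would be a red flag before any code runs
```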
### Standalone code to reproduce the issue
1. Create a Python file, for example `kontol13.py`, containing the following code to create a model with a malicious layer (`MaliciousLayer`):
```python
import tensorflow as tf
from tensorflow.keras.layers import Layer
import os

# Defines a custom layer containing malicious code
@tf.keras.utils.register_keras_serializable()
class MaliciousLayer(Layer):
    def __init__(self, **kwargs):
        super(MaliciousLayer, self).__init__(**kwargs)

    def call(self, inputs):
        # Code that is executed when the model runs
        os.system('echo "Malicious Code Executed!"')
        return inputs

# Creating a model with dangerous layers
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(10,)),
    tf.keras.layers.Dense(10, activation='relu'),
    MaliciousLayer(),  # Malicious layer
    tf.keras.layers.Dense(1)
])

# Save the model in Keras format (.keras)
model.save('kontol13.keras')
print("Malicious model created and saved as 'kontol13.keras'.")
```
2. Run the above script to create and save the model containing the malicious layer.
I also saved the result in Google Colab:
https://colab.research.google.com/drive/1IwpwNGOeTPUgYu4Y4bAl7WufOXBvOQy9?authuser=1#scrollTo=EYbfvtj6O6wU&line=1&uniqifier=1
### Relevant log output
I ran this code on Kali Linux; this is the output:
```shell
2024-12-04 17:45:53.327622: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() are called are written to STDERR
E0000 00:00:1733309153.640400 154 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733309153.730398 154 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-04 17:45:54.592724: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
/home/funsociety/.local/lib/python3.11/site-packages/keras/src/layers/core/input_layer.py:27: UserWarning: Argument `input_shape` is deprecated. Use `shape` instead.
warnings.warn(
2024-12-04 17:46:03.552814: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
Malicious Code Executed!
```
| type:others,TF 2.18 | medium | Critical |
2,718,235,085 | flutter | `Linux_pixel_7pro new_gallery_opengles_impeller__transition_perf` is timing out sometimes | Examples:
- https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_pixel_7pro%20new_gallery_opengles_impeller__transition_perf/5379/overview
- https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_pixel_7pro%20new_gallery_opengles_impeller__transition_perf/5288/overview
spawned from https://github.com/flutter/flutter/pull/156073 | P2,c: flake,team-engine,triaged-engine | low | Minor |
2,718,311,494 | vscode | Markdown Characters Should Be Skipped When Reading Aloud |
Type: <b>Bug</b>
**Title:** Markdown Characters Should Be Skipped When Reading Aloud
**Description:**
When reading back Markdown content, the reader currently vocalizes Markdown syntax characters such as asterisks (`*`) used for bold or italic text. This behavior detracts from the meaning and readability of the content. The reader should skip over Markdown characters and only vocalize the actual text content.
**Steps to Reproduce:**
1. Create a Markdown file with the following content:
```markdown
**Bold Text**
*Italic Text*
```
2. Use the text-to-speech feature to read the content aloud.
**Expected Behavior:**
The reader should vocalize:
- "Bold Text"
- "Italic Text"
**Actual Behavior:**
The reader vocalizes:
- "Asterisk asterisk Bold Text asterisk asterisk"
- "Asterisk Italic Text asterisk"
**Environment:**
- Application: Github Copilot Chat
- Version: [Version Number]
- Operating System: Windows 11
**Additional Information:**
Skipping Markdown characters will improve the clarity and comprehension of the read-aloud content.
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i9-12900K (24 x 3187)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.73GB (9.13GB free)|
|Process Argv|C:\\Users\\MPhil\\AppData\\Roaming\\talon\\user\\mystuff\\talon_my_stuff\\apps\\talon_my_stuff.code-workspace --crash-reporter-id 20d139f4-04aa-49d2-afa0-4b3653a32416|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (40)</summary>
Extension|Author (truncated)|Version
---|---|---
numbered-bookmarks|ale|8.5.0
andreas-talon|And|3.54.0
github-markdown-preview|bie|0.3.0
markdown-checkbox|bie|0.4.0
markdown-emoji|bie|0.3.0
markdown-footnotes|bie|0.1.1
markdown-mermaid|bie|1.27.0
markdown-preview-github-styles|bie|2.1.0
markdown-yaml-preamble|bie|0.1.0
vscode-eslint|dba|3.0.10
gitlens|eam|16.0.4
vscode-firefox-debug|fir|2.9.11
codespaces|Git|1.17.3
copilot|Git|1.246.1240
copilot-chat|Git|0.22.4
vscode-github-actions|git|0.27.0
vscode-pull-request-github|Git|0.100.3
vscode-peacock|joh|4.2.2
rainbow-csv|mec|3.13.0
vscode-talonscript|mro|0.3.22
debugpy|ms-|2024.12.0
python|ms-|2024.20.0
vscode-pylance|ms-|2024.11.3
remote-ssh-edit|ms-|0.87.0
extension-test-runner|ms-|0.0.12
vscode-speech|ms-|0.12.1
vscode-speech-language-pack-en-gb|ms-|0.5.0
talon-filetree|Pau|0.6.9
material-icon-theme|PKi|5.14.1
command-server|pok|0.10.1
cursorless|pok|0.29.1301
parse-tree|pok|0.32.0
semantic-movement|pok|0.3.0
talon|pok|0.2.0
vscode-xml|red|0.27.2
todotasks|san|0.5.0
vscode-fileutils|sle|3.10.3
errorlens|use|3.20.0
talonfmt-vscode|wen|0.1.1
markdown-all-in-one|yzh|3.6.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | feature-request,accessibility,under-discussion,workbench-voice | low | Critical |
2,718,335,769 | vscode | Add to `vscode.window.createWebviewPanel` an option to open detached (new `ViewColumn`) |
Hi All!
I tried and followed many suggestions here to simulate the feature of copying a tab into a new window (`ctrl+k o`) by opening the tab in a detached window, as shown in the gif below

What I mean here is to start the tab in a completely detached way, rather than with the 'Beside' option.

I tried to execute commands such as `vscode.newWindow` and `vscode.openFolder` without specifying a folder path, as suggested in @bpasero's [suggestion](https://github.com/microsoft/vscode/issues/18034#issuecomment-270150728), but it doesn't make sense.
If there is any other suggestion, I'll be glad to hear it.
| feature-request,api,webview,workbench-auxwindow | low | Minor |
2,718,354,308 | vscode | `shellIntegration.ps1` adds additional errors to `$Error` variable when errors occur | ## Env info
- VS Code Version: 1.95.3
- OS Version: Windows 11 24H2 x64
- PowerShell version: v7.4.6 x64
- Extension: PowerShell v2024.0
## Reproduce
In a PowerShell Extension terminal, do:
```pwsh
# Clear error
$Error.Clear()
# Invoke request known to fail
$null = Invoke-RestMethod -Method 'Post' -Uri 'https://graph.microsoft.com' -Body @{'test' = [string]'test'}
# Count errors, should be 1 but is 3
$Error.Count
```
I then found out by looking at `$Error[0]` that one of the errors was added by VS Code's `shellIntegration.ps1`.
```pwsh
PS > $Error[0].InvocationInfo
MyCommand : Select-Object
BoundParameters : {}
UnboundArguments : {}
ScriptLineNumber : 218
OffsetInLine : 64
HistoryId : 55
ScriptName :
Line : $global:Error[0] | Where-Object { $_ -ne $null } | Select-Object -ExpandProperty InvocationInfo
Statement : Select-Object -ExpandProperty InvocationInfo
PositionMessage : At line:218 char:64
+ โฆ bject { $_ -ne $null } | Select-Object -ExpandProperty InvocationInfo
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PSScriptRoot :
PSCommandPath :
InvocationName : Select-Object
PipelineLength : 0
PipelinePosition : 0
ExpectingInput : False
CommandOrigin : Internal
DisplayScriptPosition :
PS > $Error[0].ScriptStackTrace
at Update-PoshErrorCode, <No file>: line 218
at <ScriptBlock>, <No file>: line 272
at Global:Prompt, C:\Program Files\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1: line 96
PS >
```
I could find `shellIntegration.ps1`, but I can't find that invocation line in it.
* <https://github.com/microsoft/vscode/blob/main/src/vs/workbench/contrib/terminal/common/scripts/shellIntegration.ps1>
Doing the same in "vanilla" `pwsh -NoProfile` only adds one error to the `$Error` variable:
```pwsh
PowerShell 7.4.6
PS > $null = Invoke-RestMethod -Method 'Post' -Uri 'https://graph.microsoft.com' -Body @{'test' = [string]'test'}
Invoke-RestMethod: Response status code does not indicate success: 405 (Method Not Allowed).
PS > $Error.Count
1
PS >
``` | bug,confirmation-pending,terminal-shell-pwsh | low | Critical |
2,718,374,786 | terminal | Use Bold To Mean Bold Text Where It Is Currently Ignored | ### Description of the new feature
Currently, the presence of the bold tag (\e[1m) is used to indicate that one of the original 8 basic colors (30-37) ought to be "bold". The meaning of bold is configurable, but the default is that these colors are promoted to their bright variants. If the color is not 30-37, if it is 256-color, if it is a true-color, the bold tag is ignored, unless the terminal has been configured to use bold text for the bold tag.
This decision was made because this is the original behavior (and is still broadly in use) of the bold tag for the 8 basic colors.
However, under this scheme, using the bold tag in conjunction with a 256-color (for instance \e[0;1;38;5;1) will cause the bold tag to be ignored entirely unless the terminal has been specifically configured otherwise. It is far more common when these tags are used together to use bold text, even in cases where a terminal interprets bold as bright. There's also no downside to interpreting it this way, since there is no expectation that the bold tag will change the color of text being styled with 256-colors or true-colors.
The one downside is what to do when text transitions away from text that was using bold-is-bright into a 256-color. I would argue that there is no real problem with either maintaining the bold styling even in this case, or in not doing so. I assume the former is easier.
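A quick way to check how a given terminal currently handles these combinations is to print them directly; a small Python sketch, where `\x1b` is the ESC byte written as `\e` above:
```python
ESC = "\x1b"
samples = [
    ("bold + basic red (SGR 31)", f"{ESC}[0;1;31m"),
    ("bold + 256-color dark red (38;5;1)", f"{ESC}[0;1;38;5;1m"),
    ("bold + 256-color bright red (38;5;9)", f"{ESC}[0;1;38;5;9m"),
]
for label, prefix in samples:
    # Reset attributes after each line so samples don't bleed into each other
    print(f"{prefix}{label}{ESC}[0m")
```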
### Proposed technical implementation details
When the bold tag is active and the text is colored using 256 or truecolor, display bold text.
For instance, the following uses the terminal settings (which by default would make the text bright red):
\e[0;1;31mI am bright red!\e[0m
But the following would display dark red in bold text:
\e[0;1;38;5;1mI am bold dark red!\e[0m
And the following would display bright red in bold text:
\e[0;1;38;5;9mI am bold bright red!\e[0m
The following would display the first portion in bright red and the second portion in bold dark red:
\e[0;1;31mI am bright red! \e[38;5;1mI am bold dark red!\e[0m | Issue-Feature,Area-Rendering,Area-VT,Product-Terminal | low | Major |
2,718,392,132 | vscode | Reduce space taken up by cached data | I have a macbook with only 256GB SSD as I am not willing to pay the insane upgrade prices of apple
Thus, 700MB here and there do add up and I want to reduce the disk space taken up by software so I can store more photos etc.
Is it really necessary to keep these VSIX cached after they have been installed?
Can their size be minified somehow?

| debt | low | Minor |
2,718,412,930 | pytorch | [Tensor Parallel] Conv2d with replicate inputs and weights raise error in backward | ### ๐ Describe the bug
Conv2d with replicated DTensor inputs and weights raises an error in backward.
Error Msg:
```
[rank2]: Traceback (most recent call last):
[rank2]: File "/path/to/tp_conv_bug.py", line 24, in <module>
[rank2]: res_l.backward(dres)
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/_tensor.py", line 626, in backward
[rank2]: torch.autograd.backward(
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
[rank2]: _engine_run_backward(
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
[rank2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/_compile.py", line 32, in inner
[rank2]: return disable_fn(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 744, in _fn
[rank2]: return fn(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/distributed/tensor/_api.py", line 343, in __torch_dispatch__
[rank2]: return DTensor._op_dispatcher.dispatch(
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/distributed/tensor/_dispatch.py", line 163, in dispatch
[rank2]: return self._custom_op_handlers[op_call](op_call, args, kwargs) # type: ignore[operator]
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/distributed/tensor/_tp_conv.py", line 267, in convolution_backward_handler
[rank2]: dtensor.DTensor._op_dispatcher.sharding_propagator.propagate(op_info)
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/distributed/tensor/_sharding_prop.py", line 206, in propagate
[rank2]: OutputSharding, self.propagate_op_sharding(op_info.schema)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/distributed/tensor/_sharding_prop.py", line 46, in __call__
[rank2]: return self.cache(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/distributed/tensor/_sharding_prop.py", line 449, in propagate_op_sharding_non_cached
[rank2]: self._wrap_output_spec_tensor_meta(
[rank2]: File "/opt/miniconda/envs/torchdev/lib/python3.11/site-packages/torch/distributed/tensor/_sharding_prop.py", line 192, in _wrap_output_spec_tensor_meta
[rank2]: raise ValueError(
[rank2]: ValueError: ShardingPropagator error: output 0 does not have an associated TensorMeta
```
Code:
```
import os
import torch
import torch.nn as nn
from torch.nn import functional as F
from torch.distributed.tensor import Shard, DTensor, Replicate
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
_world_size = int(os.environ["WORLD_SIZE"])
device_mesh = init_device_mesh(device_type="cuda", mesh_shape=(_world_size,))
conv = nn.Conv2d(64, 64, 3, padding=1).train()
x = torch.randn(1, 64, 32, 32)
x_dt = DTensor.from_local(x, device_mesh, [Replicate()])
w = conv.weight.data
w_dt = torch.nn.Parameter(DTensor.from_local(w, device_mesh, [Replicate()]))
b = conv.bias.data
b_dt = torch.nn.Parameter(DTensor.from_local(b, device_mesh, [Replicate()]))
res = F.conv2d(x_dt, w_dt, b_dt, padding=1)
res_l = res.to_local()
dres = torch.rand_like(res_l)
res_l.backward(dres)
dist.barrier()
dist.destroy_process_group()
```
### Versions
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0.dev20241202+cu124
[pip3] torchaudio==2.5.0.dev20241202+cu124
[pip3] torchvision==0.20.0.dev20241202+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0.dev20241202+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241202+cu124 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241202+cu124 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu | oncall: distributed,triaged,module: dtensor | low | Critical |
2,718,425,298 | rust | Tracking Issue for rustc_contracts | This is a tracking issue for the experimental contracts feature [compiler-team MCP #759](https://github.com/rust-lang/compiler-team/issues/759)
The feature gate for the issue is `#![feature(rustc_contracts)]` (external user interface).
There is also a feature gate, `#![feature(rustc_contracts_internals)]`, which covers implementation details not intended for end-user exposure.
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
<!--
Include each step required to complete the feature. Typically this is a PR
implementing a feature, followed by a PR that stabilises the feature. However
for larger features an implementation could be broken up into multiple PRs.
-->
- [ ] Implement the MCP; initial implementation https://github.com/rust-lang/rust/pull/128045
- [ ] Add interface for external tools to retrieve the contract specification.
- [ ] Add type invariant.
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Formatting for new syntax has been added to the [Style Guide] ([nightly-style-procedure])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[nightly-style-procedure]: https://github.com/rust-lang/style-team/blob/main/nightly-style-procedure.md
[Style Guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
* **Tooling support:** How much functionality should be integrated into Rust project provided tooling itself? E.g. should something like miri be exploring fuzzing of data that is fed into functional preconditions as a way to explore state space?
* **Syntax bikesheds galore:** Define the exact syntax for contract attributes. Should we separate correctness vs safety conditions?
* **Static vs dynamic semantics:** The idealized contract system would allow contracts to inform both static verification and dynamic validation tools. What's the best way to handle conditions that cannot be checked with both semantics.
* **Safety post-obligations:** The safety criteria for some unsafe methods is stated as a constraint on how the caller uses the return value from a method. This cannot be expressed as a mere safety::requires form as envisioned above. Should we add something like: `#[safety::at_lifetime_end(|output| str::from_utf8(output).is_ok())]`, which could only be checked in the future, right before the &mut u8 borrow expires? See original MCP for more details
* **Correctness invariants:** As mentioned above, there is probably utility in being able to attach invariants to a type that are used for proving functional correctness. But it is not as clear where to establish the points where correctness invariants must be checked. It may make more sense here to use something like refinement types, where explicit method calls (potentially in ghost code) would (re)establish such invariants.
* **Purity:** Do contracts need to be pure (i.e. have no non-local side-effects)?
* **Panic:** How should a panicking expression within a contract be treated? Should users be able to specify conditions that will lead to panic? Should any post-condition be checked during unwind?
### Implementation history
* https://github.com/rust-lang/rust/pull/128045
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"celinval"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | T-lang,T-compiler,C-tracking-issue | low | Critical |
2,718,473,217 | next.js | Variable expansions for media queries inside styled-jsx break with Turbopack (bis) | ### Link to the code that reproduces this issue
https://github.com/miselin/next-style-jsx-var-expansion-repro
### To Reproduce
1. Clone reproduction repository
1. Run `npm install`
1. Run `npm run dev`
1. Visit http://localhost:3000/
1. The page should display without error, showing an "Hello" text that changes size based on the browser window dimensions (i.e. the media query is using the variable-expanded 768px breakpoint correctly)
1. Terminate that process
1. Run `npm run dev:turbo`
1. Visit http://localhost:3000/
1. The page should display without error but the "Hello" text does not change its size based on the browser window dimensions anymore (i.e. the media query using the variable-expanded 768px breakpoint is not used)
### Current vs. Expected behavior
Expected: I see the expected variable expansion and correct media query CSS, allowing the use of constants for controlling media query breakpoints in an application.
Current: enabling Turbopack breaks the media query breakpoint unless the variables are removed.
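For reference, here is a minimal sketch of the kind of pattern involved (illustrative names only, not copied from the reproduction repository): a numeric constant interpolated into a media query inside a styled-jsx block, which should yield a 768px breakpoint after variable expansion.

```tsx
// pages/index.tsx (sketch): styled-jsx should expand BREAKPOINT_PX into the media query.
const BREAKPOINT_PX = 768;

export default function Home() {
  return (
    <main>
      <p>Hello</p>
      <style jsx>{`
        p {
          font-size: 1rem;
        }
        @media (min-width: ${BREAKPOINT_PX}px) {
          p {
            font-size: 2rem;
          }
        }
      `}</style>
    </main>
  );
}
```

With `npm run dev` the breakpoint applies as expected; with `npm run dev:turbo` the interpolated media query stops working, matching the steps above.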
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Binaries:
Node: 20.16.0
npm: 10.8.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.4-canary.38
eslint-config-next: N/A
react: 18.2.0
react-dom: 18.2.0
typescript: 5.1.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
This does not repro with Next 14.1.4. The first version that repros this issue seems to be 14.2.0-canary.14. Canary versions between 14.2.0-canary.0 and 14.2.0-canary.13 behave as described in issue #61788. | bug,Turbopack,linear: turbopack | low | Critical |
2,718,479,657 | deno | Optional dependencies that can't be resolved during npm resolution should not cause a failure | See https://github.com/denoland/deno/issues/27231
The constraint of the optional dependency `"@reflink/reflink-darwin-x64": "0.1.18"` doesn't work because the 0.1.18 package doesn't exist, leading to a failure, but it shouldn't fail because it's an optional dependency. | bug,install | low | Critical |
2,718,513,819 | vscode | VS Code re-installing extensions immediately after uninstalling | Does this issue occur when all extensions are disabled?: Yes/No
Probably not since they need to be present
- VS Code Version:
- Version: 1.95.3 (user setup)
Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813
Date: 2024-11-13T14:50:04.152Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.22631
- OS Version:
Steps to Reproduce:
1. Install VS Code
2. Enable syncing of profiles
3. Try to uninstall an extension you have installed in your profile
Expected:
The extension to remain uninstalled
Actual:
The extension is re-installed within seconds of having uninstalled it. I tried to upload a video to show the issue but it keeps failing.
### Workaround
Thankfully I still have the old computer handy that originated the sync data. If I uninstall the extension from the original computer, ~~the change is then propagated to my new computer~~ (edited because my observation was wrong) I can then uninstall the extension on the other computer and it remains uninstalled.
| bug,settings-sync | low | Critical |
2,718,551,255 | rust | ICE: `deeply_normalize should not be called with pending obligations` | <!--
[31mICE[0m: Rustc ./a.rs '' 'error: internal compiler error: compiler/rustc_trait_selection/src/traits/normalize.rs:69:17: deeply_normalize should not be called with pending obligations: [', 'error: internal compiler error: compiler/rustc_trait_selection/src/traits/normalize.rs:69:17: deeply_normalize should not be called with pending obligations: ['
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
pub trait TraitA {
type AssocB = T;
}
pub trait TraitB {
type AssocB;
}
pub trait MethodTrait {
fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a>;
}
impl<T: TraitB> MethodTrait for T
where
<T::AssocB as TraitA>::AssocB: TraitA,
{
// }
fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a> {}
}
````
<details><summary><strong>original code</strong></summary>
<p>
original:
````rust
pub struct Wrapper<T>(T);
struct Struct;
pub trait TraitA {
type AssocB = T;
}
pub trait TraitB {
type AssocB;
}
pub fn helper(v: impl MethodTrait) {
// monomorphization instantiates something it then normalizes to:
//
// Closure(
// DefId(0:27 ~ unnamed_1[00e7]::{impl#0}::method::{closure#0}),
// [
// must be a method (through Self), the example below doesn't work (as a standalone function)
// i16,
// Binder {
// value: extern "RustCall" fn((&'^0 (),)) -> Alias(Projection, AliasTy { args: [StructX, '^0], def_id: DefId(0:10 ~ unnamed_1[00e7]::TraitA::AssocA), .. }),
// bound_vars: [Region(BrAnon)]
// },
// ()
// ]
// ),
//
// This should be completely normalized but isn't.
// so, normalizing again gives (StructX is inserted) for
// Alias(Projection, AliasTy { args: [StructX, '^0], def_id: DefId(0:10 ~ unnamed_1[00e7]::TraitA::AssocA), .. })
//
// Closure(
// DefId(0:27 ~ unnamed_1[00e7]::{impl#0}::method::{closure#0}),
pub struct Wrapper<T>(T);
// Wrapper1<StructX>,
// i16,
// Binder {
// value: extern "RustCall" fn((&'^0 (),)) -> StructX, bound_vars: [Region(BrAnon)]
// },
// ()
// ]
// ).
let _local_that_causes_ice = v.method();
}
pub fn main() {
helper(WrapperT::AssocB as TraitA);
}
pub trait MethodTrait {
type Assoc<'a>;
fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a>;
}
impl<T: TraitB> MethodTrait for T
where
<T::AssocB as TraitA>::AssocB: TraitA,
{
type Assoc<'a> = <T::AssocB as TraitA>::AssocA<'t>;
// must be a method (through Self), the example below doesn't work (as a standalone function)
// fn helper2<M: MethodTrait>(_v: M) -> impl for<'a> FnMut(&'a ()) -> M::Assoc<'a> {
// move |_| loop {}
// }
fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a> {
move |_| loop {}
}
}
impl<T, B> TraitB for Wrapper<B>
where
B: TraitB<AssocB = T>,
{
type AssocB = T;
}
impl TraitB for Struct {
type AssocB = Struct;
}
impl TraitA for Struct {
type AssocA<'t> = Self;
}
````
</p>
</details>
Version information
````
rustc 1.85.0-nightly (96e51d948 2024-12-04)
binary: rustc
commit-hash: 96e51d9482405e400dec53750f3b263d45784ada
commit-date: 2024-12-04
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.5
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/96e51d9482405e400dec53750f3b263d45784ada/compiler/rustc_trait_selection/src/traits/normalize.rs#L63-L75
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0412]: cannot find type `T` in this scope
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:2:19
|
2 | type AssocB = T;
| ^ not found in this scope
error[E0658]: associated type defaults are unstable
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:2:5
|
2 | type AssocB = T;
| ^^^^^^^^^^^^^^^^
|
= note: see issue #29661 <https://github.com/rust-lang/rust/issues/29661> for more information
= help: add `#![feature(associated_type_defaults)]` to the crate attributes to enable
= note: this compiler was built on 2024-12-04; consider upgrading it if it is out of date
error[E0601]: `main` function not found in crate `mvce`
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:18:2
|
18 | }
| ^ consider adding a `main` function to `/tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs`
error[E0277]: the trait bound `<T as TraitB>::AssocB: TraitA` is not satisfied
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:17:5
|
17 | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a> {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `TraitA` is not implemented for `<T as TraitB>::AssocB`
|
help: consider further restricting the associated type
|
17 | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a> where <T as TraitB>::AssocB: TraitA {}
| +++++++++++++++++++++++++++++++++++
error[E0220]: associated type `Assoc` not found for `Self`
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:9:60
|
9 | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a>;
| ^^^^^ associated type `Assoc` not found
error[E0220]: associated type `Assoc` not found for `Self`
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:17:60
|
17 | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a> {}
| ^^^^^ associated type `Assoc` not found
error[E0277]: the trait bound `<T as TraitB>::AssocB: TraitA` is not satisfied
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:17:24
|
17 | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a> {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `TraitA` is not implemented for `<T as TraitB>::AssocB`
|
help: consider further restricting the associated type
|
17 | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a> where <T as TraitB>::AssocB: TraitA {}
| +++++++++++++++++++++++++++++++++++
error[E0277]: the trait bound `<T as TraitB>::AssocB: TraitA` is not satisfied
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:12:1
|
12 | / impl<T: TraitB> MethodTrait for T
13 | | where
14 | | <T::AssocB as TraitA>::AssocB: TraitA,
| |__________________________________________^ the trait `TraitA` is not implemented for `<T as TraitB>::AssocB`
|
help: consider further restricting the associated type
|
14 | <T::AssocB as TraitA>::AssocB: TraitA, <T as TraitB>::AssocB: TraitA
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error[E0277]: the trait bound `<T as TraitB>::AssocB: TraitA` is not satisfied
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:12:1
|
12 | / impl<T: TraitB> MethodTrait for T
13 | | where
14 | | <T::AssocB as TraitA>::AssocB: TraitA,
15 | | {
16 | | // }
17 | | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a> {}
18 | | }
| |_^ the trait `TraitA` is not implemented for `<T as TraitB>::AssocB`
|
help: consider further restricting the associated type
|
14 | <T::AssocB as TraitA>::AssocB: TraitA, <T as TraitB>::AssocB: TraitA
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error[E0277]: the trait bound `<T as TraitB>::AssocB: TraitA` is not satisfied
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:17:24
|
17 | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a> {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `TraitA` is not implemented for `<T as TraitB>::AssocB`
|
help: consider further restricting the associated type
|
14 | <T::AssocB as TraitA>::AssocB: TraitA, <T as TraitB>::AssocB: TraitA
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: internal compiler error: compiler/rustc_trait_selection/src/traits/normalize.rs:69:17: deeply_normalize should not be called with pending obligations: [
Obligation(predicate=Binder { value: TraitPredicate(<_ as TraitA>, polarity:Positive), bound_vars: [] }, depth=1),
]
--> /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:9:24
|
9 | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a>;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
thread 'rustc' panicked at compiler/rustc_trait_selection/src/traits/normalize.rs:69:17:
Box<dyn Any>
stack backtrace:
0: 0x7f404913b59a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h7d5d2ef8548ea862
1: 0x7f4049813e26 - core::fmt::write::hc7f1fcf2cd4d0b24
2: 0x7f404a7f4751 - std::io::Write::write_fmt::h5dae3cf637b1b14c
3: 0x7f404913b3f2 - std::sys::backtrace::BacktraceLock::print::h603d5439a4d48e27
4: 0x7f404913d91a - std::panicking::default_hook::{{closure}}::h317c28a5ebbd850f
5: 0x7f404913d763 - std::panicking::default_hook::h0f4f8ead395a2965
6: 0x7f40482bbb88 - std[b886acd6fb60b0a8]::panicking::update_hook::<alloc[e4e71b9a99f0d22f]::boxed::Box<rustc_driver_impl[c07627fafa57377e]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7f404913e0d8 - std::panicking::rust_panic_with_hook::he1329bac1e03080f
8: 0x7f40482f10a1 - std[b886acd6fb60b0a8]::panicking::begin_panic::<rustc_errors[c4cb0ec99f59abcb]::ExplicitBug>::{closure#0}
9: 0x7f40482e6246 - std[b886acd6fb60b0a8]::sys::backtrace::__rust_end_short_backtrace::<std[b886acd6fb60b0a8]::panicking::begin_panic<rustc_errors[c4cb0ec99f59abcb]::ExplicitBug>::{closure#0}, !>
10: 0x7f40482e2d29 - std[b886acd6fb60b0a8]::panicking::begin_panic::<rustc_errors[c4cb0ec99f59abcb]::ExplicitBug>
11: 0x7f40482fb041 - <rustc_errors[c4cb0ec99f59abcb]::diagnostic::BugAbort as rustc_errors[c4cb0ec99f59abcb]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7f404884f8bc - <rustc_errors[c4cb0ec99f59abcb]::DiagCtxtHandle>::span_bug::<rustc_span[5535563d387a33e2]::span_encoding::Span, alloc[e4e71b9a99f0d22f]::string::String>
13: 0x7f40488e2227 - rustc_middle[67be29f945e3ecf0]::util::bug::opt_span_bug_fmt::<rustc_span[5535563d387a33e2]::span_encoding::Span>::{closure#0}
14: 0x7f40488ca74a - rustc_middle[67be29f945e3ecf0]::ty::context::tls::with_opt::<rustc_middle[67be29f945e3ecf0]::util::bug::opt_span_bug_fmt<rustc_span[5535563d387a33e2]::span_encoding::Span>::{closure#0}, !>::{closure#0}
15: 0x7f40488ca5db - rustc_middle[67be29f945e3ecf0]::ty::context::tls::with_context_opt::<rustc_middle[67be29f945e3ecf0]::ty::context::tls::with_opt<rustc_middle[67be29f945e3ecf0]::util::bug::opt_span_bug_fmt<rustc_span[5535563d387a33e2]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
16: 0x7f404738acf7 - rustc_middle[67be29f945e3ecf0]::util::bug::span_bug_fmt::<rustc_span[5535563d387a33e2]::span_encoding::Span>
17: 0x7f4049b83090 - <rustc_trait_selection[8b7799daf48e843f]::traits::engine::ObligationCtxt<rustc_trait_selection[8b7799daf48e843f]::traits::FulfillmentError>>::assumed_wf_types_and_report_errors
18: 0x7f404a004b26 - rustc_hir_analysis[e6847c1464713c9a]::check::compare_impl_item::check_type_bounds
19: 0x7f404a22849d - rustc_hir_analysis[e6847c1464713c9a]::check::compare_impl_item::compare_impl_item
20: 0x7f404a225dd1 - rustc_query_impl[d2f5d943c2280edc]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d2f5d943c2280edc]::query_impl::compare_impl_item::dynamic_query::{closure#2}::{closure#0}, rustc_middle[67be29f945e3ecf0]::query::erase::Erased<[u8; 1usize]>>
21: 0x7f404a1cf730 - rustc_query_system[254f9b8bf6b2b5ec]::query::plumbing::try_execute_query::<rustc_query_impl[d2f5d943c2280edc]::DynamicConfig<rustc_data_structures[d98f95e3c765a7d0]::vec_cache::VecCache<rustc_span[5535563d387a33e2]::def_id::LocalDefId, rustc_middle[67be29f945e3ecf0]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[254f9b8bf6b2b5ec]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[d2f5d943c2280edc]::plumbing::QueryCtxt, false>
22: 0x7f404a1cf201 - rustc_query_impl[d2f5d943c2280edc]::query_impl::compare_impl_item::get_query_non_incr::__rust_end_short_backtrace
23: 0x7f4045dbbf66 - rustc_hir_analysis[e6847c1464713c9a]::check::check::check_item_type
24: 0x7f40471754b4 - rustc_hir_analysis[e6847c1464713c9a]::check::wfcheck::check_well_formed
25: 0x7f404a1cf487 - rustc_query_impl[d2f5d943c2280edc]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d2f5d943c2280edc]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[67be29f945e3ecf0]::query::erase::Erased<[u8; 1usize]>>
26: 0x7f404a1cf748 - rustc_query_system[254f9b8bf6b2b5ec]::query::plumbing::try_execute_query::<rustc_query_impl[d2f5d943c2280edc]::DynamicConfig<rustc_data_structures[d98f95e3c765a7d0]::vec_cache::VecCache<rustc_span[5535563d387a33e2]::def_id::LocalDefId, rustc_middle[67be29f945e3ecf0]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[254f9b8bf6b2b5ec]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[d2f5d943c2280edc]::plumbing::QueryCtxt, false>
27: 0x7f404a1cf462 - rustc_query_impl[d2f5d943c2280edc]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
28: 0x7f404a1d01ec - rustc_hir_analysis[e6847c1464713c9a]::check::wfcheck::check_mod_type_wf
29: 0x7f404a1d000b - rustc_query_impl[d2f5d943c2280edc]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d2f5d943c2280edc]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[67be29f945e3ecf0]::query::erase::Erased<[u8; 1usize]>>
30: 0x7f404a62b784 - rustc_query_system[254f9b8bf6b2b5ec]::query::plumbing::try_execute_query::<rustc_query_impl[d2f5d943c2280edc]::DynamicConfig<rustc_query_system[254f9b8bf6b2b5ec]::query::caches::DefaultCache<rustc_span[5535563d387a33e2]::def_id::LocalModDefId, rustc_middle[67be29f945e3ecf0]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[d2f5d943c2280edc]::plumbing::QueryCtxt, false>
31: 0x7f404a62b518 - rustc_query_impl[d2f5d943c2280edc]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
32: 0x7f4049ac9adc - rustc_hir_analysis[e6847c1464713c9a]::check_crate
33: 0x7f404a13a57c - rustc_interface[14bb8bd49e3c2011]::passes::run_required_analyses
34: 0x7f404a3cfd5e - rustc_interface[14bb8bd49e3c2011]::passes::analysis
35: 0x7f404a3cfd2f - rustc_query_impl[d2f5d943c2280edc]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d2f5d943c2280edc]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[67be29f945e3ecf0]::query::erase::Erased<[u8; 1usize]>>
36: 0x7f404a7bc37a - rustc_query_system[254f9b8bf6b2b5ec]::query::plumbing::try_execute_query::<rustc_query_impl[d2f5d943c2280edc]::DynamicConfig<rustc_query_system[254f9b8bf6b2b5ec]::query::caches::SingleCache<rustc_middle[67be29f945e3ecf0]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[d2f5d943c2280edc]::plumbing::QueryCtxt, false>
37: 0x7f404a7bc04e - rustc_query_impl[d2f5d943c2280edc]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
38: 0x7f404a88c239 - rustc_interface[14bb8bd49e3c2011]::interface::run_compiler::<core[dd48f16c736a3ea3]::result::Result<(), rustc_span[5535563d387a33e2]::ErrorGuaranteed>, rustc_driver_impl[c07627fafa57377e]::run_compiler::{closure#0}>::{closure#1}
39: 0x7f404a71a3c7 - std[b886acd6fb60b0a8]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[14bb8bd49e3c2011]::util::run_in_thread_with_globals<rustc_interface[14bb8bd49e3c2011]::util::run_in_thread_pool_with_globals<rustc_interface[14bb8bd49e3c2011]::interface::run_compiler<core[dd48f16c736a3ea3]::result::Result<(), rustc_span[5535563d387a33e2]::ErrorGuaranteed>, rustc_driver_impl[c07627fafa57377e]::run_compiler::{closure#0}>::{closure#1}, core[dd48f16c736a3ea3]::result::Result<(), rustc_span[5535563d387a33e2]::ErrorGuaranteed>>::{closure#0}, core[dd48f16c736a3ea3]::result::Result<(), rustc_span[5535563d387a33e2]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[dd48f16c736a3ea3]::result::Result<(), rustc_span[5535563d387a33e2]::ErrorGuaranteed>>
40: 0x7f404a71a062 - <<std[b886acd6fb60b0a8]::thread::Builder>::spawn_unchecked_<rustc_interface[14bb8bd49e3c2011]::util::run_in_thread_with_globals<rustc_interface[14bb8bd49e3c2011]::util::run_in_thread_pool_with_globals<rustc_interface[14bb8bd49e3c2011]::interface::run_compiler<core[dd48f16c736a3ea3]::result::Result<(), rustc_span[5535563d387a33e2]::ErrorGuaranteed>, rustc_driver_impl[c07627fafa57377e]::run_compiler::{closure#0}>::{closure#1}, core[dd48f16c736a3ea3]::result::Result<(), rustc_span[5535563d387a33e2]::ErrorGuaranteed>>::{closure#0}, core[dd48f16c736a3ea3]::result::Result<(), rustc_span[5535563d387a33e2]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[dd48f16c736a3ea3]::result::Result<(), rustc_span[5535563d387a33e2]::ErrorGuaranteed>>::{closure#1} as core[dd48f16c736a3ea3]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
41: 0x7f404a7197ab - std::sys::pal::unix::thread::Thread::new::thread_start::h7cc6adb00f817196
42: 0x7f4044a6a39d - <unknown>
43: 0x7f4044aef49c - <unknown>
44: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.85.0-nightly (96e51d948 2024-12-04) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [compare_impl_item] checking assoc item `<impl at /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:12:1: 14:43>::{synthetic#0}` is compatible with trait definition
#1 [check_well_formed] checking that `<impl at /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:12:1: 14:43>` is well-formed
end of query stack
error: aborting due to 11 previous errors
Some errors have detailed explanations: E0220, E0277, E0412, E0601, E0658.
For more information about an error, try `rustc --explain E0220`.
```
</p>
</details>
<!--
query stack:
Obligation(predicate=Binder { value: TraitPredicate(<_ as TraitA>, polarity:Positive), bound_vars: [] }, depth=1),
#0 [compare_impl_item] checking assoc item `<impl at /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:12:1: 14:43>::{synthetic#0}` is compatible with trait definition
#1 [check_well_formed] checking that `<impl at /tmp/icemaker_global_tempdir.oqycY4Xka193/rustc_testrunner_tmpdir_reporting.FxaXXxPy3JEq/mvce.rs:12:1: 14:43>` is well-formed
-->
| I-ICE,T-compiler,C-bug,S-has-mcve,S-bug-has-test | low | Critical |
2,718,559,214 | kubernetes | Sweep and adjust Stat/Lstat/EvalSymlinks to go 1.23 behavior on Windows | Context: https://github.com/kubernetes/kubernetes/issues/129080
Go 1.23 changed stdlib behavior of filesystem calls Stat / Lstat / EvalSymlinks on Windows. This broke some kubelet handling of volumes on Windows, and possibly other use of those functions. For Kubernetes 1.32, the behavior was temporarily reverted via godebug switches in https://github.com/kubernetes/kubernetes/pull/129083, but this is not a long-term solution. Libraries like [k8s.io/mount-utils](https://github.com/kubernetes/mount-utils) can be used by downstream consumers building with go 1.23 and get unintended behavior, and the godebug opt-outs will eventually become unavailable, so we need to prepare Kubernetes to not require them as soon as possible.
As soon as possible in the 1.33 development cycle, sig-storage / sig-windows / sig-node need to sweep for use of Stat / Lstat / EvalSymlinks calls that would be impacted by the go 1.23 changes, and adjust them to work properly, and drop the change in https://github.com/kubernetes/kubernetes/pull/129083 to restore default go stdlib behavior.
A prereq to doing this safely is good test coverage (both unit test level and e2e level).
- There is currently no job running unit tests on Windows
- There are some e2e jobs exercising volumes on Windows, but the one that was broken (GCE PD) does not currently have working tests (see https://github.com/kubernetes/kubernetes/issues/129080#issuecomment-2517810676 and https://github.com/kubernetes/test-infra/issues/33905)
/sig node storage windows
/priority important-soon
/milestone v1.33 | priority/important-soon,sig/storage,sig/node,sig/windows,triage/accepted | low | Critical |
2,718,589,859 | ui | [bug]: https://ui.shadcn.com/r/styles/default/add.json was not found in the registry | ### Describe the bug

The error is shown in the image above. I have seen similar errors. It does not resolve even when choosing some other theme.
### Affected component/components
Styles
### How to reproduce
1. Go to the terminal
2. Paste this: "bunx --bun shadcn@latest init add"
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
VS code, in a windows pc
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,718,614,003 | PowerToys | New+ | ### Microsoft PowerToys version
0.86.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
New+
### Steps to reproduce
in template folder create a folder with just numbers
### โ๏ธ Expected Behavior
it would show up in de New+ right mouse click field. only folders with letters show up
### โ Actual Behavior
Creating an folder in New+ doesnt work if the folder title only has numbers.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Status-Reproducible,Product-New+ | low | Minor |
2,718,619,216 | godot | Shader editor bottom panel still opens when undocked | ### Tested versions
4.4 dev5, 4.4 dev4
### System information
Godot v4.4.dev5 - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated AMD Radeon RX 580 2048SP (Advanced Micro Devices, Inc.; 31.0.21921.1000) - Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz (4 threads)
### Issue description
When opening the shader editor while it is floating (undocked), the bottom panel opens up as well.

video:
https://github.com/user-attachments/assets/f139784d-b1e5-4801-9c31-bf867ae72d22
### Steps to reproduce
Open shader editor while it's floating.
### Minimal reproduction project (MRP)
Any new project | bug,topic:editor,topic:shaders | low | Minor |
2,718,640,543 | angular | [FR] A utility similar to react-scan, but then for angular | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
Having a tool similar to react-scan (https://github.com/aidenybai/react-scan) for Angular would be great.
### Proposed solution
Having a tool similar to react scan ( https://github.com/aidenybai/react-scan ) for angular would be great.
### Alternatives considered
none | area: devtools | low | Major |
2,718,669,847 | kubernetes | flowcontrol: maxCL is unreachable | ### What happened?
With the current implementation of `maxCL`, it is possible that the value reported by `apiserver_flowcontrol_upper_limit_seats` exceeds the total concurrency limit. Since the concurrency limit of a given priority level is bound by the total concurrency limit, this effectively means the value reported by the metric is too loose.
Moreover, in practice a priority level of type `Limited` cannot borrow more than the total number of seats that other priority levels are configured to lend. This means that the total concurrency limit is generally speaking a loose upper bound that can be restricted even further.
Below are the values of `apiserver_flowcontrol_upper_limit_seats` for an apiserver running with the default total concurrency limit of 600.
```bash
$ kubectl get --raw /metrics | grep apiserver_flowcontrol_upper_limit_seats
# HELP apiserver_flowcontrol_upper_limit_seats [ALPHA] Configured upper bound on number of execution seats available to each priority level
# TYPE apiserver_flowcontrol_upper_limit_seats gauge
apiserver_flowcontrol_upper_limit_seats{priority_level="catch-all"} 613
apiserver_flowcontrol_upper_limit_seats{priority_level="exempt"} 600
apiserver_flowcontrol_upper_limit_seats{priority_level="global-default"} 649
apiserver_flowcontrol_upper_limit_seats{priority_level="leader-election"} 625
apiserver_flowcontrol_upper_limit_seats{priority_level="node-high"} 698
apiserver_flowcontrol_upper_limit_seats{priority_level="system"} 674
apiserver_flowcontrol_upper_limit_seats{priority_level="workload-high"} 698
apiserver_flowcontrol_upper_limit_seats{priority_level="workload-low"} 845
```
### What did you expect to happen?
With the default priority levels and default values, that is total concurrency limit of 600 and 343 borrowable seats in total, we expect the following upper limits
```console
$ kubectl get --raw /metrics | grep apiserver_flowcontrol_upper_limit_seats
# HELP apiserver_flowcontrol_upper_limit_seats [ALPHA] Configured upper bound on number of execution seats available to each priority level
# TYPE apiserver_flowcontrol_upper_limit_seats gauge
apiserver_flowcontrol_upper_limit_seats{priority_level="catch-all"} 356 # = lower limit ( 13 ) + borrowable ( 343 )
apiserver_flowcontrol_upper_limit_seats{priority_level="exempt"} 600
apiserver_flowcontrol_upper_limit_seats{priority_level="global-default"} 367 # = lower limit ( 24 ) + borrowable ( 343 )
apiserver_flowcontrol_upper_limit_seats{priority_level="leader-election"} 368 # = lower limit ( 25 ) + borrowable ( 343 )
apiserver_flowcontrol_upper_limit_seats{priority_level="node-high"} 416 # = lower limit ( 73 ) + borrowable ( 343 )
apiserver_flowcontrol_upper_limit_seats{priority_level="system"} 393 # = lower limit ( 50 ) + borrowable ( 343 )
apiserver_flowcontrol_upper_limit_seats{priority_level="workload-high"} 392 # = lower limit ( 49 ) + borrowable ( 343 )
apiserver_flowcontrol_upper_limit_seats{priority_level="workload-low"} 367 # = lower limit ( 24 ) + borrowable ( 343 )
```
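For clarity, here is the arithmetic behind the expected numbers above as a small TypeScript sketch (it simply mirrors the inline comments; it is not apiserver code, and 343 is the total of lendable seats over the default limited priority levels):

```typescript
// expected upper limit = lower limit (nominal minus lendable) + total seats the limited levels can lend
const lowerLimit: Record<string, number> = {
  "catch-all": 13,
  "global-default": 24,
  "leader-election": 25,
  "node-high": 73,
  "system": 50,
  "workload-high": 49,
  "workload-low": 24,
};
const totalLendable = 343;

for (const [pl, lower] of Object.entries(lowerLimit)) {
  console.log(pl, lower + totalLendable); // catch-all 356, node-high 416, workload-low 367, ...
}
```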
### How can we reproduce it (as minimally and precisely as possible)?
This behavior can be reproduced with a kind cluster using the default configuration
The total number of lendable seats sums up to 343, this is the maximum number of seats any limited priority level can borrow
```console
$ kind create cluster
$ kubectl get --raw /metrics | grep apiserver_flowcontrol_nominal_limit_seats
# HELP apiserver_flowcontrol_nominal_limit_seats [BETA] Nominal number of execution seats configured for each priority level
# TYPE apiserver_flowcontrol_nominal_limit_seats gauge
apiserver_flowcontrol_nominal_limit_seats{priority_level="catch-all"} 13 # 0% lendable = 0 lendable seats
apiserver_flowcontrol_nominal_limit_seats{priority_level="exempt"} 0
apiserver_flowcontrol_nominal_limit_seats{priority_level="global-default"} 49 # 50% lendable = 25 lendable seats
apiserver_flowcontrol_nominal_limit_seats{priority_level="leader-election"} 25 # 0% lendable = 0 lendable seats
apiserver_flowcontrol_nominal_limit_seats{priority_level="node-high"} 98 # 25% lendable = 25 lendable seats
apiserver_flowcontrol_nominal_limit_seats{priority_level="system"} 74 # 33% lendable = 24 lendable seats
apiserver_flowcontrol_nominal_limit_seats{priority_level="workload-high"} 98 # 50% lendable = 49 lendable seats
apiserver_flowcontrol_nominal_limit_seats{priority_level="workload-low"} 245 # 90% lendable = 220 lendable seats
```
### Anything else we need to know?
An accurate `apiserver_flowcontrol_upper_limit_seats` is useful to compare against `apiserver_flowcontrol_current_limit_seats`. If they are equal, operators can infer that the priority level cannot borrow any more seats, which helps debug throttling issues faster.
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.3
Kustomize Version: v5.4.2
Server Version: v1.31.2
```
</details>
### Cloud provider
<details>
N/A
</details>
### OS version
<details>
```console
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ uname -a
Linux kind-control-plane 6.10.14-linuxkit #1 SMP Thu Oct 24 19:28:55 UTC 2024 aarch64 GNU/Linux
```
</details>
### Install tools
<details>
https://kind.sigs.k8s.io/#installation-and-usage
</details>
### Container runtime (CRI) and version (if applicable)
<details>
N/A
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
N/A
</details>
| kind/bug,sig/api-machinery,triage/accepted | low | Critical |
2,718,680,798 | rust | ICE files are missing the span and message from `bug!`/`span_bug!` | I have already seen it a couple of times that people attach ICE files, but these backtraces/files appear to be missing the actual ICE message from the report.
For example, my ICE may look like this (or rather, this is what I would consider part of it):
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
error: internal compiler error: compiler/rustc_trait_selection/src/traits/normalize.rs:69:17: deeply_normalize should not be called with pending obligations: [
Obligation(predicate=Binder { value: TraitPredicate(<_ as TraitA>, polarity:Positive), bound_vars: [] }, depth=1),
]
--> a.rs:52:24
|
52 | fn method(self) -> impl for<'a> FnMut(&'a ()) -> Self::Assoc<'a>;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
thread 'rustc' panicked at compiler/rustc_trait_selection/src/traits/normalize.rs:69:17:
Box<dyn Any>
stack backtrace:
0: 0x75750014124a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hbaae7ee6452314ac
1: 0x757500813c26 - core::fmt::write::h2f9a8d2f2637a85f
2: 0x757501be0551 - std::io::Write::write_fmt::he34daf083f9a3021
3: 0x7575001410a2 - std::sys::backtrace::BacktraceLock::print::h0058685886628669
4: 0x7575001435aa - std::panicking::default_hook::{{closure}}::h57409c565dc0e9b1
5: 0x7575001433f3 - std::panicking::default_hook::ha667d029378d179a
6: 0x7574ff2bfa48 - std[c2c54d6827da810b]::panicking::update_hook::<alloc[34a40601127b84fc]::boxed::Box<rustc_driver_impl[c0d80c2a9490dd80]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x757500143d68 - std::panicking::rust_panic_with_hook::h12054419da422896
8: 0x7574ff2f5271 - std[c2c54d6827da810b]::panicking::begin_panic::<rustc_errors[2e40efcb590bb17b]::ExplicitBug>::{closure#0}
9: 0x7574ff2ea136 - std[c2c54d6827da810b]::sys::backtrace::__rust_end_short_backtrace::<std[c2c54d6827da810b]::panicking::begin_panic<rustc_errors[2e40efcb590bb17b]::ExplicitBug>::{closure#0}, !>
10: 0x7574ff2ea123 - std[c2c54d6827da810b]::panicking::begin_panic::<rustc_errors[2e40efcb590bb17b]::ExplicitBug>
11: 0x7574ff2ff2b1 - <rustc_errors[2e40efcb590bb17b]::diagnostic::BugAbort as rustc_errors[2e40efcb590bb17b]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7574ff854a9c - <rustc_errors[2e40efcb590bb17b]::DiagCtxtHandle>::span_bug::<rustc_span[2659c11acd16cd7]::span_encoding::Span, alloc[34a40601127b84fc]::string::String>
13: 0x7574ff8e75d7 - rustc_middle[27a877ea52447dab]::util::bug::opt_span_bug_fmt::<rustc_span[2659c11acd16cd7]::span_encoding::Span>::{closure#0}
14: 0x7574ff8cfa6a - rustc_middle[27a877ea52447dab]::ty::context::tls::with_opt::<rustc_middle[27a877ea52447dab]::util::bug::opt_span_bug_fmt<rustc_span[2659c11acd16cd7]::span_encoding::Span>::{closure#0}, !>::{closure#0}
15: 0x7574ff8cf8fb - rustc_middle[27a877ea52447dab]::ty::context::tls::with_context_opt::<rustc_middle[27a877ea52447dab]::ty::context::tls::with_opt<rustc_middle[27a877ea52447dab]::util::bug::opt_span_bug_fmt<rustc_span[2659c11acd16cd7]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
16: 0x7574fe3d2d77 - rustc_middle[27a877ea52447dab]::util::bug::span_bug_fmt::<rustc_span[2659c11acd16cd7]::span_encoding::Span>
17: 0x757500bc00d4 - <rustc_trait_selection[bf65a12e96ea037e]::traits::engine::ObligationCtxt<rustc_trait_selection[bf65a12e96ea037e]::traits::FulfillmentError>>::assumed_wf_types_and_report_errors
18: 0x75750105c126 - rustc_hir_analysis[6d68d5be0ecf7979]::check::compare_impl_item::check_type_bounds
19: 0x757500e5921d - rustc_hir_analysis[6d68d5be0ecf7979]::check::compare_impl_item::compare_impl_item
20: 0x757500e56b51 - rustc_query_impl[a2008a1cc616166f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a2008a1cc616166f]::query_impl::compare_impl_item::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>
21: 0x75750121e930 - rustc_query_system[9a3c01f0ed066ff8]::query::plumbing::try_execute_query::<rustc_query_impl[a2008a1cc616166f]::DynamicConfig<rustc_data_structures[848008ad1566c8b0]::vec_cache::VecCache<rustc_span[2659c11acd16cd7]::def_id::LocalDefId, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[9a3c01f0ed066ff8]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[a2008a1cc616166f]::plumbing::QueryCtxt, false>
22: 0x75750121e401 - rustc_query_impl[a2008a1cc616166f]::query_impl::compare_impl_item::get_query_non_incr::__rust_end_short_backtrace
23: 0x7574fcdb057d - rustc_hir_analysis[6d68d5be0ecf7979]::check::check::check_item_type
24: 0x7574fe4419e5 - rustc_hir_analysis[6d68d5be0ecf7979]::check::wfcheck::check_well_formed
25: 0x75750121e687 - rustc_query_impl[a2008a1cc616166f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a2008a1cc616166f]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>
26: 0x75750121e948 - rustc_query_system[9a3c01f0ed066ff8]::query::plumbing::try_execute_query::<rustc_query_impl[a2008a1cc616166f]::DynamicConfig<rustc_data_structures[848008ad1566c8b0]::vec_cache::VecCache<rustc_span[2659c11acd16cd7]::def_id::LocalDefId, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[9a3c01f0ed066ff8]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[a2008a1cc616166f]::plumbing::QueryCtxt, false>
27: 0x75750121e662 - rustc_query_impl[a2008a1cc616166f]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
28: 0x75750121f3ec - rustc_hir_analysis[6d68d5be0ecf7979]::check::wfcheck::check_mod_type_wf
29: 0x75750121f20b - rustc_query_impl[a2008a1cc616166f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a2008a1cc616166f]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>
30: 0x75750181a208 - rustc_query_system[9a3c01f0ed066ff8]::query::plumbing::try_execute_query::<rustc_query_impl[a2008a1cc616166f]::DynamicConfig<rustc_query_system[9a3c01f0ed066ff8]::query::caches::DefaultCache<rustc_span[2659c11acd16cd7]::def_id::LocalModDefId, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[a2008a1cc616166f]::plumbing::QueryCtxt, false>
31: 0x757501819fb0 - rustc_query_impl[a2008a1cc616166f]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
32: 0x757500a9eb9c - rustc_hir_analysis[6d68d5be0ecf7979]::check_crate
33: 0x757501128cbc - rustc_interface[4b3ef207fa86e12e]::passes::run_required_analyses
34: 0x757501123d1e - rustc_interface[4b3ef207fa86e12e]::passes::analysis
35: 0x757501123cef - rustc_query_impl[a2008a1cc616166f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a2008a1cc616166f]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>
36: 0x7575017d5dba - rustc_query_system[9a3c01f0ed066ff8]::query::plumbing::try_execute_query::<rustc_query_impl[a2008a1cc616166f]::DynamicConfig<rustc_query_system[9a3c01f0ed066ff8]::query::caches::SingleCache<rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[a2008a1cc616166f]::plumbing::QueryCtxt, false>
37: 0x7575017d5a8e - rustc_query_impl[a2008a1cc616166f]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
38: 0x757501864e39 - rustc_interface[4b3ef207fa86e12e]::interface::run_compiler::<core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>, rustc_driver_impl[c0d80c2a9490dd80]::run_compiler::{closure#0}>::{closure#1}
39: 0x7575017c08a1 - std[c2c54d6827da810b]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[4b3ef207fa86e12e]::util::run_in_thread_with_globals<rustc_interface[4b3ef207fa86e12e]::util::run_in_thread_pool_with_globals<rustc_interface[4b3ef207fa86e12e]::interface::run_compiler<core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>, rustc_driver_impl[c0d80c2a9490dd80]::run_compiler::{closure#0}>::{closure#1}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#0}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>
40: 0x7575017c0548 - <<std[c2c54d6827da810b]::thread::Builder>::spawn_unchecked_<rustc_interface[4b3ef207fa86e12e]::util::run_in_thread_with_globals<rustc_interface[4b3ef207fa86e12e]::util::run_in_thread_pool_with_globals<rustc_interface[4b3ef207fa86e12e]::interface::run_compiler<core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>, rustc_driver_impl[c0d80c2a9490dd80]::run_compiler::{closure#0}>::{closure#1}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#0}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#1} as core[437bb64b3ca51b65]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
41: 0x7575017bfc7b - std::sys::pal::unix::thread::Thread::new::thread_start::h2b35487752b07311
42: 0x7574fbaa339d - <unknown>
43: 0x7574fbb2849c - <unknown>
44: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/tmp/im/1/rustc-ice-2024-12-04T19_53_55-344983.txt` to your bug report
query stack during panic:
#0 [compare_impl_item] checking assoc item `<impl at a.rs:55:1: 57:43>::{synthetic#0}` is compatible with trait definition
#1 [check_well_formed] checking that `<impl at a.rs:55:1: 57:43>` is well-formed
end of query stack
error[E0433]: failed to resolve: use of undeclared type `WrapperT`
--> a.rs:46:12
|
46 | helper(WrapperT::AssocB as TraitA);
| ^^^^^^^^
| |
| use of undeclared type `WrapperT`
| help: a struct with a similar name exists: `Wrapper`
error: aborting due to 13 previous errors
Some errors have detailed explanations: E0261, E0277, E0412, E0433, E0437, E0576, E0658.
For more information about an error, try `rustc --explain E0261`.
```
</p>
</details>
However, the ICE-dump file does not contain the actual ICE message:
```
error: internal compiler error: compiler/rustc_trait_selection/src/traits/normalize.rs:69:17: deeply_normalize should not be called with pending obligations: [
Obligation(predicate=Binder { value: TraitPredicate(<_ as TraitA>, polarity:Positive), bound_vars: [] }, depth=1),
]
```
which makes it harder to compare it to other ICEs, check whether it has already been reported, or search for it in the tracker...
What ends up in the file is only this:
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at compiler/rustc_trait_selection/src/traits/normalize.rs:69:17:
Box<dyn Any>
stack backtrace:
0: 0x757501be59a5 - std::backtrace::Backtrace::create::h78d0e7511919e78f
1: 0x75750012c445 - std::backtrace::Backtrace::force_capture::h954b965bf7c9452b
2: 0x7574ff2c0046 - std[c2c54d6827da810b]::panicking::update_hook::<alloc[34a40601127b84fc]::boxed::Box<rustc_driver_impl[c0d80c2a9490dd80]::install_ice_hook::{closure#0}>>::{closure#0}
3: 0x757500143d68 - std::panicking::rust_panic_with_hook::h12054419da422896
4: 0x7574ff2f5271 - std[c2c54d6827da810b]::panicking::begin_panic::<rustc_errors[2e40efcb590bb17b]::ExplicitBug>::{closure#0}
5: 0x7574ff2ea136 - std[c2c54d6827da810b]::sys::backtrace::__rust_end_short_backtrace::<std[c2c54d6827da810b]::panicking::begin_panic<rustc_errors[2e40efcb590bb17b]::ExplicitBug>::{closure#0}, !>
6: 0x7574ff2ea123 - std[c2c54d6827da810b]::panicking::begin_panic::<rustc_errors[2e40efcb590bb17b]::ExplicitBug>
7: 0x7574ff2ff2b1 - <rustc_errors[2e40efcb590bb17b]::diagnostic::BugAbort as rustc_errors[2e40efcb590bb17b]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
8: 0x7574ff854a9c - <rustc_errors[2e40efcb590bb17b]::DiagCtxtHandle>::span_bug::<rustc_span[2659c11acd16cd7]::span_encoding::Span, alloc[34a40601127b84fc]::string::String>
9: 0x7574ff8e75d7 - rustc_middle[27a877ea52447dab]::util::bug::opt_span_bug_fmt::<rustc_span[2659c11acd16cd7]::span_encoding::Span>::{closure#0}
10: 0x7574ff8cfa6a - rustc_middle[27a877ea52447dab]::ty::context::tls::with_opt::<rustc_middle[27a877ea52447dab]::util::bug::opt_span_bug_fmt<rustc_span[2659c11acd16cd7]::span_encoding::Span>::{closure#0}, !>::{closure#0}
11: 0x7574ff8cf8fb - rustc_middle[27a877ea52447dab]::ty::context::tls::with_context_opt::<rustc_middle[27a877ea52447dab]::ty::context::tls::with_opt<rustc_middle[27a877ea52447dab]::util::bug::opt_span_bug_fmt<rustc_span[2659c11acd16cd7]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
12: 0x7574fe3d2d77 - rustc_middle[27a877ea52447dab]::util::bug::span_bug_fmt::<rustc_span[2659c11acd16cd7]::span_encoding::Span>
13: 0x757500bc00d4 - <rustc_trait_selection[bf65a12e96ea037e]::traits::engine::ObligationCtxt<rustc_trait_selection[bf65a12e96ea037e]::traits::FulfillmentError>>::assumed_wf_types_and_report_errors
14: 0x75750105c126 - rustc_hir_analysis[6d68d5be0ecf7979]::check::compare_impl_item::check_type_bounds
15: 0x757500e5921d - rustc_hir_analysis[6d68d5be0ecf7979]::check::compare_impl_item::compare_impl_item
16: 0x757500e56b51 - rustc_query_impl[a2008a1cc616166f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a2008a1cc616166f]::query_impl::compare_impl_item::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>
17: 0x75750121e930 - rustc_query_system[9a3c01f0ed066ff8]::query::plumbing::try_execute_query::<rustc_query_impl[a2008a1cc616166f]::DynamicConfig<rustc_data_structures[848008ad1566c8b0]::vec_cache::VecCache<rustc_span[2659c11acd16cd7]::def_id::LocalDefId, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[9a3c01f0ed066ff8]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[a2008a1cc616166f]::plumbing::QueryCtxt, false>
18: 0x75750121e401 - rustc_query_impl[a2008a1cc616166f]::query_impl::compare_impl_item::get_query_non_incr::__rust_end_short_backtrace
19: 0x7574fcdb057d - rustc_hir_analysis[6d68d5be0ecf7979]::check::check::check_item_type
20: 0x7574fe4419e5 - rustc_hir_analysis[6d68d5be0ecf7979]::check::wfcheck::check_well_formed
21: 0x75750121e687 - rustc_query_impl[a2008a1cc616166f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a2008a1cc616166f]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>
22: 0x75750121e948 - rustc_query_system[9a3c01f0ed066ff8]::query::plumbing::try_execute_query::<rustc_query_impl[a2008a1cc616166f]::DynamicConfig<rustc_data_structures[848008ad1566c8b0]::vec_cache::VecCache<rustc_span[2659c11acd16cd7]::def_id::LocalDefId, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[9a3c01f0ed066ff8]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[a2008a1cc616166f]::plumbing::QueryCtxt, false>
23: 0x75750121e662 - rustc_query_impl[a2008a1cc616166f]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
24: 0x75750121f3ec - rustc_hir_analysis[6d68d5be0ecf7979]::check::wfcheck::check_mod_type_wf
25: 0x75750121f20b - rustc_query_impl[a2008a1cc616166f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a2008a1cc616166f]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>
26: 0x75750181a208 - rustc_query_system[9a3c01f0ed066ff8]::query::plumbing::try_execute_query::<rustc_query_impl[a2008a1cc616166f]::DynamicConfig<rustc_query_system[9a3c01f0ed066ff8]::query::caches::DefaultCache<rustc_span[2659c11acd16cd7]::def_id::LocalModDefId, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[a2008a1cc616166f]::plumbing::QueryCtxt, false>
27: 0x757501819fb0 - rustc_query_impl[a2008a1cc616166f]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
28: 0x757500a9eb9c - rustc_hir_analysis[6d68d5be0ecf7979]::check_crate
29: 0x757501128cbc - rustc_interface[4b3ef207fa86e12e]::passes::run_required_analyses
30: 0x757501123d1e - rustc_interface[4b3ef207fa86e12e]::passes::analysis
31: 0x757501123cef - rustc_query_impl[a2008a1cc616166f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a2008a1cc616166f]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>
32: 0x7575017d5dba - rustc_query_system[9a3c01f0ed066ff8]::query::plumbing::try_execute_query::<rustc_query_impl[a2008a1cc616166f]::DynamicConfig<rustc_query_system[9a3c01f0ed066ff8]::query::caches::SingleCache<rustc_middle[27a877ea52447dab]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[a2008a1cc616166f]::plumbing::QueryCtxt, false>
33: 0x7575017d5a8e - rustc_query_impl[a2008a1cc616166f]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
34: 0x757501864e39 - rustc_interface[4b3ef207fa86e12e]::interface::run_compiler::<core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>, rustc_driver_impl[c0d80c2a9490dd80]::run_compiler::{closure#0}>::{closure#1}
35: 0x7575017c08a1 - std[c2c54d6827da810b]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[4b3ef207fa86e12e]::util::run_in_thread_with_globals<rustc_interface[4b3ef207fa86e12e]::util::run_in_thread_pool_with_globals<rustc_interface[4b3ef207fa86e12e]::interface::run_compiler<core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>, rustc_driver_impl[c0d80c2a9490dd80]::run_compiler::{closure#0}>::{closure#1}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#0}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>
36: 0x7575017c0548 - <<std[c2c54d6827da810b]::thread::Builder>::spawn_unchecked_<rustc_interface[4b3ef207fa86e12e]::util::run_in_thread_with_globals<rustc_interface[4b3ef207fa86e12e]::util::run_in_thread_pool_with_globals<rustc_interface[4b3ef207fa86e12e]::interface::run_compiler<core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>, rustc_driver_impl[c0d80c2a9490dd80]::run_compiler::{closure#0}>::{closure#1}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#0}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[437bb64b3ca51b65]::result::Result<(), rustc_span[2659c11acd16cd7]::ErrorGuaranteed>>::{closure#1} as core[437bb64b3ca51b65]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
37: 0x7575017bfc7b - std::sys::pal::unix::thread::Thread::new::thread_start::h2b35487752b07311
38: 0x7574fbaa339d - <unknown>
39: 0x7574fbb2849c - <unknown>
40: 0x0 - <unknown>
rustc version: 1.85.0-nightly (c44b3d50f 2024-12-03)
platform: x86_64-unknown-linux-gnu
query stack during panic:
#0 [compare_impl_item] checking assoc item `<impl at a.rs:55:1: 57:43>::{synthetic#0}` is compatible with trait definition
#1 [check_well_formed] checking that `<impl at a.rs:55:1: 57:43>` is well-formed
#2 [check_mod_type_wf] checking that types are well-formed in top-level module
#3 [analysis] running analysis passes on this crate
end of query stack
```
</p>
</details>
Can we extend what is printed into the file, or are there privacy concerns, etc.?
| T-compiler,C-bug,D-diagnostic-infra,A-metrics | low | Critical |
2,718,725,174 | flutter | [in_app_purchase] Crash for iOS 18.2 when purchase, when Deprecated field `applicationUsername` is null. | ### Steps to reproduce
1. On iOS 18.2, calling the buyNonConsumable or buyConsumable API will crash
This issue is a copy of https://github.com/flutter/flutter/issues/158097, because it needs attention.
`applicationUsername` has been deprecated by Apple since iOS 18.
https://developer.apple.com/documentation/storekit/skmutablepayment/applicationusername
Hence it should not crash when we pass null there.
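A possible temporary workaround, assuming the crash really is triggered by the null value reaching the native side (not verified), would be to pass any non-null `applicationUserName`:
```dart
// Hypothetical workaround sketch (untested): avoid a null applicationUserName
// so that no NSNull value reaches the native StoreKit call.
// `product` and `_connection` are the same objects as in the code sample below;
// the string value itself is only illustrative.
final PurchaseParam purchaseParam = PurchaseParam(
  productDetails: product,
  applicationUserName: 'placeholder-user',
);
_connection.buyNonConsumable(purchaseParam: purchaseParam);
```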
### Expected results
Should not crash.
### Actual results
Got
Fatal Exception: NSInvalidArgumentException -[NSNull length]
in Xcode
### Code sample
<details open><summary>Code sample</summary>
```dart
PurchaseParam purchaseParam =
PurchaseParam(productDetails: product, applicationUserName: null);
_connection.buyNonConsumable(purchaseParam: purchaseParam);
```
or
```dart
PurchaseParam purchaseParam =
PurchaseParam(productDetails: product);
_connection.buyNonConsumable(purchaseParam: purchaseParam);
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[โ] Flutter (Channel stable, 3.24.3, on macOS 15.1 24B83 darwin-arm64, locale en-US)
[โ] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[โ] Xcode - develop for iOS and macOS (Xcode 16.2)
[โ] Chrome - develop for the web
[โ] Android Studio (version 2024.2)
[โ] VS Code (version 1.95.3)
[โ] Connected device (6 available)
[โ] Network resources
โข No issues found!
```
</details>
| c: crash,platform-ios,p: in_app_purchase,package,e: OS-version specific,P2,team-ios,triaged-ios | low | Critical |
2,718,741,534 | rustdesk | 100% scale display to remote hidpi 175% display: cursor movements don't match full frame | ### Bug Description
100% scale display to remote hidpi 175% display: cursor movements don't match the full remote frame.
### How to Reproduce
My setup was:
* local laptop 1680x1050 @ 100% scale
* remote laptop 2880x1800 @ 175% scale
Judging by the screenshot below I could only reach the ~960x600 area of 1680x1050 local pixels
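For reference, that matches the local resolution divided by the remote scale factor: 1680 / 1.75 = 960 and 1050 / 1.75 = 600, so it looks like the usable cursor range is being shrunk by the remote display's 175% scaling (my interpretation, not confirmed).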
### Expected Behavior
The cursor can reach the whole remote frame.
### Operating system(s) on local (controlling) side and remote (controlled) side
GNOME wayland > GNOME wayland
### RustDesk Version(s) on local (controlling) side and remote (controlled) side
1.3.4 > 1.3.4
### Screenshots

https://github.com/user-attachments/assets/a5de2efa-85a0-4d4a-bb11-ebca8cdc4797
### Additional Context
_No response_ | bug | low | Critical |
2,718,745,950 | vscode | [terminal completions]: broken in `bash`/`zsh` on Unix | Follow up on: https://github.com/microsoft/vscode/issues/235021
The completions seem to be broken on Unix machines, in `bash`/`zsh` terminals. Please see https://github.com/microsoft/vscode/issues/235021#issuecomment-2515627451 for more info. | bug,linux,terminal-suggest | low | Critical |
2,718,751,320 | rust | -Zthreads causes rustfix/jobserver to deadlock | ```bash
git clone --recursive -b rustfix_deadlock_repro https://gitlab.com/lib.rs/main.git/
cd main
cargo +nightly fix --edition-idioms --all --allow-dirty
cargo +nightly fix --edition-idioms --all --allow-dirty
```
The config in `.cargo/config.toml` has `-Zthreads=10`. I'm running it on 16-core Apple M3 Max. I can also reproduce the deadlock with `-Zthreads=2`. It doesn't deadlock without `-Zthreads`.
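For reference, the relevant `.cargo/config.toml` entry presumably looks something like this (a sketch; the repo has the exact config):
```toml
# Sketch only; the repo's actual config may set this differently.
[build]
rustflags = ["-Zthreads=10"]
```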
The deadlock happens reliably when running `cargo fix`. It doesn't happen when running `cargo test` or `cargo check`.
Cargo is waiting in `rustfix_crate` on a TCP read, waiting for `rustc` to give signs of life. It can get stuck with even a single `rustc` process running.
Rustc process is stuck here:
```
* thread #1, name = 'main', queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
* frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010d9929c0 librustc_driver-ad3c18566d12557c.dylib`<jobserver::imp::Client>::acquire_allow_interrupts + 60
frame #2: 0x000000010e296584 librustc_driver-ad3c18566d12557c.dylib`rustc_data_structures::jobserver::acquire_thread + 60
frame #3: 0x000000010da4dc6c librustc_driver-ad3c18566d12557c.dylib`<rayon_core::thread_pool::ThreadPool>::wait_until_stopped + 272
frame #4: 0x000000010e2e9958 librustc_driver-ad3c18566d12557c.dylib`rustc_interface::util::run_in_thread_pool_with_globals::<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>> + 7224
frame #5: 0x000000010e329b2c librustc_driver-ad3c18566d12557c.dylib`rustc_driver_impl::run_compiler + 6600
frame #6: 0x000000010e335e20 librustc_driver-ad3c18566d12557c.dylib`rustc_driver_impl::main + 824
frame #7: 0x00000001043f2cd0 rustc`rustc_main::main + 12
frame #8: 0x00000001043f2c84 rustc`std::sys::backtrace::__rust_begin_short_backtrace::<fn(), ()> + 12
frame #9: 0x00000001043f2c9c rustc`std::rt::lang_start::<()>::{closure#0} + 16
frame #10: 0x000000010ff27dd4 librustc_driver-ad3c18566d12557c.dylib`std::rt::lang_start_internal::he7368dee48875b7d + 1092
frame #11: 0x00000001043f2d04 rustc`main + 52
frame #12: 0x000000018a8a0274 dyld`start + 2840
thread #2, name = 'ctrl-c'
frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010e2eb480 librustc_driver-ad3c18566d12557c.dylib`std::sys::backtrace::__rust_begin_short_backtrace::<ctrlc::set_handler_inner<rustc_driver_impl::install_ctrlc_handler::{closure#0}>::{closure#0}, ()> + 48
frame #2: 0x000000010e2fb4b0 librustc_driver-ad3c18566d12557c.dylib`<<std::thread::Builder>::spawn_unchecked_<ctrlc::set_handler_inner<rustc_driver_impl::install_ctrlc_handler::{closure#0}>::{closure#0}, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 488
frame #3: 0x000000010ff4ecc8 librustc_driver-ad3c18566d12557c.dylib`std::sys::pal::unix::thread::Thread::new::thread_start::hf11e21675a82323b + 52
frame #4: 0x000000018ac202e4 libsystem_pthread.dylib`_pthread_start + 136
```
Backtrace from another run when there was only a single `rustc` process spawned:
```
bt all
* thread #1, name = 'main', queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
* frame #0: 0x000000018abe26cc libsystem_kernel.dylib`__psynch_cvwait + 8
frame #1: 0x000000018ac20894 libsystem_pthread.dylib`_pthread_cond_wait + 1204
frame #2: 0x000000010c272390 librustc_driver-ad3c18566d12557c.dylib`<rayon_core::latch::LockLatch>::wait + 136
frame #3: 0x000000010c275c4c librustc_driver-ad3c18566d12557c.dylib`<rayon_core::thread_pool::ThreadPool>::wait_until_stopped + 240
frame #4: 0x000000010cb11958 librustc_driver-ad3c18566d12557c.dylib`rustc_interface::util::run_in_thread_pool_with_globals::<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>> + 7224
frame #5: 0x000000010cb51b2c librustc_driver-ad3c18566d12557c.dylib`rustc_driver_impl::run_compiler + 6600
frame #6: 0x000000010cb5de20 librustc_driver-ad3c18566d12557c.dylib`rustc_driver_impl::main + 824
frame #7: 0x0000000102cfecd0 rustc`rustc_main::main + 12
frame #8: 0x0000000102cfec84 rustc`std::sys::backtrace::__rust_begin_short_backtrace::<fn(), ()> + 12
frame #9: 0x0000000102cfec9c rustc`std::rt::lang_start::<()>::{closure#0} + 16
frame #10: 0x000000010e74fdd4 librustc_driver-ad3c18566d12557c.dylib`std::rt::lang_start_internal::he7368dee48875b7d + 1092
frame #11: 0x0000000102cfed04 rustc`main + 52
frame #12: 0x000000018a8a0274 dyld`start + 2840
thread #2, name = 'ctrl-c'
frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010cb13480 librustc_driver-ad3c18566d12557c.dylib`std::sys::backtrace::__rust_begin_short_backtrace::<ctrlc::set_handler_inner<rustc_driver_impl::install_ctrlc_handler::{closure#0}>::{closure#0}, ()> + 48
frame #2: 0x000000010cb234b0 librustc_driver-ad3c18566d12557c.dylib`<<std::thread::Builder>::spawn_unchecked_<ctrlc::set_handler_inner<rustc_driver_impl::install_ctrlc_handler::{closure#0}>::{closure#0}, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 488
frame #3: 0x000000010e776cc8 librustc_driver-ad3c18566d12557c.dylib`std::sys::pal::unix::thread::Thread::new::thread_start::hf11e21675a82323b + 52
frame #4: 0x000000018ac202e4 libsystem_pthread.dylib`_pthread_start + 136
thread #3, name = 'rustc'
frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010c1ba9c0 librustc_driver-ad3c18566d12557c.dylib`<jobserver::imp::Client>::acquire_allow_interrupts + 60
frame #2: 0x000000010cabe584 librustc_driver-ad3c18566d12557c.dylib`rustc_data_structures::jobserver::acquire_thread + 60
frame #3: 0x0000000110e54098 librustc_driver-ad3c18566d12557c.dylib`<rayon_core::sleep::Sleep>::sleep + 744
frame #4: 0x0000000110e53d5c librustc_driver-ad3c18566d12557c.dylib`<rayon_core::registry::WorkerThread>::wait_until_cold + 324
frame #5: 0x000000010c272764 librustc_driver-ad3c18566d12557c.dylib`<rayon_core::registry::ThreadBuilder>::run + 344
frame #6: 0x000000010cb24070 librustc_driver-ad3c18566d12557c.dylib`<<crossbeam_utils::thread::ScopedThreadBuilder>::spawn<<rayon_core::ThreadPoolBuilder>::build_scoped<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#0}, rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, ()>::{closure#0} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 192
frame #7: 0x000000010cb13414 librustc_driver-ad3c18566d12557c.dylib`std::sys::backtrace::__rust_begin_short_backtrace::<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()> + 32
frame #8: 0x000000010cb230c8 librustc_driver-ad3c18566d12557c.dylib`<<std::thread::Builder>::spawn_unchecked_<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 456
frame #9: 0x000000010e776cc8 librustc_driver-ad3c18566d12557c.dylib`std::sys::pal::unix::thread::Thread::new::thread_start::hf11e21675a82323b + 52
frame #10: 0x000000018ac202e4 libsystem_pthread.dylib`_pthread_start + 136
thread #4, name = 'rustc'
frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010c1ba9c0 librustc_driver-ad3c18566d12557c.dylib`<jobserver::imp::Client>::acquire_allow_interrupts + 60
frame #2: 0x000000010cabe584 librustc_driver-ad3c18566d12557c.dylib`rustc_data_structures::jobserver::acquire_thread + 60
frame #3: 0x000000010c2726bc librustc_driver-ad3c18566d12557c.dylib`<rayon_core::registry::ThreadBuilder>::run + 176
frame #4: 0x000000010cb24070 librustc_driver-ad3c18566d12557c.dylib`<<crossbeam_utils::thread::ScopedThreadBuilder>::spawn<<rayon_core::ThreadPoolBuilder>::build_scoped<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#0}, rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, ()>::{closure#0} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 192
frame #5: 0x000000010cb13414 librustc_driver-ad3c18566d12557c.dylib`std::sys::backtrace::__rust_begin_short_backtrace::<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()> + 32
frame #6: 0x000000010cb230c8 librustc_driver-ad3c18566d12557c.dylib`<<std::thread::Builder>::spawn_unchecked_<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 456
frame #7: 0x000000010e776cc8 librustc_driver-ad3c18566d12557c.dylib`std::sys::pal::unix::thread::Thread::new::thread_start::hf11e21675a82323b + 52
frame #8: 0x000000018ac202e4 libsystem_pthread.dylib`_pthread_start + 136
thread #5, name = 'rustc'
frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010c1ba9c0 librustc_driver-ad3c18566d12557c.dylib`<jobserver::imp::Client>::acquire_allow_interrupts + 60
frame #2: 0x000000010cabe584 librustc_driver-ad3c18566d12557c.dylib`rustc_data_structures::jobserver::acquire_thread + 60
frame #3: 0x000000010c2726bc librustc_driver-ad3c18566d12557c.dylib`<rayon_core::registry::ThreadBuilder>::run + 176
frame #4: 0x000000010cb24070 librustc_driver-ad3c18566d12557c.dylib`<<crossbeam_utils::thread::ScopedThreadBuilder>::spawn<<rayon_core::ThreadPoolBuilder>::build_scoped<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#0}, rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, ()>::{closure#0} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 192
frame #5: 0x000000010cb13414 librustc_driver-ad3c18566d12557c.dylib`std::sys::backtrace::__rust_begin_short_backtrace::<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()> + 32
frame #6: 0x000000010cb230c8 librustc_driver-ad3c18566d12557c.dylib`<<std::thread::Builder>::spawn_unchecked_<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 456
frame #7: 0x000000010e776cc8 librustc_driver-ad3c18566d12557c.dylib`std::sys::pal::unix::thread::Thread::new::thread_start::hf11e21675a82323b + 52
frame #8: 0x000000018ac202e4 libsystem_pthread.dylib`_pthread_start + 136
thread #6, name = 'rustc'
frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010c1ba9c0 librustc_driver-ad3c18566d12557c.dylib`<jobserver::imp::Client>::acquire_allow_interrupts + 60
frame #2: 0x000000010cabe584 librustc_driver-ad3c18566d12557c.dylib`rustc_data_structures::jobserver::acquire_thread + 60
frame #3: 0x000000010c2726bc librustc_driver-ad3c18566d12557c.dylib`<rayon_core::registry::ThreadBuilder>::run + 176
frame #4: 0x000000010cb24070 librustc_driver-ad3c18566d12557c.dylib`<<crossbeam_utils::thread::ScopedThreadBuilder>::spawn<<rayon_core::ThreadPoolBuilder>::build_scoped<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#0}, rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, ()>::{closure#0} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 192
frame #5: 0x000000010cb13414 librustc_driver-ad3c18566d12557c.dylib`std::sys::backtrace::__rust_begin_short_backtrace::<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()> + 32
frame #6: 0x000000010cb230c8 librustc_driver-ad3c18566d12557c.dylib`<<std::thread::Builder>::spawn_unchecked_<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 456
frame #7: 0x000000010e776cc8 librustc_driver-ad3c18566d12557c.dylib`std::sys::pal::unix::thread::Thread::new::thread_start::hf11e21675a82323b + 52
frame #8: 0x000000018ac202e4 libsystem_pthread.dylib`_pthread_start + 136
thread #7, name = 'rustc'
frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010c1ba9c0 librustc_driver-ad3c18566d12557c.dylib`<jobserver::imp::Client>::acquire_allow_interrupts + 60
frame #2: 0x000000010cabe584 librustc_driver-ad3c18566d12557c.dylib`rustc_data_structures::jobserver::acquire_thread + 60
frame #3: 0x000000010c2726bc librustc_driver-ad3c18566d12557c.dylib`<rayon_core::registry::ThreadBuilder>::run + 176
frame #4: 0x000000010cb24070 librustc_driver-ad3c18566d12557c.dylib`<<crossbeam_utils::thread::ScopedThreadBuilder>::spawn<<rayon_core::ThreadPoolBuilder>::build_scoped<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#0}, rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, ()>::{closure#0} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 192
frame #5: 0x000000010cb13414 librustc_driver-ad3c18566d12557c.dylib`std::sys::backtrace::__rust_begin_short_backtrace::<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()> + 32
frame #6: 0x000000010cb230c8 librustc_driver-ad3c18566d12557c.dylib`<<std::thread::Builder>::spawn_unchecked_<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 456
frame #7: 0x000000010e776cc8 librustc_driver-ad3c18566d12557c.dylib`std::sys::pal::unix::thread::Thread::new::thread_start::hf11e21675a82323b + 52
frame #8: 0x000000018ac202e4 libsystem_pthread.dylib`_pthread_start + 136
thread #8, name = 'rustc'
frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010c1ba9c0 librustc_driver-ad3c18566d12557c.dylib`<jobserver::imp::Client>::acquire_allow_interrupts + 60
frame #2: 0x000000010cabe584 librustc_driver-ad3c18566d12557c.dylib`rustc_data_structures::jobserver::acquire_thread + 60
frame #3: 0x000000010c2726bc librustc_driver-ad3c18566d12557c.dylib`<rayon_core::registry::ThreadBuilder>::run + 176
frame #4: 0x000000010cb24070 librustc_driver-ad3c18566d12557c.dylib`<<crossbeam_utils::thread::ScopedThreadBuilder>::spawn<<rayon_core::ThreadPoolBuilder>::build_scoped<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#0}, rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, ()>::{closure#0} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 192
frame #5: 0x000000010cb13414 librustc_driver-ad3c18566d12557c.dylib`std::sys::backtrace::__rust_begin_short_backtrace::<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()> + 32
frame #6: 0x000000010cb230c8 librustc_driver-ad3c18566d12557c.dylib`<<std::thread::Builder>::spawn_unchecked_<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 456
frame #7: 0x000000010e776cc8 librustc_driver-ad3c18566d12557c.dylib`std::sys::pal::unix::thread::Thread::new::thread_start::hf11e21675a82323b + 52
frame #8: 0x000000018ac202e4 libsystem_pthread.dylib`_pthread_start + 136
thread #9, name = 'rustc'
frame #0: 0x000000018abdfab0 libsystem_kernel.dylib`read + 8
frame #1: 0x000000010c1ba9c0 librustc_driver-ad3c18566d12557c.dylib`<jobserver::imp::Client>::acquire_allow_interrupts + 60
frame #2: 0x000000010cabe584 librustc_driver-ad3c18566d12557c.dylib`rustc_data_structures::jobserver::acquire_thread + 60
frame #3: 0x000000010c2726bc librustc_driver-ad3c18566d12557c.dylib`<rayon_core::registry::ThreadBuilder>::run + 176
frame #4: 0x000000010cb24070 librustc_driver-ad3c18566d12557c.dylib`<<crossbeam_utils::thread::ScopedThreadBuilder>::spawn<<rayon_core::ThreadPoolBuilder>::build_scoped<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#0}, rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#3}::{closure#0}::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, ()>::{closure#0} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 192
frame #5: 0x000000010cb13414 librustc_driver-ad3c18566d12557c.dylib`std::sys::backtrace::__rust_begin_short_backtrace::<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()> + 32
frame #6: 0x000000010cb230c8 librustc_driver-ad3c18566d12557c.dylib`<<std::thread::Builder>::spawn_unchecked_<alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output = ()> + core::marker::Send>, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} + 456
frame #7: 0x000000010e776cc8 librustc_driver-ad3c18566d12557c.dylib`std::sys::pal::unix::thread::Thread::new::thread_start::hf11e21675a82323b + 52
frame #8: 0x000000018ac202e4 libsystem_pthread.dylib`_pthread_start + 136
```
```
rustc 1.85.0-nightly (c44b3d50f 2024-12-03)
binary: rustc
commit-hash: c44b3d50fea96a3e0417e8264c16ea21a0a3fca2
commit-date: 2024-12-03
host: aarch64-apple-darwin
release: 1.85.0-nightly
LLVM version: 19.1.4
```
| T-compiler,C-bug,I-hang,WG-compiler-parallel | low | Critical |
2,718,797,483 | vscode | Hover: Make the width a bit wider so that the action labels fit | Create the following TS file:
```
foo() {
return (
<div className="page-container profile">
<div className="sidebar-menu">
}
```
- Hover over the error at the first `className`

> It would be nice if the hover can be a bit wider if the action labels don't fit.
_Originally posted by @aeschli in [#232047](https://github.com/microsoft/vscode/issues/232047#issuecomment-2518568805)_
Maybe we could measure how much space the action labels require; there should still be a maximum width.
2,718,814,956 | flutter | [webview_flutter_android] Allow restricting File Access in WebView for Android SDK Versions < 29 | ### What package does this bug report belong to?
webview_flutter
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
_fe_analyzer_shared:
dependency: transitive
description:
name: _fe_analyzer_shared
sha256: f256b0c0ba6c7577c15e2e4e114755640a875e885099367bf6e012b19314c834
url: "https://pub.dev"
source: hosted
version: "72.0.0"
_macros:
dependency: transitive
description: dart
source: sdk
version: "0.3.2"
analyzer:
dependency: transitive
description:
name: analyzer
sha256: b652861553cd3990d8ed361f7979dc6d7053a9ac8843fa73820ab68ce5410139
url: "https://pub.dev"
source: hosted
version: "6.7.0"
args:
dependency: transitive
description:
name: args
sha256: bf9f5caeea8d8fe6721a9c358dd8a5c1947b27f1cfaa18b39c301273594919e6
url: "https://pub.dev"
source: hosted
version: "2.6.0"
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
build:
dependency: transitive
description:
name: build
sha256: "80184af8b6cb3e5c1c4ec6d8544d27711700bc3e6d2efad04238c7b5290889f0"
url: "https://pub.dev"
source: hosted
version: "2.4.1"
build_config:
dependency: transitive
description:
name: build_config
sha256: bf80fcfb46a29945b423bd9aad884590fb1dc69b330a4d4700cac476af1708d1
url: "https://pub.dev"
source: hosted
version: "1.1.1"
build_daemon:
dependency: transitive
description:
name: build_daemon
sha256: "79b2aef6ac2ed00046867ed354c88778c9c0f029df8a20fe10b5436826721ef9"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
build_resolvers:
dependency: transitive
description:
name: build_resolvers
sha256: "339086358431fa15d7eca8b6a36e5d783728cf025e559b834f4609a1fcfb7b0a"
url: "https://pub.dev"
source: hosted
version: "2.4.2"
build_runner:
dependency: "direct dev"
description:
name: build_runner
sha256: "028819cfb90051c6b5440c7e574d1896f8037e3c96cf17aaeb054c9311cfbf4d"
url: "https://pub.dev"
source: hosted
version: "2.4.13"
build_runner_core:
dependency: transitive
description:
name: build_runner_core
sha256: f8126682b87a7282a339b871298cc12009cb67109cfa1614d6436fb0289193e0
url: "https://pub.dev"
source: hosted
version: "7.3.2"
built_collection:
dependency: transitive
description:
name: built_collection
sha256: "376e3dd27b51ea877c28d525560790aee2e6fbb5f20e2f85d5081027d94e2100"
url: "https://pub.dev"
source: hosted
version: "5.1.1"
built_value:
dependency: transitive
description:
name: built_value
sha256: c7913a9737ee4007efedaffc968c049fd0f3d0e49109e778edc10de9426005cb
url: "https://pub.dev"
source: hosted
version: "8.9.2"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
checked_yaml:
dependency: transitive
description:
name: checked_yaml
sha256: feb6bed21949061731a7a75fc5d2aa727cf160b91af9a3e464c5e3a32e28b5ff
url: "https://pub.dev"
source: hosted
version: "2.0.3"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
code_builder:
dependency: transitive
description:
name: code_builder
sha256: "0ec10bf4a89e4c613960bf1e8b42c64127021740fb21640c29c909826a5eea3e"
url: "https://pub.dev"
source: hosted
version: "4.10.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
convert:
dependency: transitive
description:
name: convert
sha256: b30acd5944035672bc15c6b7a8b47d773e41e2f17de064350988c5d02adb1c68
url: "https://pub.dev"
source: hosted
version: "3.1.2"
crypto:
dependency: transitive
description:
name: crypto
sha256: "1e445881f28f22d6140f181e07737b22f1e099a5e1ff94b0af2f9e4a463f4855"
url: "https://pub.dev"
source: hosted
version: "3.0.6"
dart_style:
dependency: transitive
description:
name: dart_style
sha256: "7856d364b589d1f08986e140938578ed36ed948581fbc3bc9aef1805039ac5ab"
url: "https://pub.dev"
source: hosted
version: "2.3.7"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
file:
dependency: transitive
description:
name: file
sha256: a3b4f84adafef897088c160faf7dfffb7696046cb13ae90b508c2cbc95d3b8d4
url: "https://pub.dev"
source: hosted
version: "7.0.1"
fixnum:
dependency: transitive
description:
name: fixnum
sha256: b6dc7065e46c974bc7c5f143080a6764ec7a4be6da1285ececdc37be96de53be
url: "https://pub.dev"
source: hosted
version: "1.1.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
frontend_server_client:
dependency: transitive
description:
name: frontend_server_client
sha256: f64a0333a82f30b0cca061bc3d143813a486dc086b574bfb233b7c1372427694
url: "https://pub.dev"
source: hosted
version: "4.0.0"
glob:
dependency: transitive
description:
name: glob
sha256: "0e7014b3b7d4dac1ca4d6114f82bf1782ee86745b9b42a92c9289c23d8a0ab63"
url: "https://pub.dev"
source: hosted
version: "2.1.2"
graphs:
dependency: transitive
description:
name: graphs
sha256: "741bbf84165310a68ff28fe9e727332eef1407342fca52759cb21ad8177bb8d0"
url: "https://pub.dev"
source: hosted
version: "2.3.2"
http_multi_server:
dependency: transitive
description:
name: http_multi_server
sha256: "97486f20f9c2f7be8f514851703d0119c3596d14ea63227af6f7a481ef2b2f8b"
url: "https://pub.dev"
source: hosted
version: "3.2.1"
http_parser:
dependency: transitive
description:
name: http_parser
sha256: "2aa08ce0341cc9b354a498388e30986515406668dbcc4f7c950c3e715496693b"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
io:
dependency: transitive
description:
name: io
sha256: "2ec25704aba361659e10e3e5f5d672068d332fc8ac516421d483a11e5cbd061e"
url: "https://pub.dev"
source: hosted
version: "1.0.4"
js:
dependency: transitive
description:
name: js
sha256: c1b2e9b5ea78c45e1a0788d29606ba27dc5f71f019f32ca5140f61ef071838cf
url: "https://pub.dev"
source: hosted
version: "0.7.1"
json_annotation:
dependency: transitive
description:
name: json_annotation
sha256: "1ce844379ca14835a50d2f019a3099f419082cfdd231cd86a142af94dd5c6bb1"
url: "https://pub.dev"
source: hosted
version: "4.9.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
logging:
dependency: transitive
description:
name: logging
sha256: c8245ada5f1717ed44271ed1c26b8ce85ca3228fd2ffdb75468ab01979309d61
url: "https://pub.dev"
source: hosted
version: "1.3.0"
macros:
dependency: transitive
description:
name: macros
sha256: "0acaed5d6b7eab89f63350bccd82119e6c602df0f391260d0e32b5e23db79536"
url: "https://pub.dev"
source: hosted
version: "0.1.2-main.4"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
mime:
dependency: transitive
description:
name: mime
sha256: "41a20518f0cb1256669420fdba0cd90d21561e560ac240f26ef8322e45bb7ed6"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
mockito:
dependency: "direct dev"
description:
name: mockito
sha256: "6841eed20a7befac0ce07df8116c8b8233ed1f4486a7647c7fc5a02ae6163917"
url: "https://pub.dev"
source: hosted
version: "5.4.4"
package_config:
dependency: transitive
description:
name: package_config
sha256: "1c5b77ccc91e4823a5af61ee74e6b972db1ef98c2ff5a18d3161c982a55448bd"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
pigeon:
dependency: "direct dev"
description:
name: pigeon
sha256: c0cf1bb291913ed09a2960986608710b4a27d494822092f5d880d701153e9b72
url: "https://pub.dev"
source: hosted
version: "22.6.4"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
pool:
dependency: transitive
description:
name: pool
sha256: "20fe868b6314b322ea036ba325e6fc0711a22948856475e2c2b6306e8ab39c2a"
url: "https://pub.dev"
source: hosted
version: "1.5.1"
pub_semver:
dependency: transitive
description:
name: pub_semver
sha256: "40d3ab1bbd474c4c2328c91e3a7df8c6dd629b79ece4c4bd04bee496a224fb0c"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
pubspec_parse:
dependency: transitive
description:
name: pubspec_parse
sha256: c799b721d79eb6ee6fa56f00c04b472dcd44a30d258fac2174a6ec57302678f8
url: "https://pub.dev"
source: hosted
version: "1.3.0"
shelf:
dependency: transitive
description:
name: shelf
sha256: ad29c505aee705f41a4d8963641f91ac4cee3c8fad5947e033390a7bd8180fa4
url: "https://pub.dev"
source: hosted
version: "1.4.1"
shelf_web_socket:
dependency: transitive
description:
name: shelf_web_socket
sha256: cc36c297b52866d203dbf9332263c94becc2fe0ceaa9681d07b6ef9807023b67
url: "https://pub.dev"
source: hosted
version: "2.0.1"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_gen:
dependency: transitive
description:
name: source_gen
sha256: "14658ba5f669685cd3d63701d01b31ea748310f7ab854e471962670abcf57832"
url: "https://pub.dev"
source: hosted
version: "1.5.0"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
stream_transform:
dependency: transitive
description:
name: stream_transform
sha256: "14a00e794c7c11aa145a170587321aedce29769c08d7f58b1d141da75e3b1c6f"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
timing:
dependency: transitive
description:
name: timing
sha256: "70a3b636575d4163c477e6de42f247a23b315ae20e86442bebe32d3cabf61c32"
url: "https://pub.dev"
source: hosted
version: "1.0.1"
typed_data:
dependency: transitive
description:
name: typed_data
sha256: f9049c039ebfeb4cf7a7104a675823cd72dba8297f264b6637062516699fa006
url: "https://pub.dev"
source: hosted
version: "1.4.0"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
watcher:
dependency: transitive
description:
name: watcher
sha256: "3d2ad6751b3c16cf07c7fca317a1413b3f26530319181b37e3b9039b84fc01d8"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web_socket:
dependency: transitive
description:
name: web_socket
sha256: "3c12d96c0c9a4eec095246debcea7b86c0324f22df69893d538fcc6f1b8cce83"
url: "https://pub.dev"
source: hosted
version: "0.1.6"
web_socket_channel:
dependency: transitive
description:
name: web_socket_channel
sha256: "9f187088ed104edd8662ca07af4b124465893caf063ba29758f97af57e61da8f"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
webview_flutter_platform_interface:
dependency: "direct main"
description:
name: webview_flutter_platform_interface
sha256: d937581d6e558908d7ae3dc1989c4f87b786891ab47bb9df7de548a151779d8d
url: "https://pub.dev"
source: hosted
version: "2.10.0"
yaml:
dependency: transitive
description:
name: yaml
sha256: "75769501ea3489fca56601ff33454fe45507ea3bfb014161abc3b43ae25989d5"
url: "https://pub.dev"
source: hosted
version: "3.1.2"
sdks:
dart: ">=3.5.0 <4.0.0"
flutter: ">=3.24.0"
```
</details>
### Steps to reproduce
1. Create a `WebViewWidget` instance using the `AndroidWebViewController` in an Android app targeting SDK version <= 29.
2. Attempt to configure file access settings via the available controller methods.
3. Observe that access to directly modify `_webView.settings` or utilize `setAllowFileAccess` is not exposed through the `AndroidWebViewController`.
### Expected results
File access should be manageable through accessible methods in `AndroidWebViewController`. The default should restrict unauthorized file access.
Android applications targeting SDK version <= 29 that use this configuration without adjustment may be susceptible to unintended file access, which could be exploited for unauthorized data retrieval.
### Actual results
Developers are unable to explicitly set or manage file access configurations due to the protected nature of `_webView.settings`, limiting direct access to `setAllowFileAccess`.
### Code sample
<details open><summary>Code sample</summary>
Unfortunately, a precise code sample demonstrating this issue cannot be provided due to the encapsulated nature of access controls within `AndroidWebViewController`. The current API does not directly expose the settings needed to adjust file access permissions.
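Purely as an illustration of what this request is asking for, the desired usage might look like the sketch below; `setAllowFileAccess` here is hypothetical and is not exposed by the current API:
```dart
// Hypothetical sketch: setAllowFileAccess does not exist on
// AndroidWebViewController today, which is exactly the gap this issue describes.
final AndroidWebViewController controller =
    AndroidWebViewController(AndroidWebViewControllerCreationParams());
await controller.setAllowFileAccess(false); // desired: restrict file:// access
```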
</details>
### Screenshots or Videos
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[โ] Flutter (Channel stable, 3.24.3, on macOS 15.1.1 24B91 darwin-arm64, locale
en-US)
[โ] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[โ] Xcode - develop for iOS and macOS (Xcode 16.0)
[โ] Chrome - develop for the web
[โ] Android Studio (version 2024.2)
[โ] IntelliJ IDEA Ultimate Edition (version 2024.3)
[โ] VS Code (version 1.95.3)
[โ] Connected device (5 available)
[โ] Network resources
โข No issues found!
```
</details>
| platform-android,p: webview,package,e: OS-version specific,P2,team-android,triaged-android | low | Critical |
2,718,855,580 | godot | RenderingDevice: Extraneous wait on swapchain adds 1-2 frames of display latency | - *Production edit: Related to https://github.com/godotengine/godot/issues/71795.*
### Tested versions
Reproducible in latest git
### System information
Godot v4.4.dev (9e6098432) - Windows 10.0.22631 - Multi-window, 2 monitors - Direct3D 12 (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6614) - AMD Ryzen 7 5800X3D 8-Core Processor (16 threads)
### Issue description
This bug was found independently by at least two other people, but up until now hasn't been documented in the issue tracker.
While testing display latency using the [latency tester](https://github.com/KeyboardDanni/godot-latency-tester), I noticed that both Vulkan and D3D12 have one additional frame of latency versus the Compatibility (OpenGL) renderer, even when queueing up the same number of frames. I decided to run the tester through the PIX graphics debugger on D3D12 to see what I could find.
With a double-buffered frame queue:

Forcing a single-buffered frame queue:

Specifically, notice the blue markers on the Monitor row. Even when we don't buffer any additional frames, we end up waiting until the next V-Sync before we execute the command list. And once those commands do execute, we still have to wait for another V-Sync before we see the results. So we end up waiting two V-Syncs even though we should only wait for one. If we can eliminate the unnecessary V-Sync wait, we should be able to save a frame of latency without impacting parallelism (in fact, performance may actually improve).
Oddly enough, when we have a separate present queue via `#define FORCE_SEPARATE_PRESENT_QUEUE 1`, the latency is *worse*, at **2 added frames**.
https://github.com/godotengine/godot/pull/99257 has a proposed fix by splitting the rendering work and presentation into separate command lists, but it does not seem to solve the issue on my system (Windows 11 desktop, D3D12, nVidia). It does improve the latency when forcing a separate present queue, but it does not completely eliminate the added latency, as the command list is still waiting on the swapchain.
One possible cause of this issue, even with the above PR, might be due to only using one framebuffer total instead of one framebuffer per swapchain image. If the framebuffer is busy waiting on the next available swapchain image, that would explain why none of the command lists can start until the next V-Sync.
Note to bugsquad: While https://github.com/godotengine/godot/issues/75830 also deals with latency in RenderingDevice, this issue is meant to track and find a solution for one specific cause. I have identified at least three or four specific areas for improvement in RenderingDevice latency.
### Steps to reproduce
- Grab https://github.com/KeyboardDanni/godot-latency-tester and add `rendering_device/driver.windows="d3d12"` to `project.godot` (a placement sketch follows this list).
- Run the project through the PIX graphics debugger (make sure "Launch For GPU Capture" is unchecked).
- Make sure the Timing Capture options include all the GPU checkboxes, and click Start Timing Capture.
- Stop the capture after some amount of time.
- Zoom in on the timeline and click on one of the Command List blocks in the API Queue row, then follow the arrows. If you don't see the arrows, enable the Thread rows one by one in the Lane Selector on the right until they appear.
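For the first step, a sketch of where the setting is assumed to live in `project.godot` (the section placement is my assumption):
```ini
; project.godot (sketch; placement under [rendering] is assumed)
[rendering]
rendering_device/driver.windows="d3d12"
```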
### Minimal reproduction project (MRP)
https://github.com/KeyboardDanni/godot-latency-tester | bug,topic:rendering,topic:2d,topic:3d,performance | low | Critical |
2,718,871,122 | TypeScript | Inconsistent narrowing of adjacently tagged unions | ### ๐ Search Terms
narrow adjacently tagged union
### ๐ Version & Regression Information
This changed between versions v4.8.4 and v4.9.5:
Prior to v4.9.5, TypeScript behaved the same for both `SomeInterface | string | undefined` and `SomeInterface | string`; that behavior was still undesired, but at least it was consistent.
### โฏ Playground Link
https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241204#code/JYOwLgpgTgZghgYwgAgMpjpAPAFQJ4AOEAfMgN4BQyyAzhpAFzL5EDcFAvhRaJLIigCqNaADk4AWxSVqMYFDpM6UUAHN21ADZw6AfiVgVIdZ24B6M8gDCcEAHIwyAK4hgAexDIQcKFDcB3ZDcYWkM1ZAAfIIAjACsIBDAGbgQPOi9JCHRMCABGZABecg5kHWcQAGsQAM8ymWQANzhNJwgDIxNqOhymbOxlNWJ2Eqj6ppa25GExTI1QnrR6CCxpqHEpIc52HhCACjBCCGCMqT68gDpx1sKCooAiAeM7gEpyKhOspdzz7shaTGANDkEBoixyWEeqk2XAoFmQACEnI5gI5-G4oBVQcAQgBJUoAE3x5XxEDkIAgRLAbmQYAAFigJMAAB7JCipEDpbynJYAJkKxVKoJcVRqgre1CukxcJLJFLmv0mZyw0tJoAp0Mi4sazVa7TU8qWvSWELCxg1o3ekqYq3WEANCyVNsy0O22OQ+0Oxy5nxyPMuOpQt3ukJeWu9Zz9Cv+YEBwNBSsh0O45ks8I8TlBuSYAHkCAQ3DQUSh2fiUe5PID7I52YW6BBwDS3G5WW6PUQvZkzt9JTd7m44gkwHdkAAyEfIO5yBRD5CgD5d-0TV71cNfH5LaOx4AgsHYJ0bYbcVsHdshVe+xfXIMT-vxRLDscTqd0Ydz8+QP2S5fvd8QSMbmgASBbd42NfcSEPWFU3TUEeSYKx6QQCpwgzcIAANATQ5B8TcEEq2QNEMSCTxojcOlynLGgWz2QF0A6XZf27ANnm-ahGPXHJN2AncE1NKFINbWi+IYzteUvCAWLDUSLyjQCY240DwUTSCYBcRJy1nGg6LUXYrVKEA8GeJge0BUIOi1KAIDAJwoE8E8jhCHtrwePi7lMZM4QAQRAIkYHRGl6WQSzUigfEABoAtM0y6RQezkDxDMKUbZBoicYBNEpekRAotJWTigARBI3AkfMRHxFhlgq0gigq5AICZSAfNBCr3l0LVqAlAMmAquYOvmRhd0qw5Nj6rhqCYckGmgVggA
### ๐ป Code
```ts
interface State<Type> {
state: Type;
}
interface UserName {
first: string;
last?: string;
}
// Can't union narrow of string | object:
const nameState1 = {} as unknown as {
value: string;
state: State<string>;
} | {
value: UserName;
state: State<UserName>;
};
if (typeof nameState1.value === "string") {
nameState1.state satisfies State<string>;
// ^^^^^^^^^
// Type 'State<string> | State<UserName>' does not satisfy the expected type 'State<string>'.
// Type 'State<UserName>' is not assignable to type 'State<string>'.
// Type 'UserName' is not assignable to type 'string'.(1360)
}
// But it works if I add undefined to the mix:
const nameState2 = {} as unknown as {
value: undefined;
state: State<undefined>;
} | {
value: string;
state: State<string>;
} | {
value: UserName;
state: State<UserName>;
};
if (typeof nameState2.value === "string") {
nameState2.state satisfies State<string>;
}
```
### ๐ Actual behavior
`nameState1` won't narrow down to `State<string>`, but when `undefined` is in the union (`nameState2`) it does narrow down with the same condition.
### ๐ Expected behavior
I would expect `nameState1` to correctly narrow down to `State<string>` or at least not to change behavior when adding `undefined` to the mix.
### Additional information about the issue
This inconsistency itself varies across different narrowing methods:
```ts
// Bonus 1: Opposite condition isn't consistent too:
if (typeof nameState1.value === "object" && "first" in nameState1.value) {
nameState1.state satisfies State<UserName>;
// ^^^^^^^^^
}
if (typeof nameState2.value === "object" && "first" in nameState2.value) {
// No type error!
nameState2.state satisfies State<UserName>;
}
// Bonus 2: Checking using `is` doesn't work on both unions:
if (isString(nameState1.value)) {
nameState1.state satisfies State<string>;
// ^^^^^^^^^
}
if (isString(nameState2.value)) {
nameState2.state satisfies State<string>;
// ^^^^^^^^^
}
function isString(value: any): value is string {
return typeof value === "string"
}
``` | Suggestion,Experimentation Needed | low | Critical |
2,718,882,631 | PowerToys | Feature Suggestion: Alt + Tab Enhancement to Show Only Visible Windows | ### Description of the new feature / enhancement
I would like to suggest adding an option in FancyZones to modify how the **Alt + Tab** shortcut works. Specifically, the idea is to list only the first visible window in each zone, instead of listing all windows in all zones.
This option would be **optional** and user-activated, particularly useful for users who have multiple windows per zone and use the **PageUp** and **PageDn** shortcuts to switch between windows within zones.
This would allow users to quickly cycle through visible windows and avoid cluttering the Alt + Tab list with unnecessary windows.
### Scenario when this would be used?
This feature would be beneficial in scenarios where users work with multiple applications in a multi-zone setup. For example, when using FancyZones to divide the screen into different zones and managing multiple windows within each zone, users could activate this feature to focus on the first visible window in each zone while ignoring windows that are not immediately in view. It would be particularly useful for users who frequently switch between windows in a highly organized workspace, where efficiency and focus are key. This would help reduce visual clutter and make the workflow faster and more streamlined.
If it's not possible to integrate this directly into Alt + Tab, it would be great to have a separate customizable shortcut that lets users cycle through the first window in each zone. This would maintain workflow control while offering a quick way to switch between active windows.
I believe this feature would significantly improve the user experience, especially for those who use FancyZones to manage multiple windows and zones. By allowing users to cycle only through visible windows, this functionality would reduce cognitive load and increase efficiency when working with multiple applications at once.
### Supporting information
_No response_ | Idea-New PowerToy,Needs-Triage | low | Minor |
2,718,907,313 | vscode | `Add line comment` should await the tokenization to get the correct language at the position |
Does this issue occur when all extensions are disabled?: Yes
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
Version: 1.95.3
Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813
Steps to Reproduce:
1. All VS Code windows must be closed
2. Open the IDE; at startup it must reopen the previous project on a tab with an `html` file, with the input cursor placed inside a `script` tag
3. Try to use the comment line command (`ctrl + /`) to comment out JS code inside this `script` tag
Actual result:
nothing happens (until you click on another tab with a `.js` file)
Expected result:
the line should be commented
This happens from time to time, but not always.
2,718,912,154 | next.js | Server action with redirect to external URL returns undefined to client | ### Link to the code that reproduces this issue
https://github.com/alexeden/server-action-returns-undefined
### To Reproduce
1. Clone repo, install, `npm dev`
2. Open browser & console
3. Click the button that calls the action with a relative-URL redirect; note that the action result isn't logged (expected)
4. Click the button that calls the action with a redirect to an external URL; the return value is logged as `undefined`

### Current vs. Expected behavior
**Current** Server actions that redirect to external URLs return `undefined` to the client component, even if the action itself returns a real value
**Expected** Server actions with redirects behave the same regardless of the URL they redirect to, insofar as their callers can expect `never` to mean that they don't return anything
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 22.5.0: Mon Apr 24 20:53:19 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T6020
Available memory (MB): 32768
Available CPU cores: 12
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: 1.22.22
pnpm: 9.14.4
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: N/A
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
This breaks (albeit short-lived, since the redirect still proceeds) any subsequent code in a client that calls the action if the action usually returns normal values, e.g. data from a server.
Discovered the issue on a project when someone applied network throttling and noticed that a form component was flashing an error message right before the browser redirected because it was attempting to do something with an `undefined` value.
I tried different spins on this like using `useTransition`; no go. | bug,Navigation,Runtime | low | Critical |
2,718,923,298 | flutter | [ios]Further investigate iOS 18.2 WebView behavior | This is a follow-up for the original issue [here](https://github.com/flutter/flutter/issues/158961)
On iOS 18.2 the web view's gesture recognizer likely cached the old state of our `FlutterDelayingGestureRecognizer` (see PR description [here](https://github.com/flutter/engine/pull/56804)).
The current workaround doesn't address a problem where a platform view receives the gesture even when it shouldn't have. (See [discussion here](https://github.com/flutter/flutter/issues/158961#issuecomment-2518741174))
Also this is potentially a bug on iOS, so we may wanna create a reproducible project in UIKit and file the radar with Apple. | platform-ios,engine,P2,team-ios,triaged-ios | low | Critical |
2,718,924,060 | vscode | Confusion on 'Filter Coverage To Test' | Testing #234946
I'm a bit confused about how, if there is only one relevant test, moving from 'All Tests' to a single test changes the coverage. I would expect 'All Tests' to be the intersection of all the component tests that you could filter down to?
Perhaps this is just an implementation quirk? (Looks like 'all tests' matches things like comments while the individual tests don't?)
https://github.com/user-attachments/assets/4d70e51b-2a25-481a-b90c-6fc7e2e4de7e
| polish,testing | low | Minor |
2,718,930,959 | godot | No velocity pivot equivalent for radial acceleration in `ParticleProcessMaterial` | ### Tested versions
- Reproducible in: 4.3.dev5
### System information
macOS 15.1.1 - Godot v4.3.dev5 - Forward+ - 14" MacBook M1 Pro
### Issue description
I'm trying to create a GPUParticles2D that causes a cluster of particles to follow a point (specifically, the cursor). ParticleProcessMaterial allows changing the velocity pivot to direct where radial acceleration moves the particles toward; however, radial acceleration always goes toward the origin, with no equivalent "acceleration pivot" option available.
Moving the GPUParticles2D node itself presents issues as the cursor may leave the viewport and there's no method to disable the visibility rect, but I'm open to workarounds.
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A | feature proposal,topic:2d,topic:particles,topic:vfx | low | Minor |
2,718,934,457 | vscode | Copy-paste screws up indentation for no reason AND alters contents of pasted string | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3
- OS Version: Manjaro Linux
Steps to Reproduce:
=
1. I start with this code:

2. Then I start adding this:

3. I copy this portion of code from above:

4. And I paste it here:

Expected behavior:
=
I should get this:

Observed behavior:
=
I get this instead:

This is completely bonkers! Not only is this **not sensible indentation**, but crucially, the pasted code **alters the contents of the string**!!!!!!
Sometimes, when pasting a multi-line string into a region of code with different indentation than the one the original string was copied from, one doesn't get the desired indentation and that's expected because the contents of the string are sacred. The IDE doesn't know whether the whitespaces inside the string matter, so, at least by default, it preserves the contents of the string at the expense of indentation. And that's expected.
But here, the resulting indentation makes no sense AND the contents of the pasted string are altered with respect to the copied one!! This makes no sense whatsoever! The indentation of the code where the new code is pasted is the same as where the code is copied from, so there's no need to alter the indentation in the first place, but the IDE is removing spaces for no reason whatsoever AND it's changing the contents of a pasted string, even unnecessarily. | bug,info-needed,editor-autoindent | low | Critical |
2,718,935,471 | flutter | Create API to set keyboard brightness on Android | ### Steps to reproduce
This is mostly a re-open of #75521
Dismissing it as invalid is wrong, based on the fact that, for example, .NET MAUI is able to do it just fine. Therefore there is in fact a way to control it, even if it may be indirect.
Another fact that makes me think Flutter already somehow alters the keyboard brightness on Android is that when I set my phone to Night mode, the keyboard I use changes in other applications to respect that. But in Flutter the keyboard stays bright, as if Flutter was forcing it to that state.
### Expected results
Brightness.dark should change the keyboard's appearance to dark (given that the keyboard is reasonably smart about it)
### Actual results
The keyboard is bright no matter the app theme, the phone's theme/night mode being enabled
### Code sample
`flutter create --empty` and add a `TextField`
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
fvm flutter doctor -v
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.3 23D56 darwin-arm64, locale en-CZ)
• Flutter version 3.24.3 on channel stable at /Users/michalhazdra/fvm/versions/3.24.3
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (3 months ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/xx/Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• ANDROID_HOME = /Users/xx/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15E204a
• CocoaPods version 1.15.2
[✗] Chrome - develop for the web (Cannot find Chrome executable at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Android Studio (version 2023.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
[✓] IntelliJ IDEA Ultimate Edition (version 2023.2.3)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.95.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| c: new feature,platform-android,P2,team-android,triaged-android | low | Major |
2,718,943,225 | svelte | css_nesting_selector_invalid_placement false positive using tailwind 4 import reference | ### Describe the bug
Have a style block like this
```svelte
<style>
@import '../../app.css' reference;
</style>
```
using the new Tailwind 4 syntax, [see](https://github.com/tailwindlabs/tailwindcss/pull/15228)
The Svelte diagnostic emits a top-level warning: `Nesting selectors can only be used inside a rule or as the first selector inside a lone ':global(...)'`
### Reproduction
n/a
### Logs
_No response_
### System Info
```shell
npmPackages:
svelte: ^5.6.2 => 5.6.2
```
### Severity
annoyance | css | low | Critical |
2,718,969,977 | storybook | [Bug]: expect.any(String) always fails | ### Describe the bug
`expect.any` is modified due to `@storybook/instrumenter` so it's asymmetric matching capabilities are broken.
```ts
import { expect} from '@storybook/test';
expect('hi').toEqual(expect.any(String)) // throws an error
```
running the above line using expect from `vitest` passes as expected.
### Reproduction link
https://stackblitz.com/edit/github-uhys1k?file=src%2Fstories%2FButton.stories.ts
### Reproduction steps
1. Go to above link.
2. Open Button -> Primary story
3. View "Instrumentation" add-on
4. See error
### System
```bash
Storybook Environment Info:
System:
OS: macOS 14.6.1
CPU: (12) arm64 Apple M2 Max
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.15.1 - ~/.nodenv/versions/20.15.1/bin/node
Yarn: 3.2.4 - ~/.nodenv/versions/20.15.1/bin/yarn <----- active
npm: 10.7.0 - ~/.nodenv/versions/20.15.1/bin/npm
Browsers:
Chrome: 131.0.6778.109
Safari: 17.6
npmPackages:
chromatic: ^11.5.1 => 11.5.1
```
### Additional context
_No response_ | bug,sev:S2,test utilities | low | Critical |
2,718,984,725 | go | internal/coverage/cfile: TestCoverageApis/emitToNonexistentDir failures | ```
#!watchflakes
default <- pkg == "internal/coverage/cfile" && test == "TestCoverageApis/emitToNonexistentDir"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8729443650529759569)):
=== RUN TestCoverageApis/emitToNonexistentDir
=== PAUSE TestCoverageApis/emitToNonexistentDir
=== CONT TestCoverageApis/emitToNonexistentDir
emitdata_test.go:166: running: /home/swarming/.swarming/w/ir/x/t/TestCoverageApis2155603058/001/build1/harness.exe -tp emitToNonexistentDir -o /home/swarming/.swarming/w/ir/x/t/TestCoverageApis2155603058/001/emitToNonexistentDir-edir-y with rdir=/home/swarming/.swarming/w/ir/x/t/TestCoverageApis2155603058/001/emitToNonexistentDir-rdir-y and GOCOVERDIR=false
emitdata_test.go:166: running: /home/swarming/.swarming/w/ir/x/t/TestCoverageApis2155603058/001/build1/harness.exe -tp emitToNonexistentDir -o /home/swarming/.swarming/w/ir/x/t/TestCoverageApis2155603058/001/emitToNonexistentDir-edir-x with rdir=/home/swarming/.swarming/w/ir/x/t/TestCoverageApis2155603058/001/emitToNonexistentDir-rdir-x and GOCOVERDIR=true
emitdata_test.go:293: I run last.
internal error in coverage meta-data tracking:
encountered bad pkgID: 0 at slot: 3420 fnID: 6 numCtrs: 1
list of hard-coded runtime package IDs needs revising.
...
fatal error: exit hook invoked panic
goroutine 1 gp=0xc000004380 m=0 mp=0x70e140 [running]:
runtime.throw({0x5b98cd?, 0xc000004380?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:1099 +0x92 fp=0xc0000768c0 sp=0xc000076890 pc=0x4e4e72
internal/runtime/exithook.Run.func1()
/home/swarming/.swarming/w/ir/x/w/goroot/src/internal/runtime/exithook/hooks.go:69 +0x55 fp=0xc0000768e0 sp=0xc0000768c0 pc=0x503795
panic({0x5ab3e0?, 0xc0001d0030?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:787 +0x202 fp=0xc000076990 sp=0xc0000768e0 pc=0x4e48c2
runtime.goPanicSliceAcap(0x10000623d, 0x6c42)
...
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/proc.go:435 +0x24a fp=0xc00005e630 sp=0xc00005e610 pc=0x4e518a
runtime.runfinq()
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mfinal.go:196 +0x3ce fp=0xc00005e7e0 sp=0xc00005e630 pc=0x4288ee
runtime.goexit({})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc00005e7e8 sp=0xc00005e7e0 pc=0x4eee01
created by runtime.createfing in goroutine 1
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mfinal.go:166 +0x86
emitdata_test.go:294: running 'harness -tp emitToNonexistentDir': exit status 2
--- FAIL: TestCoverageApis/emitToNonexistentDir (0.05s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,719,072,990 | ollama | KV Cache quants run into issues every couple of messages. | ### What is the issue?
This is the error message I run into when running my scripts with KV cache q4_0 or q8_0. These are the models:
```
gemma2:27b-instruct-q4_0 (will be switching to q4_K_S)
minicpm-v-2.6-8b-q8_0
```
```
Traceback (most recent call last):
File "C:\Users\user\PycharmProjects\vector_companion\vector_companion\main.py", line 564, in <module>
config.asyncio.run(main())
File "C:\Users\user\.conda\envs\vector_companion\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Users\user\.conda\envs\vector_companion\lib\asyncio\base_events.py", line 647, in run_until_complete
return future.result()
File "C:\Users\user\PycharmProjects\vector_companion\vector_companion\main.py", line 520, in main
await queue_agent_responses(
File "C:\Users\user\PycharmProjects\vector_companion\vector_companion\main.py", line 178, in queue_agent_responses
await config.asyncio.gather(process_sentences(), play_audio_queue())
File "C:\Users\user\PycharmProjects\vector_companion\vector_companion\main.py", line 157, in process_sentences
async for sentence in sentence_generator:
File "C:\Users\user\PycharmProjects\vector_companion\vector_companion\config\config.py", line 109, in fetch_stream
for chunk in stream:
File "C:\Users\user\.conda\envs\vector_companion\lib\site-packages\ollama\_client.py", line 90, in _stream
raise ResponseError(e)
ollama._types.ResponseError: an error was encountered while running the model: read tcp 127.0.0.1:34105->127.0.0.1:34102: wsarecv: An existing connection was forcibly closed by the remote host.
```
So when I look at the server log I see this:
```
C:\a\ollama\ollama\llama\ggml-cuda\cpy.cu:531: ggml_cuda_cpy: unsupported type combination (q4_0 to f32)
time=2024-12-04T19:38:14.673-05:00 level=DEBUG source=server.go:1092 msg="stopping llama server"
[GIN] 2024/12/04 - 19:38:14 | 200 | 5.073219s | 127.0.0.1 | POST "/api/chat"
time=2024-12-04T19:38:14.674-05:00 level=DEBUG source=sched.go:407 msg="context for request finished"
time=2024-12-04T19:38:14.674-05:00 level=DEBUG source=sched.go:339 msg="runner with non-zero duration has gone idle, adding timer" modelPath=C:\Users\carlo\.ollama\models\blobs\sha256-d7e4b00a7d7a8d03d4eed9b0f3f61a427e9f0fc5dea6aeb414e41dee23dc8ecc duration=2562047h47m16.854775807s
time=2024-12-04T19:38:14.674-05:00 level=DEBUG source=sched.go:357 msg="after processing request finished event" modelPath=C:\Users\carlo\.ollama\models\blobs\sha256-d7e4b00a7d7a8d03d4eed9b0f3f61a427e9f0fc5dea6aeb414e41dee23dc8ecc refCount=0
```
This is per the RC uploaded today.
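For reference, here is a minimal sketch of the kind of streaming call my scripts make (the prompt is a placeholder, and the environment variables mentioned in the comment are my assumption of how KV cache quantization gets enabled, not copied from my logs):
```python
# Minimal sketch, assuming the server was started with flash attention and a
# quantized KV cache, e.g. OLLAMA_FLASH_ATTENTION=1 and OLLAMA_KV_CACHE_TYPE=q4_0.
import ollama

stream = ollama.chat(
    model="gemma2:27b-instruct-q4_0",
    messages=[{"role": "user", "content": "Hello"}],  # placeholder prompt
    stream=True,
)
for chunk in stream:
    # After a few messages this loop fails with the ResponseError shown above.
    print(chunk["message"]["content"], end="", flush=True)
```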
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.8 RC | bug | low | Critical |
2,719,074,050 | rust | Misleading error when a value moved into a closure gets depleted (moved away) when that closure is called | ### Code
Check out this repo:
https://github.com/mcclure/rs-bug/tree/z_bug_clone_error
branch `z_bug_clone_error`.
There are two adjacent commits. `2b22470` exhibits the bad error, `be44a0022` fixes the bad error (by adding a `.clone()`).
To reproduce, you must run `cargo build --example vim`.
This is a somewhat heavyweight example. I attempted to produce a simpler example, and it actually produced a subtly different (better actually, but still confusing) error. So I guess I now have two examples. Here's example #2.
```Rust
fn main() {
#[derive(Debug, Clone)]
struct Garbage {
a: i32,
b: i32
}
let garbage = Garbage { a:2, b:3 };
let mut next_value = move || {
let garbage2 = garbage;
println!("{garbage2:?}");
};
while true {
next_value();
}
println!("Hello, world!");
}
```
### Current output
Example #1 ("z_bug_clone_error"):
```Shell
error[E0525]: expected a closure that implements the `FnMut` trait, but this closure only implements `FnOnce`
--> examples/vim.rs:1108:26
|
1108 | let mut next_value = move || {
| ^^^^^^^ this closure implements `FnOnce`, not `FnMut`
...
1146 | state.adjust = reset_adjust; // Implicit reset each loop. Consider making customizable?
| ------------ closure is `FnOnce` because it moves the variable `reset_adjust` out of its environment
...
1204 | &mut next_value
| --------------- the requirement to implement `FnMut` derives from here
|
= note: required for the cast from `&mut {closure@examples/vim.rs:1108:26: 1108:33}` to `&mut dyn FnMut() -> f32`
```
Example #2 (inline in post above)
```shell
error[E0382]: use of moved value: `next_value`
--> src/main.rs:16:9
|
16 | next_value();
| ^^^^^^^^^^--
| |
| `next_value` moved due to this call, in previous iteration of loop
|
note: closure cannot be invoked more than once because it moves the variable `garbage` out of its environment
--> src/main.rs:11:24
|
11 | let garbage2 = garbage;
| ^^^^^^^
note: this value implements `FnOnce`, which causes it to be moved when called
--> src/main.rs:16:9
|
16 | next_value();
| ^^^^^^^^^^
```
### Rationale and extra context
In both code examples, the same process is happening: There is a `move ||` closure, the closure captures a variable, the closure does something that causes the variable to get consumed/moved out, the closure gets called more than once. The solution is to *either* call the closure only once, *or* to clone the value inside the closure rather than moving it.
### Desired output
I have a couple problems with these errors.
The biggest problem is both errors describe my closure as "implementing" FnOnce or FnMut. Of course I don't implement any traits, I'm relying on builtin syntax to create the closure. I would understand the sentence "you've written a closure that can only be called once, but you call it more than once" but both errors word it "you implemented FnOnce instead of FnMut". This is *technically correct*, but it is phrasing the problem in terms of compiler concepts which are here hidden from the user.
The second problem is exclusive to error #1, and it is that the error describes the problem as *a problem with the type of the closure* whereas the actual issue is *a problem with how next_value gets called* or *a problem with how `reset_adjust`/`garbage` is being treated*. This is not made clear in the current wording. I don't understand why the two bits of very similar code produce different errors.
I am also a little confused by the wording "moves the variable `reset_adjust out of its environment`". In retrospect, I'm not sure it's reasonable I was confused, but I'll do my best: I have a `move` closure which moves the reset_adjust value out of its original environment (the stack frame from which it is captured). Then the line `state.adjust = reset_adjust` moves it out of its *new* environment and into state.adjust (although it doesn't move very far, because `state.adjust` *also* lives inside the `move ||`; maybe this is my fault for not understanding the meaning of "environment", but that kinda looks like the same environment to me). Because there are multiple moves and multiple different possible "environment"s in play, "the variable is moved out of its environment" does not guide me to the correct point in the code.
That's a lot of typing and I never said what my desired output would be. I don't actually know how you shold fix this error. But I think my recommendation would be to make the error look more like "example #2" than "example #1", and add another hint to indicate that the problem can be solved by cloning (or, in general, by making a change to `reset_adjust`/`garbage`). Earlier I commented there are two possible fixes: Fix the usage of `next_value` outside the closure, or fix the usage of `reset_adjust`/`garbage` inside the closure. The "Example #2" error above pushes you toward the first solution, but neither error pushes me toward the second.
I did get this error "in the wild", and it was slow for me to figure out the fix even though the fix is simple.
### Rust Version
```Shell
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,719,093,150 | pytorch | torch.distributed.new_group got stuck when use backend=mpi | ### ๐ Describe the bug
```python
# test.py
import torch
import os
rank = int(os.getenv("LOCAL_RANK"))
torch.distributed.init_process_group('mpi')
g = torch.distributed.new_group(ranks=[0])
torch.distributed.barrier()
print("Done!")
```
### Description
The problem is the same as #134314, but I run it with `backend=mpi`. I also tested on torch 2.5.1, and the problem is still not solved.
Please help me solve this problem, thanks!
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,module: mpi | low | Critical |
2,719,095,493 | terminal | Snippets pane doesn't include snippets from .wt.json | It probably should. | Issue-Bug,Area-Settings,Product-Terminal,Needs-Discussion | low | Minor |
2,719,104,024 | pytorch | xpu: torch distributed ops like c10d::allgather are not implemented for XPU backend | With:
* (pytorch) https://github.com/pytorch/pytorch/commit/00134d68af2ce50560fa5a74473665ea229e6c9d
* https://github.com/intel/torch-xpu-ops/commit/98f47b621e3d8757ea6c03c4dacbad491dc84014
Running with the `gloo` torch distributed backend, the following aten operators are not currently implemented for the XPU backend (likely there are more unimplemented ops in the same series...):
* This one does not allow manual CPU fallback, `PYTORCH_ENABLE_XPU_FALLBACK=1` will fail:
- [ ] `c10d::allgather_`
* These allow manual CPU fallback, `PYTORCH_ENABLE_XPU_FALLBACK=1` will pass:
- [ ] `c10d::_allgather_base_`
- [ ] `c10d::allgather_into_tensor_coalesced_`
To reproduce, the following scripts can be used, run through `torchrun` (with `--nnodes=1` this can be reproduced on a single GPU). **NOTE: some projects might use the torch distributed package even when running on a single GPU device with `world_size=1` and still step into this issue. One example of such a project is [llama-models](https://github.com/meta-llama/llama-models/tree/main) - they use torch distributed when running vision models. [Here](https://github.com/meta-llama/llama-models/blob/17107dbe165f48270eebb17014ba880c6eb6a7c9/models/llama3/reference_impl/multimodal/model.py#L66) is a call to the allgather.**
* For `c10d::allgather_`:
```
$ cat run.py
import os
import torch
torch.distributed.init_process_group('gloo')
world_size=int(os.environ["WORLD_SIZE"])
local_rank=int(os.environ['LOCAL_RANK'])
print(f'world_size={world_size}')
print(f'local_rank={local_rank}')
device = torch.device(f'xpu:0')
tensor_list = [torch.zeros(2, dtype=torch.int64, device=device) for _ in range(world_size)]
print(tensor_list)
tensor = torch.arange(2, dtype=torch.int64, device=device) + 1 + 2 * local_rank
print(tensor)
torch.distributed.all_gather(tensor_list, tensor)
print(tensor_list)
$ PYTORCH_DEBUG_XPU_FALLBACK=1 torchrun --nnodes=1 --nproc-per-node=1 run.py
...
[rank0]: NotImplementedError: The operator 'c10d::allgather_' is not currently implemented for the XPU device.
...
```
* For `c10d::_allgather_base_`:
```
$ cat run.py
import os
import torch
torch.distributed.init_process_group('gloo')
world_size=int(os.environ["WORLD_SIZE"])
local_rank=int(os.environ['LOCAL_RANK'])
print(f'world_size={world_size}')
print(f'local_rank={local_rank}')
device = torch.device(f'xpu:0')
tensor_in = torch.arange(2, dtype=torch.int64, device=device) + 1 + 2 * local_rank
print(tensor_in)
tensor_out = torch.zeros(world_size * 2, dtype=torch.int64, device=device)
torch.distributed.all_gather_into_tensor(tensor_out, tensor_in)
print(tensor_out)
$ PYTORCH_DEBUG_XPU_FALLBACK=1 torchrun --nnodes=1 --nproc-per-node=1 run.py
...
[rank0]:[W1204 16:55:39.981637569 RegisterXPU.cpp:46330] Warning: The operator 'c10d::_allgather_base_ on the XPU backend is falling back to run on the CPU. (function xpu_fallback_impl)
...
```
* I don't have a simple repro script for `c10d::allgather_into_tensor_coalesced_`. I did see this op hit when running the Llama3.2-11B-Vision-Instruct model with the llama-models project, see https://github.com/meta-llama/llama-models/pull/233.
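A rough, untested sketch of one way coalesced all-gathers are typically issued (via the private coalescing manager; whether this actually dispatches `c10d::allgather_into_tensor_coalesced_` on the XPU + `gloo` path is an assumption I have not verified):
```
import os
import torch
import torch.distributed as dist
from torch.distributed.distributed_c10d import _coalescing_manager  # private API

dist.init_process_group('gloo')
world_size = int(os.environ["WORLD_SIZE"])
device = torch.device('xpu:0')

ins = [torch.ones(2, dtype=torch.int64, device=device) * (i + 1) for i in range(2)]
outs = [torch.zeros(world_size * 2, dtype=torch.int64, device=device) for _ in range(2)]

# All-gathers issued inside the coalescing manager may get batched into the
# coalesced op when the context exits.
with _coalescing_manager():
    for inp, out in zip(ins, outs):
        dist.all_gather_into_tensor(out, inp)
print(outs)
```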
NOTE: above examples for `c10d::allgather_` and `c10d::_allgather_base_` work on CUDA with `gloo` torch distributed backend.
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,719,108,129 | ollama | Mini-CPM-V-2.6-q8_0 produces incoherent responses after applying KV Cache q4_0 or q8_0. | ### What is the issue?
This happens when running ollama `/generate` via the python API. The output looks like the model is having a seizure. It seems to be able to see the images but its output is so random and erratic I can't make out anything from the text. I didn't change any other parameter about the model.
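A minimal sketch of the kind of call I mean (the prompt, image path, and exact model tag here are placeholders, not my actual script):
```python
# Minimal sketch; prompt, image path, and model tag are placeholders.
import ollama

resp = ollama.generate(
    model="minicpm-v-2.6-8b-q8_0",
    prompt="Describe this screenshot.",
    images=["screenshot.png"],
)
# With KV cache q4_0/q8_0 enabled, the text printed here is incoherent.
print(resp["response"])
```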
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.8 RC | bug | low | Minor |
2,719,114,902 | node | `.pipeTo(Writable.toWeb(process.stdout))` returns a never-settling Promise | ### Version
- v22.9.0
- v22.12.0
- v23.3.0
### Platform
```text
Linux a54ff73afbfe 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
```
```text
Linux SURFACE9PRO 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
### Subsystem
node:stream
### What steps will reproduce the bug?
create file `test.mjs`
```javascript
import { Readable, Writable } from 'node:stream'
await Readable.toWeb(process.stdin).pipeTo(Writable.toWeb(process.stdout))
```
run
```terminal
echo test | node test.mjs
test
Warning: Detected unsettled top-level await at file:///workspace/test.mjs:2
await Readable.toWeb(process.stdin).pipeTo(Writable.toWeb(process.stdout))
^
```
### How often does it reproduce? Is there a required condition?
always
### What is the expected behavior? Why is that the expected behavior?
no warning; `await` should work as expected
### What do you see instead?
warning about unsettled top-level await is shown
### Additional information
_No response_ | web streams | low | Critical |
2,719,135,927 | langchain | Lancedb hybrid search use reranker will throw ValueError | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
vector_store = LanceDB(embedding=embedding_model, uri="./lancedb", reranker=reranker)
...
retrieved_docs = vector_store.similarity_search(query=state["question"], query_type="hybrid")
```
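For context, a fuller sketch of how the store gets assembled and queried (the concrete embedding model and reranker here are stand-ins, not my exact code):
```python
# Sketch only: the embedding model and reranker below are stand-ins.
from lancedb.rerankers import RRFReranker
from langchain_community.vectorstores import LanceDB
from langchain_huggingface import HuggingFaceEmbeddings

embedding_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
reranker = RRFReranker()

vector_store = LanceDB(embedding=embedding_model, uri="./lancedb", reranker=reranker)
vector_store.add_texts(["first document", "second document"])

# This is the call that raises the ValueError shown below.
retrieved_docs = vector_store.similarity_search(query="first", query_type="hybrid")
```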
### Error Message and Stack Trace (if applicable)

File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\pregel\__init__.py:1927, in Pregel.invoke(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, **kwargs)
[1925](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1925) else:
[1926](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1926) chunks = []
-> [1927](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1927) for chunk in self.stream(
[1928](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1928) input,
[1929](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1929) config,
[1930](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1930) stream_mode=stream_mode,
[1931](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1931) output_keys=output_keys,
[1932](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1932) interrupt_before=interrupt_before,
[1933](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1933) interrupt_after=interrupt_after,
[1934](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1934) debug=debug,
[1935](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1935) **kwargs,
[1936](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1936) ):
[1937](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1937) if stream_mode == "values":
[1938](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1938) latest = chunk
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\pregel\__init__.py:1647, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
[1641](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1641) # Similarly to Bulk Synchronous Parallel / Pregel model
[1642](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1642) # computation proceeds in steps, while there are channel updates
[1643](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1643) # channel updates from step N are only visible in step N+1
[1644](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1644) # channels are guaranteed to be immutable for the duration of the step,
[1645](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1645) # with channel updates applied only at the transition between steps
[1646](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1646) while loop.tick(input_keys=self.input_channels):
-> [1647](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1647) for _ in runner.tick(
[1648](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1648) loop.tasks.values(),
[1649](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1649) timeout=self.step_timeout,
[1650](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1650) retry_policy=self.retry_policy,
[1651](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1651) get_waiter=get_waiter,
[1652](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1652) ):
[1653](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1653) # emit output
[1654](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1654) yield from output()
[1655](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1655) # emit output
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\pregel\runner.py:104, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
[102](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:102) t = tasks[0]
[103](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:103) try:
--> [104](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:104) run_with_retry(t, retry_policy, writer=writer)
[105](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:105) self.commit(t, None)
[106](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:106) except Exception as exc:
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\pregel\retry.py:40, in run_with_retry(task, retry_policy, writer)
[38](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:38) task.writes.clear()
[39](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:39) # run the task
---> [40](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:40) task.proc.invoke(task.input, config)
[41](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:41) # if successful, end
[42](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:42) break
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\utils\runnable.py:410, in RunnableSeq.invoke(self, input, config, **kwargs)
[408](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:408) context.run(_set_config_context, config)
[409](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:409) if i == 0:
--> [410](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:410) input = context.run(step.invoke, input, config, **kwargs)
[411](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:411) else:
[412](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:412) input = context.run(step.invoke, input, config)
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\utils\runnable.py:184, in RunnableCallable.invoke(self, input, config, **kwargs)
[182](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:182) else:
[183](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:183) context.run(_set_config_context, config)
--> [184](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:184) ret = context.run(self.func, input, **kwargs)
[185](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:185) if isinstance(ret, Runnable) and self.recurse:
[186](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:186) return ret.invoke(input, config)
Cell In[16], [line 17](vscode-notebook-cell:?execution_count=16&line=17)
[12](vscode-notebook-cell:?execution_count=16&line=12) def retrieve(state: State):
[13](vscode-notebook-cell:?execution_count=16&line=13) # retriever = vector_store.as_retriever(
[14](vscode-notebook-cell:?execution_count=16&line=14) # search_type="similarity_score_threshold",
[15](vscode-notebook-cell:?execution_count=16&line=15) # search_kwargs={"score_threshold": 0.5}, # A greater value returns items with more relevance
[16](vscode-notebook-cell:?execution_count=16&line=16) # )
---> [17](vscode-notebook-cell:?execution_count=16&line=17) retrieved_docs = vector_store.similarity_search(query=state["question"], query_type="hybrid")
[18](vscode-notebook-cell:?execution_count=16&line=18) # retrieved_docs = retriever.invoke(state["question"])
[19](vscode-notebook-cell:?execution_count=16&line=19) return {"context": retrieved_docs, "source": [doc.metadata["source"] for doc in retrieved_docs]}
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langchain_community\vectorstores\lancedb.py:525, in LanceDB.similarity_search(self, query, k, name, filter, fts, **kwargs)
[501](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:501) def similarity_search(
[502](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:502) self,
[503](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:503) query: str,
(...)
[508](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:508) **kwargs: Any,
[509](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:509) ) -> List[Document]:
[510](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:510) """Return documents most similar to the query
[511](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:511)
[512](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:512) Args:
(...)
[523](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:523) List of documents most similar to the query.
[524](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:524) """
--> [525](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:525) res = self.similarity_search_with_score(
[526](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:526) query=query, k=k, name=name, filter=filter, fts=fts, score=False, **kwargs
[527](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:527) )
[528](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:528) return res
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langchain_community\vectorstores\lancedb.py:490, in LanceDB.similarity_search_with_score(self, query, k, filter, **kwargs)
[487](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:487) else:
[488](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:488) _query = query # type: ignore
--> [490](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:490) res = self._query(_query, k, filter=filter, **kwargs)
[491](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:491) return self.results_to_docs(res, score=score)
[492](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:492) else:
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langchain_community\vectorstores\lancedb.py:385, in LanceDB._query(self, query, k, filter, name, **kwargs)
[377](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:377) lance_query = (
[378](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:378) tbl.search(query=query, vector_column_name=self._vector_key)
[379](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:379) .limit(k)
[380](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:380) .metric(metrics)
[381](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:381) .where(filter, prefilter=prefilter)
[382](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:382) )
[383](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:383) else:
[384](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:384) lance_query = (
--> [385](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:385) tbl.search(query=query, vector_column_name=self._vector_key)
[386](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:386) .limit(k)
[387](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:387) .where(filter, prefilter=prefilter)
[388](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:388) )
[389](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:389) if query_type == "hybrid" and self._reranker is not None:
[390](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langchain_community/vectorstores/lancedb.py:390) lance_query.rerank(reranker=self._reranker)
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\lancedb\table.py:1570, in LanceTable.search(self, query, vector_column_name, query_type, ordering_field_name, fts_columns)
[1567](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1567) except Exception as e:
[1568](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1568) raise e
-> [1570](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1570) return LanceQueryBuilder.create(
[1571](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1571) self,
[1572](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1572) query,
[1573](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1573) query_type,
[1574](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1574) vector_column_name=vector_column_name,
[1575](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1575) ordering_field_name=ordering_field_name,
[1576](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1576) fts_columns=fts_columns,
[1577](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/table.py:1577) )
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\lancedb\query.py:192, in LanceQueryBuilder.create(cls, table, query, query_type, vector_column_name, ordering_field_name, fts_columns)
[184](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:184) return LanceFtsQueryBuilder(
[185](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:185) table,
[186](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:186) query,
[187](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:187) ordering_field_name=ordering_field_name,
[188](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:188) fts_columns=fts_columns,
[189](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:189) )
[191](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:191) if isinstance(query, list):
--> [192](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:192) query = np.array(query, dtype=np.float32)
[193](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:193) elif isinstance(query, np.ndarray):
[194](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/lancedb/query.py:194) query = query.astype(np.float32)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\pregel\__init__.py:1927, in Pregel.invoke(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, **kwargs)
[1925](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1925) else:
[1926](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1926) chunks = []
-> [1927](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1927) for chunk in self.stream(
[1928](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1928) input,
[1929](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1929) config,
[1930](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1930) stream_mode=stream_mode,
[1931](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1931) output_keys=output_keys,
[1932](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1932) interrupt_before=interrupt_before,
[1933](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1933) interrupt_after=interrupt_after,
[1934](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1934) debug=debug,
[1935](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1935) **kwargs,
[1936](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1936) ):
[1937](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1937) if stream_mode == "values":
[1938](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1938) latest = chunk
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\pregel\__init__.py:1647, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
[1641](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1641) # Similarly to Bulk Synchronous Parallel / Pregel model
[1642](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1642) # computation proceeds in steps, while there are channel updates
[1643](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1643) # channel updates from step N are only visible in step N+1
[1644](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1644) # channels are guaranteed to be immutable for the duration of the step,
[1645](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1645) # with channel updates applied only at the transition between steps
[1646](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1646) while loop.tick(input_keys=self.input_channels):
-> [1647](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1647) for _ in runner.tick(
[1648](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1648) loop.tasks.values(),
[1649](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1649) timeout=self.step_timeout,
[1650](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1650) retry_policy=self.retry_policy,
[1651](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1651) get_waiter=get_waiter,
[1652](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1652) ):
[1653](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1653) # emit output
[1654](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1654) yield from output()
[1655](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/__init__.py:1655) # emit output
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\pregel\runner.py:104, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
[102](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:102) t = tasks[0]
[103](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:103) try:
--> [104](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:104) run_with_retry(t, retry_policy, writer=writer)
[105](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:105) self.commit(t, None)
[106](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/runner.py:106) except Exception as exc:
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\pregel\retry.py:40, in run_with_retry(task, retry_policy, writer)
[38](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:38) task.writes.clear()
[39](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:39) # run the task
---> [40](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:40) task.proc.invoke(task.input, config)
[41](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:41) # if successful, end
[42](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/pregel/retry.py:42) break
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\utils\runnable.py:410, in RunnableSeq.invoke(self, input, config, **kwargs)
[408](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:408) context.run(_set_config_context, config)
[409](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:409) if i == 0:
--> [410](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:410) input = context.run(step.invoke, input, config, **kwargs)
[411](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:411) else:
[412](file:///D:/ProgramData/miniforge3/envs/t2/Lib/site-packages/langgraph/utils/runnable.py:412) input = context.run(step.invoke, input, config)
File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langgraph\utils\runnable.py:184, in RunnableCallable.invoke(self, input, config, **kwargs)
    182 else:
    183 context.run(_set_config_context, config)
--> 184 ret = context.run(self.func, input, **kwargs)
    185 if isinstance(ret, Runnable) and self.recurse:
    186 return ret.invoke(input, config)

Cell In[16], line 17
     12 def retrieve(state: State):
     13 # retriever = vector_store.as_retriever(
     14 # search_type="similarity_score_threshold",
     15 # search_kwargs={"score_threshold": 0.5}, # A greater value returns items with more relevance
     16 # )
---> 17 retrieved_docs = vector_store.similarity_search(query=state["question"], query_type="hybrid")
     18 # retrieved_docs = retriever.invoke(state["question"])
     19 return {"context": retrieved_docs, "source": [doc.metadata["source"] for doc in retrieved_docs]}

File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langchain_community\vectorstores\lancedb.py:525, in LanceDB.similarity_search(self, query, k, name, filter, fts, **kwargs)
    501 def similarity_search(
    502 self,
    503 query: str,
   (...)
    508 **kwargs: Any,
    509 ) -> List[Document]:
    510 """Return documents most similar to the query
    511
    512 Args:
   (...)
    523 List of documents most similar to the query.
    524 """
--> 525 res = self.similarity_search_with_score(
    526 query=query, k=k, name=name, filter=filter, fts=fts, score=False, **kwargs
    527 )
    528 return res

File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langchain_community\vectorstores\lancedb.py:490, in LanceDB.similarity_search_with_score(self, query, k, filter, **kwargs)
    487 else:
    488 _query = query # type: ignore
--> 490 res = self._query(_query, k, filter=filter, **kwargs)
    491 return self.results_to_docs(res, score=score)
    492 else:

File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\langchain_community\vectorstores\lancedb.py:385, in LanceDB._query(self, query, k, filter, name, **kwargs)
    377 lance_query = (
    378 tbl.search(query=query, vector_column_name=self._vector_key)
    379 .limit(k)
    380 .metric(metrics)
    381 .where(filter, prefilter=prefilter)
    382 )
    383 else:
    384 lance_query = (
--> 385 tbl.search(query=query, vector_column_name=self._vector_key)
    386 .limit(k)
    387 .where(filter, prefilter=prefilter)
    388 )
    389 if query_type == "hybrid" and self._reranker is not None:
    390 lance_query.rerank(reranker=self._reranker)

File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\lancedb\table.py:1570, in LanceTable.search(self, query, vector_column_name, query_type, ordering_field_name, fts_columns)
   1567 except Exception as e:
   1568 raise e
-> 1570 return LanceQueryBuilder.create(
   1571 self,
   1572 query,
   1573 query_type,
   1574 vector_column_name=vector_column_name,
   1575 ordering_field_name=ordering_field_name,
   1576 fts_columns=fts_columns,
   1577 )

File d:\ProgramData\miniforge3\envs\t2\Lib\site-packages\lancedb\query.py:192, in LanceQueryBuilder.create(cls, table, query, query_type, vector_column_name, ordering_field_name, fts_columns)
    184 return LanceFtsQueryBuilder(
    185 table,
    186 query,
    187 ordering_field_name=ordering_field_name,
    188 fts_columns=fts_columns,
    189 )
    191 if isinstance(query, list):
--> 192 query = np.array(query, dtype=np.float32)
    193 elif isinstance(query, np.ndarray):
    194 query = query.astype(np.float32)

ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
### Description
1. The first time, the above code throws `TypeError: langchain_community.vectorstores.lancedb.LanceDB._query() got multiple values for keyword argument 'name'`. I modified `langchain_community/vectorstores/lancedb.py:490`, `res = self._query(_query, k, name=name, filter=filter, **kwargs)`: since `kwargs` already includes `name`, I simply removed `name=name`.
2. Then it throws `unsupported query type 'tuple'`, so I changed `langchain_community/vectorstores/lancedb.py:486` from `_query = (embedding, query)` to `_query = [embedding, query]`.
3. Now it throws `ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.`
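For context on step 3: a minimal standalone snippet (hypothetical values, not library code) showing why `np.array` rejects the `[embedding, query]` pair that the hybrid path builds:
```python
import numpy as np

embedding = [0.1, 0.2, 0.3]      # dense vector half of the hybrid query
query = "what does the doc say"  # full-text half of the hybrid query

# lancedb.query.LanceQueryBuilder.create() runs np.array(query, dtype=np.float32)
# for list inputs; a [vector, string] pair is ragged, so numpy raises the same
# "inhomogeneous shape ... (2,)" ValueError seen in the traceback above.
np.array([embedding, query], dtype=np.float32)
```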
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.26100
> Python Version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 15:47:54) [MSC v.1941 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.8
> langchain_community: 0.3.8
> langsmith: 0.1.146
> langchain_ollama: 0.2.0
> langchain_openai: 0.2.10
> langchain_text_splitters: 0.3.2
> langgraph_sdk: 0.1.40
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.7
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> ollama: 0.4.1
> openai: 1.55.1
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.1
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2 | โฑญ: vector store | low | Critical |
2,719,163,021 | PowerToys | Activity Modes | ### Description of the new feature / enhancement
I very often need to adjust the screen saver timeout, display off timeout, and sleep settings of windows. These settings are related, but the dialogs to change them are all over the place and difficult to open. It would be nice to be able to configure and save various modes with different values for these settings, then have a WIN+SHIFT hotkey to cycle through modes to choose an active one. In addition, it would be nice to have WIN+SHIFT shortcuts to immediately activate one of them, like put the pc to sleep now or turn off the monitor now or start the screensaver now.
### Scenario when this would be used?
I use Chrome Remote Desktop to remote into PCs, which cannot wake them. I'm constantly turning sleep off or on, depending on my need to remote into the PC later.
I might normally have aggressive timeouts for screensaver at 10 min, display turn off at 15 min, sleep at 30 min. However, when I am researching or doing something without using the mouse for long periods, I need a less conservative mode. I might set the screensaver to 235 min, display turn off to 240 min, and sleep to never.
Some keyboards have a dedicated sleep button, which is great for immediately activating sleep. For pcs that have them, I can leave sleep off and just hit that button when I am done working. However, I usually have to resort to making desktop shortcuts to turn on a screensaver or fumble around looking for the power button to turn off monitors. It would be nice to just have consistent keyboard shortcuts across devices to do these very basic tasks.
### Supporting information
It's so difficult to find these dialogs in the new interactive settings app and old control panel dialogs that I find myself leaving them open 24/7 to be able to change and apply new settings. It's extremely cumbersome.
Forgetting to increase timeouts during a session can force you to have to authenticate every few minutes, often involving finding your phone and responding to 2FA popups.
Forgetting to disable sleep can make it impossible to "chromote" in later.
| Idea-New PowerToy,Needs-Triage,Needs-Team-Response | low | Minor |
2,719,184,334 | rust | Likely unintentionally stabilized support for inhabitedness checks to permeate `Pin` | ```rs
use std::pin::Pin;
enum Void {}
fn demo(x: Pin<Void>) {
match x {}
}
```
this compiles successfully since 1.82
The issue with this is that generally, the check does *not* look through private implementation details. However, the field of `Pin` is technically public (only hidden and unstable) in order to support the `pin!` macro.
I would have expected the above code to still fail to compile. It's quite unlikely that anyone depends on this behavior already, because `Pin` isn't supposed to be used with non-pointers anyway.
This could be fixed by changing the inhabitedness check to treat unstable fields like private fields.
Alternatively, if kept as-is, we should add a test case for this so we at least notice the breakage if `Pin` is ever re-structured to no longer use a public field.
cc @Nadrieril, I guess | T-lang,T-compiler,C-bug | low | Minor |
2,719,196,047 | ollama | model requires more system memory than is available when useMmap | ### What is the issue?
When I use the Continue VS Code extension to call Ollama with a config like
```
{
"model": "qwen2.5-coder:14b",
"title": "qwen2.5-coder:14b",
"provider": "ollama",
"completionOptions": {
"keepAlive": 9999999,
"useMmap": true
}
},
```
It still checks system memory, disregarding the `"useMmap": true` option, and returns a 500 internal error like:
```
{"error":"model requires more system memory (17.7 GiB) than is available (13.6 GiB)"}
```
### OS
Windows
### GPU
_No response_
### CPU
Other
### Ollama version
0.4.7 | bug | low | Critical |
2,719,214,397 | deno | [Feature Request] Workspace task with --env-file | In my project we have the same env variables on all our workspace. It would be nice to have `--env-file` on task so that we would only set env on RootDir deno.json file
```json
{
  "tasks": {
    "build": "deno task --recursive build",
    "build:dev": "deno task --env-file=.env.dev build",
    "build:prod": "deno task --env-file=.env.prod build"
  }
}
``` | suggestion,task runner | low | Minor |
2,719,230,202 | rust | rustdoc: support #![cfg(feature)] that disables doc tests | Code in doc comments may require specific Cargo features or platforms, but currently the syntax for disabling doctests is non-obvious, verbose, and by mixing languages and syntaxes, it doesn't play well with Markdown syntax highlighting:
````rust
#![cfg_attr(feature = "alloc", doc = " ```")]
#![cfg_attr(not(feature = "alloc"), doc = " ```ignore")]
//! code
//! ```
````
I suggest supporting `#![cfg(…)]` inside doctests, injected into the test code in a way that disables a block of code containing the test. Currently `#![cfg(feature = …)]` doesn't work at all in doctests, because it gets hoisted to be a real crate attribute and ends up disabling the entire test module.
````rust
//! ```rust
//! #![cfg(feature = "alloc")] // proposed syntax
//! code
//! ```
````
In the implementation I think it would require wrapping "everything_else" code in extra `{ }`, and keeping `#![cfg(feature = …)]` attrs in the code, instead of extracting them and hoisting them to the top level of the doctest crate. | T-rustdoc,C-feature-request,A-doctests,A-cfg | low | Minor |
2,719,248,324 | neovim | LSP: need a strategy for handling stateful data | ### Problem
Some LSP request responses need to be retained after processing the results, such as:
- `textDocument/foldingRange` needs to retain state for closing a specific `foldingRangeKind`. For neovim, folding information must be cached.
- `textDocument/inlayHint` and `textDocument/codeLens` require maintaining state for subsequent requests like `resolve`.
- `textDocument/documentHighlight` could retain state to enable forward/backward navigation.
### Expected behavior
Implement an internal-only module dedicated to handling these stateful data, by:
- Creating `bufstate` and corresponding callback functions when a buffer has a client that supports the relevant methods.
- Destroying `bufstate` and corresponding callback functions when no clients in the buffer support the relevant methods.
Use this module for the methods mentioned above to avoid relying on nil-checks (which might mask bugs) and automatically creating tables via metatable methods (which can be difficult to debug).
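A rough sketch of what such an internal module could look like (module and field names here are hypothetical; only the `vim.api`/`vim.lsp` calls are existing API):
```lua
local M = {}

---@type table<integer, {results: table, applied: boolean}>
local bufstates = {}

--- Create state when a buffer gains a client supporting the relevant method.
function M.attach(bufnr)
  if not bufstates[bufnr] then
    bufstates[bufnr] = { results = {}, applied = false }
    vim.api.nvim_buf_attach(bufnr, false, {
      on_detach = function()
        bufstates[bufnr] = nil
      end,
    })
  end
  return bufstates[bufnr]
end

--- Destroy state when no attached client supports the relevant method anymore.
function M.on_client_detach(bufnr, method)
  if #vim.lsp.get_clients({ bufnr = bufnr, method = method }) == 0 then
    bufstates[bufnr] = nil
  end
end

return M
```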
The `bufstate` should only contain results directly obtained from the server (e.g., `lsp.foldingRange[]`, `lsp.InlayHint[]`) and derived information (e.g., computed caches or whether they are applied). It should not store user-managed settings (e.g., `enabled`). | enhancement,lsp,architecture | low | Critical |
2,719,252,123 | pytorch | [compiled autograd] Does compiled autograd under torch 2.4 support using selective checkpoint? | ### 🐛 Describe the bug
While implementing selective checkpointing in compiled autograd, we encountered the following error:
```
torch._dynamo.exc.Unsupported: 'inline in skipfiles: CheckpointFunction.backward | backward /.../site-packages/torch/utils/checkpoint.py,
skipped according to trace_rules.lookup SKIP_DIRS'
```
The error is thrown here:
```python
def call_backward(backward_c_function, saved_tensors, *args):
fake = FakeBackwardCFunction(backward_c_function, saved_tensors)
# error throw point
grads = fake._forward_cls.backward(fake, *args) # type: ignore[attr-defined]
# in eager, we wrap in a tuple when there's only one grad output
if type(grads) is not tuple:
grads = (grads,)
return grads
```
Is it accurate to say that torch 2.4 still does not support selective checkpointing in compiled autograd?
### Versions
[pip3] numpy==1.24.4
[pip3] onnx==1.16.2
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0+cu124
[pip3] torchlibrosa==0.1.0
[pip3] torchvision==0.19.0+cu124
[pip3] triton==3.0.0
[conda] Could not collect
cc @chauhang @penguinwu @xmfan @yf225 | triaged,oncall: pt2,module: compiled autograd | low | Critical |
2,719,259,764 | PowerToys | Desktop Background Manager | ### Description of the new feature / enhancement
Windows 11 has decent multi-monitor support for desktop backgrounds. You can use a random or alphabetical slideshow of images from a specified folder and make the backgrounds unique or mirrored across monitors. If you right-click the background, you can cycle to the next image in the slideshow, but it only changes one monitor and not necessarily the one you wanted to change.
It would be nice to have a Power Toys module that gave further control. I would love to have a WIN+SHIFT shortcut to cycle images, like the right-click shortcut. It would be nice to have a shortcut to pause the timer cycling to keep them fixed for a while, perhaps to make a set of consistent screenshots. It would be nice to have a keyboard shortcut (and/or a right-click shortcut) to cycle a specific monitor's image. It would be nice to configure preset images per monitor, maybe even including random or alphabetical for some monitors and fixed images on other monitors or even a different set of slideshow folders for each monitor, then allow shortcuts to activate or cycle those presets.
You could get even crazier with ai, allowing filtering of the slideshow images with a description, like "green palette" or "cars" or "2024" or "anime". Using it to find images of people or particular things, like "my car" or "my pet", would probably require defining those things with a selected image that it could use to find similar ones. I won't be paying monthly fees for any ai service though, so unless there is a free way to implement ai features, like Microsoft Copilot, I'm not interested.
### Scenario when this would be used?
This would be useful in any multiple monitor configuration. It would save a lot of time when you want to change a background image. It would allow you to make presets that would be useful for taking consistent screenshots. It would allow you to set a simple background image and pause the background slideshow when "chromoting" into a pc with limited bandwidth. The possibilities are endless.
### Supporting information
If I want to change a background from a slideshow when I have 3 monitors, I currently have to right click the desktop and click "next desktop background" up to 3 times, which is extremely cumbersome. If I only want to change one of the three, there is no way to reliably do this. | Idea-New PowerToy,Needs-Triage | low | Minor |
2,719,261,762 | transformers | Is there a way to find the earliest version of transformers that has a certain model? | ### Feature request
Is there a way to find the earliest version of transformers that has a certain model? For example, I want to use CLIP in my project, but the existing transformers version is old; I want to upgrade transformers to the lowest version that supports CLIP, so that other parts of my code don't have to change.
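As a stop-gap (this is not an official transformers feature, just a probe of whatever version is installed), one can check whether a given release ships a model module:
```python
import importlib

def has_model(module_name: str) -> bool:
    """Return True if the installed transformers version ships this model module."""
    try:
        importlib.import_module(f"transformers.models.{module_name}")
        return True
    except ImportError:
        return False

print(has_model("clip"))  # True on any transformers release that includes CLIP
```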
### Motivation
There are situations where I need to use a new model in an existing codebase, but when updating transformers, some parts of the code may become outdated and need to be modified.
### Your contribution
I don't know, but I will try to help. | Feature request | low | Major |
2,719,341,854 | angular | UMD version of Zone.js is incorrectly transpiled | ### Which @angular/* package(s) are the source of the bug?
zone.js
### Is this a regression?
No
### Description
The implementation of ZoneAwarePromise.allWithCallback uses a `for(... of ...)` loop. This is expected and correct, as [Promise.all](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) expects an iterable.
However, the latest published [UMD version](https://unpkg.com/[email protected]/bundles/zone.umd.js) of zone.js transpiles that down to `for (var _i = 0, values_3 = values; _i < values_3.length; _i++) {`, which will only work with Arrays (or ArrayLike) values.
This fails when Promise.all is used with Iterables
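For illustration, a minimal case that hits the broken path — a true iterable with no `length` property (the behavior noted in the comment is inferred from the transpiled loop quoted above, not measured):
```js
// The mis-transpiled loop iterates with `values_3.length`, which is undefined for a
// generator, so `_i < values_3.length` is never true and the promises are never visited.
function* promises() {
  yield Promise.resolve(1);
  yield Promise.resolve(2);
}

Promise.all(promises()).then((v) => console.log(v)); // native Promise.all logs [1, 2]
```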
### Please provide a link to a minimal reproduction of the bug
[JSFiddle showing bug when umd version of zone is installed](https://jsfiddle.net/32o5ydsj/4/)
[JSFiddle showing correct behaviour without zone.js](https://jsfiddle.net/32o5ydsj/3/)
### Please provide the exception or error you saw
```true
N/A
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
N/A
```
### Anything else?
_No response_ | area: zones | low | Critical |
2,719,397,916 | ant-design | Feature to identify the exact reason of triggering event | ### What problem does this feature solve?
This feature solves the ambiguity in handling the onChange event in the Ant Design AutoComplete component by allowing developers to identify whether the event was triggered by user typing, selecting an option, or clearing the input. This enhances event handling precision and reduces the need for workaround logic.
### What does the proposed API look like?
The proposed API introduces an additional parameter to the onChange event handler in the Ant Design AutoComplete component. This parameter specifies the reason for the onChange event being triggered.
```jsx
onChange={(value, reason) => {
  console.log('Value:', value, 'Reason:', reason);
}}
```
| ๐ฃ Discussion,Inactive | low | Minor |
2,719,432,608 | neovim | LSP: Improve textDocument/diagnostic performance by conforming to LSP spec | ### Problem
Users of the Roslyn LSP (used for C# in VS Code) have encountered significant delays when retrieving pull diagnostics in large documents while using Neovim. For instance, diagnostics in a 2000-line .cs file can take over 20 seconds to display after edits in Neovim, whereas in VS Code, diagnostics for the same file are displayed almost instantly.
As [@mparq noted](https://github.com/seblj/roslyn.nvim/issues/93#issuecomment-2508940330) in https://github.com/seblj/roslyn.nvim/issues/93, VS Code leverages additional parameters specified in the [LSP documentation for textDocument/diagnostic](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#documentDiagnosticParams), specifically:
- previousResultId
- identifier
### Expected behavior
When requesting diagnostics, Neovim should include the `previousResultId` and `identifier` parameters as part of the request.
These parameters enable the server to utilize caching and return incremental results.
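Per the LSP 3.17 spec, a `textDocument/diagnostic` request carrying these fields looks roughly like this (all values are placeholders):
```json
{
  "textDocument": { "uri": "file:///path/to/Program.cs" },
  "identifier": "roslyn-diagnostics",
  "previousResultId": "result-id-from-the-previous-report"
}
```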
Support for maintaining state is already present in the [textDocument/semanticTokens implementation](https://github.com/neovim/neovim/blob/8f84167c30692555d3332565605e8a625aebc43c/runtime/lua/vim/lsp/semantic_tokens.lua#L289).
A similar mechanism could probably be implemented in `textDocument/diagnostic` handler. | enhancement,performance,lsp | low | Major |
2,719,498,530 | deno | deno task: globstar does not work properly on task dependencies ? | Version: Deno 2.1.2
```
PS C:\GitHub\mizu> deno task build:readmes
Task build:readmes deno run --quiet --allow-read --allow-env --allow-write=@mizu/**/README.md,README.md --allow-run=deno .github/tools/mod_html_to_readme_md.ts $INIT_CWD
glob: no matches found 'C:\GitHub\mizu/--allow-write=@mizu/**/README.md,README.md'
```
I think it may be a mishandling of Windows backslashes?
___
Edit: Actually it seems to occur on linux too, it may be possible this is because this task is a dependency of another task ?
```
Task build:readmes deno run --quiet --allow-read --allow-env --allow-write=@mizu/**/README.md,README.md --allow-run=deno .github/tools/mod_html_to_readme_md.ts $INIT_CWD
glob: no matches found '/home/runner/work/mizu/mizu/--allow-write=@mizu/**/README.md,README.md' | cli,task runner | low | Minor |
2,719,522,371 | flutter | [macOS] : To update flutter/examples supporting desktop platform | ### Use case
Based on https://github.com/flutter/flutter/issues/84306 and
https://github.com/flutter/flutter/issues/84306#issuecomment-2158870461
I am filing separate issue to update examples to support macOS platform.
### Proposal
Update flutter/examples to support the macOS platform. I do see https://github.com/flutter/flutter/pull/102539, but I am not sure whether it covers all examples. | platform-mac,d: examples,c: proposal,a: desktop,P3,team-macos,triaged-macos | low | Minor |
2,719,526,858 | flutter | [windows] : To update flutter/examples supporting desktop platform | ### Use case
Based on #84306 and
#84306 (comment)
I am filing separate issue to update examples to support windows platform.
### Proposal
Update flutter/examples to support windows platform. | platform-windows,d: examples,c: proposal,a: desktop,P3,team-windows,triaged-windows | low | Minor |
2,719,529,612 | flutter | [Linux] : To update flutter/examples supporting desktop platform | ### Use case
Based on #84306 and
#84306 (comment)
I am filing separate issue to update examples to support Linux platform.
### Proposal
Update flutter/examples to support Linux platform. | d: examples,platform-linux,c: proposal,a: desktop,P3,team-linux,triaged-linux | low | Minor |
2,719,591,265 | rust | Tracking Issue for ui test suite cleanups and maintenance | This is a tracking issue for an initiative of improving ui test suite organization and ui test usability. This issue is not meant for general discussions, but is instead intended for tracking logistics of PRs. For specific matters, please discuss in the zulip thread https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/Discussion.20for.20ui.20test.20suite.20improvements.
## Context
The `ui` test suite (`tests/ui/`) has *a lot* of tests. Often, many ui tests suffer from:
- Uninformative test names[^test-names] when it's not clear that a test is a specific regression test for some edge case of a particular feature, for example.
- Lack of backlinks: links to the issue for regression tests, relevant discussions, relevant context, further resources are *very* helpful.
- Yes, you *might* be able to find via git archaeology, but tests get moved and tests get changed, and it's also at least an extra layer of indirection.
- Lack of test intention documentation: many ui tests don't describe what they intend to check. This makes future work that modify or fail the test very hard and tedious because you have to first figure out what the test is *intending* to check via git archaeology or asking the authors/reviewers. Furthermore, the test might fail to actually check what it *intends* to check!
- Lack of docs on *how* the test plans to check what it intends to check, when this is not trivial.
- Confusing organization: some ui tests are placed randomly, like directly under `tests/ui/`. Some ui tests fall into multiple categories, which is fine, but it may make sense to rehome an ui test if it's better organized under a different directory.
- Duplicate tests. There are *certainly* ui tests that are duplicated, but it's a pain to figure out which ones are *full* "true" duplicates as opposed to only overlapping.
## Possible improvements
The guiding rationale for improving ui test usability is to:
1. Make it easier to figure out test intention:
1. What the test *intends* to check.
2. What are the relevant context (issues, PRs, discussions, RFCs) or areas.
2. Make it easier to find tests. E.g. keywords, better directory organizations, better test names.
3. Don't regress existing ui test coverage: for example, if a parser test relies on *specific* formatting, do not `rustfmt` the test as it will regress test coverage.
See [Best practices for writing tests](https://rustc-dev-guide.rust-lang.org/tests/best-practices.html#best-practices-for-writing-tests) in rustc-dev-guide for advice on how to make ui tests more useful. However, don't take the advice at face value -- they should be evaluated on a case-by-case basis. For instance, some subdirectory might contain a collection of specific regression tests related to issues, and in that case having tests be named just `issue-xxxxx.rs` isn't bad. On the contrary, a top-level `issue-xxxxx.rs` under `tests/ui/` is not very informative.
Example of things that *might* be done, but only if it makes sense on a case-by-case basis:
- Improve test documentation:
- Briefly describe test intention.
- Backlink to issue, e.g. `//! Issue: <https://github.com/rust-lang/rust/issues/374>.` or whatever useful relevant context.
- If the test checks something that's not obvious in how the check is achieved, elaborate on how the test achieves its purpose.
- Rename the test file to something more informative, e.g. `macro-empty-suggestion-span-123456.rs`.
- Rehome the test under a more fitting subdirectory, e.g. `tests/ui/macro-empty-suggestion-span-123456.rs` -> `tests/ui/hir-typeck/suggestions/macro-empty-suggestion-span-123456.rs` (hypothetical, or some other better organization).
- Reformat the test, but only if the formatting is sufficiently weird, and that the test does not rely on the exact formatting.
- Remove distractions that are not important to the test's purpose: for example, don't use lowercase type names if the test is actually exercising codegen that would be unaffected by lowercase type name, and that only serves as distraction. Be **very careful** to not change things that would invalidate the test!
Because these **require discretion** (changes are not always improvements!), this issue is labeled `E-medium` and not just `E-easy`. Having "insider" compiler implementation knowledge helps *a lot* here.
### Example test doc comment
No fixed format, adapt as suitable for the test at hand. But an example:
```rs
//! Check that `-A warnings` cli flag applies to *all* warnings, including feature gate warnings.
//!
//! This test tries to exercise that by checking that the "empty trait list for derive" warning for
//! `#[derive()]` is permitted by `-A warnings`, which is a non-lint warning.
//!
//! # Relevant context
//!
//! - Original impl PR: <https://github.com/rust-lang/rust/pull/21248>.
//! - RFC 507 "Release channels":
//! <https://github.com/rust-lang/rfcs/blob/c017755b9bfa0421570d92ba38082302e0f3ad4f/text/0507-release-channels.md>.
```
### Long-term plan
1. Reorganize all the stray tests immediately under `tests/ui` and place them into suitable subdirectories, improving the tests themselves along the way.
2. Review and audit the immediate subdirectories under `tests/ui/`, and see if they need to be fusioned/fissioned/renamed or otherwise adjusted. Where suitable, we can also introduce some subdirectory-level `README.md` to document subdirectory intention/area.
3. Kill off generic `tests/ui/issues/` directory and rehome the tests properly.
## Implementation history
- #133996
- #133900
- #134024
- #134418
[^test-names]: not *always* problematic! Requires discretion. | C-cleanup,A-testsuite,T-compiler,E-medium,C-tracking-issue,E-tedious,S-tracking-forever | low | Minor |
2,719,647,246 | flutter | [in_app_purchase_android]: Add functionality to `setOriginalExternalTransactionId` | ### Use case
Documentation: https://developer.android.com/google/play/billing/alternative/alternative-billing-with-user-choice-in-app#subscriptions_bought_through_an_alternative_billing_system
The documentation says:
> Instead of specifying a `SubscriptionUpdateParams` object in the parameters, use `setOriginalExternalTransactionId`, providing the external transaction ID for the original purchase.
### Proposal
Provide the functionality to `setOriginalExternalTransactionId`
```kotlin
.setSubscriptionUpdateParams(
BillingFlowParams.SubscriptionUpdateParams.newBuilder()
.setOriginalExternalTransactionId(externalTransactionId)
.build()
``` | c: new feature,platform-android,p: in_app_purchase,package,c: proposal,P2,team-android,triaged-android | low | Minor |
2,719,670,037 | rust | Tracking issue for release notes of #132390: bootstrap: show diagnostics relative to rustc src dir |
This issue tracks the release notes text for #132390.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Libraries
- [Panics in the standard library now have a leading `library/` in their path](https://github.com/rust-lang/rust/pull/132390)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @RalfJung, @albertlarsan68 -- origin issue/PR authors and assignees for starting to draft text
| relnotes,T-libs,relnotes-tracking-issue | low | Minor |
2,719,760,577 | PowerToys | Example animations on Welcome pages not scaled correctly | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Welcome / PowerToys Tour window
### Steps to reproduce
I hope I'm not the only user with a 4K screen @100% scale, but this has been an ongoing issue with each installation for me at least. When on the Welcome and tours windows, the upper example animations do not fully show (as boxed), so much of the helpful info is not in view. There is however, a mass of whitespace (in yellow) below the tool title, How to launch & Tips & tricks.

### ✔️ Expected Behavior
Thanks for all your hard work and dedication. I have been a PowerToys user for many years and don't know how I'd do without some of them.
I would hope to see a larger window for the animations, that reduces the redundant whitespace. I understand we can get to the full info from the Learn more option, but this necessitates opening Edge when a fuller description and animation would sometimes suffice. I've hashed a IMO better view that may help demonstrate my thinking;

### ❌ Actual Behavior
A very small area for the animation is shown and often the actual part of the animation that relates to the tool is out of view.
### Other Software
_No response_ | Issue-Bug,Area-OOBE,Needs-Triage | low | Minor |
2,719,760,669 | opencv | Memory leak in cv2.getWindowImageRect on Mac M1 Air | ### System Information
OpenCV python version: 4.10.0
Operating System / Platform: MacOS Sonoma 14.2
Python version: 3.10
### Detailed description
I have made a simple highgui functionality to keep/restore aspect ratio after changing the size of the Window.
It uses:
- cv2.getWindowImageRect(self.window_name)
- cv2.resizeWindow(self.window_name, width=nw, height=nh)
However, even if the window size never changes and the code only calls cv2.getWindowImageRect, memory grows by roughly 100 MB every few seconds.
I did not experience it on Windows.
### Steps to reproduce
Set up the window: `cv2.namedWindow(self.window_name, cv2.WINDOW_NORMAL)`
In reading image and processing loop:
```
rect = cv2.getWindowImageRect(self.window_name)
x, y, w, h = rect
nh = int(round(w * self.aspect_ratio_yx))
nw = int(round(nh / self.aspect_ratio_yx))
window.show_image(image)
key = cv2.waitKey(1)
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,platform: ios/osx,needs investigation | low | Minor |
2,719,767,082 | go | proposal: cmd/go: go test flag -skip should be cacheable | ### Proposal Details
Running tests with the `-skip` flag should be able to use the test result cache.
In the original design of the test result cache (#11193), the test flag `-run` is a cacheable flag. However, the subsequent proposal (#41583) that added the `-skip` flag to go test did not make it cacheable.
Both `-skip` and `-run` are flags used to match or filter test cases. Since the `-run` flag is cacheable, the `-skip` flag should be cacheable too.
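For illustration (hypothetical test names):
```
go test -run  TestFetch     ./...   # eligible for the test result cache today
go test -skip TestFetchSlow ./...   # currently never served from the cache
```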
| Proposal | low | Minor |
2,719,768,068 | bitcoin | Args: -noconnect=0 is interpreted as -connect=0.0.0.1 | ### Motivation
`-noconnect=0` is interpreted as `-connect=1` which is interpreted as `-connect=0.0.0.1`
```
โฟ build/src/bitcoind -noconnect=0 -debug=net
```
Produces the following output:
```
2024-12-05T08:20:41Z Warning: parsed potentially confusing double-negative -connect=0
2024-12-05T08:20:41Z Bitcoin Core version v28.99.0-95a0104f2e98-dirty (release build)
2024-12-05T08:20:41Z parameter interaction: -connect or -maxconnections=0 set -> setting -dnsseed=0
2024-12-05T08:20:41Z parameter interaction: -connect or -maxconnections=0 set -> setting -listen=0
2024-12-05T08:20:41Z parameter interaction: -listen=0 -> setting -natpmp=0
2024-12-05T08:20:41Z parameter interaction: -listen=0 -> setting -discover=0
2024-12-05T08:20:41Z parameter interaction: -listen=0 -> setting -listenonion=0
2024-12-05T08:20:41Z parameter interaction: -listen=0 -> setting -i2pacceptincoming=0
...
2024-12-05T08:20:44Z net thread start
2024-12-05T08:20:49Z [net] connection attempt to 0.0.0.1:8333 timed out
2024-12-05T08:20:50Z [net] trying v2 connection 1 lastseen=0.0hrs
```
#### Main issue
`bitcoind` should not try to connect to the `0.0.0.1` IPv4 address.
#### Bonus
`-noconnect=0` should not result in `-dnsseed=0` and `-listen=0` in the parameter interaction logic.
Issue inspiration: https://github.com/bitcoin/bitcoin/pull/31212#issuecomment-2519529282
### Possible solution
Probably best to fail the Init-stage for invalid `-(no)connect(=value)` permutations.
Should include a functional test verifying that there is an Init-error for this case, possibly added to *test/functional/feature_config_args.py*. | Utils/log/libs | medium | Critical |
2,719,795,387 | PowerToys | customizing the powertoys Run window. | ### Description of the new feature / enhancement
an additional option which allows us to set a background image to the powertoys run window would be really nice.
### Scenario when this would be used?
anytime when we use the powertoys run feature
### Supporting information
_No response_ | Idea-Enhancement,Product-PowerToys Run,Needs-Triage | low | Minor |
2,719,823,600 | pytorch | Incorrect output with dtype=float64 in `torch.nn.functional.conv1d` and `torch.nn.functional.conv3d` operations | ### 🐛 Describe the bug
Similar to [issue #141221](https://github.com/pytorch/pytorch/issues/141221), the same issue occurs with `torch.nn.functional.conv1d` and `torch.nn.functional.conv3d`.
Code:
```python
import torch

def test_conv_api(conv_func, input_shape, weight_shape, dilation, dtype):
    x = torch.ones(input_shape, dtype=dtype)
    w = torch.ones(weight_shape, dtype=dtype)
    out = conv_func(x, w, dilation=dilation)
    print(out.max())  # Output the max value, similar to the original test case
# Test conv1d
print("Testing conv1d:")
test_conv_api(torch.nn.functional.conv1d, (2, 1, 16), (4, 1, 3), dilation=2, dtype=torch.float32)
test_conv_api(torch.nn.functional.conv1d, (2, 1, 16), (4, 1, 3), dilation=2, dtype=torch.float64)
# Test conv3d
print("Testing conv3d:")
test_conv_api(torch.nn.functional.conv3d, (2, 1, 8, 8, 8), (4, 1, 3, 3, 3), dilation=2, dtype=torch.float32)
test_conv_api(torch.nn.functional.conv3d, (2, 1, 8, 8, 8), (4, 1, 3, 3, 3), dilation=2, dtype=torch.float64)
```
The output:
```
Testing conv1d:
tensor(3.)
tensor(9., dtype=torch.float64)
Testing conv3d:
tensor(27.)
tensor(729., dtype=torch.float64)
```
### Versions
```
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 (10.0.19045 64 ไฝ)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.8rc1 (tags/v3.8.8rc1:dfd7d68, Feb 17 2021, 11:01:21) [MSC v.1928 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Name: AMD Ryzen 7 5700X 8-Core Processor
Manufacturer: AuthenticAMD
Family: 107
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 3401
MaxClockSpeed: 3401
L2CacheSize: 4096
L2CacheSpeed: None
Revision: 8450
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.4.1+cu121
[pip3] torchaudio==2.4.1+cu121
[pip3] torchvision==0.19.1+cu121
[conda] _anaconda_depends 2023.09 py311_mkl_1
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46357
[conda] mkl-service 2.4.0 py311h2bbff1b_1
[conda] mkl_fft 1.3.8 py311h2bbff1b_0
[conda] mkl_random 1.2.4 py311h59b6b97_0
[conda] numpy 1.24.3 py311hdab7c0b_1
[conda] numpy-base 1.24.3 py311hd01c5d8_1
[conda] numpydoc 1.5.0 py311haa95532_0
[conda] torch 2.1.0 pypi_0 pypi
```
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | needs reproduction,module: windows,module: nn,triaged | low | Critical |
2,719,841,216 | storybook | [Bug]: If not export all members I can see the error "The requested module 'XXX' does not provide an export named 'XXXProps'" | ### Describe the bug
If the ".stories.ts" file doesn't import the component directly but imports a file that exports the component, the storybook reports the error. I can reproduce this by using the generated demo. Here is what I did:
- After executing the ` npx storybook@latest init`, create a file "import.ts" under the ".stories" file with the content `export {Button, ButtonProps} from "./Button";`.
- Then change the "Button.stories.ts" file to import from the "import.ts" instead of the "Button.tsx".
- Run `npm run storybook` and open the Button story, you will see the error "The requested module '/stories/Button.tsx' does not provide an export named 'ButtonProps'".
If I change the "import.ts" to `export * from "./Button"`, then everything works.
This only occurs when using typescript.
### Reproduction link
https://github.com/zhaoyu1999/storybook-test
### Reproduction steps
1. Checkout the repo.
2. Run `npm run storybook`
3. Go to the "Button" stories.
### System
```bash
Storybook Environment Info:
System:
OS: Windows 11 10.0.22631
CPU: (16) x64 Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz
Binaries:
Node: 18.12.0 - C:\Libs\node18.12.0\node.EXE
npm: 8.19.2 - C:\Libs\node18.12.0\npm.CMD <----- active
Browsers:
Edge: Chromium (127.0.2651.74)
npmPackages:
@storybook/addon-essentials: ^8.4.6 => 8.4.6
@storybook/addon-interactions: ^8.4.6 => 8.4.6
@storybook/addon-onboarding: ^8.4.6 => 8.4.6
@storybook/blocks: ^8.4.6 => 8.4.6
@storybook/react: ^8.4.6 => 8.4.6
@storybook/react-vite: ^8.4.6 => 8.4.6
@storybook/test: ^8.4.6 => 8.4.6
storybook: ^8.4.6 => 8.4.6
```
### Additional context
_No response_ | bug,has workaround,docgen | low | Critical |
2,719,842,834 | godot | Controller vibration set in an unfinished tween persists after stopping instance with the editor "Stop Running Project" button | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
windows 11, xbox series controller
### Issue description
When running a game from the editor, setting the controller vibration through a tween and stopping the game 1. while the tween runs and 2. using the editor's stop button will cause the controller vibration to continue past closing the running game.
Closing the editor stops the vibration, as well as running the game again.
### Steps to reproduce
1. Run this script with a controller plugged in:
```gdscript
extends Node
func _ready() -> void:
	var t := create_tween()
	t.tween_method(
		func(magnitude: float): Input.start_joy_vibration(0, magnitude, magnitude, 0),
		0.0, 1.0, 10
	)
```
2. within the lifespan of the tween (10s), stop the running game with the stop button

3. The controller should still be vibrating
### Minimal reproduction project (MRP)
have a controller and see script above | bug,topic:editor,topic:input | low | Minor |
2,719,862,498 | kubernetes | Add more features to `kubectl` packages | I will describe the issue taking Debian package as example, but as far I can see it is the same issues are present in RPM package as well.
So let's look at Debian package contents. The package installed by [official instruction](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management) from official repo:
```
root@kubectl-deb-rant:~# apt show kubectl
Package: kubectl
Version: 1.29.3-1.1
Priority: optional
Section: admin
Maintainer: Kubernetes Authors <[email protected]>
Installed-Size: 49.8 MB
Homepage: https://kubernetes.io
Download-Size: 10.5 MB
APT-Manual-Installed: yes
APT-Sources: https://pkgs.k8s.io/core:/stable:/v1.29/deb Packages
Description: Command-line utility for interacting with a Kubernetes cluster
Command-line utility for interacting with a Kubernetes cluster.
N: There are 3 additional records. Please use the '-a' switch to see them.
root@kubectl-deb-rant:~# dpkg --listfiles kubectl
/.
/usr
/usr/bin
/usr/bin/kubectl
/usr/share
/usr/share/doc
/usr/share/doc/kubectl
/usr/share/doc/kubectl/LICENSE
/usr/share/doc/kubectl/README.md
root@kubectl-deb-rant:~#
```
# Shell completion
[Official installation guide says](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#enable-shell-autocompletion) that the user needs to manually add something like `source <(kubectl completion bash)`. This is... wrong. It is not the way it has been done in Linux for ages and has numerous disadvantages:
1. If Alice installs kubectl tool on some machine and enables completion by this guide for her user, Bob will not get completion for his user
2. It immensely slows down shell startup. On my current laptop `kubectl completion bash` takes almost 100ms. And this point still stands even if it will be pared down to 10ms.
3. This is just additional actions that should not be performed by a human and should work out of the box
## The correct approach
Completion files should be generated just once during package build time and stored in package as
- `/usr/share/bash-completion/completions/kubectl`
- `/usr/share/fish/completions/kubectl`
- ??? `/usr/share/zsh/functions/Completion/Linux/_kubectl` (not sure about this, not familiar with zsh)
The `kubectl` package should declare an optional dependency on the `bash-completion` package, either as Suggests or Recommends.
This setup ensures that the bash completion will load on demand, eliminating any shell startup delays.
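For instance, the package build could generate the files once at build time (illustrative commands; fish expects a `.fish` suffix):
```
# At package build time, e.g. from debian/rules:
kubectl completion bash > "$DESTDIR/usr/share/bash-completion/completions/kubectl"
kubectl completion fish > "$DESTDIR/usr/share/fish/completions/kubectl.fish"
```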
# Man page
The man page is not present. This has already been reported in kubernetes/kubectl#1291, which was closed with the response that the man page exists and should be packaged by package maintainers. As far as I understand, this is the proper place to ask the maintainers of the official deb package to include the man page. | kind/feature,sig/release,needs-triage | low | Major |
2,719,874,753 | ui | [bug]: | ### Describe the bug
Getting error while initiating Next Project,

### Affected component/components
Initial setup
### How to reproduce
init with Next js
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows , 11
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,719,879,136 | godot | When using inherited scenes, renaming nodes in the super scene leads to data loss in sub scene | ### Tested versions
- Reproducible in: 4.3, 4.4dev5 (other versions weren't tested)
### System information
Godot v4.4.dev5 - macOS 15.0.1 - Multi-window, 1 monitor - Metal (Forward+) - integrated Apple M1 Pro (Apple7) - Apple M1 Pro (8 threads)
### Issue description
When you rename a node in a super scene, all changes to that node are lost in the sub scenes. I'd expect the changed values to be preserved. This makes inherited scenes a rather risky choice.
### Steps to reproduce
1. Create a super scene with a child node
2. Create a sub scene from that super scene
3. Change some properties of the child node in the sub scene
4. Rename the child node in the super scene
Now, all changes you have made to the child node in the sub scene are lost.
### Minimal reproduction project (MRP)
[inherited-scenes-test.zip](https://github.com/user-attachments/files/18020856/inherited-scenes-test.zip)
| bug,topic:editor | low | Critical |
2,719,995,697 | pytorch | [Bug]need check for duplicate malloc calls in CUDAPluggableAllocator | ### ๐ Describe the bug
CUDAPluggableAllocator checks for duplicate free calls on the same pointer, but there is no corresponding check for duplicate malloc calls.
Duplicate malloc calls returning the same pointer can occur if a statically allocating PluggableAllocator is used.
Suppose there are two sets of memory allocation requests, representing Tensor-0 and Tensor-1, respectively. If Tensor-0 is allocated first and not released until the end of the entire training task, while Tensor-1 uses the same address as Tensor-0, memory corruption has already occurred. However, since the second free operation on this address happens at the very end of the process, the error might not be detected during the training process.
It's worth noting that some Tensors might not be explicitly released; instead, they are automatically freed when the process ends, thus bypassing the call to CUDAPluggableAllocator's raw_delete(). In that case, the duplicate malloc never produces any error report at all.
## Code
file-path : `torch/csrc/cuda/CUDAPluggableAllocator.cpp`
check for duplicate free calls in `CUDAPluggableAllocator::raw_delete()`
```c++
void CUDAPluggableAllocator::raw_delete(void* ptr) {
cudaStream_t stream{};
c10::DeviceIndex device_idx = -1;
size_t size = 0;
{
const std::lock_guard<std::mutex> lock(allocator_mutex_);
TORCH_CHECK(
allocation_metadata_.count(ptr),
"Trying to free a pointer not allocated here");
_AllocationMetadata& metadata = allocation_metadata_[ptr];
size = metadata.size;
device_idx = metadata.device_idx;
stream = metadata.stream;
allocation_metadata_.erase(ptr);
}
free_fn_(ptr, size, device_idx, stream);
}
```
no check for duplicate alloc calls in `CUDAPluggableAllocator::malloc()`
```c++
void* CUDAPluggableAllocator::malloc(
size_t size,
c10::DeviceIndex device,
cudaStream_t stream) {
void* r = alloc_fn_(size, device, stream);
{
const std::lock_guard<std::mutex> lock(allocator_mutex_);
allocation_metadata_.emplace(r, _AllocationMetadata(size, device, stream));
}
return r;
}
```
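A sketch of the kind of guard being requested, mirroring the style of the duplicate-free check above (not an actual patch):
```c++
void* CUDAPluggableAllocator::malloc(
    size_t size,
    c10::DeviceIndex device,
    cudaStream_t stream) {
  void* r = alloc_fn_(size, device, stream);
  {
    const std::lock_guard<std::mutex> lock(allocator_mutex_);
    // Reject a pointer that is already tracked: the plugged-in alloc_fn_ returned
    // an address that was handed out earlier and never freed through raw_delete().
    TORCH_CHECK(
        allocation_metadata_.count(r) == 0,
        "Trying to allocate a pointer that is already tracked here");
    allocation_metadata_.emplace(r, _AllocationMetadata(size, device, stream));
  }
  return r;
}
```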
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Oct 4 2024, 08:01:31) [Clang 16.0.0 (clang-1600.0.26.4)] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M2
Versions of relevant libraries:
[pip3] numpy==2.0.2
[conda] Could not collect
cc @ptrblck @msaroufim @eqy | module: cuda,triaged,module: CUDACachingAllocator | low | Critical |