id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,683,305,422 |
tauri
|
[bug] error running android dev on windows 10
|
### Describe the bug
I followed the Android configuration on this page: https://v2.tauri.app/start/prerequisites/#android,
then ran `npm run tauri android dev` and it produced the errors below.

### Reproduction
_No response_
### Expected behavior
The app should run in the Android emulator.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
✔ WebView2: 130.0.2849.80
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.12.1
- pnpm: 9.5.0
- npm: 10.5.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-sql 🦀: 2.0.2
- @tauri-apps/plugin-sql : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
```text
E:\workspace\Tauri\menu> npm run tauri android dev
> [email protected] tauri
> tauri android dev
Info Detected connected device: Pixel_8_API_29 (Android SDK built for x86) with target "i686-linux-android"
Info Using 192.168.3.82 to access the development server.
Info Replacing devUrl host with 192.168.3.82. If your frontend is not listening on that address, try configuring your development server to use the `TAURI_DEV_HOST` environment variable or 0.0.0.0 as host.
Running BeforeDevCommand (`npm run dev`)
> [email protected] dev
> vite
VITE v5.4.11 ready in 272 ms
➜  Local: http://localhost:1420/
➜  Network: http://198.18.0.1:1420/
➜  Network: http://192.168.3.82:1420/
warning: `C:\Users\rex19\.cargo\config` is deprecated in favor of `config.toml`
note: if you need to support cargo 1.38 or earlier, you can symlink `config` to `config.toml`
Compiling menu v0.1.0 (E:\workspace\Tauri\menu\src-tauri)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 2.14s
Info symlinking lib "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" in jniLibs dir "E:\\workspace\\Tauri\\menu\\src-tauri\\gen/android\\app/src/main/jniLibs/x86"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "libandroid.so"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "libdl.so"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "liblog.so"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "libm.so"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "libc.so"
Info symlink at "E:\\workspace\\Tauri\\menu\\src-tauri\\gen/android\\app/src/main/jniLibs/x86\\libmenu_lib.so" points to "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so"
<==========---> 84% EXECUTING [1s]
> :app:rustBuildX86Debug
> [email protected] tauri
> tauri android android-studio-script --target i686
warning: `C:\Users\rex19\.cargo\config` is deprecated in favor of `config.toml`
note: if you need to support cargo 1.38 or earlier, you can symlink `config` to `config.toml`
Compiling menu v0.1.0 (E:\workspace\Tauri\menu\src-tauri)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 2.22s
Info symlinking lib "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" in jniLibs dir "E:\\workspace\\Tauri\\menu\\src-tauri\\gen/android\\app/src/main/jniLibs/x86"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "libandroid.so"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "libdl.so"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "liblog.so"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "libm.so"
Info "E:\\workspace\\Tauri\\menu\\src-tauri\\target\\i686-linux-android\\debug\\libmenu_lib.so" requires shared lib "libc.so"
Performing Streamed Install
Success
Starting: Intent { cmp=com.menu.app/.MainActivity }
--------- beginning of main
11-22 14:01:31.818 12534 12534 I com.menu.app: Late-enabling -Xcheck:jni
11-22 14:01:31.843 12534 12534 E com.menu.app: Unknown bits set in runtime_flags: 0x8000
11-22 14:01:31.844 12534 12534 W com.menu.app: Unexpected CPU variant for X86 using defaults: x86
11-22 14:01:32.129 12534 12595 W libc : Unable to set property "qemu.gles" to "1": connection failed; errno=13 (Permission denied)
Info Watching E:\workspace\Tauri\menu\src-tauri for changes...
11-22 14:01:32.377 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/View;->computeFitSystemWindows(Landroid/graphics/Rect;Landroid/graphics/Rect;)Z (greylist, reflection, allowed)
11-22 14:01:32.378 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/ViewGroup;->makeOptionalFitsSystemWindows()V (greylist, reflection, allowed)
11-22 14:01:32.396 12534 12534 I WebViewFactory: Loading com.google.android.webview version 74.0.3729.185 (code 373018518)
11-22 14:01:32.399 12534 12534 I com.menu.app: The ClassLoaderContext is a special shared library.
11-22 14:01:32.400 12534 12534 I com.menu.app: The ClassLoaderContext is a special shared library.
11-22 14:01:32.416 12534 12534 I cr_LibraryLoader: Time to load native libraries: 1 ms (timestamps 5702-5703)
11-22 14:01:32.423 12534 12534 I chromium: [INFO:library_loader_hooks.cc(50)] Chromium logging enabled: level = 0, default verbosity = 0
11-22 14:01:32.423 12534 12613 I RustStdoutStderr: [INFO:library_loader_hooks.cc(50)] Chromium logging enabled: level = 0, default verbosity = 0
11-22 14:01:32.423 12534 12534 I cr_LibraryLoader: Expected native library version number "74.0.3729.185", actual native library version number "74.0.3729.185"
11-22 14:01:32.427 12534 12620 W cr_ChildProcLH: Create a new ChildConnectionAllocator with package name = com.google.android.webview, sandboxed = true
11-22 14:01:32.428 12534 12620 W com.menu.app: Accessing hidden method Landroid/content/Context;->bindServiceAsUser(Landroid/content/Intent;Landroid/content/ServiceConnection;ILandroid/os/Handler;Landroid/os/UserHandle;)Z (greylist, reflection, allowed)
11-22 14:01:32.431 12534 12534 I cr_BrowserStartup: Initializing chromium process, singleProcess=false
11-22 14:01:32.497 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/textclassifier/logging/SmartSelectionEventTracker;-><init>(Landroid/content/Context;I)V (greylist, reflection, allowed)
11-22 14:01:32.497 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/textclassifier/logging/SmartSelectionEventTracker;->logEvent(Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent;)V (greylist, reflection, allowed)
11-22 14:01:32.498 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent;->selectionStarted(I)Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent; (greylist, reflection, allowed)
11-22 14:01:32.498 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent;->selectionModified(II)Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent; (greylist,
reflection, allowed)
11-22 14:01:32.498 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent;->selectionModified(IILandroid/view/textclassifier/TextClassification;)Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent; (greylist, reflection, allowed)
11-22 14:01:32.498 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent;->selectionModified(IILandroid/view/textclassifier/TextSelection;)Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent; (greylist, reflection, allowed)
11-22 14:01:32.498 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent;->selectionAction(III)Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent; (greylist, reflection, allowed)
11-22 14:01:32.498 12534 12534 W com.menu.app: Accessing hidden method Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent;->selectionAction(IIILandroid/view/textclassifier/TextClassification;)Landroid/view/textclassifier/logging/SmartSelectionEventTracker$SelectionEvent; (greylist, reflection, allowed)
11-22 14:01:32.532 12534 12593 W OpenGLRenderer: Failed to choose config with EGL_SWAP_BEHAVIOR_PRESERVED, retrying without...
11-22 14:01:32.588 12534 12593 W Gralloc3: mapper 3.x is not supported
11-22 14:01:33.000 12534 12647 W cr_media: Requires BLUETOOTH permission
11-22 14:01:33.019 12534 12593 E eglCodecCommon: glUtilsParamSize: unknow param 0x000088ef
11-22 14:01:33.019 12534 12593 E eglCodecCommon: glUtilsParamSize: unknow param 0x000088ef
11-22 14:01:33.058 12534 12659 I VideoCapabilities: Unsupported profile 4 for video/mp4v-es
11-22 14:01:33.060 12534 12659 W cr_MediaCodecUtil: HW encoder for video/avc is not available on this device.
11-22 14:01:33.105 12534 12659 E eglCodecCommon: glUtilsParamSize: unknow param 0x000088ef
11-22 14:01:33.105 12534 12659 E eglCodecCommon: glUtilsParamSize: unknow param 0x000088ef
11-22 14:01:33.128 12534 12659 E eglCodecCommon: glUtilsParamSize: unknow param 0x000088ef
11-22 14:01:33.128 12534 12659 E eglCodecCommon: glUtilsParamSize: unknow param 0x000088ef
11-22 14:01:33.150 12534 12534 E Tauri/Console: File: http://tauri.localhost/ - Line 514 - Msg: Uncaught SyntaxError: Unexpected token .
11-22 14:01:33.208 12534 12534 E Tauri/Console: File: http://tauri.localhost/@vite/client - Line 34 - Msg: Uncaught SyntaxError: Unexpected token .
11-22 14:01:33.286 12534 12534 E Tauri/Console: File: http://tauri.localhost/@vite/client - Line 34 - Msg: Uncaught SyntaxError: Unexpected token .
```
### Additional context
_No response_
|
type: bug,status: needs triage,platform: Android
|
low
|
Critical
|
2,683,308,977 |
terminal
|
Request: "Select color scheme..." mouse-hover should provide color-scheme preview
|
### Description of the new feature
This is a tiny quality-of-life improvement request:
CTRL + SHIFT + P ----> "Select color scheme..."
We now see a dropdown of all our color schemes.
The UP and DOWN arrows trigger a preview of the highlighted color-scheme.
But hovering the mouse over a color-scheme ...only results in a 'gentle highlight', no preview.
My request: hovering over a color-scheme should also trigger a preview in the current tab.
This may seem trivial, but it would be much appreciated.
### Proposed technical implementation details
_No response_
|
Help Wanted,Product-Terminal,Issue-Task,Needs-Tag-Fix,Area-CmdPal
|
low
|
Minor
|
2,683,329,077 |
rust
|
invalid suggestions when the name could not be resolved in derive macro output
|
### Code
```Rust
// cargo new banana
// cd banana
// cargo add [email protected]
// cat src/main.rs
#[derive(rkyv::bytecheck::CheckBytes)]
#[repr(u8)]
enum Fruit { Apple, Banana }
fn main() {
println!("Hello, world!");
}
```
### Current output
```
error[E0433]: failed to resolve: could not find `bytecheck` in the list of imported crates
--> src/main.rs:1:10
|
1 | #[derive(rkyv::bytecheck::CheckBytes)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ could not find `bytecheck` in the list of imported crates
|
= note: this error originates in the derive macro `rkyv::bytecheck::CheckBytes` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing this crate
|
1 + use rkyv::bytecheck;
|
error[E0433]: failed to resolve: could not find `bytecheck` in the list of imported crates
--> src/main.rs:1:10
|
1 | #[derive(rkyv::bytecheck::CheckBytes)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ could not find `bytecheck` in the list of imported crates
|
= note: this error originates in the derive macro `rkyv::bytecheck::CheckBytes` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing this crate
|
1 + use rkyv::rancor;
|
```
and then after adding `use rkyv::bytecheck`:
```
error[E0433]: failed to resolve: could not find `bytecheck` in the list of imported crates
--> src/main.rs:3:10
|
3 | #[derive(rkyv::bytecheck::CheckBytes)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ could not find `bytecheck` in the list of imported crates
|
= note: this error originates in the derive macro `rkyv::bytecheck::CheckBytes` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing one of these crates
|
1 + use crate::bytecheck;
|
1 + use rkyv::bytecheck;
|
error[E0433]: failed to resolve: could not find `bytecheck` in the list of imported crates
--> src/main.rs:3:10
|
3 | #[derive(rkyv::bytecheck::CheckBytes)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ could not find `bytecheck` in the list of imported crates
|
= note: this error originates in the derive macro `rkyv::bytecheck::CheckBytes` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider importing one of these crates
|
1 + use crate::bytecheck::rancor;
|
1 + use rkyv::rancor;
|
warning: unused import: `rkyv::bytecheck`
--> src/main.rs:1:5
|
1 | use rkyv::bytecheck;
| ^^^^^^^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
```
### Desired output
Perhaps in this case `use`-ing crates should not be suggested. What's even weirder is that the compiler suggests an import of `rancor`, which is unrelated to the error!
### Rationale and extra context
It looks like for this derive macro to be usable, the crate must be made available as a proper crate (i.e., via `--extern` or `extern crate`).
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
and
rustc 1.84.0-nightly (b19329a37 2024-11-21)
binary: rustc
commit-hash: b19329a37cedf2027517ae22c87cf201f93d776e
commit-date: 2024-11-21
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.4
```
### Anything else?
_No response_
|
A-diagnostics,T-compiler
|
low
|
Critical
|
2,683,369,745 |
fucking-algorithm
|
Switching language to English has no effect
|
**Describe the bug**
I have tried switching the language to English in the settings but it has no effect. Even after refreshing, everything is in Chinese. I have tried quitting Chrome and trying again, but still there is no English.
To clarify, the English explanations do appear on Leetcode, but the extension menu itself is always in Chinese.
**Screenshots**
<img width="414" alt="image" src="https://github.com/user-attachments/assets/5b166f75-68b9-46d8-a117-ec032f90eb9e">
**Platform**
Chrome on MacOS.
|
chrome-extension-bug
|
low
|
Critical
|
2,683,417,329 |
vscode
|
Copilot onActivated event
|
We use the following check to decide whether or not to display copilot-related CTAs:
```
vscode.extensions.getExtension('github.copilot-chat')?.isActive
```
This check doesn't work properly right after our participant extension starts up. It seems that Copilot takes some time to activate, and activation happens after our extension is already up and running. Is there a way to listen for the activation of `github.copilot-chat`?
I also tried `vscode.extensions.onDidChange` to hide Copilot-related CTAs when a user disables the Copilot Chat extension, but it looks like the event doesn't fire on disabling, only when you enable the extension.
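Not part of the original issue text - a minimal sketch of the workaround shape this leaves us with today, assuming there is no dedicated cross-extension activation event: re-check `isActive` whenever `vscode.extensions.onDidChange` fires and, as a fallback, a few times on a short timer after startup.
```ts
import * as vscode from 'vscode';

// Sketch only: the 5 x 2s retry window is an arbitrary assumption, not a documented API.
export function watchCopilotChatActivation(
  context: vscode.ExtensionContext,
  onActive: () => void,
  onInactive: () => void,
): void {
  const check = () => {
    const ext = vscode.extensions.getExtension('github.copilot-chat');
    if (ext?.isActive) {
      onActive();
    } else {
      onInactive();
    }
  };

  check(); // initial state at our own activation time
  context.subscriptions.push(vscode.extensions.onDidChange(check));

  // Copilot Chat may only flip isActive after our extension is already running,
  // and onDidChange does not cover lazy activation, so poll briefly after startup.
  let attempts = 0;
  const timer = setInterval(() => {
    check();
    if (++attempts >= 5) {
      clearInterval(timer);
    }
  }, 2000);
  context.subscriptions.push({ dispose: () => clearInterval(timer) });
}
```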
|
chat
|
low
|
Minor
|
2,683,521,534 |
vscode
|
Missing GTK window (min-max-close) control icons/buttons
|
Type: <b>Feature Request</b>
With the latest VS Code Insiders update on Ubuntu 24, the window close, minimize and maximize buttons (icon shapes) automatically changed from the GTK window-control appearance to Windows-style ones.
VS Code version: Code - Insiders 1.96.0-insider (69acde7458f428f0e6869de8915c9dd995cdda1a, 2024-11-21T05:04:38.064Z)
OS version: Linux x64 6.8.0-49-generic
Modes:
<!-- generated by issue reporter -->
|
upstream,linux,electron,under-discussion,titlebar
|
low
|
Minor
|
2,683,570,127 |
go
|
x/vuln/cmd/govulncheck: unmask module versions in SBOM testdata
|
Go will soon release a feature where versions of the main module and dependencies built from an untagged or dirty commit produce a valid Go version. Before, those versions were `(devel)`. The new change is already at Go tip, which makes some of the tests that expect the `(devel)` version fail. We currently mask the dirty versions with `(devel)` too, but once all builders' Go versions have this feature, we should just use the actual version.
|
vulncheck or vulndb
|
low
|
Minor
|
2,683,623,167 |
godot
|
BBCode italic tag can override enclosing font size tag
|
### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1650 (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz (16 Threads)
### Issue description
Applying BBCode tags for italics inside of a font size change can undo the enclosing font size tag's effect. On the left of each image below is the Godot Inspector showing the text I placed into a RichTextLabel and on the right of the same image is that same RichTextLabel running "in-game" (using Run Current Scene on just the label).
**Example 1: Two paragraphs of text rendered exactly as expected, no font size changes, no italics.**

**Example 2: Two paragraphs of text rendered exactly as expected, one font size change wrapped around both paragraphs, no italics.**

**Example 3: Two paragraphs of text manifesting the bug, because the introduction of italics in paragraph two has undone the font size that should still be in effect.**

Note that the only change from Example 2 to Example 3 was the introduction of the `[i]...[/i]` tags.
### Steps to reproduce
- Open a new project.
- Create a single scene whose root node is a RichTextLabel.
- Resize the label to be large enough to see its content.
- Enable BBCode for the label in the Inspector.
- Enter in the Text field of the label any of the example texts you see in the images above.
- Use F6 to run the scene and observe the behavior shown in the images above.
This can be expedited using the MRP I attach below.
### Minimal reproduction project (MRP)
[mrp-for-bbcode-bug-report.zip](https://github.com/user-attachments/files/17872469/mrp-for-bbcode-bug-report.zip)
|
bug,topic:gui
|
low
|
Critical
|
2,683,625,447 |
deno
|
Deno.serve: request.signal is aborted even though the response finished successfully
|
Version: Deno 2.1.1
Example:
```ts
Deno.serve((req: Request) => {
req.signal.addEventListener("abort", () => {
console.log("Request aborted on the server side:", req.signal.aborted);
});
return new Response("Hello, World!");
});
const constructor = new AbortController();
const { signal } = constructor;
const res = await fetch("http://localhost:8000", { signal });
console.log(await res.text());
console.log("Request aborted on the client side:", signal.aborted);
```
Prints:
```
Request aborted on the server side: true
Hello, World!
Request aborted on the client side: false
```
This is awkward and may even lead to bugs.
Bun doesn't seem to have this problem.
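Not from the original report - a small workaround sketch, assuming the spurious abort only ever fires after the handler has returned its response: track completion with a local flag and treat only earlier aborts as real client cancellations.
```ts
Deno.serve((req: Request) => {
  // Assumption: once the handler has produced its response, a later "abort"
  // event is the spurious one described above and can be ignored.
  let responded = false;

  req.signal.addEventListener("abort", () => {
    if (!responded) {
      console.log("Client actually aborted the request");
    }
  });

  const res = new Response("Hello, World!");
  responded = true;
  return res;
});
```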
|
ext/http,triage required π
|
low
|
Critical
|
2,683,632,394 |
vscode
|
Editor GPU: Move decoration style parsing and validation into ViewGpuContext
|
This should move into ViewGpuContext so lines with inline decorations that aren't supported don't make it that far:
https://github.com/microsoft/vscode/blob/27687b2229467b1409b51a30f9bd024758feec7e/src/vs/editor/browser/gpu/fullFileRenderStrategy.ts#L377-L394
Probably blocked on https://github.com/microsoft/vscode/issues/234473
|
plan-item,editor-gpu
|
low
|
Minor
|
2,683,640,902 |
vscode
|
Editor GPU: Avoid invalidating lines and clearing render data when we can avoid it
|
Currently the up-to-date line cache and render buffer are cleared more aggressively than necessary:
https://github.com/microsoft/vscode/blob/27687b2229467b1409b51a30f9bd024758feec7e/src/vs/editor/browser/gpu/fullFileRenderStrategy.ts#L140-L177
|
plan-item,perf,editor-gpu
|
low
|
Minor
|
2,683,658,163 |
ui
|
[feat]: Add iconLibrary documentation
|
### Feature description
Please add documentation for the new property to the components.json page.
Page:
https://ui.shadcn.com/docs/components-json
Property:
"iconLibrary": "lucide"
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs
|
area: request
|
low
|
Minor
|
2,683,665,190 |
ollama
|
Losing user agent after HTTP redirect while pulling models
|
### What is the issue?
When pulling a model, the first HTTP GET call is issued with a specific Ollama user agent (like `ollama/0.4.2`). The subsequent GET requests to Cloudflare use a different user agent (`Go-http-client/1.1`).
My company uses firewall rules based on domains and user agents. Ollama should use a consistent user agent for all of its HTTP requests.
Logs from my IT department (pulling nomic-embed-text):
```
[22/Nov/2024:15:47:19 +0100] "" int.ern.ali.pv4 307 "GET https://registry.ollama.ai/v2/library/nomic-embed-text/blobs/sha256:970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6 HTTP/2.0" "Business, Software/Hardware" "Minimal Risk" "text/html" 1304 "ollama/0.4.2 (arm64 darwin) Go/go1.23.3" "" "0" ipv6:here:removed
[22/Nov/2024:15:47:50 +0100] "" int.ern.ali.pv4 206 "GET https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/97/970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=xxx&X-Amz-Date=20241122T144719Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=xxx HTTP/1.1" "Content Server" "Minimal Risk" "application/octet-stream" 74291068 "Go-http-client/1.1" "" "0" ipv6:here:removed
[22/Nov/2024:15:47:51 +0100] "" int.ern.ali.pv4 206 "GET https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/97/970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=xxx&X-Amz-Date=20241122T144719Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=xxx HTTP/1.1" "Content Server" "Minimal Risk" "application/octet-stream" 100000413 "Go-http-client/1.1" "" "0" ipv6:here:removed
[22/Nov/2024:15:47:55 +0100] "" int.ern.ali.pv4 206 "GET https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/sha256/97/970aa74c0a90ef7482477cf803618e776e173c007bf957f635f1015bfcfef0e6/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=xxx&X-Amz-Date=20241122T144719Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=xxx HTTP/1.1" "Content Server" "Minimal Risk" "application/octet-stream" 100000404 "Go-http-client/1.1" "" "0" ipv6:here:removed
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.4.2
|
bug
|
low
|
Minor
|
2,683,727,830 |
pytorch
|
Burn down test/functorch/test_aotdispatch.py OpInfo test failures
|
As part of PT2 hardening / user empathy for AOTDispatcher:
(1) we have a big suite of tests that run all of our OpInfos through AOTDispatcher, and [roughly 71 of them are xfailed](https://github.com/pytorch/pytorch/blob/main/test/functorch/test_aotdispatch.py#L6333) (or, even worse, skipped). We should burn down these failures. Many of them appear to be e.g. missing symbolic meta functions. We hit one of these during the last user empathy day ([issue](https://github.com/pytorch/pytorch/issues/141187))
(2) If we think that this class of errors will stick around for a while, we should consider making the error message better. For example, given the crash in this [issue](https://github.com/pytorch/pytorch/issues/141187), there are quite a few things we could have improved about the error message:
**improvement ideas**
(a) The error didn't include the name of the op that we failed to trace (`_upsample_bilinear2d_aa_backward`)
(b) The error didn't mention anything about "missing fake tensor rule" - it gave a pretty opaque error about `RuntimeError: isIntList() INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/core/ivalue_inl.h":1979`
(c) If we fix both of those, it should theoretically be possible to figure out that the "culprit" user code is coming from a call to `torch.nn.functional.interpolate(..., antialias=True)`, and recommend that they file an issue / work around with a "traceable" annotation
cc @mruberry @ZainRizvi @chauhang @penguinwu @zou3519 @yf225
|
module: tests,triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher
|
low
|
Critical
|
2,683,746,842 |
pytorch
|
Support saving pytorch models in OCI registries
|
### 🚀 The feature, motivation and pitch
This is an idea that came to my mind while I was thinking about how to help users run ML models in the cloud. I'm aware that there are dedicated projects for building and distributing ML models in the cloud, like Kubeflow. But I think it may not be worth the effort to deploy that infrastructure when users have a small environment and just want to put a model somewhere and run a proof of concept. So I had the idea of using one of the most common components in a cloud environment to distribute models: container registries, or more precisely, OCI registries. This is not exactly a new idea. The [Kubewarden](https://www.kubewarden.io/) project already uses OCI registries to distribute WASM modules, so I think the same principle could be applied to distribute PyTorch models.
I'm not claiming this is a best practice for a production environment, considering all the metadata associated with a model, but it could be a starting point for something interesting in the future.
> [!WARNING]
> As I'm not familiar with AIOps, I don't know if this makes sense for the project, but I think it's worth sharing to get some feedback.
### Alternatives
Maybe projects dedicated to machine learning models, like Kubeflow, could be changed to make use of OCI registries, keeping this out of PyTorch. But that is out of scope for this project.
### Additional context
- This is a random idea that came to my mind while I was learning more about OCI
- This POC is the result of a SUSE Hack Week spent learning more about OCI and PyTorch
cc @mruberry @mikaylagawarecki
|
module: serialization,triaged,enhancement
|
low
|
Minor
|
2,683,781,737 |
next.js
|
Calling notFound() in ISR generates static files in app router.
|
### Link to the code that reproduces this issue
https://github.com/peterlidee/isr-not-found-bug
### To Reproduce
1. Run next build
2. Check build folder: `.next/app/post/a.html` and `.next/pages/post2/a.html` exist (SSG)
3. Run next start
4. Visit `/post/a` (app router) and `/post2/a` (pages router). Pages appear as expected.
5. Visit `/post/b` (app router) and `/post2/b` (pages router). Pages appear as expected. (ISR works confirmed)
6. Visit `/post/c` (app router) and `/post2/c` (pages router). 404 as expected. (`{ notFound: true }` and `notFound()` work)
7. Check build pages folder: `.next/pages/post2/` now contains 2 static html files: `a.html` and `b.html`. No `c.html` because we returned `{ notFound: true }` from `getStaticProps`. This is as expected.
8. Check build app folder: `.next/app/post/` now contains 3 static html files: `a.html` and `b.html` as expected but also `c.html`.
9. `c.html` should not have been generated because we called `notFound()`.
### Current vs. Expected behavior
Following the steps from the previous section, I expected `c.html` to *not* exist in the app router build folder but it did exist.
I setup an example running ISR in the app router ([https://github.com/peterlidee/isr-not-found-bug/blob/main/src/app/post/[slug]/page.tsx](https://github.com/peterlidee/isr-not-found-bug/blob/main/src/app/post/%5Bslug%5D/page.tsx)). I use `generateStaticParams` to generate one page: `post/a`:
```
export function generateStaticParams() {
return [{ slug: 'a' }];
}
```
In the actual functional component, I only allow slugs "a" or "b":
```
export default async function page({ params }: Props) {
const slug = (await params).slug;
const validParams = ['a', 'b'];
if (!validParams.includes(slug)) {
notFound();
}
return <div>post slug: {slug}</div>;
}
```
Slug "a" has been prerendered at build time. When visiting `/post/b`, next will do ISR and generate static html in the app build folder: `.next/app/post/b.html`. This confirms that ISR works.
The problem lies with slug "c" (or anything other than "a" or "b"). When the slug is "c", `notFound()` gets called and we get a 404 error as expected. However, when looking in the build folder we see that a static `c.html` was generated: `.next/app/post/c.html`.
This is unexpected behaviour: we just told Next.js this page doesn't exist, so why generate a static file for it? I copied the functionality of this page in the pages router ([https://github.com/peterlidee/isr-not-found-bug/blob/main/src/pages/post2/[slug].tsx](https://github.com/peterlidee/isr-not-found-bug/blob/main/src/pages/post2/%5Bslug%5D.tsx)) (`/post2`) using `getStaticPaths` and `getStaticProps`, and in the pages router `c.html` does not get generated in the build folder.
I'm not 100% sure this is a bug because the app router behaviour is similar in Next 13, 14, 15 and canary.
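For comparison, a hypothetical reconstruction of the pages-router page described above (the real file is only in the linked repo; the `revalidate` value here is an assumption). Returning `{ notFound: true }` from `getStaticProps` is what keeps `c.html` out of the pages-router build output:
```tsx
// pages/post2/[slug].tsx - reconstruction for illustration only.
import type { GetStaticPaths, GetStaticProps } from 'next';

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [{ params: { slug: 'a' } }], // only "a" is prerendered at build time
  fallback: 'blocking',               // other slugs are generated on demand (ISR)
});

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const slug = params?.slug as string;
  if (!['a', 'b'].includes(slug)) {
    return { notFound: true };        // no static HTML is written for e.g. "c"
  }
  return { props: { slug }, revalidate: 60 };
};

export default function Post({ slug }: { slug: string }) {
  return <div>post slug: {slug}</div>;
}
```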
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Education
Available memory (MB): 32617
Available CPU cores: 16
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: 15.0.3
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next build (local), next start (local)
### Additional context
_No response_
|
bug
|
low
|
Critical
|
2,683,804,692 |
rust
|
Tracking issue for release notes of #102575: const_collections_with_hasher, build_hasher_default_const_new
|
This issue tracks the release notes text for #123197 and https://github.com/rust-lang/rust/issues/102575.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Stabilized APIs
- [`BuildHasherDefault::new`](https://doc.rust-lang.org/stable/std/hash/struct.BuildHasherDefault.html#method.new)
# Const stabilized APIs
- [`HashMap::with_hasher`](https://doc.rust-lang.org/stable/std/collections/struct.HashMap.html#method.with_hasher)
- [`HashSet::with_hasher`](https://doc.rust-lang.org/stable/std/collections/struct.HashSet.html#method.with_hasher)
- [`BuildHasherDefault::new`](https://doc.rust-lang.org/stable/std/hash/struct.BuildHasherDefault.html#method.new)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @krtab, @krtab -- origin issue/PR authors and assignees for starting to draft text
|
T-libs-api,relnotes,relnotes-tracking-issue
|
low
|
Minor
|
2,683,906,185 |
angular
|
in angular.dev add signal/non-signal toggle
|
### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
Right now there are two ways to do things in Angular, and in some situations they are completely different approaches to the same problem - e.g. using `computed` vs. using an input setter (a concrete sketch of the two flavors follows this section). This isn't a temporary situation either, since the Angular team has promised to keep supporting zone.js for a long time.
This means that as an Angular developer I will have to deal with both the signal-based and the non-signal-based way of doing things, and from ngPoland I understand that there are even more changes coming to Forms, Routing and HttpClient to be built with signals in mind.
That will make future Angular projects come in two completely different flavors: signal/zoneless or zoned/non-signal.
That leaves a big problem for the long-term documentation. Of course the docs will be updated to match the new signal flavor, but when developers go to the docs to find out how to do things in a non-signal project, they will not find anything.
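To make the `computed` vs. input-setter contrast above concrete, here is a minimal sketch (component names and the derived value are made up for illustration, assuming Angular 17.1+ signal inputs) of the same behaviour written both ways:
```ts
import { Component, Input, computed, input } from '@angular/core';

// Signal flavor: a signal input plus a computed() derived value.
@Component({
  selector: 'app-price-signal',
  standalone: true,
  template: '{{ total() }}',
})
export class PriceSignalComponent {
  price = input(0);                           // signal-based input
  total = computed(() => this.price() * 1.2); // derived reactively
}

// Non-signal flavor: an @Input() setter that recomputes the value imperatively.
@Component({
  selector: 'app-price-legacy',
  standalone: true,
  template: '{{ total }}',
})
export class PriceLegacyComponent {
  total = 0;
  @Input() set price(value: number) {
    this.total = value * 1.2;
  }
}
```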
### Proposed solution
angular.dev should have, where it makes sense, a tab/toggle to show the content related to the signal or non-signal approach.
It should behave similarly to https://typescript-eslint.io/users/configs/#projects-without-type-checking, where you switch between flat config and legacy config, but for the signal/non-signal way. It could even be called signal/legacy to show people that they are using an old API and encourage them to update.
I understand that this will make angular.dev more complicated, but it's useful for developers who started their career learning Angular with signals in mind and then go to work at a company that still has code using the non-signal way.
### Alternatives considered
two alternatives
1. When people want to see the other way to do things, send them to the old docs at https://v17.angular.io/docs.
This approach has a problem with changes made since then: imagine that in the future a new method is added to the non-signal router service - the old docs won't have it. That matters especially in scenarios where the Angular team added the method to make it easier to switch to the signal-based approach.
2. Have separate pages for signal and non-signal things that are significantly different, and separate sections or a note on the other pages that have small yet important differences between them.
This approach is less complicated from a technical perspective, but less user friendly, since users will be seeing pages/sections unrelated to what they are looking for.
|
area: docs
|
low
|
Minor
|
2,683,916,604 |
kubernetes
|
Flexvolumes should be mountable when non-attachable flaky tests
|
### Failure cluster [4176883069dfc8454c66](https://go.k8s.io/triage#4176883069dfc8454c66)

##### Error text:
```
[FAILED] Failed to create client pod: Timed out after 300.000s.
Expected Pod to be in <v1.PodPhase>: "Running"
Got instead:
<*v1.Pod | 0xc003627688>:
metadata:
creationTimestamp: "2024-11-09T11:27:28Z"
labels:
role: flex-client
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:role: {}
f:spec:
f:affinity:
.: {}
f:nodeAffinity:
.: {}
f:requiredDuringSchedulingIgnoredDuringExecution: {}
f:containers:
k:{"name":"flex-client"}:
.: {}
f:command: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:securityContext:
.: {}
f:privileged: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:volumeMounts:
.: {}
k:{"mountPath":"/opt/0"}:
.: {}
f:mountPath: {}
f:name: {}
f:workingDir: {}
f:dnsPolicy: {}
f:enableServiceLinks: {}
```
#### Recent failures:
[11/22/2024, 1:38:37 PM pr:pull-kubernetes-e2e-gce-canary](https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128889/pull-kubernetes-e2e-gce-canary/1859939306492137472)
[11/22/2024, 1:24:18 PM pr:pull-kubernetes-e2e-gce-canary](https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/125230/pull-kubernetes-e2e-gce-canary/1859935684870017024)
[11/22/2024, 1:07:39 PM pr:pull-kubernetes-e2e-gce-canary](https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128650/pull-kubernetes-e2e-gce-canary/1859930965229441024)
[11/22/2024, 12:36:06 PM pr:pull-kubernetes-e2e-gce-canary](https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/125230/pull-kubernetes-e2e-gce-canary/1859923588459532288)
[11/22/2024, 10:35:44 AM pr:pull-kubernetes-e2e-gce-canary](https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128889/pull-kubernetes-e2e-gce-canary/1859893268624445440)
https://prow.k8s.io/job-history/gs/kubernetes-ci-logs/pr-logs/directory/pull-kubernetes-e2e-gce-canary
/kind failing-test
/kind flake
/sig storage
|
sig/storage,kind/flake,kind/failing-test,needs-triage
|
low
|
Critical
|
2,683,921,060 |
tailwindcss
|
[v4] --radius-full is not available as a variable
|
**What version of Tailwind CSS are you using?**
4 beta.2
**What build tool (or framework if it abstracts the build tool) are you using?**
Tailwind Play
**What version of Node.js are you using?**
N/A
**What browser are you using?**
Firefox
**What operating system are you using?**
macOS
**Reproduction URL**
https://play.tailwindcss.com/bpvuR7VY5Z
**Describe your issue**
`--radius-full` does not appear to be available as a variable. As an obviously pointless example, `rounded-(--radius-lg)` will result in an element with rounded corners, while `rounded-(--radius-full)` will not.
|
v4
|
low
|
Critical
|
2,683,948,456 |
rust
|
cargo doc should render crate examples and link to them on main documentation page
|
(re-filing of https://github.com/rust-lang/rust/issues/34022)
Cargo allows crate developers to create examples to show off their crate's API (as shown in http://doc.crates.io/guide.html#project-layout). These examples are very useful to see how the individual items in a crate can be used together, in a way that the standard documentation doesn't capture very well.
When viewing a crate's documentation, it would be useful if examples were rendered as standard source files, and there were links to view them on the main library overview page.
|
T-rustdoc,C-enhancement,A-rustdoc-ui,A-rustdoc-scrape-examples,T-rustdoc-frontend
|
medium
|
Major
|
2,683,950,706 |
godot
|
Subtractive Blending doesn't work on Compatibility Renderer
|
### Tested versions
I've tested various versions of Godot 4 going all the way back to when the Compatibility Renderer was first added. None of them have correctly rendered subtractive blending. It should be reproducible in any version of Godot 4 with the Compatibility Renderer.
### System information
Godot v4.4.dev5 - Windows 10.0.19045 - Multi-window, 3 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6094) - AMD Ryzen 7 5800X 8-Core Processor (16 threads)
### Issue description
Subtractive Blended Materials show a solid color, instead of blending with whatever is drawn behind. This isn't an issue for Additive or Multiplicative Blended Materials.
In editor, it shows one of your editor theme colors:

In game, it seems to show a solid gray:

### Steps to reproduce
1. Create a project, set to Compatibility Renderer
2. Open a 3D Scene
3. Add a Camera3D
4. Add a MeshInstance3D, with a PlaneMesh rotated towards the Camera
5. Add a StandardMaterial3D to the MeshInstance3D, Shading Mode set to Unshaded, Blend Mode set to Subtract
### Minimal reproduction project (MRP)
N/A
|
bug,topic:rendering,needs testing,topic:3d
|
low
|
Minor
|
2,683,954,811 |
next.js
|
SASS import error when adding carbon-components to nextjs with turbopack
|
### Link to the code that reproduces this issue
https://github.com/krab7191/turbopack-sass-carbon-issue
### To Reproduce
1. Create a new nextjs project and choose to use Turbopack
2. Adjust React & React DOM versions to 18.2.0 as required peer deps
3. Follow instructions to add [carbon-components](https://carbondesignsystem.com/developing/react-tutorial/overview/) (step 1)
### Current vs. Expected behavior
When running `npm run dev` the following error appears:
```
Error evaluating Node.js code
Error: Can't find stylesheet to import.
╷
1 │ @use '@carbon/react';
```
When changing the Sass `@use` statement to `@use '@carbon/react/index.scss';`, the error changes to:
```
Error evaluating Node.js code
Error: Can't find stylesheet to import.
╷
8 │ @forward '@carbon/colors';
```
Removing the `--turbopack` option from the "dev" script and restarting makes everything render properly, i.e. the button appears on the page.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 21.7.3
npm: 10.5.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: 15.0.3
react: 18.2.0
react-dom: 18.2.0
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_
|
bug,Turbopack,linear: turbopack
|
low
|
Critical
|
2,683,968,825 |
vscode
|
Explorer becomes empty when `package.json` is present
|
Type: <b>Bug</b>
When there is a file named `package.json`, the whole explorer just becomes empty:

As shown in the terminal on the right side, the directory is not empty, but those files are not displayed.
If I run `mv package.json <some other name>`, the explorer returns to normal.

Renaming it back to `package.json`, the files disappear again.

VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz (16 x 2304)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|23.78GB (6.72GB free)|
|Process Argv|--crash-reporter-id ad512ff0-65b0-4430-b33b-feecc2d05dc6|
|Screen Reader|no|
|VM|47%|
</details>Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
jg8ic977:31013176
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31185841
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter -->
|
bug,file-explorer
|
low
|
Critical
|
2,683,975,659 |
TypeScript
|
Module resolution: NodeNext breaks typechecking
|
### Demo Repo
https://github.com/damianobarbati/ts-repro
### Which of the following problems are you reporting?
The module specifier resolves to the right file, but something about the types are wrong
### Demonstrate the defect described above with a code sample.
To reproduce:
```sh
git clone [email protected]:damianobarbati/ts-repro.git
cd ts-repro
git checkout typescript-constructable-issue
pnpm i
pnpm tsc
```
Here is the line where this can be seen:
https://github.com/damianobarbati/ts-repro/blob/typescript-constructable-issue/services/api/src/index.ts#L2
### Run `tsc --showConfig` and paste its output here
```sh
{
"compilerOptions": {
"moduleResolution": "nodenext",
"module": "nodenext",
"target": "esnext",
"lib": [
"dom",
"dom.iterable",
"esnext",
"webworker"
],
"allowJs": true,
"allowImportingTsExtensions": true,
"skipLibCheck": true,
"strict": true,
"forceConsistentCasingInFileNames": true,
"noEmit": true,
"noUnusedLocals": false,
"esModuleInterop": true,
"resolveJsonModule": true,
"strictPropertyInitialization": false,
"isolatedModules": true,
"jsx": "preserve",
"incremental": true,
"baseUrl": "./",
"tsBuildInfoFile": "../../../../tmp/tsbuildinfo",
"moduleDetection": "force",
"allowSyntheticDefaultImports": true,
"resolvePackageJsonExports": true,
"resolvePackageJsonImports": true,
"preserveConstEnums": true,
"useDefineForClassFields": true,
"noImplicitAny": true,
"noImplicitThis": true,
"strictNullChecks": true,
"strictFunctionTypes": true,
"strictBindCallApply": true,
"strictBuiltinIteratorReturn": true,
"alwaysStrict": true,
"useUnknownInCatchVariables": true
},
"files": [
"./services/api/src/Foe.spec.ts",
"./services/api/src/Foe.ts",
"./services/api/src/index.ts",
"./services/api/src/helper/fn.spec.ts",
"./services/api/src/helper/fn.ts",
"./services/types/src/User.spec.ts",
"./services/types/src/User.ts"
]
}
```
### Run `tsc --traceResolution` and paste its output here
```
File '/Users/damians/Desktop/ts-repro/services/api/src/package.json' does not exist.
Found 'package.json' at '/Users/damians/Desktop/ts-repro/services/api/package.json'.
======== Resolving module 'ioredis' from '/Users/damians/Desktop/ts-repro/services/api/src/index.ts'. ========
Explicitly specified module resolution kind: 'NodeNext'.
Resolving in ESM mode with conditions 'import', 'types', 'node'.
'baseUrl' option is set to '/Users/damians/Desktop/ts-repro', using this value to resolve non-relative module name 'ioredis'.
Resolving module name 'ioredis' relative to base url '/Users/damians/Desktop/ts-repro' - '/Users/damians/Desktop/ts-repro/ioredis'.
Loading module as file / folder, candidate module location '/Users/damians/Desktop/ts-repro/ioredis', target file types: TypeScript, JavaScript, Declaration, JSON.
Directory '/Users/damians/Desktop/ts-repro/ioredis' does not exist, skipping all lookups in it.
File '/Users/damians/Desktop/ts-repro/services/api/src/package.json' does not exist according to earlier cached lookups.
File '/Users/damians/Desktop/ts-repro/services/api/package.json' exists according to earlier cached lookups.
Loading module 'ioredis' from 'node_modules' folder, target file types: TypeScript, JavaScript, Declaration, JSON.
Searching all ancestor node_modules directories for preferred extensions: TypeScript, Declaration.
Directory '/Users/damians/Desktop/ts-repro/services/api/src/node_modules' does not exist, skipping all lookups in it.
Found 'package.json' at '/Users/damians/Desktop/ts-repro/services/api/node_modules/ioredis/package.json'.
'package.json' does not have a 'typesVersions' field.
'package.json' does not have a 'typings' field.
'package.json' has 'types' field './built/index.d.ts' that references '/Users/damians/Desktop/ts-repro/services/api/node_modules/ioredis/built/index.d.ts'.
File '/Users/damians/Desktop/ts-repro/services/api/node_modules/ioredis/built/index.d.ts' exists - use it as a name resolution result.
'package.json' does not have a 'peerDependencies' field.
Resolving real path for '/Users/damians/Desktop/ts-repro/services/api/node_modules/ioredis/built/index.d.ts', result '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/built/index.d.ts'.
======== Module name 'ioredis' was successfully resolved to '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/built/index.d.ts' with Package ID 'ioredis/built/[email protected]'. ========
File '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/built/package.json' does not exist.
Found 'package.json' at '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/package.json'.
======== Resolving module './Redis' from '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/built/index.d.ts'. ========
Explicitly specified module resolution kind: 'NodeNext'.
Resolving in CJS mode with conditions 'require', 'types', 'node'.
Loading module as file / folder, candidate module location '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/built/Redis', target file types: TypeScript, JavaScript, Declaration, JSON.
File '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/built/Redis.ts' does not exist.
File '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/built/Redis.tsx' does not exist.
File '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/built/Redis.d.ts' exists - use it as a name resolution result.
File '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/package.json' exists according to earlier cached lookups.
'package.json' does not have a 'peerDependencies' field.
....OMISSIS....
======== Module name '@typescript/lib-webworker' was not resolved. ========
File '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/typescript/lib/package.json' does not exist according to earlier cached lookups.
File '/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/typescript/package.json' exists according to earlier cached lookups.
services/api/src/index.ts:2:19 - error TS2351: This expression is not constructable.
Type 'typeof import("/Users/damians/Desktop/ts-repro/node_modules/.pnpm/[email protected]/node_modules/ioredis/built/index")' has no construct signatures.
2 const redis = new Redis();
~~~~~
Found 1 error in services/api/src/index.ts:2
ELIFECYCLE  Command failed with exit code 1.
```
### Paste the `package.json` of the *importing* module, if it exists
```json
{
"name": "api",
"type": "module",
"imports": {
"#api/*": "./src/*",
"#api/helper/*": "./src/helper/*"
},
"scripts": {
"test": "vitest run",
"start:dev": "node --no-warnings --experimental-transform-types --watch ./src/index.ts",
"start": "node --no-warnings --experimental-transform-types ./src/index.ts"
},
"dependencies": {
"ioredis": "^5.4.1",
"types": "workspace:^"
}
}
```
### Paste the `package.json` of the *target* module, if it exists
```json
{
"name": "ioredis",
"version": "5.4.1",
"description": "A robust, performance-focused and full-featured Redis client for Node.js.",
"main": "./built/index.js",
"types": "./built/index.d.ts",
"files": [
"built/"
],
"scripts": {
"test:tsd": "npm run build && tsd",
"test:js": "TS_NODE_TRANSPILE_ONLY=true NODE_ENV=test mocha \"test/helpers/*.ts\" \"test/unit/**/*.ts\" \"test/functional/**/*.ts\"",
"test:cov": "nyc npm run test:js",
"test:js:cluster": "TS_NODE_TRANSPILE_ONLY=true NODE_ENV=test mocha \"test/cluster/**/*.ts\"",
"test": "npm run test:js && npm run test:tsd",
"lint": "eslint --ext .js,.ts ./lib",
"docs": "npx typedoc --logLevel Error --excludeExternals --excludeProtected --excludePrivate --readme none lib/index.ts",
"format": "prettier --write \"{,!(node_modules)/**/}*.{js,ts}\"",
"format-check": "prettier --check \"{,!(node_modules)/**/}*.{js,ts}\"",
"build": "rm -rf built && tsc",
"prepublishOnly": "npm run build",
"semantic-release": "semantic-release"
},
"repository": {
"type": "git",
"url": "git://github.com/luin/ioredis.git"
},
"keywords": [
"redis",
"cluster",
"sentinel",
"pipelining"
],
"tsd": {
"directory": "test/typing"
},
"author": "Zihua Li <[email protected]> (http://zihua.li)",
"license": "MIT",
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/ioredis"
},
"dependencies": {
"@ioredis/commands": "^1.1.1",
"cluster-key-slot": "^1.1.0",
"debug": "^4.3.4",
"denque": "^2.1.0",
"lodash.defaults": "^4.2.0",
"lodash.isarguments": "^3.1.0",
"redis-errors": "^1.2.0",
"redis-parser": "^3.0.0",
"standard-as-callback": "^2.1.0"
},
"devDependencies": {
"@ioredis/interface-generator": "^1.3.0",
"@semantic-release/changelog": "^6.0.1",
"@semantic-release/commit-analyzer": "^9.0.2",
"@semantic-release/git": "^10.0.1",
"@types/chai": "^4.3.0",
"@types/chai-as-promised": "^7.1.5",
"@types/debug": "^4.1.5",
"@types/lodash.defaults": "^4.2.7",
"@types/lodash.isarguments": "^3.1.7",
"@types/mocha": "^9.1.0",
"@types/node": "^14.18.12",
"@types/redis-errors": "^1.2.1",
"@types/sinon": "^10.0.11",
"@typescript-eslint/eslint-plugin": "^5.48.1",
"@typescript-eslint/parser": "^5.48.1",
"chai": "^4.3.6",
"chai-as-promised": "^7.1.1",
"eslint": "^8.31.0",
"eslint-config-prettier": "^8.6.0",
"mocha": "^9.2.1",
"nyc": "^15.1.0",
"prettier": "^2.6.1",
"semantic-release": "^19.0.2",
"server-destroy": "^1.0.1",
"sinon": "^13.0.1",
"ts-node": "^10.4.0",
"tsd": "^0.19.1",
"typedoc": "^0.22.18",
"typescript": "^4.6.3",
"uuid": "^9.0.0"
},
"nyc": {
"reporter": [
"lcov"
]
},
"engines": {
"node": ">=12.22.0"
},
"mocha": {
"exit": true,
"timeout": 8000,
"recursive": true,
"require": "ts-node/register"
}
}
```
### Any other comments can go here
I'm using NodeNext to leverage the new node flag `--experimental-transform-types`.
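A hedged guess at what is going on (the actual `services/api/src/index.ts` is only in the linked repo): under NodeNext, the CJS build of `ioredis` can end up bound as the module namespace object, which has no construct signatures and triggers TS2351. Since ioredis 5 also exposes `Redis` as a named export, the following shape typically typechecks:
```ts
// Hypothetical sketch, not the author's actual file.
import { Redis } from "ioredis"; // named export of the class (ioredis >= 5)

const redis = new Redis(); // constructable: this is the class, not the CJS namespace

await redis.set("greeting", "Hello, World!");
console.log(await redis.get("greeting"));
```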
|
Needs More Info
|
low
|
Critical
|
2,683,978,648 |
go
|
os: consider mapping Unix permissions bits to Windows ACLs
|
The `os.FileMode`/`io/fs.FileMode` type represents a file's "mode and permission bits".
Permission bits are a Unix concept. `FileMode` exposes these bits directly: "The nine least-significant bits are the standard Unix rwxrwxrwx permissions." (The `FileMode` documentation doesn't even bother to specify which bit is which, assuming that this is common knowledge.)
On Windows, the `os` package translates the permission bits to the `FILE_ATTRIBUTE_READONLY` attribute:
- When creating a file, the file is created with `FILE_ATTRIBUTE_READONLY` if the bit 0o200 (user-writable) is not set.
- When statting a file, the mode bits are set to 0o444 if `FILE_ATTRIBUTE_READONLY` is set, 0o666 otherwise.
The `os` package does not attempt to distinguish between the user/group/other bits. It is not possible to create a file readable only by the current user. `os.CreateTemp` is documented as creating files with mode 0o600; while technically accurate (the best kind of accurate!), this is somewhat misleading on Windows because the file will be world read/writable.
A similar issue with Python temporary files was assigned CVE-2024-4030. After some discussion, we're inclined to say that `os.CreateTemp`'s current behavior is not a vulnerability since the `os` package's lack of support for Windows permissions is well-known and `CreateTemp` does indeed behave according to its documentation in that light. But it's certainly unfortunate.
This issue is to consider the possibility of mapping Unix permission bits to Windows ACLs in the `os` package. There are well-known security identifiers (SIDs) that translate fairly closely to the Unix concepts of user/group/other: `SECURITY_CREATOR_OWNER_RID`, `SECURITY_CREATOR_GROUP_RID`, and `SECURITY_WORLD_RID`. The OS package could set an ACL on new files using these SIDs, and possibly translate simple ACLs to Unix permissions when returning a `FileMode`.
Making this change has the risk of breaking existing programs that depend on the current behavior. We'd definitely want a GODEBUG to revert to the old behavior.
|
NeedsDecision
|
low
|
Critical
|
2,683,982,622 |
TypeScript
|
Missing `Go to definition` links when accessing properties after `jsdoc` typecast
|
### π Search Terms
jsdoc typecast, typecast property access, jsdoc go to definition
### π Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about jsdoc, typecast
### β― Playground Link
https://www.typescriptlang.org/play/?ts=5.6.3&filetype=js#code/PQKhAIAEBcE8AcCmATRAzcBvT4B2BDAW0QC5wBnaAJwEtcBzcAXyfADEB7D8EYAKD4BjDrkrgOAIwBW4ALzhQEGAkRZOHVrwAUmJgEoA3H2DBwACQ4A3RFXAB3GtAAW4AAaDqAG1fjrt1+o+EoieHHbgyByI5LgA5NDgVIjC9Lg0AF6qzlkqEeh0jjQiQiJiklIAjHIKYFBwSGpcmsA6+gB0BMRGwqIJ5QBM1Yp1uZjqza16ANqxnYixALpGfEA
### π» Code
```ts
/** @typedef {{ name: string }} Foo */
const obj = /** @type {Foo} */({});
// Hover with `ctrl` over `Foo` below doesn't recognize the type definition
const obj1 = /** @type {Foo} */({}).name;
const obj2 = /** @type {Foo} */({})['name'];
```
### π Actual behavior
Hovering over `Foo` with `ctrl` pressed doesn't recognize the type definition
### π Expected behavior
Hovering over `Foo` with `ctrl` pressed recognizes the type definition
### Additional information about the issue
_No response_
|
Suggestion,Help Wanted,Experience Enhancement
|
low
|
Minor
|
2,683,990,651 |
tensorflow
|
tf.lite.Interpreter num_threads argument inconsistent documentation and functionality
|
The `tf.lite.Interpreter` [documentation](https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter) claims:
num_threads:
Sets the number of threads used by the interpreter and available to CPU kernels. If not set, the interpreter will use an implementation-dependent default number of threads. Currently, only a subset of kernels, such as conv, support multi-threading. num_threads should be >= -1. Setting num_threads to 0 has the effect to disable multithreading, which is equivalent to setting num_threads to 1. **If set to the value -1, the number of threads used will be implementation-defined and platform-dependent.**
However, the [code](https://github.com/tensorflow/tensorflow/blob/857e530ca9cef0dc12ca4aa8fa488a682d189748/tensorflow/lite/python/interpreter.py#L465) is different
```python
if num_threads is not None:
if not isinstance(num_threads, int):
raise ValueError('type of num_threads should be int')
if num_threads < 1:
raise ValueError('num_threads should >= 1')
```
Therefore, if `num_threads` is -1, a `ValueError` is raised, contradicting the documentation.
|
type:docs-bug,stat:awaiting tensorflower,comp:lite
|
medium
|
Critical
|
2,684,070,395 |
PowerToys
|
Allow filling of all available zones on desktop (not maximizing) instead of only two vertical columns.
|
### Description of the new feature / enhancement
This is pretty basic. For example, use the Priority default layout. Hover a window over two zones, and it will fill two of the three zones. We should be able to fill all zones without having to maximize the window.
For instance, instead of just resizing the window to fill two of the three zones, it should be possible to fill all three zones of Priority (default FancyZone layout) when dragging, or through a shortcut, perhaps.
### Scenario when this would be used?
This would be used in all scenarios where a maximized window is not needed, but more than two vertical columns of FancyZones are required.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,684,108,316 |
next.js
|
Parallel, intercepted catchall routes provided wrong params
|
### Link to the code that reproduces this issue
https://github.com/Drewsapple/repro-intercept-catchall
### To Reproduce
1. Start the app (`next dev` or `next build && next start`)
2. navigate to `/reproduction`
3. click the link to `/reproduction/double/double`
4. refresh the page or otherwise load the same route so navigation is not intercepted
### Current vs. Expected behavior
When the default export from `(.)[...catchall]/page.tsx` is loaded, the params object passed is as follows (a sketch of such a page appears after the expected output below):
Navigating to `/reproduction/single`
```
{"catchall":["single"]}
```
Navigating to `/reproduction/double/double`
```
{"catchall":["doubledouble"]}
```
Instead of the route segments being concatenated as strings, they should be concatenated as array elements, i.e.
```
{"catchall":["double","double"]}
```
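For reference, here is a minimal sketch of what the intercepted catch-all page could look like; the file path and component body are assumptions based on the description above, and only the shape of `params.catchall` matters:
```tsx
// app/reproduction/(.)[...catchall]/page.tsx (hypothetical path, for illustration)
export default function InterceptedCatchall({
  params,
}: {
  params: { catchall: string[] };
}) {
  // With the bug, params.catchall is ["doubledouble"] instead of ["double", "double"].
  return <pre>{JSON.stringify(params)}</pre>;
}
```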
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Debian 6.11.6-1 (2024-11-04)
Available memory (MB): 15748
Available CPU cores: 8
Binaries:
Node: 20.11.1
npm: 10.9.0
Yarn: N/A
pnpm: 9.14.2
Relevant Packages:
next: 14.2.18 // An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
I tried this in v14.2.14 as well, to see if it was a regression introduced by #65063 in v14.2.15, but the behavior was the same.
In v15.0.3 (with awaited params), the catchall group doesn't intercept a navigation with multiple path segments at all. This modification is tracked in the `v15.0.3` branch of the reproduction repo.
|
bug,Parallel & Intercepting Routes
|
low
|
Minor
|
2,684,128,696 |
vscode
|
API to tell if a `Terminal` is a Task `Terminal`
|
Type: <b>Bug</b>
Currently there is no way for extensions to detect whether the active terminal is a task terminal. This is needed to determine whether `sendText` would work or not.
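As a rough sketch of what an extension would like to write (the `isTaskTerminal` check is the requested API and does not exist today; everything else uses the existing `vscode` API):
```ts
import * as vscode from "vscode";

// Hypothetical sketch: without a way to identify task terminals, the guard
// below cannot be written, so sendText may silently go to a task terminal.
export function sendToInteractiveTerminal(text: string): void {
  const terminal = vscode.window.activeTerminal;
  if (!terminal) {
    return;
  }
  // if (terminal.isTaskTerminal) { return; } // <-- requested API, not available
  terminal.sendText(text);
}
```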
VS Code version: Code - Insiders 1.96.0-insider (90868576241dd25c6c5da64adadc0a09de91a9fe, 2024-11-22T09:56:24.579Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz (8 x 1498)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.60GB (14.23GB free)|
|Process Argv|--folder-uri file:///c%3A/GIT/LSP/pygls --log trace --log ms-python.python=info --crash-reporter-id 4fb1ebc1-cf4c-4880-a88a-47738ec3768d|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (19)</summary>
Extension|Author (truncated)|Version
---|---|---
tsl-problem-matcher|amo|0.6.2
ruff|cha|2024.56.0
esbuild-problem-matchers|con|0.0.3
vscode-eslint|dba|3.0.10
gitlens|eam|16.0.2
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
copilot|Git|1.245.1224
copilot-chat|Git|0.23.2024112103
vscode-github-actions|git|0.27.0
vscode-pull-request-github|Git|0.101.2024112104
debugpy|ms-|2024.12.0
python|ms-|2024.21.0-dev
vscode-pylance|ms-|2024.11.3
vscode-python-envs|ms-|0.0.0
remote-containers|ms-|0.388.0
remote-wsl|ms-|0.88.5
extension-test-runner|ms-|0.0.12
code-spell-checker|str|4.0.21
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vsc_aacf:30263846
pythonvspyt551:31179976
vscod805cf:30301675
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
vscrpc:30624061
a9j8j154:30646983
962ge761:30841072
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
0ee40948:31013168
dvdeprecation:31040973
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nativerepl1:31134653
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
notreesitter:31116712
e80f6927:31120813
iacca1:31150324
notype1:31143044
dwcopilot:31158714
h409b430:31177054
5b1c1929:31184661
```
</details>
<!-- generated by issue reporter -->
|
feature-request,api,tasks
|
low
|
Critical
|
2,684,130,070 |
pytorch
|
PyTorch crashes with `mkl_lapack_dgetrf` if `from_numpy and set_num_threads` are used
|
### π Describe the bug
Running
```python
import torch
print(torch.linalg.inv(torch.rand(100, 200, 200)))
```
Hangs/fails with
```
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
Intel MKL ERROR: Parameter 6 was incorrect on entry to DLASWP.
```
And following backtrace
```
Process 2200668 stopped
* thread #1, name = 'pt_main_thread', stop reason = signal SIGSTOP
frame #0: 0x00007fffe648f267 libtorch_cpu.so`mkl_lapack_cdag1d_probe_task + 71
libtorch_cpu.so`mkl_lapack_cdag1d_probe_task:
-> 0x7fffe648f267 <+71>: movq 0x40(%rbp), %r10
0x7fffe648f26b <+75>: movq (%rbp), %r9
0x7fffe648f26f <+79>: leaq 0x1(%r10), %rdx
0x7fffe648f273 <+83>: cmpq %r9, %rdx
(lldb) bt
* thread #1, name = 'pt_main_thread', stop reason = signal SIGSTOP
* frame #0: 0x00007fffe648f267 libtorch_cpu.so`mkl_lapack_cdag1d_probe_task + 71
frame #1: 0x00007fffe61a011e libtorch_cpu.so`thread_team_ctxt_thread_for_task_hint + 30
frame #2: 0x00007fffe648fe4a libtorch_cpu.so`mkl_lapack_thread_team_ctxt_get_task + 682
frame #3: 0x00007fffe61a047b libtorch_cpu.so`mkl_lapack_dgetrf._omp_fn.0 + 395
frame #4: 0x00007fffe61a2bd5 libtorch_cpu.so`mkl_lapack_dgetrf + 3381
frame #5: 0x00007fffe61463dc libtorch_cpu.so`dgetrf_ + 748
frame #6: 0x00007fffe04cdde4 libtorch_cpu.so`void at::native::lapackLu<double>(int, int, double*, int, int*, int*) + 36
frame #7: 0x00007fffe04ee1b9 libtorch_cpu.so`void at::internal::invoke_parallel<void at::parallel_for<void at::native::(anonymous namespace)::apply_lu_factor<double>(at::Tensor const&, at::Tensor const&, at::Tensor const&, bool)::'lambda'(long, long)>(long, long, long, double const&)::'lambda'(long, long)>(long, long, long, double const&) (._omp_fn.0) + 297
frame #8: 0x00007ffff707f05f libgomp-a34b3233.so.1`GOMP_parallel + 63
frame #9: 0x00007fffe04f1b20 libtorch_cpu.so`at::native::(anonymous namespace)::lu_factor_kernel(at::Tensor const&, at::Tensor const&, at::Tensor const&, bool) + 3664
```
### Versions
2.3.1, 2.5.1
cc @albanD @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
|
module: crash,triaged,module: mkl,module: multithreading,module: linear algebra,module: intel
|
low
|
Critical
|
2,684,137,183 |
flutter
|
Flutter Widget Previews Architecture
|
### Document Link
https://flutter.dev/go/widget-previews-architecture
### What problem are you solving?
While one of the main selling points of Flutter is its rapid iterative development cycle that's enabled by hot reload, it does require developers to have an active target device or simulator to work. In addition, it only allows for viewing the UI in a single configuration at a time without running an application on multiple devices, making it difficult for developers to visually verify the impact of variables like screen size, orientation, font size, and locale on their application.
Flutter Widget Previews is a development tool designed to improve the Flutter development workflow by allowing developers to quickly visualize and iterate on their widgets in various configurations without running the full application. This tool aims to significantly reduce development time and improve the overall developer experience by providing a fast and convenient way to experiment with widget variations, themes, and different device settings.
|
design doc,:scroll:
|
medium
|
Critical
|
2,684,172,479 |
godot
|
OS.move_to_trash() makes window lose and gain focus
|
### Tested versions
Reproduced in 4.3 stable and 4.2 stable
### System information
Godot v4.3.stable - Windows 10 - GLES3 (Compatibility)
### Issue description
Calling the `OS.move_to_trash()` function will cause the application window to lose and regain focus.
There are more issues with the window losing/gaining focus (see #99438), but I don't know if that's related.
### Steps to reproduce
Create a node and attach a script with the following code:
```gdscript
func _ready() -> void:
get_window().focus_entered.connect(func_print.bind("enter"))
get_window().focus_exited.connect(func_print.bind("exit"))
await get_tree().create_timer(0.5).timeout
OS.move_to_trash("res://not_exists.txt") # this will also cause an error 124 in the debug (note: error 124 is not defined in the return values btw), because input was not a global path, but it does not matter for the demonstration (window always loses & gains focus - no matter, if operation was successful or not)
func func_print(text):
print(text)
```
The window will lose focus and gain it again, as shown in the output:
```
exit
enter
```
### Minimal reproduction project (MRP)
[trash-bug.zip](https://github.com/user-attachments/files/17874761/trash-bug.zip)
|
bug,topic:editor,needs testing
|
low
|
Critical
|
2,684,176,208 |
react-native
|
SectionList's `onScrollToIndexFailed` does not catch requested sectionIndex & itemIndex
|
### Description
`onScrollToIndexFailed` returns index, highestMeasuredIndex, and averageItemHeight, which is enough in FlatList.
However, in SectionList it should also return the requested sectionIndex and itemIndex, and the highest measured section & item index in addition to the flat index, because only the scrollToLocation method is available for SectionList, and it needs section and item index params.
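For illustration, a minimal sketch of the handler in question (the data shape is assumed; the section/item location mentioned in the comment is the requested addition, not part of the current API):
```tsx
import React, { useRef } from "react";
import { SectionList, Text } from "react-native";

// Minimal sketch: onScrollToIndexFailed only receives flat-index measurement
// hints, but retrying requires scrollToLocation's sectionIndex/itemIndex.
export function Example({ sections }: { sections: { title: string; data: string[] }[] }) {
  const listRef = useRef<SectionList<string>>(null);
  return (
    <SectionList
      ref={listRef}
      sections={sections}
      keyExtractor={(item, index) => item + index}
      renderItem={({ item }) => <Text>{item}</Text>}
      onScrollToIndexFailed={(info) => {
        // info has no section/item location, so a retry such as
        // listRef.current?.scrollToLocation({ sectionIndex, itemIndex }) cannot
        // be built from it alone.
        console.warn("scroll to index failed", info);
      }}
    />
  );
}
```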
### Steps to reproduce
run the snack, press on `scroll to no where`, you will see the alert
### React Native Version
0.76.3
### Affected Platforms
Runtime - Android, Runtime - iOS, Runtime - Web, Runtime - Desktop, Build - MacOS, Build - Windows, Build - Linux
### Output of `npx react-native info`
```text
System:
OS: Windows 11 10.0.22000
CPU: (8) x64 Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
Memory: 8.47 GB / 15.89 GB
Binaries:
Node:
version: 23.0.0
path: C:\Program Files\nodejs\node.EXE
Yarn:
version: 1.22.0
path: C:\Program Files (x86)\Yarn\bin\yarn.CMD
npm:
version: 10.9.0
path: C:\Program Files\nodejs\npm.CMD
Watchman:
version: 20241027.093345.0
path: E:\Software\Watchman\bin\watchman.EXE
SDKs:
Android SDK: Not Found
Windows SDK:
AllowDevelopmentWithoutDevLicense: Enabled
AllowAllTrustedApps: Disabled
IDEs:
Android Studio: Not Found
Visual Studio: Not Found
Languages:
Java:
version: 21.0.3
path: /c/Program Files/Eclipse Adoptium/jdk-21.0.3.9-hotspot/bin/javac
Ruby: Not Found
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.2
wanted: 0.76.2
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
scroll to index failed
```
### Reproducer
https://snack.expo.dev/@arasrezaie/sectionlist-example
### Screenshots and Videos
_No response_
|
Issue: Author Provided Repro,Component: SectionList,Needs: Attention,Needs: Version Info
|
low
|
Critical
|
2,684,254,130 |
go
|
all: reopen tree for Go 1.25 development
|
### Current Tree Status: Release freeze for Go 1.24 (see [golang-dev announcement](https://groups.google.com/g/golang-dev/c/JkRenbsSBv4/m/dhcurHI5CwAJ))
Now that we've entered the freeze for Go 1.24, we will eventually need to reopen the tree for Go 1.25 development. This is the tracking issue for that reopening, created early so that it's available for planning. The tree reopening is estimated to begin around week 3 of January 2025 (see https://go.dev/s/release#timeline), exact timing depends on how well the Go 1.24 release preparations are going.
As usual, the tree will initially be open to changes that must land early:
- [ ] Bump `internal/goversion.Version` to 25; this should be the very first CL to be submitted, as it marks the start of the main branch representing Go 1.25 (rather than Go 1.24). ([Example CL](https://go.dev/cl/600176).)
- #40705
- [ ] Initialize `doc/next` for the Go 1.25 cycle. (See "For the release team" in [doc/README](https://cs.opensource.google/go/go/+/master:doc/README.md).)
- [ ] Submit CLs that are ready and marked AutoSubmit+1, but blocked on wait-release ([query](https://go-review.googlesource.com/q/is:open+label:Auto-Submit%3D%252B1+label:Code-Review%3D%252B2+-label:Hold%3D%252B1+-label:TryBots-Pass%3DNEED+-label:Legacy-TryBots-Pass%3DNEED+-has:unresolved+hashtag:wait-release+-is:wip+-label:Do-Not-Review%3DNEED+-label:Do-Not-Submit%3DNEED)).
- [ ] Look over issues labelled as [early-in-cycle](https://github.com/golang/go/issues?q=is%3Aopen+is%3Aissue+label%3Aearly-in-cycle+milestone%3AGo1.25) for anything that needs to land very early.
- [ ] Submit CL for #49580.
- _(Anything else that is missing but should be added here, edit or comment below.)_
- [ ] A brief soft-reopening window, where any fix/stabilization CLs are okay to submit.
- [ ] Finally, open the tree for all general Go 1.25 changes.
CC @golang/release.
|
NeedsInvestigation,early-in-cycle,umbrella
|
low
|
Minor
|
2,684,307,826 |
vscode
|
Customizable Marketplace message for denylisted extensions and extension updates
|
**This feature request is an add-on to the extension allowlist/denylist under development in https://github.com/microsoft/vscode/issues/84756.** cc @isidorn who requested that this be filed as a separate request.
The VS Code Marketplace is a great place for engineers to discover new extensions directly in their editor. We would love to maintain the usability of this Marketplace even with a strict allowlist policy in place so that our engineers have actionable information on what to do to allowlist an extension (or its update) within our organization.
Organizations will have different procedures for modifying the allowlist. Stripe has an online approval flow for new software dependencies that would be augmented with VS Code extensions once the allowlist functionality is available. Users would need to navigate to this form, fill it out for the specific extension and version, and get it approved.
It would be wonderful if VS Code supported customizing the message that is displayed when an extension is denylisted. If the message could be templated with the `extension_id` and `version`, we could even render a link directly to a pre-filled web form, which would remove a lot of the friction from allowlisting a new extension.
For example, we might render something like:
> $extension_id@$extension_version is not an approved VS Code extension at Stripe. You can begin the approval for this extension at http://go/vscode-extn/$extension_id@$extension_version
Ideally, the link would be clickable and redirect to the relevant form, with most fields pre-filled out.
**If there is no UI space for an interactable message like this**, then I would also accept a configurable URL (templated with extension ID and version) that is put somewhere for the user to visit for more info; we could link to a templated help document that then links to the form if the user decides they want to allowlist the extension. (Maybe a "More info" button or link?)
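To make the templating concrete, here is a tiny sketch of the kind of expansion being asked for; the function and setting are entirely hypothetical, and the message text is the example from above:
```ts
// Hypothetical: expand a configured template into the denylist message.
function renderDenylistMessage(template: string, extensionId: string, version: string): string {
  return template
    .replaceAll("$extension_id", extensionId)
    .replaceAll("$extension_version", version);
}

const template =
  "$extension_id@$extension_version is not an approved VS Code extension at Stripe. " +
  "You can begin the approval for this extension at http://go/vscode-extn/$extension_id@$extension_version";

console.log(renderDenylistMessage(template, "some.extension", "1.2.3"));
```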
|
feature-request,extensions
|
medium
|
Major
|
2,684,362,922 |
ollama
|
minimum viable GGUF crashes server on run
|
### What is the issue?
I ran `ollama run bmizerany/smol`, and saw the server crash violently.
I expected ollama to tell me, from the terminal session running `ollama run`, that it could not run the model for `<reasons>`, and for the server to remain running and unaffected.
```
# Client
; ollama run bmizerany/smol
```
```
# Server
[GIN] 2024/11/22 - 11:32:09 | 200 | 305.583µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/11/22 - 11:32:09 | 200 | 2.020416ms | 127.0.0.1 | POST "/api/show"
[GIN] 2024/11/22 - 11:32:09 | 200 | 550.916µs | 127.0.0.1 | POST "/api/show"
time=2024-11-22T11:32:09.588-08:00 level=WARN source=memory.go:115 msg="model missing blk.0 layer size"
panic: runtime error: integer divide by zero
goroutine 27 [running]:
github.com/ollama/ollama/llm.EstimateGPULayers({_, _, _}, _, {_, _, _},
{{0x0, 0x800, 0x200, ...}, ...})
github.com/ollama/ollama/llm/memory.go:122 +0x13f0
github.com/ollama/ollama/llm.PredictServerFit({0x14000495cb8?, 0x104ae52b4?, 0x1400001a090?}, 0x1400059a920, {0x199?, 0x105681bc0?, _}, {_, _, _}, ...)
github.com/ollama/ollama/llm/memory.go:20 +0xa8
github.com/ollama/ollama/server.pickBestFitGPUs(0x140001d0900, 0x1400059a920, {0x140004aa780?, 0xfffffffffffffffc?, 0x105286653?})
github.com/ollama/ollama/server/sched.go:627 +0x2a0
github.com/ollama/ollama/server.(*Scheduler).processPending(0x140000c39e0, {0x10575b8d0, 0x140000c5ea0})
github.com/ollama/ollama/server/sched.go:170 +0xac0
github.com/ollama/ollama/server.(*Scheduler).Run.func1()
github.com/ollama/ollama/server/sched.go:96 +0x28
created by github.com/ollama/ollama/server.(*Scheduler).Run in goroutine 1
github.com/ollama/ollama/server/sched.go:95 +0xc4
2024/11/22 11:32:10 routes.go:1060: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/Users/bmizerany/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR:]"
time=2024-11-22T11:32:10.640-08:00 level=INFO source=images.go:725 msg="total blobs: 6"
time=2024-11-22T11:32:10.641-08:00 level=INFO source=images.go:732 msg="total unused blobs removed: 0"
time=2024-11-22T11:32:10.642-08:00 level=INFO source=routes.go:1106 msg="Listening on 127.0.0.1:11434 (version 0.1.45)"
time=2024-11-22T11:32:10.652-08:00 level=WARN source=assets.go:100 msg="unable to cleanup stale tmpdir" path=/var/folders/db/svmm3t1x3yn4d1skpbq3ddv00000gn/T/ollama2998818457 error="remove /var/folders/db/svmm3t1x3yn4d1skpbq3ddv00000gn/T/ollama2998818457: directory not empty"
time=2024-11-22T11:32:10.652-08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/db/svmm3t1x3yn4d1skpbq3ddv00000gn/T/ollama827611131/runners
time=2024-11-22T11:32:10.679-08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [metal]"
time=2024-11-22T11:32:10.740-08:00 level=INFO source=types.go:98 msg="inference compute" id=0 library=metal compute="" driver=0.0 name="" total="96.0 GiB" available="96.0 GiB"
```
The GGUF (xxd):
```
00000000: 4747 5546 0300 0000 0000 0000 0000 0000 GGUF............
00000010: 0000 0000 0000 0000 ........
```
### OS
Darwin MacBook-Pro-3.attlocal.net 23.4.0 Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:37 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6031 arm64
### GPU
local
### CPU
see above
### Ollama version
ollama version is 0.4.3
|
bug
|
low
|
Critical
|
2,684,363,619 |
pytorch
|
Doc for `assign` parameter of `load_state_dict` is not rendered correctly
|
### π The doc issue
https://pytorch.org/docs/stable/generated/torch.nn.Module.html

https://github.com/pytorch/pytorch/blob/7c5c38da2349c23fdfcbae065153ccef32471b14/torch/nn/modules/module.py#L2495-L2500
```:class:`~torch.nn.Parameter`s for which the value from the module is preserved. ``` is not shown.
### Suggest a potential alternative/fix
_No response_
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @mruberry @mikaylagawarecki
|
module: docs,module: serialization,triaged,actionable
|
low
|
Minor
|
2,684,388,955 |
godot
|
CI / CD Headless Mono Export(s) don't work
|
### Tested versions
- Reproducible in Godot 4.3 Mono
### System information
Godot v4.3.stable.mono unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Sun, 17 Nov 2024 16:06:17 +0000 - Wayland - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (nvidia; 565.57.01) - AMD Ryzen 5 5600X 6-Core Processor (12 Threads)
### Issue description
When exporting Godot Mono projects from the CLI, running `godot-mono --headless --export-release "Linux-Profile" "build/path/game.x86_64"` from a Docker container does not yield a playable build. Using a CI / CD Docker container with the supporting dependencies likewise does not yield a playable build with Godot Mono. Building projects locally, through either the local CLI or the GUI, works as intended. When running one of the CI builds locally, it yields errors saying the executable cannot locate resources, even though a valid *.pck file is produced with the build and the supported resources are included in it.
### Steps to reproduce
Here is a link to a minimal reproduction project: [godot gitlab project](https://gitlab.com/benscodeworks/godot-docker-ci). There are two different ways to recreate the issue. One is to clone / fork my repository, run the pipeline, observe the artifacts produced, and then try to run them on a local x86 Linux platform.
The other way is to clone the repository and have docker installed on your local machine.
1. run `cd into/the/repo`
2. run `docker build --tag my-docker-tag .`
3. run `docker run -it --name godot-test-container -v "$(pwd)":/app -w /app my-docker-tag /bin/bash`
4. run `mkdir -p build`
5. run `godot-mono --headless --verbose --export-release "CI-Linux" build/mvp.x86_64`
6. exit the docker container using `exit` from the command line
7. run the game file from outside the docker container and observe the errors produced. `./build/mvp.x86`
### Minimal reproduction project (MRP)
https://gitlab.com/benscodeworks/godot-docker-ci
Minimally reproducible project with Dockerfile and relevant gitlab pipelines.
https://hub.docker.com/repository/docker/benscodeworks/godot-mono-ci/general
Additionally, I have my docker build image out there for further investigation.
|
bug,topic:export
|
low
|
Critical
|
2,684,457,978 |
go
|
runtime/pprof: theoretical appendLocsForStack panic with SIGPROF between cgocallback and exitsyscall
|
[`runtime/pprof.appendLocsForStack`](https://cs.opensource.google/go/go/+/master:src/runtime/pprof/proto.go;l=445;drc=ca63101df47a4467bc80faa654fc19d68e583952) asserts that an inline-expanded PC always expands to the same number of logical PCs (inlined frames). This is a static property of a given PC, so it should always be true.
When handling SIGPROF, if we are on an extra M running in C, we [don't try to traceback](https://cs.opensource.google/go/go/+/master:src/runtime/signal_unix.go;l=443;drc=944df9a7516021f0405cd8adb1e6894ae9872cb5) since this is C anyway, and just add a sample with stack `{pc, runtime._ExternalCode}`.
"We are on an extra M running in C" is defined as `gp.m.isExtraInC`. In `cgocallbackg`, we [clear this field](https://cs.opensource.google/go/go/+/master:src/runtime/cgocall.go;l=341;drc=4aa1c02daee42c37ddd30ae2aa91bd3fd3b72e77;bpv=1;bpt=1) after `exitsyscall` returns. This leaves a fairly long window when we are in fact running Go code, but the SIGPROF handler will think it is in C.
A lot of this code (particularly in `exitsyscall`) is reachable from normal Go code as well. If any of this code has more than 2 inlined frames at a single PC, then a SIGPROF from a normal Go context followed by a SIGPROF in this `cgocallback` context could trigger this `appendLocsForStack` panic.
I do not know if any code reachable in this window actually has more than 2 inlined frames. Only 2 frames is insufficient, as `appendLocsForStack` wouldn't actually care that the second frame is `runtime._ExternalCode` instead of the proper frame.
One potential fix is to attempt to do inline expansion in `sigprofNonGoPC` in case it actually is a Go PC.
|
NeedsInvestigation,compiler/runtime
|
low
|
Minor
|
2,684,461,378 |
deno
|
`--unstable-node-globals` fails with `--check` option
|
To support libraries that use Node global variables, a new `--unstable-node-globals` flag was added in https://github.com/denoland/deno/issues/26611
However, using this flag together with the `--check` flag fails because TypeScript (and the IDE) isn't aware of it.
For example, if you have the following code
```
type Foo = Buffer;
```
It will fail with
```
TS2580 [ERROR]: Cannot find name 'Buffer'.
type Foo = Buffer;
```
This can be fixed by adding the following to your project
```ts
declare global {
type Buffer = typeof import("node:buffer").Buffer;
}
```
Or (possibly?) by specifying some `lib` settings. Ideally Deno would somehow know to make this work when you use the flag (but I don't see how the IDE could know this), or at least produce a more useful error message that tells you to change your `deno.json` file to fix this error.
|
bug,tsc
|
low
|
Critical
|
2,684,519,221 |
react
|
Bug: Can't built chrome extension - yarn build-for-devtools fails
|
I attempted to follow the instructions in the README for react-devtools-extensions, but I'm not able to build devtools; it fails on "build-for-devtools".
1. yarn install from the repo root (worked)
2. followed by yarn build-for-devtools from the repo root (fails)
build-for-devtools was successful up to "react-refresh-runtime.production.js", then it fails with the core message:
Error: Command failed: npm pack build/node_modules/react
npm error Invalid property "node"
The log file had the following relevant content:
12 verbose stack Error: Invalid property "node"
12 verbose stack at checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/node_modules/npm-install-checks/lib/dev-engines.js:100:13)
12 verbose stack at Pack.checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/lib/base-cmd.js:153:22)
12 verbose stack at async #exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:251:7)
12 verbose stack at async Npm.exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:207:9)
12 verbose stack at async module.exports (/Users/admin/.npm-global/lib/node_modules/npm/lib/cli/entry.js:74:5)
(I include the full shell and log file below)
# Below is the full log file content
0 verbose cli /usr/local/bin/node /Users/admin/.npm-global/bin/npm
1 info using [email protected]
2 info using [email protected]
3 silly config load:file:/Users/admin/.npm-global/lib/node_modules/npm/npmrc
4 silly config load:file:/Users/admin/local-projects/react/.npmrc
5 silly config load:file:/Users/admin/.npmrc
6 silly config load:file:/Users/admin/.npm-global/etc/npmrc
7 verbose title npm pack build/node_modules/react-art
8 verbose argv "pack" "build/node_modules/react-art"
9 verbose logfile logs-max:10 dir:/Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-
10 verbose logfile /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
0 verbose cli /usr/local/bin/node /Users/admin/.npm-global/bin/npm
1 info using [email protected]
2 info using [email protected]
3 silly config load:file:/Users/admin/.npm-global/lib/node_modules/npm/npmrc
4 silly config load:file:/Users/admin/local-projects/react/.npmrc
5 silly config load:file:/Users/admin/.npmrc
6 silly config load:file:/Users/admin/.npm-global/etc/npmrc
7 verbose title npm pack build/node_modules/react-debug-tools
8 verbose argv "pack" "build/node_modules/react-debug-tools"
9 verbose logfile logs-max:10 dir:/Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-
10 verbose logfile /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
0 verbose cli /usr/local/bin/node /Users/admin/.npm-global/bin/npm
1 info using [email protected]
2 info using [email protected]
3 silly config load:file:/Users/admin/.npm-global/lib/node_modules/npm/npmrc
4 silly config load:file:/Users/admin/local-projects/react/.npmrc
5 silly config load:file:/Users/admin/.npmrc
6 silly config load:file:/Users/admin/.npm-global/etc/npmrc
7 verbose title npm pack build/node_modules/jest-react
8 verbose argv "pack" "build/node_modules/jest-react"
9 verbose logfile logs-max:10 dir:/Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-
10 verbose logfile /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
0 verbose cli /usr/local/bin/node /Users/admin/.npm-global/bin/npm
1 info using [email protected]
2 info using [email protected]
3 silly config load:file:/Users/admin/.npm-global/lib/node_modules/npm/npmrc
4 silly config load:file:/Users/admin/local-projects/react/.npmrc
5 silly config load:file:/Users/admin/.npmrc
6 silly config load:file:/Users/admin/.npm-global/etc/npmrc
7 verbose title npm pack build/node_modules/react-dom
8 verbose argv "pack" "build/node_modules/react-dom"
9 verbose logfile logs-max:10 dir:/Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-
10 verbose logfile /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
0 verbose cli /usr/local/bin/node /Users/admin/.npm-global/bin/npm
1 info using [email protected]
2 info using [email protected]
3 silly config load:file:/Users/admin/.npm-global/lib/node_modules/npm/npmrc
4 silly config load:file:/Users/admin/local-projects/react/.npmrc
5 silly config load:file:/Users/admin/.npmrc
6 silly config load:file:/Users/admin/.npm-global/etc/npmrc
7 verbose title npm pack build/node_modules/react-is
8 verbose argv "pack" "build/node_modules/react-is"
9 verbose logfile logs-max:10 dir:/Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-
10 verbose logfile /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
0 verbose cli /usr/local/bin/node /Users/admin/.npm-global/bin/npm
1 info using [email protected]
2 info using [email protected]
3 silly config load:file:/Users/admin/.npm-global/lib/node_modules/npm/npmrc
4 silly config load:file:/Users/admin/local-projects/react/.npmrc
5 silly config load:file:/Users/admin/.npmrc
6 silly config load:file:/Users/admin/.npm-global/etc/npmrc
7 verbose title npm pack build/node_modules/react-refresh
8 verbose argv "pack" "build/node_modules/react-refresh"
9 verbose logfile logs-max:10 dir:/Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-
10 verbose logfile /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
0 verbose cli /usr/local/bin/node /Users/admin/.npm-global/bin/npm
1 info using [email protected]
2 info using [email protected]
3 silly config load:file:/Users/admin/.npm-global/lib/node_modules/npm/npmrc
4 silly config load:file:/Users/admin/local-projects/react/.npmrc
5 silly config load:file:/Users/admin/.npmrc
6 silly config load:file:/Users/admin/.npm-global/etc/npmrc
7 verbose title npm pack build/node_modules/react-test-renderer
8 verbose argv "pack" "build/node_modules/react-test-renderer"
9 verbose logfile logs-max:10 dir:/Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-
10 verbose logfile /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
0 verbose cli /usr/local/bin/node /Users/admin/.npm-global/bin/npm
1 info using [email protected]
2 info using [email protected]
3 silly config load:file:/Users/admin/.npm-global/lib/node_modules/npm/npmrc
4 silly config load:file:/Users/admin/local-projects/react/.npmrc
5 silly config load:file:/Users/admin/.npmrc
6 silly config load:file:/Users/admin/.npm-global/etc/npmrc
7 verbose title npm pack build/node_modules/react
8 verbose argv "pack" "build/node_modules/react"
9 verbose logfile logs-max:10 dir:/Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-
10 verbose logfile /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
11 silly logfile start cleaning logs, removing 1 files
11 silly logfile start cleaning logs, removing 1 files
11 silly logfile start cleaning logs, removing 1 files
11 silly logfile start cleaning logs, removing 1 files
11 silly logfile start cleaning logs, removing 1 files
11 silly logfile start cleaning logs, removing 1 files
11 silly logfile start cleaning logs, removing 1 files
11 silly logfile start cleaning logs, removing 1 files
12 verbose stack Error: Invalid property "node"
12 verbose stack at checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/node_modules/npm-install-checks/lib/dev-engines.js:100:13)
12 verbose stack at Pack.checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/lib/base-cmd.js:153:22)
12 verbose stack at async #exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:251:7)
12 verbose stack at async Npm.exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:207:9)
12 verbose stack at async module.exports (/Users/admin/.npm-global/lib/node_modules/npm/lib/cli/entry.js:74:5)
12 verbose stack Error: Invalid property "node"
12 verbose stack at checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/node_modules/npm-install-checks/lib/dev-engines.js:100:13)
12 verbose stack at Pack.checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/lib/base-cmd.js:153:22)
12 verbose stack at async #exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:251:7)
12 verbose stack at async Npm.exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:207:9)
12 verbose stack at async module.exports (/Users/admin/.npm-global/lib/node_modules/npm/lib/cli/entry.js:74:5)
12 verbose stack Error: Invalid property "node"
12 verbose stack at checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/node_modules/npm-install-checks/lib/dev-engines.js:100:13)
12 verbose stack at Pack.checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/lib/base-cmd.js:153:22)
12 verbose stack at async #exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:251:7)
12 verbose stack at async Npm.exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:207:9)
12 verbose stack at async module.exports (/Users/admin/.npm-global/lib/node_modules/npm/lib/cli/entry.js:74:5)
12 verbose stack Error: Invalid property "node"
12 verbose stack at checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/node_modules/npm-install-checks/lib/dev-engines.js:100:13)
12 verbose stack at Pack.checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/lib/base-cmd.js:153:22)
12 verbose stack at async #exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:251:7)
12 verbose stack at async Npm.exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:207:9)
12 verbose stack at async module.exports (/Users/admin/.npm-global/lib/node_modules/npm/lib/cli/entry.js:74:5)
13 error Invalid property "node"
13 error Invalid property "node"
13 error Invalid property "node"
13 error Invalid property "node"
12 silly logfile done cleaning log files
12 silly logfile done cleaning log files
12 silly logfile done cleaning log files
14 verbose cwd /Users/admin/local-projects/react
14 verbose cwd /Users/admin/local-projects/react
15 verbose os Darwin 24.1.0
15 verbose os Darwin 24.1.0
14 verbose cwd /Users/admin/local-projects/react
14 verbose cwd /Users/admin/local-projects/react
15 verbose os Darwin 24.1.0
15 verbose os Darwin 24.1.0
16 verbose node v22.11.0
17 verbose npm v10.9.0
16 verbose node v22.11.0
12 verbose stack Error: Invalid property "node"
12 verbose stack at checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/node_modules/npm-install-checks/lib/dev-engines.js:100:13)
12 verbose stack at Pack.checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/lib/base-cmd.js:153:22)
12 verbose stack at async #exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:251:7)
12 verbose stack at async Npm.exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:207:9)
12 verbose stack at async module.exports (/Users/admin/.npm-global/lib/node_modules/npm/lib/cli/entry.js:74:5)
17 verbose npm v10.9.0
18 verbose exit 1
16 verbose node v22.11.0
19 verbose code 1
16 verbose node v22.11.0
17 verbose npm v10.9.0
13 error Invalid property "node"
18 verbose exit 1
17 verbose npm v10.9.0
18 verbose exit 1
20 error A complete log of this run can be found in: /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
19 verbose code 1
19 verbose code 1
13 verbose stack Error: Invalid property "node"
13 verbose stack at checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/node_modules/npm-install-checks/lib/dev-engines.js:100:13)
13 verbose stack at Pack.checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/lib/base-cmd.js:153:22)
13 verbose stack at async #exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:251:7)
13 verbose stack at async Npm.exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:207:9)
13 verbose stack at async module.exports (/Users/admin/.npm-global/lib/node_modules/npm/lib/cli/entry.js:74:5)
18 verbose exit 1
14 error Invalid property "node"
19 verbose code 1
20 error A complete log of this run can be found in: /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
20 error A complete log of this run can be found in: /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
13 verbose stack Error: Invalid property "node"
13 verbose stack at checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/node_modules/npm-install-checks/lib/dev-engines.js:100:13)
13 verbose stack at Pack.checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/lib/base-cmd.js:153:22)
13 verbose stack at async #exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:251:7)
13 verbose stack at async Npm.exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:207:9)
13 verbose stack at async module.exports (/Users/admin/.npm-global/lib/node_modules/npm/lib/cli/entry.js:74:5)
14 error Invalid property "node"
20 error A complete log of this run can be found in: /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
13 verbose stack Error: Invalid property "node"
13 verbose stack at checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/node_modules/npm-install-checks/lib/dev-engines.js:100:13)
13 verbose stack at Pack.checkDevEngines (/Users/admin/.npm-global/lib/node_modules/npm/lib/base-cmd.js:153:22)
13 verbose stack at async #exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:251:7)
13 verbose stack at async Npm.exec (/Users/admin/.npm-global/lib/node_modules/npm/lib/npm.js:207:9)
13 verbose stack at async module.exports (/Users/admin/.npm-global/lib/node_modules/npm/lib/cli/entry.js:74:5)
14 error Invalid property "node"
14 verbose cwd /Users/admin/local-projects/react
15 verbose os Darwin 24.1.0
15 verbose cwd /Users/admin/local-projects/react
15 verbose cwd /Users/admin/local-projects/react
16 verbose node v22.11.0
17 verbose npm v10.9.0
16 verbose os Darwin 24.1.0
17 verbose node v22.11.0
18 verbose npm v10.9.0
18 verbose exit 1
19 verbose code 1
16 verbose os Darwin 24.1.0
19 verbose exit 1
17 verbose node v22.11.0
20 verbose code 1
18 verbose npm v10.9.0
20 error A complete log of this run can be found in: /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
19 verbose exit 1
21 error A complete log of this run can be found in: /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
20 verbose code 1
15 verbose cwd /Users/admin/local-projects/react
21 error A complete log of this run can be found in: /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
16 verbose os Darwin 24.1.0
17 verbose node v22.11.0
18 verbose npm v10.9.0
19 verbose exit 1
20 verbose code 1
21 error A complete log of this run can be found in: /Users/admin/.npm/_logs/2024-11-22T20_03_58_977Z-debug-0.log
[shell.log](https://github.com/user-attachments/files/17876136/shell.log)
# My environment
I'm using an Intel mac.
$ node -v
v22.11.0
$ java -version
java version "23.0.1" 2024-10-15
Java(TM) SE Runtime Environment (build 23.0.1+11-39)
Java HotSpot(TM) 64-Bit Server VM (build 23.0.1+11-39, mixed mode, sharing)
$ git branch -v
* main e697386c10 [compiler] First cut at dep inference (#31386)
|
Status: Unconfirmed
|
medium
|
Critical
|
2,684,584,644 |
PowerToys
|
PowerRename - Invert Selection
|
### Description of the new feature / enhancement
Having the ability to invert the selected items would be really handy
### Scenario when this would be used?
When loading a folder into PowerRename and entering a selection for a rename, sometimes it only applies to half the files, so I uncheck half the files. If there were then an option to invert the selection, and then just modify the parameters, it would be so much faster.
example, you have files starting with
1., 2., 3, ... 10., 11., 12., ...
So I might want to rename the "1." (through 9.) to "S01E01.", and the "10." (through 20. or 99.) to "S01E10". Currently, when I load all files, I check the boxes for 1-9 and do the rename, then uncheck those boxes and check the other boxes.
I know you could also go into the folder, select only the 1-9 files, load them into PowerRename, then close out and go back in after selecting the rest. But it would just be so much faster to have this Invert Selection, much like Windows Explorer has, and it would make things quicker.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,684,702,784 |
pytorch
|
Add vLLM to torchbench
|
We should add at least one vllm model to torchbench, to make sure torch.compile support doesn't regress. This might be a non-standard "model" though, because vllm controls the compilation process.
cc @chauhang @penguinwu
|
triaged,oncall: pt2,vllm-compile
|
low
|
Major
|
2,684,710,962 |
TypeScript
|
Improve call expression source maps
|
### π Search Terms
- source map call expression
- emitCallExpression source map
- source map function call
### π Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about "Source Maps"
### β― Playground Link
https://www.typescriptlang.org/play/?module=1&inlineSourceMap=true#code/JYWwDg9gTgLgBCCBjA1gUQB4wKZQHYCGANnAGZQQhwBEAdAPSKq0BWAztQNwBQ3pArniQxgEPAmQoAPABUAfAAoCUAOYB+AFxwZASgDeAX15IxbeATgBeCagU6eJvGbgAjKzZRwFcOPe6PnJHcmaTMoYDwVOR8FagJ4gmo-APgAE2DJOHoAKgRsCAB3OGz6Lx8aZRUARmpyv256UoA5CBwtUmgkbDgYNiCYCDgVbDxcAhw4AnEI1OAobGE4JGIif1N4busQzBx8YnKymlrfHiA
### π» Code
```ts
import mockExternal from "./mock.js";
function mock<T>(arg?: T){}
const a = mock();
const b = mock ( );
const c = mock<string> ("aaaa");
const d = mock /* meow */ ( "arg1" );
// Note: force tsc to generate an indirect call
const e = mockExternal ( "" );
```
Compiled:
```js
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
const mock_js_1 = __importDefault(require("./mock.js"));
function mock(arg) { }
const a = mock();
const b = mock();
const c = mock("aaaa");
const d = mock /* meow */("arg1");
// Note: force tsc to generate an indirect call
const e = (0, mock_js_1.default)("");
//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoiaW5wdXQuanMiLCJzb3VyY2VSb290IjoiIiwic291cmNlcyI6WyJpbnB1dC50c3giXSwibmFtZXMiOltdLCJtYXBwaW5ncyI6Ijs7Ozs7QUFBQSx3REFBcUM7QUFFckMsU0FBUyxJQUFJLENBQUksR0FBTyxJQUFFLENBQUM7QUFFM0IsTUFBTSxDQUFDLEdBQUcsSUFBSSxFQUFFLENBQUM7QUFDakIsTUFBTSxDQUFDLEdBQUcsSUFBSSxFQUFLLENBQUM7QUFDcEIsTUFBTSxDQUFDLEdBQUcsSUFBSSxDQUFXLE1BQU0sQ0FBQyxDQUFDO0FBQ2pDLE1BQU0sQ0FBQyxHQUFHLElBQUksQ0FBQyxVQUFVLENBQUssTUFBTSxDQUFJLENBQUM7QUFFekMsK0NBQStDO0FBQy9DLE1BQU0sQ0FBQyxHQUFHLElBQUEsaUJBQVksRUFBTyxFQUFFLENBQUcsQ0FBQyJ9
```
### π Actual behavior
The JS is compiled perfectly and works fine; however, due to the way Node.js reports call sites, the call site of each `mock` function invocation is as follows:
| position in JS (line, col) | substring in JS at position | position after source mapping | substring in TS at position |
|----------------------------------|---------------------------------------|----------------------------------------------|---------------------------------------|
| (8, 11) | `mock` | (5, 11) | `mock` |
| (9, 11) | `mock` | (6, 11) | `mock` |
| (10, 11) | `mock` | (7, 11) | `mock` |
| (11, 11) | `mock` | (8, 11) | `mock` |
| (13, 31) | `("");` | (11, 29) | ` "" );` |
When looking at a call expression's reported call site, I personally think that indirect call expressions' positions are misleading after being source mapped, as they seem to imply that the first argument is the section of the code affected when there are spaces between the first argument and the open parenthesis token.
### π Expected behavior
I would expect the (13,31) call site to map directly to (12, 27) in the source map, so that users can easily see the invocation position of the function call, in a consistent and predictable way.
### Additional information about the issue
I debated whether this would be considered a bug or a feature request, but decided on bug due to the current call site position being inconsistent for users trying to map their code's call location. If you think this should be a feature request, I am more than happy to change this issue accordingly.
If this feature is deemed an acceptable change to TypeScript's source map emissions, I have created a simple implementation that in my own testing seems to work, and I am happy to add automated tests and raise a PR (https://github.com/mini-ninja-64/TypeScript/tree/improve-source-map). My current implementation works by emitting a source map position in the `emitNodeList` function based on the start position of the provided `NodeArray<T>`; this means it could also improve source map production for other node lists, though it could increase the size of source maps generated by tsc.
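As a side note, here is a rough sketch of how the call-site positions in the table above can be observed at runtime; this is only an observation helper, not part of the proposed change, and the printed paths and positions are illustrative:
```ts
// Hypothetical helper: capture the raw call site Node.js reports for each
// invocation of mock(), before any source mapping is applied.
function mock<T>(_arg?: T): string | undefined {
  // stack[0] is "Error", stack[1] is this frame, stack[2] is the caller.
  return new Error().stack?.split("\n")[2]?.trim();
}

console.log(mock());       // e.g. "at Object.<anonymous> (/path/to/input.js:8:11)"
console.log(mock("aaaa")); // the reported column is what gets source mapped
```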
|
Bug,Help Wanted
|
low
|
Critical
|
2,684,738,094 |
pytorch
|
[Regression] Torch 2.4+ Fails to Export TorchAudio Wav2Vec Model (Was Good in Torch 2.3)
|
### π Describe the bug
Export torchaudio wav2vec model
```
import torch
import torchaudio
bundle = torchaudio.pipelines.WAV2VEC2_BASE.get_model()
bundle.eval()
feature_extractor = bundle.feature_extractor
encoder = bundle.encoder
example_wav_features = torch.randn(1, 512)
example_length = torch.rand(1)
example_features = feature_extractor(example_wav_features, example_length)
exported_encoder_model = torch.export.export(encoder, example_features)
print(exported_encoder_model)
```
Everything was good with torch & torchaudio 2.3, but it fails in 2.4 and 2.5 with
```
RuntimeError: cannot mutate tensors with frozen storage
```
Passing `strict=False` still produces this error.
### Versions
torch 2.5.0
torchaudio 2.5.0
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
module: regression,oncall: pt2,oncall: export
|
low
|
Critical
|
2,684,744,301 |
flutter
|
[macos][platform_view] mac webview plugin example unable to dismiss the overlay menu when tapping on top of webview
|
### Steps to reproduce
1. Build and run webview plugin's example on macOS.
2. Tap on the 3 dots menu on top right corner
3. Tap on an empty region of the header; the menu is dismissed successfully
4. Tap on the 3 dots menu again
5. Tap on the webview region; the menu is not dismissed. Also, the webview itself is tappable.
This means that the platform view receives the touches even when the touch should have been blocked. We may have to do something similar to what is done on iOS (use a "delaying recognizer" to block or release the touches).
### Expected results
See above
### Actual results
See above
### Code sample
NA
### Screenshots or Video

### Logs
_No response_
### Flutter Doctor output
NA
|
platform-mac,f: gestures,a: platform-views,package,P2,team-macos,triaged-macos
|
low
|
Major
|
2,684,747,123 |
vscode
|
Auto detection mistakenly assumes a markdown cell as a Perl code
|
### Applies To
- [X] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
On writing the following markdown segment:
```markdown
# Kaggle Course on Python
I am very happy to share my works to the world.
```
The Jupyter extension shows the following suggestion:

This suggestion is pointless.
It is probably happening because the markdown content contains the `my` keyword from the Perl language. However, it should not happen, as `my` is a common English word. The detection needs to be improved.
### VS Code Version
Version: 1.86.2 (user setup) Commit: 903b1e9d8990623e3d7da1df3d33db3e42d80eda Date: 2024-02-13T19:40:56.878Z Electron: 27.2.3 ElectronBuildId: 26908389 Chromium: 118.0.5993.159 Node.js: 18.17.1 V8: 11.8.172.18-electron.0 OS: Windows_NT x64 10.0.19045
### Jupyter Extension Version
v2024.1.1
### Jupyter logs
```shell
Visual Studio Code (1.86.2, undefined, desktop)
Jupyter Extension Version: 2024.1.1.
Python Extension Version: 2024.0.1.
Pylance Extension Version: 2024.2.2.
Platform: win32 (x64).
Workspace folder ~\code\python, Home = c:\Users\Rohan
09:22:58.711 [info] Start refreshing Kernel Picker (1708660378711)
09:22:58.733 [info] Using Pylance
09:22:58.872 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 120ms
09:22:58.884 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import site;print("USER_BASE_VALUE");print(site.USER_SITE);print("USER_BASE_VALUE");"
09:22:58.961 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m pip list
09:22:59.753 [info] End refreshing Kernel Picker (1708660378711)
09:23:09.514 [info] Start refreshing Interpreter Kernel Picker
09:23:09.514 [info] Start refreshing Kernel Picker (1708660389514)
09:23:09.582 [info] No interpreter for Pylance for Notebook URI "Untitled-1.ipynb"
09:23:10.913 [info] End refreshing Kernel Picker (1708660389514)
09:23:19.283 [info] No interpreter for Pylance for Notebook URI "~\code\python\Playground.ipynb"
09:23:19.869 [info] No interpreter for Pylance for Notebook URI "~\code\python\Playground.ipynb"
09:23:20.187 [info] Starting Kernel startUsingPythonInterpreter, .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher (Python Path: ~\AppData\Local\Programs\Python\Python312\python.exe, Unknown, 3.12.1) for '~\code\python\Playground.ipynb' (disableUI=true)
09:23:20.591 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 303ms
09:23:20.673 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
09:23:20.686 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe ~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\pythonFiles\vscode_datascience_helpers\kernel_interrupt_daemon.py --ppid 12660
> cwd: ~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\pythonFiles\vscode_datascience_helpers
09:23:20.922 [warn] Stderr output when getting ipykernel version & path Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'ipykernel' for ~\AppData\Local\Programs\Python\Python312\python.exe
09:23:21.108 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-12660pqU8flzh1ohf.json
> cwd: ~\code\python
09:23:21.108 [info] Kernel process 12644.
09:23:21.186 [warn] StdErr from Kernel Process ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher
09:23:21.194 [error] Disposing kernel process due to an error Error: The kernel died. Error: ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher... View Jupyter [log](command:jupyter.viewOutput) for further details.
> Kernel Id = .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher
> Interpreter Id = ~\APPDATA\LOCAL\PROGRAMS\PYTHON\PYTHON312\PYTHON.EXE
> at ChildProcess.<anonymous> (~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\dist\extension.node.js:276:2012)
> stdErr = ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher
09:23:21.194 [info] Dispose Kernel process 12644.
09:23:21.198 [error] Failed to connect raw kernel session: Error: The kernel died. Error: ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher... View Jupyter [log](command:jupyter.viewOutput) for further details.
09:23:21.198 [error] Failed to connect raw kernel session: Error: The kernel died. Error: ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher... View Jupyter [log](command:jupyter.viewOutput) for further details.
09:23:21.199 [warn] Failed to shutdown kernel, .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher [TypeError: Cannot read properties of undefined (reading 'dispose')
at J_.shutdown (~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\dist\extension.node.js:281:15465)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async $_.shutdown (~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\dist\extension.node.js:281:24053)]
09:23:21.203 [warn] Error occurred while trying to start the kernel, options.disableUI=true Error: The kernel died. Error: ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher... View Jupyter [log](command:jupyter.viewOutput) for further details.
> Kernel Id = .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher
> Interpreter Id = ~\APPDATA\LOCAL\PROGRAMS\PYTHON\PYTHON312\PYTHON.EXE
> at ChildProcess.<anonymous> (~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\dist\extension.node.js:276:2012)
> stdErr = ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher
09:38:17.951 [info] Handle Execution of Cells 0 for ~\code\python\Playground.ipynb
09:38:17.953 [info] Starting Kernel startUsingPythonInterpreter, .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher (Python Path: ~\AppData\Local\Programs\Python\Python312\python.exe, Unknown, 3.12.1) for '~\code\python\Playground.ipynb' (disableUI=false)
09:38:18.306 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 344ms
09:38:18.316 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
09:38:18.328 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-12660Gnk99hf3ixQz.json
> cwd: ~\code\python
09:38:18.328 [info] Kernel process 10868.
09:38:18.399 [warn] Stderr output when getting ipykernel version & path Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'ipykernel' for ~\AppData\Local\Programs\Python\Python312\python.exe
09:38:18.405 [warn] StdErr from Kernel Process ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher
09:38:18.412 [error] Disposing kernel process due to an error Error: The kernel died. Error: ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher... View Jupyter [log](command:jupyter.viewOutput) for further details.
> Kernel Id = .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher
> Interpreter Id = ~\APPDATA\LOCAL\PROGRAMS\PYTHON\PYTHON312\PYTHON.EXE
> at ChildProcess.<anonymous> (~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\dist\extension.node.js:276:2012)
> stdErr = ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher
09:38:18.412 [info] Dispose Kernel process 10868.
09:38:18.413 [error] Failed to connect raw kernel session: Error: The kernel died. Error: ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher... View Jupyter [log](command:jupyter.viewOutput) for further details.
09:38:18.414 [error] Failed to connect raw kernel session: Error: The kernel died. Error: ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher... View Jupyter [log](command:jupyter.viewOutput) for further details.
09:38:18.414 [warn] Failed to shutdown kernel, .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher [TypeError: Cannot read properties of undefined (reading 'dispose')
at J_.shutdown (~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\dist\extension.node.js:281:15465)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async $_.shutdown (~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\dist\extension.node.js:281:24053)]
09:38:18.415 [warn] Error occurred while trying to start the kernel, options.disableUI=false Error: The kernel died. Error: ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher... View Jupyter [log](command:jupyter.viewOutput) for further details.
> Kernel Id = .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher
> Interpreter Id = ~\APPDATA\LOCAL\PROGRAMS\PYTHON\PYTHON312\PYTHON.EXE
> at ChildProcess.<anonymous> (~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\dist\extension.node.js:276:2012)
> stdErr = ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher
09:38:18.416 [warn] Kernel Error, context = start Error: The kernel died. Error: ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher... View Jupyter [log](command:jupyter.viewOutput) for further details.
> Kernel Id = .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher
> Interpreter Id = ~\APPDATA\LOCAL\PROGRAMS\PYTHON\PYTHON312\PYTHON.EXE
> at ChildProcess.<anonymous> (~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\dist\extension.node.js:276:2012)
> stdErr = ~\AppData\Local\Programs\Python\Python312\python.exe: No module named ipykernel_launcher
09:38:18.451 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 32ms
09:38:18.460 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import ipykernel;print('6af208d0-cb9c-427f-b937-ff563e17efdf')"
09:38:18.527 [info] Check & install missing Kernel dependencies for ~\AppData\Local\Programs\Python\Python312\python.exe, ui.disabled=false for resource '~\code\python\Playground.ipynb'
09:38:18.534 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import ipykernel;print('6af208d0-cb9c-427f-b937-ff563e17efdf')"
09:38:18.601 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import pip;print('6af208d0-cb9c-427f-b937-ff563e17efdf')"
09:38:22.975 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m pip list
09:38:22.980 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import pip;print('6af208d0-cb9c-427f-b937-ff563e17efdf')"
09:38:23.087 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m pip install -U --user ipykernel
09:39:23.327 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 18ms
09:39:23.333 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import ipykernel;print('6af208d0-cb9c-427f-b937-ff563e17efdf')"
09:39:24.248 [info] Dispose Kernel '~\code\python\Playground.ipynb' associated with '~\code\python\Playground.ipynb'
09:39:24.254 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m pip list
09:39:24.255 [info] Starting Kernel startUsingPythonInterpreter, .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher (Python Path: ~\AppData\Local\Programs\Python\Python312\python.exe, Unknown, 3.12.1) for '~\code\python\Playground.ipynb' (disableUI=false)
09:39:24.493 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 234ms
09:39:24.500 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
09:39:24.516 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-12660IIsbmPh4VCA1.json
> cwd: ~\code\python
09:39:24.516 [info] Kernel process 4748.
09:39:26.581 [warn] StdErr from Kernel Process 0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
09:39:27.114 [info] Registering Kernel Completion Provider from kernel Python 3.12.1 for language python
09:39:27.126 [info] Kernel acknowledged execution of cell 0 @ 1708661367125
09:39:27.135 [info] End cell 0 execution after 0.01s, completed @ 1708661367135, started @ 1708661367125
09:40:08.456 [info] Handle Execution of Cells 0 for ~\code\python\Playground.ipynb
09:40:08.466 [info] Kernel acknowledged execution of cell 0 @ 1708661408466
09:40:08.469 [info] End cell 0 execution after 0.002s, completed @ 1708661408468, started @ 1708661408466
09:42:20.118 [info] Handle Execution of Cells 1 for ~\code\python\Playground.ipynb
09:42:20.130 [info] Kernel acknowledged execution of cell 1 @ 1708661540130
09:42:20.133 [info] End cell 1 execution after 0.002s, completed @ 1708661540132, started @ 1708661540130
09:42:47.052 [info] Handle Execution of Cells 1 for ~\code\python\Playground.ipynb
09:42:47.066 [info] Kernel acknowledged execution of cell 1 @ 1708661567066
09:42:47.068 [info] End cell 1 execution after 0.002s, completed @ 1708661567068, started @ 1708661567066
09:42:56.084 [info] Handle Execution of Cells 1 for ~\code\python\Playground.ipynb
09:42:56.094 [info] Kernel acknowledged execution of cell 1 @ 1708661576094
09:42:56.096 [info] End cell 1 execution after 0.001s, completed @ 1708661576095, started @ 1708661576094
09:43:23.540 [info] Handle Execution of Cells 1 for ~\code\python\Playground.ipynb
09:43:23.556 [info] Kernel acknowledged execution of cell 1 @ 1708661603555
09:43:23.557 [info] End cell 1 execution after 0.002s, completed @ 1708661603557, started @ 1708661603555
09:43:48.612 [info] Handle Execution of Cells 1 for ~\code\python\Playground.ipynb
09:43:48.624 [info] Kernel acknowledged execution of cell 1 @ 1708661628624
09:43:48.625 [info] End cell 1 execution after 0.001s, completed @ 1708661628625, started @ 1708661628624
09:50:22.141 [info] Handle Execution of Cells 1 for ~\code\python\Playground.ipynb
09:50:22.152 [info] Kernel acknowledged execution of cell 1 @ 1708662022151
09:50:22.154 [info] End cell 1 execution after 0.002s, completed @ 1708662022153, started @ 1708662022151
09:50:25.123 [info] Handle Execution of Cells 1 for ~\code\python\Playground.ipynb
09:50:25.130 [info] Kernel acknowledged execution of cell 1 @ 1708662025130
09:50:25.135 [info] End cell 1 execution after 0.005s, completed @ 1708662025135, started @ 1708662025130
09:54:28.888 [info] Handle Execution of Cells 2 for ~\code\python\Playground.ipynb
09:54:28.898 [info] Kernel acknowledged execution of cell 2 @ 1708662268897
09:54:28.900 [info] End cell 2 execution after 0.003s, completed @ 1708662268900, started @ 1708662268897
09:54:42.079 [info] Handle Execution of Cells 2 for ~\code\python\Playground.ipynb
09:54:42.086 [info] Kernel acknowledged execution of cell 2 @ 1708662282086
09:54:42.091 [info] End cell 2 execution after 0.004s, completed @ 1708662282090, started @ 1708662282086
09:54:45.392 [info] Handle Execution of Cells 3 for ~\code\python\Playground.ipynb
09:54:45.394 [info] End cell 3 execution after 0s, completed @ undefined, started @ undefined
09:54:47.043 [info] Handle Execution of Cells 4 for ~\code\python\Playground.ipynb
09:54:47.044 [info] End cell 4 execution after 0s, completed @ undefined, started @ undefined
09:54:53.469 [info] Disposing request as the cell (-1) was deleted ~\code\python\Playground.ipynb
09:55:02.033 [info] Disposing request as the cell (-1) was deleted ~\code\python\Playground.ipynb
09:55:10.483 [info] Handle Execution of Cells 2 for ~\code\python\Playground.ipynb
09:55:10.489 [info] Kernel acknowledged execution of cell 2 @ 1708662310489
09:55:10.496 [info] End cell 2 execution after 0.007s, completed @ 1708662310496, started @ 1708662310489
09:55:19.536 [info] Handle Execution of Cells 2 for ~\code\python\Playground.ipynb
09:55:19.546 [info] Kernel acknowledged execution of cell 2 @ 1708662319545
09:55:19.548 [info] End cell 2 execution after 0.003s, completed @ 1708662319548, started @ 1708662319545
09:55:21.616 [info] Handle Execution of Cells 2 for ~\code\python\Playground.ipynb
09:55:21.625 [info] Kernel acknowledged execution of cell 2 @ 1708662321621
09:55:21.632 [info] End cell 2 execution after 0.011s, completed @ 1708662321632, started @ 1708662321621
09:55:35.128 [info] Handle Execution of Cells 2 for ~\code\python\Playground.ipynb
09:55:35.136 [info] Kernel acknowledged execution of cell 2 @ 1708662335134
09:55:35.141 [info] End cell 2 execution after 0.007s, completed @ 1708662335141, started @ 1708662335134
09:56:18.070 [info] Handle Execution of Cells 2 for ~\code\python\Playground.ipynb
09:56:18.081 [info] Kernel acknowledged execution of cell 2 @ 1708662378080
09:56:18.083 [info] End cell 2 execution after 0.002s, completed @ 1708662378082, started @ 1708662378080
09:56:59.486 [info] Disposing request as the cell (0) was deleted ~\code\python\Playground.ipynb
09:56:59.486 [info] Disposing request as the cell (0) was deleted ~\code\python\Playground.ipynb
09:56:59.486 [info] Disposing request as the cell (1) was deleted ~\code\python\Playground.ipynb
09:56:59.487 [info] Disposing request as the cell (1) was deleted ~\code\python\Playground.ipynb
09:56:59.487 [info] Disposing request as the cell (1) was deleted ~\code\python\Playground.ipynb
09:56:59.487 [info] Disposing request as the cell (1) was deleted ~\code\python\Playground.ipynb
09:56:59.487 [info] Disposing request as the cell (1) was deleted ~\code\python\Playground.ipynb
09:56:59.487 [info] Disposing request as the cell (1) was deleted ~\code\python\Playground.ipynb
09:56:59.487 [info] Disposing request as the cell (1) was deleted ~\code\python\Playground.ipynb
09:56:59.487 [info] Disposing request as the cell (2) was deleted ~\code\python\Playground.ipynb
09:56:59.487 [info] Disposing request as the cell (2) was deleted ~\code\python\Playground.ipynb
09:56:59.488 [info] Disposing request as the cell (2) was deleted ~\code\python\Playground.ipynb
09:56:59.488 [info] Disposing request as the cell (2) was deleted ~\code\python\Playground.ipynb
09:56:59.488 [info] Disposing request as the cell (2) was deleted ~\code\python\Playground.ipynb
09:56:59.488 [info] Disposing request as the cell (2) was deleted ~\code\python\Playground.ipynb
09:56:59.488 [info] Disposing request as the cell (2) was deleted ~\code\python\Playground.ipynb
09:56:59.488 [info] Dispose Kernel '~\code\python\Playground.ipynb' associated with '~\code\python\Playground.ipynb'
09:56:59.489 [info] Dispose Kernel process 4748.
09:56:59.499 [info] Process Execution: c:\Windows\System32\taskkill.exe /F /T /PID 4748
09:56:59.640 [info] No interpreter for Pylance for Notebook URI "~\code\python\Arithmetic and Variables.ipynb"
09:58:36.278 [info] Handle Execution of Cells 3 for ~\code\python\Arithmetic and Variables.ipynb
09:58:36.281 [info] Starting Kernel startUsingPythonInterpreter, .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher (Python Path: ~\AppData\Local\Programs\Python\Python312\python.exe, Unknown, 3.12.1) for '~\code\python\Arithmetic and Variables.ipynb' (disableUI=true)
09:58:36.505 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 221ms
09:58:36.511 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m pip list
09:58:36.516 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 227ms
09:58:36.522 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
09:58:36.536 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-12660Yo40FSta0WMy.json
> cwd: ~\code\python
09:58:36.536 [info] Kernel process 564.
09:58:37.931 [warn] StdErr from Kernel Process 0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
09:58:38.505 [info] Registering Kernel Completion Provider from kernel Python 3.12.1 for language python
09:58:38.519 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe ~\.vscode\extensions\ms-toolsai.jupyter-2024.1.1-win32-x64\pythonFiles\printJupyterDataDir.py
09:58:38.530 [info] Kernel acknowledged execution of cell 3 @ 1708662518530
09:58:38.533 [info] End cell 3 execution after 0.003s, completed @ 1708662518533, started @ 1708662518530
09:58:43.436 [info] Handle Execution of Cells 3 for ~\code\python\Arithmetic and Variables.ipynb
09:58:43.442 [info] Kernel acknowledged execution of cell 3 @ 1708662523442
09:58:43.447 [info] End cell 3 execution after 0.005s, completed @ 1708662523447, started @ 1708662523442
09:58:46.381 [info] Handle Execution of Cells 3 for ~\code\python\Arithmetic and Variables.ipynb
09:58:46.388 [info] Kernel acknowledged execution of cell 3 @ 1708662526387
09:58:46.395 [info] End cell 3 execution after 0.008s, completed @ 1708662526395, started @ 1708662526387
09:58:51.245 [info] Handle Execution of Cells 3 for ~\code\python\Arithmetic and Variables.ipynb
09:58:51.255 [info] Kernel acknowledged execution of cell 3 @ 1708662531255
09:58:51.258 [info] End cell 3 execution after 0.003s, completed @ 1708662531258, started @ 1708662531255
09:59:06.507 [info] Handle Execution of Cells 3 for ~\code\python\Arithmetic and Variables.ipynb
09:59:06.518 [info] Kernel acknowledged execution of cell 3 @ 1708662546518
09:59:06.521 [info] End cell 3 execution after 0.003s, completed @ 1708662546521, started @ 1708662546518
10:08:00.708 [info] No interpreter for Pylance for Notebook URI "Untitled-1.ipynb"
10:08:04.312 [info] No interpreter for Pylance for Notebook URI "~\code\python\Functions.ipynb"
10:08:04.625 [info] No interpreter for Pylance for Notebook URI "~\code\python\Functions.ipynb"
10:08:04.949 [info] No interpreter for Pylance for Notebook URI "~\code\python\Functions.ipynb"
10:08:04.956 [info] Starting Kernel startUsingPythonInterpreter, .jvsc74a57bd077ea1d837ba88d39d1da8a46a378df4d39daecc0b2ec75acfbdc990c6cc53527.~\AppData\Local\Programs\Python\Python312\python.exe.~\AppData\Local\Programs\Python\Python312\python.exe.-m#ipykernel_launcher (Python Path: ~\AppData\Local\Programs\Python\Python312\python.exe, Unknown, 3.12.1) for '~\code\python\Functions.ipynb' (disableUI=true)
10:08:05.403 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 444ms
10:08:05.409 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m pip list
10:08:05.413 [warn] Failed to get activated env vars for ~\AppData\Local\Programs\Python\Python312\python.exe in 361ms
10:08:05.419 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
10:08:05.432 [info] Process Execution: ~\AppData\Local\Programs\Python\Python312\python.exe -m ipykernel_launcher --f=~\AppData\Roaming\jupyter\runtime\kernel-v2-126607FZT38NKvOnl.json
> cwd: ~\code\python
10:08:05.432 [info] Kernel process 10284.
10:08:06.967 [warn] StdErr from Kernel Process 0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
10:08:07.389 [info] Registering Kernel Completion Provider from kernel Python 3.12.1 for language python
10:13:30.303 [warn] Timeout (after 20000ms) waiting to inspect code 'def and'
10:13:34.835 [warn] Timeout (after 20000ms) waiting to inspect code 'def add_two_to(vars'
10:13:35.656 [warn] Timeout (after 20000ms) waiting to inspect code 'def add_two_to(pass'
10:14:01.482 [info] Handle Execution of Cells 0 for ~\code\python\Functions.ipynb
10:14:01.490 [info] Kernel acknowledged execution of cell 0 @ 1708663441490
10:14:01.496 [info] End cell 0 execution after 0.006s, completed @ 1708663441496, started @ 1708663441490
```
### Coding Language and Runtime Version
Python 3.14.1
### Language Extension Version (if applicable)
v2024.0.1
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
Local
|
bug
|
low
|
Critical
|
2,684,749,331 |
storybook
|
[Bug]: Args types are always Partial<Props> rather than the values I passed
|
### Describe the bug
I think this is an issue with discriminated unions in TypeScript, but there's a variant of that in this project.
# The Issue
Something's up with `StoryObj` types after upgrading to Storybook v8: every arg type is reported as `Partial<MyComponentProps>` no matter what values I pass. The type itself uses `infer`, but it's inferring the wrong type for some reason.
## Before (v7)
Heavily simplified:
```ts
const storybookMeta: Meta<MyComponentProps> = {
title: "MyComponent",
component: MyComponent,
args: {
// my default args
},
};
export default storybookMeta;
export const ActionButton: StoryObj<MyComponentProps> = { // Also doesn't work passing `typeof MyComponent` as a generic.
args: {
actionLabel: "Open app settings",
},
render: function C(args) {
return (
<MyComponent // TypeScript errors in v8 because `Partial<MyComponentProps>` isn't valid for `MyComponentProps`. That means `actionLabel` is now `string | undefined`. It doesn't care that `ActionButton.args.actionLabel` is a string. It's also not aware of any default values in the `Meta` type definition.
{...args}
/>
)
},
};
```
## After (v8)
```ts
const globalArgs = {
// my default args
} as const satisfies Partial<MyComponentProps>
const storybookMeta: Meta<typeof MyComponent> = {
title: "MyComponent",
component: MyComponent,
args: globalArgs,
};
export default storybookMeta;
const actionButtonArgs = {
...globalArgs,
actionLabel: "Open app settings",
} as const satisfies MyComponentProps
export const ActionButton: StoryObj<typeof actionButtonArgs> = {
args: actionButtonArgs,
render: function C(args) {
return (
<MyComponent
{...args}
/>
);
},
};
```
# Generics don't match Storybook's docs
After going through the reproduction, I noticed the generics were passed incorrectly compared to Storybook's docs. I changed them to match the TypeScript reproduction, but it didn't fix it.
Looking further, it appears to be an issue with optional props (discriminated unions), where a prop is required in one union member and optional/`never` in the other.
I modified the Button and its stories in the reproduction:
```ts
(
| {
label2: string;
actionLabel?: never;
}
| {
label2?: never;
actionLabel: string;
}
)
```
```ts
export const Primary: Story = {
args: {
primary: true,
label: 'Button',
actionLabel: "yo"
},
render: (args) => {
args.actionLabel // This should be of type `string`, but it's `string | undefined`.
return <Button {...args} />
}
};
```
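For context, here is a minimal TypeScript sketch, outside Storybook, of the widening at play: once a value is declared as `Partial<Props>`, the discriminant is forgotten regardless of the literal assigned. The `Props` type and variable names below are hypothetical, mirroring the modified Button props above.
```ts
// Hypothetical discriminated union mirroring the modified Button props.
type Props =
  | { label2: string; actionLabel?: never }
  | { label2?: never; actionLabel: string };

// Declaring the value as Partial<Props> makes every member optional, so the
// declared type forgets the assigned literal and the property reads back
// as string | undefined.
const partialArgs: Partial<Props> = { actionLabel: "yo" };
const widened = partialArgs.actionLabel; // string | undefined

// Without the Partial annotation (TS 4.9+ `satisfies`), the inferred type
// keeps the union member and the property reads back as string.
const exactArgs = { actionLabel: "yo" } satisfies Props;
const narrowed = exactArgs.actionLabel; // string
```
If `StoryObj` types `args` as `Partial<TArgs>` in v8, that would explain why the values passed in `args` are not reflected inside `render`.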
### Reproduction link
https://stackblitz.com/edit/github-q1puzq?file=src%2Fstories%2FButton.stories.tsx,src%2Fstories%2FButton.tsx&preset=node
### Reproduction steps
1. Go to the above link.
2. Open `Button.stories.tsx`.
3. Notice the `Primary` story has the wrong type for `args.actionLabel` even though a value is passed.
### System
```bash
Storybook Environment Info:
System:
OS: macOS 14.7.1
CPU: (12) arm64 Apple M2 Max
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.9.0 - /private/var/folders/gk/c_gsyb190312g8l2l77scx4c0000gp/T/xfs-f3745574/node
Yarn: 4.1.0 - /private/var/folders/gk/c_gsyb190312g8l2l77scx4c0000gp/T/xfs-f3745574/yarn <----- active
npm: 10.1.0 - ~/.nvm/versions/node/v20.9.0/bin/npm
Browsers:
Chrome: 131.0.6778.86
Safari: 18.1.1
npmPackages:
@storybook/addon-links: ^8.4.5 => 8.4.5
@storybook/blocks: ^8.4.5 => 8.4.5
@storybook/react: ^8.4.5 => 8.4.5
@storybook/react-vite: ^8.4.5 => 8.4.5
storybook-addon-rtl-direction: ^0.0.19 => 0.0.19
```
### Additional context
_No response_
|
bug,typescript,argtypes
|
low
|
Critical
|
2,684,884,779 |
pytorch
|
FP8 basic creation operations zeros / ones / full don't work under inductor
|
### Describe the bug
FP8 basic creation operations such as zeros / ones / full work in eager mode but fail when compiled with backend="inductor".
Same error as https://github.com/pytorch/pytorch/issues/128370 in versions < 2.6; for > 2.6, see the second comment.
```python
import torch
def test():
return (torch.zeros(512, 512, device="cuda", dtype=torch.float8_e4m3fn), torch.ones(512, 512, device="cuda", dtype=torch.float8_e4m3fn))
def test_cast():
return (torch.zeros(512, 512, device="cuda").to(dtype=torch.float8_e4m3fn), torch.ones(512, 512, device="cuda").to(dtype=torch.float8_e4m3fn))
@torch.compile(backend="inductor")
def test_compile():
return (torch.zeros(512, 512, device="cuda", dtype=torch.float8_e4m3fn), torch.ones(512, 512, device="cuda", dtype=torch.float8_e4m3fn))
@torch.compile(backend="inductor")
def test_compile_cast():
return (torch.zeros(512, 512, device="cuda").to(dtype=torch.float8_e4m3fn), torch.ones(512, 512, device="cuda").to(dtype=torch.float8_e4m3fn))
test() #works
test_cast() #works
test_compile() #fails
test_compile_cast() #fails
```
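A hedged sketch of one possible workaround (untested against this repro): create the FP8 tensors in a helper that is excluded from the compiled graph, assuming `torch._dynamo.disable` forces the helper to run eagerly via a graph break so inductor never has to lower the creation ops for the float8 dtype.
```python
import torch

@torch._dynamo.disable  # run eagerly; inductor never sees the FP8 creation ops
def make_fp8_buffers():
    return (
        torch.zeros(512, 512, device="cuda", dtype=torch.float8_e4m3fn),
        torch.ones(512, 512, device="cuda", dtype=torch.float8_e4m3fn),
    )

@torch.compile(backend="inductor")
def test_compile_workaround():
    # graph break around the helper; the compiled region only consumes the tensors
    zeros_fp8, ones_fp8 = make_fp8_buffers()
    return zeros_fp8, ones_fp8
```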
### Error logs
```
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: "_local_scalar_dense_cuda" not implemented for 'Float8_e4m3fn'
While executing %full : [num_users=1] = call_function[target=torch.ops.aten.full.default](args = ([512, 512], 0), kwargs = {dtype: torch.float8_e4m3fn, layout: torch.strided, device: cuda, pin_memory: False})
Original traceback:
File "/vol/zraid1/Projects/AI2/FP8/ones_test.py", line 8, in test_compile
return (torch.zeros(512, 512, device="cuda", dtype=torch.float8_e4m3fn), torch.ones(512, 512, device="cuda", dtype=torch.float8_e4m3fn))
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.30.3
Libc version: glibc-2.40
Python version: 3.11.10 (main, Sep 9 2024, 22:11:19) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-6.6.43-273-tkg-bore-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX 6000 Ada Generation
GPU 1: NVIDIA RTX 6000 Ada Generation
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 21%
CPU max MHz: 5300.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.5.1
[pip3] torch-xla==2.5.0
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] No relevant packages
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @yanbing-j @vkuzo @albanD
|
triaged,bug,oncall: pt2,module: inductor,module: float8
|
low
|
Critical
|
2,684,905,513 |
godot
|
Modified .po files are not updated when running a game with USB remote debugging
|
### Tested versions
Reproducible in Godot 4.3
### System information
Godot v4.3.stable - macOS 15.1.0 - Vulkan (Mobile) - integrated Apple M3 Pro - Apple M3 Pro (12 Threads)
### Issue description
When using .po files to translate your game and testing it with USB remote debugging, it looks like Godot uses the last installed .po files rather than the most recently updated ones. You need to remove the app from your phone and run it again to see the updates. When you run through the editor on your computer, all the data is up to date as expected.
### Steps to reproduce
You must have a game with translations enabled and using .po files.
Switch from Godot to an external text editor and update any of the .po files.
Run your game on a device with USB remote debugging.
The game displays the old values from the .po files instead of the most recent ones.
- The updates only appear if you remove the app from your device and run again with USB remote debugging
- No issue when running the game through the editor on your computer
### Minimal reproduction project (MRP)
N/A
|
topic:editor,needs testing
|
low
|
Critical
|
2,684,985,815 |
PowerToys
|
MouseWithoutBorders periodically wakes pc up from display off state
|
### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
- Open Mouse Without Borders
- Wait for the first PC to turn off its screen after a few minutes of inactivity
- Start working on the second PC, without moving the cursor over to the first one
- The first PC acts as if the mouse were being moved from time to time and turns its screen back on
### Expected Behavior
_No response_
### Actual Behavior
_No response_
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Major
|
2,684,985,862 |
flutter
|
With EnsureSemantics(), a Scrollable Widget with Internal Widget with NeverScrollableScrollPhysics Causes Jerky Scrolling
|
### Steps to reproduce
1. Have a web app with ensureSemantics() called
2. Have a ListView with an internal ListView
3. Give the internal ListView NeverScrollableScrollPhysics()
4. Run the web app and scroll
### Expected results
Scroll is smooth
### Actual results
The scroll jerks around/jumps when scrolling up or down.
If ensureSemantics is removed, or if the GridView or ListView (with NeverScrollableScrollPhysics) is removed or replaced, scrolling is smooth.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
void main() {
WidgetsFlutterBinding.ensureInitialized();
SemanticsBinding.instance.ensureSemantics();
runApp(const TestApp());
}
class TestApp extends StatelessWidget {
const TestApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: Scaffold(
body: TestPage(),
),
);
}
}
class TestPage extends StatelessWidget {
const TestPage({super.key});
@override
Widget build(BuildContext context) {
return ListView(
padding: EdgeInsets.zero,
children: [
SizedBox(
height: 500.0,
width: 400.0,
child: ListView(
scrollDirection: Axis.horizontal,
physics: const NeverScrollableScrollPhysics(),
children: [
Container(color: Colors.red, height: 500, width: 100.0),
Container(color: Colors.blue, height: 500, width: 100.0),
Container(color: Colors.green, height: 500, width: 100.0),
],
),
),
...List.generate(
20,
(index) => ListTile(
title: Text('Test Item $index'),
leading: const Icon(Icons.list),
trailing: const Icon(Icons.arrow_forward),
),
),
],
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/b9a318dc-1b36-4c36-9bd1-df3468f67b82
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
Was able to repro on Flutter version 3.24.2 and 3.27.0-0.1.pre
Unable to repro on Flutter version 3.22.2
</details>
|
c: regression,framework,a: accessibility,f: scrolling,platform-web,has reproducible steps,P2,workaround available,customer: castaway,team-framework,triaged-framework,found in release: 3.27
|
low
|
Major
|
2,684,991,876 |
neovim
|
Regression: race condition(?) in msgpack handling (from dc37c1550bed46fffbb677d343cdc5bc94056219)
|
### Problem
When sending very large completion lists to a GUI client that supports ext_popup with text input going at the same time, it's possible to trigger a race condition that causes neovim to return one of the completion results with the same message id as we expect for the RPC result response. I've attached a log with a debug trace of some of the RPC transactions happening between neovim-gtk and neovim that seem to cause this. [race-reproduced.txt](https://github.com/user-attachments/files/17877569/race-reproduced.txt)
I managed to do some bisecting on neovim and tracked the breaking change down to dc37c1550bed46fffbb677d343cdc5bc94056219
### Steps to reproduce
I know, I'm really sorry :(. There are supposed to be steps here, but I'm going to need a bit of assistance from y'all in coming up with a reasonable way to reproduce this without pulling in my whole setup. I'm going to do my best to figure something out next week or after Thanksgiving if I can, but hopefully this is a start, and maybe you have some ideas on how I could come up with a simpler reproducer. I think the key is going to be figuring out how I can get nvim to spit out a very large list of completion results without having my .vimrc present or needing to be in a project source tree.
Also FWIW: the good news is that I can reproduce this very reliably on my own system, as can a number of other people with substantially different setups than my own (I'm going to ask anyone with an easy reproducer to post it here): [race-reproduced.txt](https://github.com/user-attachments/files/17877567/race-reproduced.txt). Anyway, I'm going to give you the steps I'm following locally, and hopefully we can figure something out.
* Have LSP setup on neovim, the LSP configuration I'm currently using is for rust-analyzer and the code I'm using to reproduce this problem is a kernel source tree here
* Use neovim-gtk (hopefully other clients can hit this, but with how hard it was to find a consistent reproducer for this it's -probably- better just to use my client for now)
* Open up a project where you can get neovim to generate a really, really big set of results for autocompletion. Ideally, it should lag things a bit.
* Start typing such that neovim starts popping up completion results using neovim-gtk's ext_popup support
* Repeat until it crashes and says it no longer can read messages from neovim.
### Expected behavior
nvim_input should never return anything but a u64 in the response field when responding to an nvim_input call.
### Nvim version (nvim -v)
v0.10.2
### Vim (not Nvim) behaves the same?
N/A (this is related to RPC, so...)
### Operating system/version
Fedora 40
### Terminal name/version
neovim-gtk
### $TERM environment variable
N/A (I think?)
### Installation
From fedora repository, but I can trigger it by building from source
|
channels-rpc,needs:response,bug-regression,has:bisected
|
low
|
Critical
|
2,685,018,813 |
rust
|
rustc hangs with gordian knot of trait bounds
|
Given the following code, `rustc` hangs:
```rust
mod asn1 {
pub trait Asn1Writable: Sized {}
pub trait SimpleAsn1Writable: Sized {}
impl<T: SimpleAsn1Writable> Asn1Writable for T {}
impl<T: SimpleAsn1Writable> SimpleAsn1Writable for &T {}
impl<T: SimpleAsn1Writable> SimpleAsn1Writable for Box<T> {}
impl<T: Asn1Writable> Asn1Writable for Option<T> {}
pub trait Asn1DefinedByWritable: Sized {}
}
mod common {
use crate::asn1;
pub struct AlgorithmIdentifier<'a> {
pub params: AlgorithmParameters<'a>,
}
impl<'a> asn1::SimpleAsn1Writable for AlgorithmIdentifier<'a> where
AlgorithmParameters<'a>: asn1::Asn1DefinedByWritable
{
}
pub enum AlgorithmParameters<'a> {
Sha1,
Pbkdf2(PBKDF2Params<'a>),
}
impl<'a> asn1::Asn1DefinedByWritable for AlgorithmParameters<'a>
where
PBES2Params<'a>: asn1::Asn1Writable,
PBKDF2Params<'a>: asn1::Asn1Writable,
{
}
pub const PSS_SHA1_HASH_ALG: AlgorithmIdentifier<'_> = AlgorithmIdentifier {
params: AlgorithmParameters::Sha1,
};
pub struct RsaPssParameters<'a> {
pub hash_algorithm: AlgorithmIdentifier<'a>,
}
impl<'a> asn1::SimpleAsn1Writable for RsaPssParameters<'a> {}
pub struct PBES2Params<'a> {
pub key_derivation_func: Box<AlgorithmIdentifier<'a>>,
}
impl<'a> asn1::SimpleAsn1Writable for PBES2Params<'a> where
Box<AlgorithmIdentifier<'a>>: asn1::Asn1Writable
{
}
pub struct PBKDF2Params<'a> {
pub salt: &'a [u8],
}
impl<'a> asn1::SimpleAsn1Writable for PBKDF2Params<'a> where
Box<AlgorithmIdentifier<'a>>: asn1::Asn1Writable
{
}
}
pub fn write_element<T: asn1::Asn1Writable>(val: &T) {
todo!()
}
pub fn f(p: &common::RsaPssParameters<'_>) {
write_element(&Some(&p.hash_algorithm));
}
fn main() {}
```
```
/t/q ❯❯❯ rustc src/main.rs
[no amount of patience is enough]
```
This is minimized from https://github.com/pyca/cryptography and https://github.com/alex/rust-asn1.
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: aarch64-apple-darwin
release: 1.82.0
LLVM version: 19.1.1
```
|
A-trait-system,I-compiletime,T-compiler,C-bug,fixed-by-next-solver
|
low
|
Critical
|
2,685,019,285 |
PowerToys
|
Change WinX Menu
|
### Description of the new feature / enhancement
On Windows 10 it is possible to change it, but on Windows 11 23H2 it can't be changed from anywhere. If possible, please add the ability to change the Windows WinX menu.
### Scenario when this would be used?
When a user wants to change the WinX menu.
For example, I removed Terminal from Windows 11 and now want to replace PowerShell with CMD in the WinX menu, but that's impossible in Windows 11; no registry tweak, other trick, or app works.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,685,056,333 |
godot
|
Issue with layering TileMapLayers in playtest
|
### Tested versions
- Reproducible in: Godot v4.3 (stable), v4.4-dev5
### System information
Godot v4.3.stable - Android - GLES3 (Compatibility) - Mali-G68 MC4 - (8 Threads) / Godot v4.3.stable - Windows 10.0.26311 - GLES3 (Compatibility) - ANGLE (Microsoft, Microsoft Basic Render Driver (0x0000008C) Direct3D11 vs_5_0 ps_5_0, D3D11-10.0.26311.5000) - Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz (6 Threads)
### Issue description

In the attached image you can see the red fading texture going above the white outline inside the editor, but not in the playtest. After some experimenting I also found that this issue does not occur with an exported exe, which makes it exclusive to playtesting.
Both the Windows and the Android versions are affected, and it only happens with the TileMapLayer node; the TileMap node doesn't seem to reproduce this problem.
### Steps to reproduce
Play the Scene.tscn in the Scenes folder to reproduce the bug.
### Minimal reproduction project (MRP)
[TileMapLayerIssue.zip](https://github.com/user-attachments/files/17877788/TileMapLayerIssue.zip)
|
bug,topic:rendering,topic:2d
|
low
|
Critical
|
2,685,062,448 |
rust
|
Overflow while adding drop-check rules on a generic tree
|
I tried this code:
```rust
enum Tree<T: Scopable> {
Group(Vec<Tree<T>>),
Subtree(Box<Tree<T::SubType>>),
Leaf(T),
}
trait Scopable: Sized {
type SubType: Scopable;
}
impl<T: Scopable> Tree<T> {
fn foo(self) -> Self { // error[E0320]: overflow while adding drop-check rules for Tree<T>
self
}
}
```
I expected this to compile, as it doesn't seem to unconditionally introduce a type of infinite length, but no luck: drop checking does not succeed.
When the type is not passed in as an argument, or is passed by reference, there are no issues and the code compiles.
```rust
fn bar() {
let _ = Tree::Leaf(()); // no problem!
}
```
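As noted above, the by-reference variant avoids the problem. A minimal self-contained sketch of that variant (same definitions as in the report), which should compile since no owned `Tree<T>` is dropped in the method body:
```rust
enum Tree<T: Scopable> {
    Group(Vec<Tree<T>>),
    Subtree(Box<Tree<T::SubType>>),
    Leaf(T),
}

trait Scopable: Sized {
    type SubType: Scopable;
}

impl<T: Scopable> Tree<T> {
    // Borrowing `self` means no drop-check rules are added for Tree<T> here.
    fn foo_ref(&self) -> &Self {
        self
    }
}

fn main() {}
```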
### Meta
Reproduced on these versions,
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: aarch64-apple-darwin
release: 1.81.0
LLVM version: 18.1.7
```
```
rustc 1.83.0-nightly (1bc403daa 2024-10-11)
binary: rustc
commit-hash: 1bc403daadbebb553ccc211a0a8eebb73989665f
commit-date: 2024-10-11
host: aarch64-apple-darwin
release: 1.83.0-nightly
LLVM version: 19.1.1
```
<details><summary>Error</summary>
<p>
```
error[E0320]: overflow while adding drop-check rules for Tree<T>
--> src/main.rs:68:21
|
68 | fn foo(self) -> Self {
| ^^^^
|
= note: overflowed on Tree<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<T as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType as Scopable>::SubType>
```
</p>
</details>
|
T-compiler,C-bug,T-types
|
low
|
Critical
|
2,685,105,983 |
yt-dlp
|
Add issue template for wiki issues/requests
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
We should consider adding an issue template for issues/requests for the wiki, given the main yt-dlp issue tracker is used as the wiki repo issue tracker.
These issues should have the `wiki` label automatically applied.
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_
|
docs/meta/cleanup,enhancement,triage,wiki
|
low
|
Critical
|
2,685,112,191 |
PowerToys
|
PDF Thumbnail random pop-up
|
### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
File Explorer: Thumbnail preview
### Steps to reproduce
Enable .pdf in File Management -> File Explorer add-ons -> Thumbnail Icon Preview
### Expected Behavior
The pop up should not show up
### Actual Behavior
This pop-up from PowerToys.PdfThumbnailProvider.exe occasionally appears without any obvious trigger.

### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Minor
|
2,685,114,037 |
TypeScript
|
Type Instantiation is Excessively Deep Error Regression in #37348
|
### Search Terms
type instantiation is excessively deep, unused type parameter, unused tuple element, #37348
### Version & Regression Information
This changed in PR #37348
### Playground Link
https://www.typescriptlang.org/play/?#code/CYUwxgNghgTiAEkoGdnwPoEkB2AXEM2UEAKgJ4AOIAIlLlALID2oE8A3gFDw-wVwVYNAgEsAbiGC16ALngBtKNjIAaeErIBdANycAvrs6gkcRNFTxyVaYxYgIAHm68A9C-iYAZvFwALEWi4lAiCMFAAtiD4MPAB8KxRkj6+CGAiMGAArtAwIkHwAOZMIGhQAO5QZAB0zjwAqtiZyJIqtfDUohJSdFCtAHzwIAAe+NjAaFh4BESkwTbMrBxt-CChwrldNnJ+ccOj45ZzPQv2DhpqItieBO2dkjZ9bbzwAPzwbpb+aM1gTGODEBAkTwsWwyQQuEyFEBsVKEGQTHg2HAJWQsDI8E8TBiCMi8DgKD+NWeJN48h2yDUGk0T14cg0uj0QA
### Code
```ts
declare class _InternalTypeDataModel {
prepareDerivedData: [any, any];
};
declare class TypeDataModel<
// If this type parameter is deleted the circularity goes away.
Unused,
DerivedData,
> extends _InternalTypeDataModel {
prepareDerivedData: this extends TypeDataModel<any, infer DerivedData>
? // This second element in the tuple is also necessary for some reason.
[this, any]
: any;
}
```
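For comparison, here is a sketch of the variant described by the first comment in the repro (unused type parameter removed); per that comment, this shape no longer triggers the depth error. The renamed declarations are only to keep the sketch self-contained.
```ts
declare class _InternalTypeDataModelNoUnused {
  prepareDerivedData: [any, any];
}

// Same conditional type as above, minus the unused type parameter.
declare class TypeDataModelNoUnused<DerivedData> extends _InternalTypeDataModelNoUnused {
  prepareDerivedData: this extends TypeDataModelNoUnused<infer DerivedData>
    ? [this, any]
    : any;
}
```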
### Actual behavior
```
Type instantiation is excessively deep and possibly infinite.
```
### Expected behavior
No error.
### Additional information about the issue
I understand the weirdness in this minimized code, but the actual code I was reducing does useful things and doesn't seem so contrived in context. In the real code, all type parameters and tuple elements are used for something; this reproduction shows that it doesn't matter whether they're used, only that they exist.
|
Bug,Help Wanted
|
low
|
Critical
|
2,685,116,604 |
rust
|
Coherence with object types with overlapping supertrait projections is incomplete
|
I tried this code:
```rust
trait Sup<T> {
type Assoc;
}
impl<T> Sup<T> for () {
type Assoc = T;
}
impl<T, U> Dyn<T, U> for () {}
trait Dyn<A, B>: Sup<A, Assoc = A> + Sup<B, Assoc = B> {}
trait Trait {
type Assoc;
}
impl Trait for dyn Dyn<(), ()> {
type Assoc = &'static str;
}
impl<A, B> Trait for dyn Dyn<A, B> {
type Assoc = usize;
}
fn call<A, B>(x: usize) -> <dyn Dyn<A, B> as Trait>::Assoc {
x
}
fn main() {
let x: &'static str = call::<(), ()>(0xDEADBEEF);
println!("{x}");
}
```
I expected to see this happen: It does not work.
Instead, this happened: Segfault
### Meta
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (a47555110 2024-11-22)
```
|
P-high,I-unsound,C-bug,T-types,A-coherence,A-trait-objects
|
low
|
Critical
|
2,685,135,421 |
kubernetes
|
HPA development is not active
|
/sig autoscaling
/cc @kubernetes/sig-autoscaling-misc
## What
HPA's development has not been active recently.
It causes many PRs to struggle to get reviews, including some KEPs.
Essentially, this is the problem of lacking approvers in HPA.
## Context (AFAIK)
Currently, @mwielgus is the only approver, but they have already left the sig-autoscaling chair, and I'm not sure if they're still willing to help with reviews.
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/podautoscaler/OWNERS#L7
I know that stepping down doesn't always mean stopping work on things, but in this case their last review was actually more than a year ago.
https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+reviewed-by%3Amwielgus+is%3Aclosed
Some minor changes have been made in HPA, but all of them were approved by someone else (root approvers), not stamped by sig-autoscaling.
https://github.com/kubernetes/kubernetes/commits/master/pkg/controller/podautoscaler
Also, @gjtempleton is trying to take over the position, and he's (only) a reviewer (not yet approver) apart from @mwielgus now.
- https://github.com/kubernetes/kubernetes/pull/124607
- https://github.com/kubernetes/kubernetes/pull/124661
## Proposal
We're trapped in a vicious cycle: HPA development is not active because of the lack of reviewers/approvers, and no new reviewers emerge because HPA development is not active.
There's (probably, AFAIK) no other active person who is eligible for the reviewer/approver of HPA based on [the official criteria](https://github.com/kubernetes/community/blob/master/community-membership.md).
But we shouldn't let the current situation continue, so I'd propose having some volunteers join the reviewers/approvers to break through (even if they're not officially eligible).
Regarding the approver, I cannot come up with any idea other than asking @gjtempleton to be an approver and start approving some PRs. (... I know they're also busy though)
Also, when I was doing the container-based HPA enhancement, I remember @pbarker also helped reviewing a lot, might be a good idea to ask them to join the reviewer list. ([PRs](https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+author%3Apbetkier+is%3Aclosed), [reviews](https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+reviewed-by%3Apbetkier+is%3Aclosed+label%3Asig%2Fautoscaling)) I can also help in being a reviewer too. ([PRs](https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+author%3Asanposhiho+is%3Aclosed+label%3Asig%2Fautoscaling), [reviews](https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+reviewed-by%3Asanposhiho+is%3Aclosed+label%3Asig%2Fautoscaling+))
|
sig/autoscaling,needs-triage
|
low
|
Minor
|
2,685,210,649 |
vscode
|
Make "Reopen editor with" have a higher priority than the "editor.defaultBinaryEditor" setting.
|
I set the `editor.defaultBinaryEditor` setting to an editor provided by an extension (Hex Editor), but I don't think the problem comes from the extension.
I'm viewing a log file captured from a serial port. This log file mostly contains readable text content, but some non-text content is also written to the file when the device restarts, which is why I believe VSCode detects this file as a binary file (I have pinpointed the issue to a file segment that includes a \0 terminator). Therefore, VSCode opens this file using the defaultBinaryEditor, which is not a problem for me. I am just trying to use the "Reopen editor with" feature to reopen the file in the default text editor.
However, VSCode consistently detects this file as a binary file and attempts to redirect it to the defaultBinaryEditor.
I know I can clear the defaultBinaryEditor or set the default text editor, but this would require me to reselect the hexeditor every time I open a binary file.
I feel that when a binary file is detected, the "Reopen editor with" feature should have a higher priority than the defaultBinaryEditor.
|
bug,custom-editors,workbench-editor-resolver
|
low
|
Minor
|
2,685,233,742 |
godot
|
When you "Save to File" a mesh from a .glb, if UIDs are used the .import file becomes corrupted
|
### Tested versions
- Reproducible in: v4.4.dev5.official [9e6098432]
### System information
Godot v4.4.dev5 - Windows 10.0.22631 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Laptop GPU (NVIDIA; 32.0.15.6614) - AMD Ryzen 5 5600H with Radeon Graphics (12 threads)
### Issue description
When you have a mesh with "Save to File" enabled in a .glb, reimporting the .glb causes the "save_to_file/path" in the .import file to switch to a UID, the .import file becomes corrupted, and you'll be unable to open the .glb import page.
These are the changes to the .import file that happen when you reimport:

### Steps to reproduce
1. Import a .glb.
2. Open its import page and set one of its meshes to "Save to File". Set the path and save it to a new (.res) file.
3. Open the .glb's import page again. Notice that it opens.
4. In the .glb's import page, click into the "Save to File" path textbox and move your text cursor around a bit, then reimport. This switches the "save_to_file/path" in the .import file to a UID; for some reason, the first time you set it, it is saved as a "res://" path.
5. Confirm that it's switched to UID by opening the .glb .import file in a text editor and checking that the "save_to_file/path" is now a UID.
6. Open the .glb's import page again. Notice that it does not open, and the .glb icon in the Godot FileSystem tab is now a red X.
### Minimal reproduction project (MRP)
[glb-export-mesh.zip](https://github.com/user-attachments/files/17878328/glb-export-mesh.zip)
|
bug,topic:editor,topic:import,regression
|
low
|
Minor
|
2,685,285,155 |
kubernetes
|
scheduler plugin podTopologySpread performs not well enough in a disaster recovery scenario
|
### What happened?
I recently ran a disaster recovery scenario with 2,000 kwok fake nodes and 100,000 pending pods.
Each pod has topologySpreadConstraints specified as below
```
"topologySpreadConstraints": [
{
"labelSelector": {
"matchLabels": {
"app": "fake-pod"
}
},
"maxSkew": 1,
"topologyKey": "topology.kubernetes.io/region",
"whenUnsatisfiable": "ScheduleAnyway"
},
{
"labelSelector": {
"matchLabels": {
"app": "fake-pod"
}
},
"maxSkew": 1,
"topologyKey": "kubernetes.io/hostname",
"whenUnsatisfiable": "ScheduleAnyway"
}
]
```
The overall performance of kube-scheduler degrades as the number of pods on each node increases.

As I dug further with pprof, the most time-consuming processing was found in podTopologySpread, within the PreScore stage, in the 2 functions below:
- countPodsMatchSelector
- PodMatchesNodeSelectorAndAffinityTerms

### What did you expect to happen?
rate(scheduler_schedule_attempts_total[1m]) should keep steady during the test run
### How can we reproduce it (as minimally and precisely as possible)?
1. Create a cluster with 2000 nodes, stop kube-scheduler first
2. Create a deployment with 100,000 replicas, topologySpreadConstraints described as above
3. Start kube-scheduler and watch the deployment status and kube-scheduler metrics, until all pods are scheduled
### Anything else we need to know?
countPodsMatchSelector is also called in the Score stage, so it can be a good starting point for boosting overall performance.
In most scenarios a pod has an owner (rs/dp/ds/sts etc.), and topologySpread is applied to all pods with the same owner.
Based on that, this can be optimized into an O(1) procedure after applying some tricks (see the sketch after this list):
- add a cache (map[UID][]*PodInfo) in nodeInfo, indexed by the pod's owner Id
- the cache will be updated with the Add/Update/Del callback of pod informer
- in countPodsMatchSelector
  * If the pod has an owner, we use the owner's Id to find the pre-aggregated count.
  * If it doesn't, fall back to the old path.
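Purely as an illustration of the idea above, here is a minimal sketch of an owner-indexed cache. It is written in TypeScript only for readability (the real change would live in the scheduler's Go NodeInfo code), and every name in it is made up:

```ts
interface PodInfo {
  uid: string;
  ownerUid?: string; // UID of the owning rs/dp/ds/sts, if any
}

// Hypothetical per-node cache: owner UID -> pods on this node owned by that controller.
class NodeInfoCache {
  private byOwner = new Map<string, PodInfo[]>();

  // Called from the pod informer's Add callback.
  add(pod: PodInfo): void {
    if (!pod.ownerUid) return;
    const list = this.byOwner.get(pod.ownerUid) ?? [];
    list.push(pod);
    this.byOwner.set(pod.ownerUid, list);
  }

  // Called from the pod informer's Delete callback.
  remove(pod: PodInfo): void {
    if (!pod.ownerUid) return;
    const list = this.byOwner.get(pod.ownerUid) ?? [];
    this.byOwner.set(pod.ownerUid, list.filter(p => p.uid !== pod.uid));
  }

  // O(1) replacement for the per-pod selector matching loop when the incoming
  // pod shares an owner with the pods being counted; pods without an owner
  // would fall back to the existing slow path.
  countPodsWithOwner(ownerUid: string): number {
    return this.byOwner.get(ownerUid)?.length ?? 0;
  }
}
```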
Any comments are welcome.
If this is feasible without defects, I can try to create a PR for this.
The improvement (with the same test baseline) can be:
- total time to schedule 100,000 pods: drops from 7min30s to 4min
- rate(scheduler_schedule_attempts_total[1m]) stays steady above 420, compared to an average of 220

### Kubernetes version
1.20.7, but it also applies to the latest release
### Cloud provider
not specific to any cloud provider
### OS version
not specific to any os version
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
|
kind/bug,sig/scheduling,needs-triage
|
low
|
Major
|
2,685,302,103 |
pytorch
|
Could you provide a standardized interface or documentation for mapping all versions of torch, torchvision, and torchaudio?
|
### The feature, motivation and pitch
Could you provide a standardized interface or documentation for mapping all versions of torch, torchvision, and torchaudio? This would facilitate automated builds and the installation of related dependencies.
### Alternatives
_No response_
### Additional context
_No response_
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke
|
module: docs,feature,oncall: releng,triaged
|
low
|
Minor
|
2,685,308,499 |
ui
|
[bug]: DarkMode is not working in RemixV2
|
### Describe the bug
The Dark Mode instructions for Remix don't work with the structure of the latest version. The theme class is not applied when using the toggle.
### Affected component/components
All
### How to reproduce
Follow the steps for Dark Mode. They do not match the root structure in the latest version.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Local storage not working
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues
|
bug
|
low
|
Critical
|
2,685,339,707 |
rust
|
Tracking issue for release notes of #133349: Stabilize the 2024 edition
|
This issue tracks the release notes text for #133349.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Language
- [The 2024 Edition is now stable.](https://github.com/rust-lang/rust/pull/133349)
See [the edition guide](https://doc.rust-lang.org/nightly/edition-guide/rust-2024/index.html) for more details.
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
### Rust 2024
We are excited to announce that the Rust 2024 Edition is now stable!
Editions are a mechanism for opt-in changes that may otherwise pose a backwards compatibility risk. See [the edition guide](https://doc.rust-lang.org/edition-guide/editions/index.html) for details on how this is achieved, and detailed instructions on how to migrate.
This is the largest edition we have released. The [edition guide](https://doc.rust-lang.org/edition-guide/rust-2024/index.html) contains detailed information about each change, but as a summary, here are all the changes:
- Language
- [RPIT lifetime capture rules](https://doc.rust-lang.org/edition-guide/rust-2024/rpit-lifetime-capture.html) β Changes the default impl trait `use<..>` capturing.
- [`if let` temporary scope](https://doc.rust-lang.org/edition-guide/rust-2024/temporary-if-let-scope.html) β Changes the scope of temporaries for `if let` expressions.
- [Tail expression temporary scope](https://doc.rust-lang.org/edition-guide/rust-2024/temporary-tail-expr-scope.html) β Changes the scope of temporaries for the tail expression in a block.
- [Match ergonomics reservations](https://doc.rust-lang.org/edition-guide/rust-2024/match-ergonomics.html) β New restrictions on pattern binding modes.
- [Unsafe `extern` blocks](https://doc.rust-lang.org/edition-guide/rust-2024/unsafe-extern.html) β `extern` blocks now require the `unsafe` keyword.
- [Unsafe attributes](https://doc.rust-lang.org/edition-guide/rust-2024/unsafe-attributes.html) β The `export_name`, `link_section`, and `no_mangle` attributes must now be marked as `unsafe`.
- [`unsafe_op_in_unsafe_fn` warning](https://doc.rust-lang.org/edition-guide/rust-2024/unsafe-op-in-unsafe-fn.html) β The [`unsafe_op_in_unsafe_fn`](https://doc.rust-lang.org/rustc/lints/listing/allowed-by-default.html#unsafe-op-in-unsafe-fn) lint now warns by default, requiring explicit `unsafe {}` blocks in `unsafe` functions.
- [Disallow references to `static mut`](https://doc.rust-lang.org/edition-guide/rust-2024/static-mut-references.html) β References to `static mut` items now generate a deny-by-default error.
- [Never type fallback change](https://doc.rust-lang.org/edition-guide/rust-2024/never-type-fallback.html) β Changes to how the never type `!` coerces, and changes the [`never_type_fallback_flowing_into_unsafe`](https://doc.rust-lang.org/rustc/lints/listing/warn-by-default.html#never-type-fallback-flowing-into-unsafe) lint level to "deny".
- [Macro fragment specifiers](https://doc.rust-lang.org/edition-guide/rust-2024/macro-fragment-specifiers.html) β The `expr` macro fragment specifier in `macro_rules!` macros now also matches `const` and `_` expressions.
- [Missing macro fragment specifiers](https://doc.rust-lang.org/edition-guide/rust-2024/missing-macro-fragment-specifiers.html) β The [`missing_fragment_specifier`](https://doc.rust-lang.org/rustc/lints/listing/deny-by-default.html#missing-fragment-specifier) lint is now a hard error, rejecting macro meta variables without a fragment specifier kind.
- [`gen` keyword](https://doc.rust-lang.org/edition-guide/rust-2024/gen-keyword.html) β Reserves the `gen` keyword in anticipation of adding generator blocks in the future.
- [Reserved syntax](https://doc.rust-lang.org/edition-guide/rust-2024/reserved-syntax.html) β Reserves `#"foo"#` style strings and `##` tokens in anticipation of how guarded string literals may change in the future.
- Standard library
- [Changes to the prelude](https://doc.rust-lang.org/edition-guide/rust-2024/prelude.html) β Adds `Future` and `IntoFuture` to the prelude.
- [Add `IntoIterator` for `Box<[T]>`](https://doc.rust-lang.org/edition-guide/rust-2024/intoiterator-box-slice.html) β Changes how iterators work with boxed slices.
- [Newly unsafe functions](https://doc.rust-lang.org/edition-guide/rust-2024/newly-unsafe-functions.html) β `std::env::set_var`, `std::env::remove_var`, and `std::os::unix::process::CommandExt::before_exec` are now unsafe functions.
- Cargo
- [Cargo: Rust-version aware resolver](https://doc.rust-lang.org/edition-guide/rust-2024/cargo-resolver.html) β Changes the default dependency resolver behavior to consider the `rust-version` field.
- [Cargo: Table and key name consistency](https://doc.rust-lang.org/edition-guide/rust-2024/cargo-table-key-names.html) β Removes some outdated `Cargo.toml` keys.
- [Cargo: Reject unused inherited default-features](https://doc.rust-lang.org/edition-guide/rust-2024/cargo-inherited-default-features.html) β Changes how `default-features = false` works with inherited workspace dependencies.
- Rustdoc
- [Rustdoc combined tests](https://doc.rust-lang.org/edition-guide/rust-2024/rustdoc-doctests.html) β Doctests are now combined into a single executable, significantly improving performance.
- [Rustdoc nested `include!` change](https://doc.rust-lang.org/edition-guide/rust-2024/rustdoc-nested-includes.html) β Changes to the relative path behavior of nested `include!` files.
- Rustfmt
- [Rustfmt: Style edition](https://doc.rust-lang.org/edition-guide/rust-2024/rustfmt-style-edition.html) β Introduces the concept of "style editions", which allow you to independently control the formatting edition from the Rust edition.
- [Rustfmt: Formatting fixes](https://doc.rust-lang.org/edition-guide/rust-2024/rustfmt-formatting-fixes.html) β A large number of fixes to formatting various situations.
- [Rustfmt: Combine all delimited exprs as last argument](https://doc.rust-lang.org/edition-guide/rust-2024/rustfmt-overflow-delimited-expr.html) β Changes to multi-line expressions as the last argument.
- [Rustfmt: Raw identifier sorting](https://doc.rust-lang.org/edition-guide/rust-2024/rustfmt-raw-identifier-sorting.html) β Changes to how `r#foo` identifiers are sorted.
- [Rustfmt: Version sorting](https://doc.rust-lang.org/edition-guide/rust-2024/rustfmt-version-sorting.html) β Changes to how identifiers that contain integers are sorted.
#### Migrating to 2024
The guide includes migration instructions for all new features, and in general
[transitioning an existing project to a new edition](https://doc.rust-lang.org/edition-guide/editions/transitioning-an-existing-project-to-a-new-edition.html).
In many cases `cargo fix` can automate the necessary changes. You may even
find that no changes in your code are needed at all for 2024!
*Many* people came together to create this edition. We'd like to thank them all for their hard work!
<!-- consider a similar thanks thread like https://github.com/rust-lang/rust/issues/88623? -->
````
cc @ehuss, @traviscross -- origin issue/PR authors and assignees for starting to draft text
|
T-lang,relnotes,A-edition-2024,relnotes-tracking-issue
|
low
|
Critical
|
2,685,391,639 |
next.js
|
Parallel routes and route groups conflicting
|
### Link to the code that reproduces this issue
https://github.com/chris-orgorg/parallel-routes
### To Reproduce
1. yarn dev
2. visit http://localhost:3000/subfolder/mypage
### Current vs. Expected behavior
Current:
No default component was found for a parallel route rendered on this page. Falling back to nearest NotFound boundary.
Learn more: https://nextjs.org/docs/app/building-your-application/routing/parallel-routes#defaultjs
Missing slots: @breadcrumb Error Component Stack
Expected:
/ layout
/ subfolder layout
/ mypage page
/ subfolder/@breadcrumb page
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:00 PDT 2024; root:xnu-10063.141.2~1/RELEASE_X86_64
Available memory (MB): 40960
Available CPU cores: 16
Binaries:
Node: 22.0.0
npm: 10.5.1
Yarn: 1.22.22
pnpm: 9.12.2
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: 15.0.3
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
It seems that having a route group (in this case `(app)`) at the same level as another subfolder, with both using the same name for a parallel route (in this case `@breadcrumb`), causes incorrect resolution of the parallel routes. Maybe the parallel routes are being looked up by path (but route groups don't show up in the path, so it's looking in the wrong place).
I would expect the parallel routes to resolve or search by actual folder structure.
Changing the parallel routes to different names will fix the issue.
|
bug,Parallel & Intercepting Routes
|
low
|
Critical
|
2,685,469,043 |
pytorch
|
What is "recompilation profiler" in doc? (Seems to have a dangling link)
|
### The doc issue
https://pytorch.org/docs/stable/torch.compiler_faq.html says:

But clicking on it leads nowhere. I would appreciate knowing how to debug this excessive recompilation issue.
### Suggest a potential alternative/fix
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
|
triaged,oncall: pt2,module: dynamo
|
low
|
Critical
|
2,685,497,685 |
PowerToys
|
The drop shadow in the PowerToys Run search box is missing
|
### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
none
### Expected Behavior
The drop shadow in the PowerToys Run search box is missing.
I think there used to be a drop shadow.
It is better to have a drop shadow because without one the search box is very hard to see.
At the very least, I would like to be able to choose whether the drop shadow is on or off.
If you're going to copy macOS features, copy the UI and design too.
Apple doesn't use drop shadows for no reason.
### Actual Behavior
No drop shadow in search box.
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Minor
|
2,685,545,883 |
go
|
x/net/http2: panic: runtime error: comparing uncomparable type
|
### Go version
go version go1.23.3 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='amd64'
GOBIN=''
```
### What did you do?
Recently, I fixed a potential issue with the etcd client. During our [review and discussion](https://github.com/etcd-io/etcd/pull/18893#issuecomment-2486730295), we noticed that there are similar scenarios in the Go standard library. I conducted some tests and got the same results: https://go.dev/play/p/-1W0mi3EMQS
My perspective on this is [here](https://github.com/etcd-io/etcd/pull/18893#issuecomment-2491179026). If this is confirmed to be a bug, it might be necessary to perform a comprehensive review of similar usages in the standard library.
### What did you see happen?
```
panic: runtime error: comparing uncomparable type main.uncomparableCtx
goroutine 1 [running]:
golang.org/x/net/http2.shouldRetryDial(0xc0000262c0, 0x7724a0?)
/tmp/gopath4185787864/pkg/mod/golang.org/x/[email protected]/http2/client_conn_pool.go:297 +0x5d
golang.org/x/net/http2.(*clientConnPool).getClientConn(0xc000016e40, 0xc00012c280, {0xc000012170, 0xe}, 0x1)
/tmp/gopath4185787864/pkg/mod/golang.org/x/[email protected]/http2/client_conn_pool.go:98 +0x22a
golang.org/x/net/http2.(*clientConnPool).GetClientConn(0x6f88a6?, 0x5?, {0xc000012170?, 0xe?})
/tmp/gopath4185787864/pkg/mod/golang.org/x/[email protected]/http2/client_conn_pool.go:55 +0x1d
golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc000108f00, 0xc00012c280, {0x10?, 0x0?})
/tmp/gopath4185787864/pkg/mod/golang.org/x/[email protected]/http2/transport.go:621 +0x1a2
golang.org/x/net/http2.(*Transport).RoundTrip(0x10df80?, 0x76fba0?)
/tmp/gopath4185787864/pkg/mod/golang.org/x/[email protected]/http2/transport.go:579 +0x17
net/http.send(0xc00012c280, {0x76fba0, 0xc000108f00}, {0xc000104c01?, 0x411d6b?, 0x0?})
/usr/local/go-faketime/src/net/http/client.go:259 +0x5e4
net/http.(*Client).send(0xc000016d80, 0xc00012c280, {0x6f88bc?, 0x6?, 0x0?})
/usr/local/go-faketime/src/net/http/client.go:180 +0x98
net/http.(*Client).do(0xc000016d80, 0xc00012c280)
/usr/local/go-faketime/src/net/http/client.go:725 +0x8bc
net/http.(*Client).Do(...)
/usr/local/go-faketime/src/net/http/client.go:590
main.main()
/tmp/sandbox806163367/prog.go:35 +0x146
```
### What did you expect to see?
Never panicking
|
NeedsInvestigation
|
low
|
Critical
|
2,685,624,109 |
neovim
|
highlights: :Inspect does not show matchadd() info
|
### Problem
Imagine a plain text buffer where many words are highlighted by `:call matchadd(<group>, <pattern>)`; it is hard to get the group name of the character under the cursor.
`:call hlID()` does not accept cursor position args; it takes a highlight group name as its arg and returns a highlight ID. `:call synID()` accepts cursor position args, but it is for syntax highlights, not match highlights. Since the buffer is plain text, `:Inspect` in Neovim does not work either and gives `no items found at position x,y in buffer i`.
### Expected behavior
A function (e.g. `matID()`) for match highlights, like `synID()` for syntax highlights.
|
enhancement,highlight
|
low
|
Minor
|
2,685,687,435 |
PowerToys
|
FancyZone layers
|
### Description of the new feature / enhancement
More than one layer of FancyZones. E.g., Shift snaps to 4 columns, while Shift+Ctrl snaps to 2 columns, with both layouts assigned to the same monitor.
### Scenario when this would be used?
With larger screens I find myself wanting more than one set of FancyZones per monitor. The Windows Snap feature is just OK for multi-monitor setups; I don't want to have to dance on the line between monitors just to snap left/right. FancyZones solves that, but I still want more options :)
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,685,692,800 |
vscode
|
Add node as npm script runner
|
<!-- β οΈβ οΈ Do Not Delete This! feature_request_template β οΈβ οΈ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
With Node.js 22 recently promoted to LTS, `node --run` should be considered stable. Can we add it as an npm script runner?
https://nodejs.org/docs/latest-v22.x/api/cli.html#--run
|
feature-request,good first issue,npm
|
low
|
Minor
|
2,685,699,304 |
godot
|
[3.x] body_entered() with Bullet physics
|
### Tested versions
Tested with Godot 3.6 final
### System information
Windows 11
### Issue description
When using Bullet physics, body_entered() does not catch every collision (I tried increasing the physics FPS, but got the same result).
GodotPhysics works correctly.
### Steps to reproduce
A small test with a ball and a couple of objects that prints --HIT-- when a collision happens (with Bullet physics, the first hit is detected but the second is not).
### Minimal reproduction project (MRP)
[_PhysicsContactTest_.zip](https://github.com/user-attachments/files/17879122/_PhysicsContactTest_.zip)
|
bug,topic:physics
|
low
|
Minor
|
2,685,701,083 |
node
|
The `http.Server` adds a `Transfer-Encoding: chunked` header when the response has no body
|
### Version
22 (but maybe every version)
### Platform
```text
Microsoft Windows NT 10.0.22631.0 x64 (but maybe not platform-specific)
```
### Subsystem
node:http
### What steps will reproduce the bug?
The `Transfer-Encoding: chunked` header is always added when there is no response body, and there seems to be no way to remove it.
However, since this header is not added when the request method is HEAD, the response headers differ between GET and HEAD requests, which is against the HTTP specification.
Example server code:
``` javascript
import { createServer } from 'node:http';
import { finished } from 'node:stream/promises';
createServer(function (req, res) {
finished(req.resume()).then(() => {
res.writeHead(200, { 'Cache-Control': 'no-store' });
res.end(); // No body
}).catch((err) => {
this.emit('error', err);
})
}).listen(8000, '0.0.0.0');
```
GET:
```
$ curl -i http://localhost:8000/
HTTP/1.1 200 OK
Cache-Control: no-store
Date: Sat, 23 Nov 2024 09:01:22 GMT
Connection: keep-alive
Keep-Alive: timeout=5
Transfer-Encoding: chunked
```
HEAD:
```
$ curl --head -i http://localhost:8000/
HTTP/1.1 200 OK
Cache-Control: no-store
Date: Sat, 23 Nov 2024 09:01:17 GMT
Connection: keep-alive
Keep-Alive: timeout=5
```
A monkey patching solution I found is to always add the `Content-Length: 0` header.
However, I'm not sure if this is really the intended behavior. I think it should be possible to send a response without adding a `Content-Length: 0` header when there is no body.
Or, if it's intended behavior, it should be documented.
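For reference, a minimal sketch (not from the original report) of the workaround described above: explicitly sending `Content-Length: 0` so the chunked header is not added. It is written in TypeScript and drains the request with an `'end'` listener instead of `stream/promises`, purely for brevity:

```ts
import { createServer } from 'node:http';

// Workaround sketch: an explicit Content-Length: 0 keeps the server from
// falling back to Transfer-Encoding: chunked for an empty body.
createServer((req, res) => {
  req.resume(); // drain the request body
  req.on('end', () => {
    res.writeHead(200, { 'Cache-Control': 'no-store', 'Content-Length': '0' });
    res.end(); // no body
  });
}).listen(8000, '0.0.0.0');
```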
### How often does it reproduce? Is there a required condition?
Always.
### What is the expected behavior? Why is that the expected behavior?
I would guess that the generally expected behavior is that the `Transfer-Encoding: chunked` header is NOT added when there is no response body.
### What do you see instead?
.
### Additional information
Perhaps this issue https://github.com/denoland/deno/issues/20063 may be related.
|
http
|
low
|
Critical
|
2,685,765,951 |
PowerToys
|
EXTREME KEYBOARD LAG
|
### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update, GitHub
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
Set auto start at boot, restart windows
### Expected Behavior
Normal typing
### Actual Behavior
For almost 30 seconds after boot, the keyboard has a 1-2 second delay between pressing a key and the character actually being typed.
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Major
|
2,685,783,061 |
langchain
|
Not using Tools in Langchain_Ollama causes: 'NoneType' object is not iterable
|
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_ollama import ChatOllama
llm = ChatOllama(
model="llama3.1",
temperature=0,
disable_streaming=True
)
messages = [
(
"system",
"You are a helpful assistant that translates English to French. Translate the user sentence.",
),
("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
print(ai_msg)
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/user1/Repo/ollama-cloud/use4.py", line 16, in <module>
ai_msg = llm.invoke(messages)
File "/home/user1/Repo/ollama-cloud/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/home/user1/Repo/ollama-cloud/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/home/user1/Repo/ollama-cloud/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/home/user1/Repo/ollama-cloud/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/home/user1/Repo/ollama-cloud/venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
result = self._generate(
File "/home/user1/Repo/ollama-cloud/venv/lib/python3.10/site-packages/langchain_ollama/chat_models.py", line 648, in _generate
final_chunk = self._chat_stream_with_aggregation(
File "/home/user1/Repo/ollama-cloud/venv/lib/python3.10/site-packages/langchain_ollama/chat_models.py", line 560, in _chat_stream_with_aggregation
tool_calls=_get_tool_calls_from_response(stream_resp),
File "/home/user1/Repo/ollama-cloud/venv/lib/python3.10/site-packages/langchain_ollama/chat_models.py", line 71, in _get_tool_calls_from_response
for tc in response["message"]["tool_calls"]:
TypeError: 'NoneType' object is not iterable
### Description
Trying to simply get a chat response from ChatOllama.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #49~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 6 17:42:15 UTC 2
> Python Version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.20
> langchain: 0.3.7
> langchain_community: 0.3.7
> langsmith: 0.1.144
> langchain_ollama: 0.2.0
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.7
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> ollama: 0.4.0
> orjson: 3.10.11
> packaging: 24.2
> pydantic: 2.10.1
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> typing-extensions: 4.12.2
|
investigate
|
low
|
Critical
|
2,685,785,938 |
godot
|
Dropdown bug
|
### Tested versions
Reproductible in Godot v4.3 and later.
### System information
Godot v4.4.dev4.mono - Windows 10.0.19045 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 32.0.15.6614) - AMD Ryzen 7 5800X 8-Core Processor (16 threads)
### Issue description
When I click on a dropdown arrow in the inspector to open it and then click the arrow again to close it, sometimes the dropdown box stays open but all the information inside disappears. The dropdown box will close if I click the arrow a third time.
Here is a video that showcases the issue:
https://github.com/user-attachments/assets/0fde6689-e48d-4e28-8276-7b453c6dbc20
### Steps to reproduce
You can reproduce what is shown in the video.
### Minimal reproduction project (MRP)
No project needed.
|
bug,topic:gui
|
low
|
Critical
|
2,685,818,645 |
godot
|
Window which show project loading state getting pseudolocalized
|
### Tested versions
Reproducible in: v4.4.dev5.official [9e6098432]
### System information
Godot v4.4.dev5 - Fedora Linux 41.20241122.0 (Silverblue) on Wayland - X11 display driver, Single-window, 1 monitor - OpenGL 3 (Compatibility) - AMD Radeon RX 570 Series (radeonsi, polaris10, LLVM 19.1.0, DRM 3.59, 6.11.8-300.fc41.x86_64) - Intel(R) Core(TM) i3-10100F CPU @ 3.60GHz (8 threads)
### Issue description
This seems to be the same issue as #97853.
When the project setting `internationalization/pseudolocalization/use_pseudolocalization` is true, the window that shows the resource loading state during project loading gets pseudolocalized.

### Steps to reproduce
1. Set project setting `internationalization/pseudolocalization/use_pseudolocalization` to true
2. Add some assets to project, like images.
3. Return to project list menu.
4. Reopen project.
5. Observe that strings are getting pseudolocalized.
### Minimal reproduction project (MRP)
Any project will do, but adding more and heavier assets will keep the window visible longer, giving more time to observe.
|
bug,topic:editor
|
low
|
Major
|
2,685,908,812 |
rust
|
Tracking issue for release notes of #133374: show abi_unsupported_vector_types lint in future breakage reports
|
This issue tracks the release notes text for #133374.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Compatibility notes
- [Show `abi_unsupported_vector_types` lint in future breakage reports](https://github.com/rust-lang/rust/pull/133374)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @RalfJung, @jieyouxu -- origin issue/PR authors and assignees for starting to draft text
|
A-lints,T-lang,T-compiler,relnotes,A-SIMD,A-ABI,A-target-feature,relnotes-tracking-issue,L-abi_unsupported_vector_types
|
low
|
Minor
|
2,685,926,172 |
ollama
|
could anyone help me? something is not work. use a special gpu
|
### What is the issue?
When I follow the instructions to install Ollama from source code, I cannot finish gen.linux.sh.
Here is the error information:
```
CMake Error at ggml/src/CMakeLists.txt:440 (find_package):
By not providing "Findhip.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "hip", but
CMake did not find one.
Could not find a package configuration file provided by "hip" with any of
the following names:
hipConfig.cmake
hip-config.cmake
Add the installation prefix of "hip" to CMAKE_PREFIX_PATH or set "hip_DIR"
to a directory containing one of the above files. If "hip" provides a
separate development package or SDK, be sure it has been installed.
```
I tried to modify it as described in https://github.com/ROCm/HIP/tree/master/samples/2_Cookbook/12_cmake_hip_add_executable#including-findhip-cmake-module-in-the-project,
but the CMakeLists cannot be modified: when I restart gen.linux.sh, the CMakeLists.txt in ggml/src is not modified.
Could anyone help me?
### OS
Linux
### GPU
AMD, Other
### CPU
_No response_
### Ollama version
_No response_
|
bug
|
low
|
Critical
|
2,685,944,945 |
svelte
|
support abortcontroller / abortsignal to all event handler / life hooks
|
### Describe the problem
When leaving the current page in an SPA, I would like to clean things up myself; using `onDestroy` works great for such things.
But when I'm finished with a listener that I have added, I would also like to remove that event listener.
I think it's complicated to learn all the new non-standard event handlers that every framework tries to re-invent... ideally I would simply want a standard `EventTarget` that supports the `{ once, signal }` options as a third argument.
Being able to use an AbortController would help tremendously, even for fetching data that needs to be used on a particular page.
### Describe the proposed solution
```js
import { onDestroy } from 'svelte'
const ctrl = new AbortController()
const signal = ctrl.signal
signal.addEventListener('abort', () => {
  elm.remove()
  elm = null // remove an element since we don't need it anymore
}, { once: true, signal })
something.addEventListener('keyup', evt => {
  if (evt.key === 'Escape') {
    ctrl.abort()
  }
}, { signal })
// unless this listener is removed it will keep a reference to some
// variable and prevent garbage collection, since the added listener can't be removed...
onDestroy(() => {
  elm.remove() // Error: can't call remove() of null
}, { signal }) // <-- add support for this
```
### Importance
would make my life easier
|
awaiting submitter
|
low
|
Critical
|
2,685,945,454 |
godot
|
The Copy button in documentation code blocks has a broken hitbox
|
### Tested versions
v4.4.dev.custom_build [0c45ace15]
### System information
Godot v4.4.dev (0c45ace15) - macOS 15.1.0 - Multi-window, 1 monitor - Metal (Forward+) - integrated Apple M1 Max (Apple7) - Apple M1 Max (10 threads)
### Issue description
It seems to be offset vertically which makes it harder to click
https://github.com/user-attachments/assets/4b67907d-168f-40ba-9e14-e59361b45770
### Steps to reproduce
N/A
### Minimal reproduction project (MRP)
N/A
|
bug,topic:editor,usability,topic:gui
|
low
|
Critical
|
2,686,002,649 |
angular
|
The menu items are invisible in Dark Mode specifically for V18 Angular dev page
|
### Describe the problem that you experienced
The menu items are not visible in dark mode; they only appear on mouse hover. In light mode, their visibility is also not satisfactory. This issue occurs across all browsers.
### Enter the URL of the topic with the problem
https://v18.angular.dev/
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem

### If this problem caused an exception or error, please paste it here
```true
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
```true
```
### Provide any additional information here in as much as detail as you can
```true
```
|
area: docs-infra
|
low
|
Critical
|
2,686,012,129 |
tauri
|
[bug] Fail to sign protoc sidecar with Azure Trusted Signing
|
> [!IMPORTANT]
> The issue was that the sidecar binary was read-only but Tauri silenced the permission error. See https://github.com/tauri-apps/tauri/issues/11778#issuecomment-2495504198 for more info.
### Describe the bug
I'm trying to get Azure Trusted Signing working for my app [github.com/mountain-loop/yaak](https://github.com/mountain-loop/yaak). It signs the main `.exe` correctly, and correctly skips the already-signed NodeJS sidecar. However, it seems to fail on the unsigned `protoc` sidecar.
Here is the output from https://github.com/mountain-loop/yaak/actions/runs/11976760384/job/33393142512
```shell
Finished `release` profile [optimized] target(s) in 11m 48s
warning: the following packages contain code that will be rejected by a future version of Rust: iso8601 v0.3.0, nom v4.2.3
note: to see what the problems were, use the option `--future-incompat-report`, or run `cargo report future-incompatibilities --id 1`
Built application at: D:\a\yaak\yaak\src-tauri\target\release\yaak-app.exe
Signing D:\a\yaak\yaak\src-tauri\target\release\yaak-app.exe
Signing D:\a\yaak\yaak\src-tauri\target\release\yaak-app.exe with a custom signing command
Info "[\r\n {\r\n \"cloudName\": \"AzureCloud\",\r\n \"homeTenantId\": \"***\",\r\n \"id\": \"b045e283-89f9-42ff-bd9a-95f6e7a9b035\",\r\n \"isDefault\": true,\r\n \"managedByTenants\": [],\r\n \"name\": \"Yaak Subscription\",\r\n \"state\": \"Enabled\",\r\n \"tenantId\": \"***\",\r\n \"user\": {\r\n \"name\": \"***\",\r\n \"type\": \"servicePrincipal\"\r\n }\r\n }\r\n]\r\n\r\nTrusted Signing\r\n\r\nVersion: 1.0.60\r\n\r\n\"Metadata\": {\r\n \"Endpoint\": \"[https://eus.codesigning.azure.net/\](https://eus.codesigning.azure.net//)",\r\n \"CodeSigningAccountName\": \"Yaak\",\r\n \"CertificateProfileName\": \"yaakapp\",\r\n \"ExcludeCredentials\": []\r\n}\r\n\r\nSubmitting digest for signing...\r\n\r\nOperationId 1eb5fd2a-01d6-45e8-b43a-bacda9cd5f54: InProgress\r\n\r\nSigning completed with status 'Succeeded' in 1.6176595s\r\n\r\nSuccessfully signed: D:\\a\\yaak\\yaak\\src-tauri\\target\\release\\yaak-app.exe\r\r\n\r\nNumber of files successfully Signed: 1\r\r\nNumber of warnings: 0\r\r\nNumber of errors: 0\r\r\n"
File: vendored\node\yaaknode-x86_64-pc-windows-msvc.exe
Index Algorithm Timestamp
========================================
0 sha256 RFC3161
Successfully verified: vendored\node\yaaknode-x86_64-pc-windows-msvc.exe
Info sidecar at "vendored\node\yaaknode-x86_64-pc-windows-msvc.exe" already signed. Skipping...
File: vendored\protoc\yaakprotoc-x86_64-pc-windows-msvc.exe
Index Algorithm Timestamp
========================================
SignTool Error: No signature found.
Number of errors: 1
Signing vendored\protoc\yaakprotoc-x86_64-pc-windows-msvc.exe
Signing vendored\protoc\yaakprotoc-x86_64-pc-windows-msvc.exe with a custom signing command
failed to bundle project: `failed to run trusted-signing-cli`
Error failed to bundle project: `failed to run trusted-signing-cli`
Error: Command failed with exit code 1: npm run tauri build
```
### Reproduction
As seen in the [`tauri.conf.json#L82`](https://github.com/mountain-loop/yaak/blob/38e0f5ede7d52f4d9287d1abf91c3ba92275fc7f/src-tauri/tauri.conf.json#L82), the sign command I'm using is:
```json
"windows": {
"signCommand": "trusted-signing-cli -e https://eus.codesigning.azure.net/ -a Yaak -c yaakapp %1"
}
```
To debug this, I created a new workflow to simply run this command on the `protoc` binary (committed it directly to the repo for simplicity), and it succeeded: https://github.com/mountain-loop/yaak/actions/runs/11975020946/job/33387429516
```yaml
- name: Sign files
run: trusted-signing-cli -e https://eus.codesigning.azure.net/ -a Yaak -c yaakapp yaakprotoc-x86_64-pc-windows-msvc.exe
env:
AZURE_CLIENT_ID: ${{ matrix.platform == 'windows-latest' && secrets.AZURE_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ matrix.platform == 'windows-latest' && secrets.AZURE_CLIENT_SECRET }}
AZURE_TENANT_ID: ${{ matrix.platform == 'windows-latest' && secrets.AZURE_TENANT_ID }}
```
### Expected behavior
`protoc` binary should sign successfully during `tauri-action` build
### Full `tauri info` output
```text
[β] Environment
- OS: Mac OS 15.1.1 arm64 (X64)
β Xcode Command Line Tools: installed
β rustc: 1.82.0 (f6e511eec 2024-10-15)
β cargo: 1.82.0 (8f40fc59f 2024-08-21)
β rustup: 1.27.1 (54dd3d00f 2024-04-24)
β Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.8.0
- npm: 10.8.2
- deno: deno 1.44.0
[-] Packages
- tauri π¦: 2.1.1
- tauri-build π¦: 2.0.3
- wry π¦: 0.47.0
- tao π¦: 0.30.8
- @tauri-apps/api ξ: 2.0.2 (outdated, latest: 2.1.1)
- @tauri-apps/cli ξ: 2.1.0
[-] Plugins
- tauri-plugin-dialog π¦: 2.0.3
- @tauri-apps/plugin-dialog ξ: 2.0.0 (outdated, latest: 2.0.1)
- tauri-plugin-updater π¦: 2.0.2
- @tauri-apps/plugin-updater ξ: not installed!
- tauri-plugin-log π¦: 2.0.1
- @tauri-apps/plugin-log ξ: 2.0.0
- tauri-plugin-window-state π¦: 2.0.1
- @tauri-apps/plugin-window-state ξ: not installed!
- tauri-plugin-os π¦: 2.0.1
- @tauri-apps/plugin-os ξ: 2.0.0
- tauri-plugin-fs π¦: 2.0.3
- @tauri-apps/plugin-fs ξ: 2.0.0 (outdated, latest: 2.0.2)
- tauri-plugin-shell π¦: 2.0.2
- @tauri-apps/plugin-shell ξ: 2.0.0 (outdated, latest: 2.0.1)
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
```
### Stack trace
_No response_
### Additional context
_No response_
|
type: bug,status: needs triage
|
low
|
Critical
|
2,686,033,460 |
TypeScript
|
Extend documentation of String.indexOf to mention what happens when substring is not found
|
### Search Terms
function documentation
jsdoc
NOT-FOUND
### Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### Suggestion
The documentation of `String.indexOf` should mention that -1 is returned when the substring is not found.
### Motivating Example
The documentation for indexOf on arrays already mentions that the method returns -1 when the value is not found:

But for strings, this information was missing thus far:

Of course the user would have been able to puzzle this together by seeing that the return type is just number, not something like number | undefined, thus allowing them to figure out that the sentinel value probably must be -1. Nevertheless, having to stop and think about this is unnecessary friction for people who don't remember this fact by heart, for example people who jump between programming languages a lot.
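For concreteness, a small TypeScript snippet (not part of the original report) showing the behaviour the doc comment could spell out:

```ts
const log = "line one\nline two";

// When the substring is found, indexOf returns its zero-based position:
console.log(log.indexOf("two")); // 14

// When it is not found, the sentinel value -1 is returned; this is the fact
// the current doc comment for String.indexOf does not mention:
console.log(log.indexOf("three")); // -1
```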
### Use Cases
1. What do you want to use this for?
To get more useful information from the language server while writing TS, which reduces interruptions / context switching, allowing the user to be more productive.
2. What shortcomings exist with current approaches?
The user has to stop and think about the not found case. Users coming from other languages might expect the `indexOf` function to maybe return `null` or `undefined` or throw an exception when the substring is not found.
3. What workarounds are you using in the meantime?
In cases where I've personally been faced with this issue in the past, I quickly used a js console to check the behaviour.
|
Suggestion,Experience Enhancement
|
low
|
Minor
|
2,686,033,541 |
rust
|
Invalid method call removal suggested when collecting `&str`s into `String`s
|
### Code
```Rust
fn main() {
String::new().lines().collect::<Vec<String>>();
}
```
### Current output
```Shell
error[E0277]: a value of type `Vec<String>` cannot be built from an iterator over elements of type `&str`
--> src/main.rs:2:37
|
2 | String::new().lines().collect::<Vec<String>>();
| ------- ^^^^^^^^^^^ value of type `Vec<String>` cannot be built from `std::iter::Iterator<Item=&str>`
| |
| required by a bound introduced by this call
|
= help: the trait `FromIterator<&str>` is not implemented for `Vec<String>`
but trait `FromIterator<String>` is implemented for it
= help: for that trait implementation, expected `String`, found `&str`
note: the method call chain might not have had the expected associated types
--> src/main.rs:2:19
|
2 | String::new().lines().collect::<Vec<String>>();
| ------------- ^^^^^^^ `Iterator::Item` is `&str` here
| |
| this expression has type `String`
note: required by a bound in `collect`
--> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:1967:19
|
1967 | fn collect<B: FromIterator<Self::Item>>(self) -> B
| ^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `Iterator::collect`
help: consider removing this method call, as the receiver has type `String` and `String: FromIterator<&str>` trivially holds
|
2 - String::new().lines().collect::<Vec<String>>();
2 + String::new().collect::<Vec<String>>();
|
```
### Desired output
```Shell
error[E0277]: a value of type `Vec<String>` cannot be built from an iterator over elements of type `&str`
--> src/main.rs:2:37
|
2 | String::new().lines().collect::<Vec<String>>();
| ------- ^^^^^^^^^^^ value of type `Vec<String>` cannot be built from `std::iter::Iterator<Item=&str>`
| |
| required by a bound introduced by this call
|
= help: the trait `FromIterator<&str>` is not implemented for `Vec<String>`
but trait `FromIterator<String>` is implemented for it
= help: for that trait implementation, expected `String`, found `&str`
note: the method call chain might not have had the expected associated types
--> src/main.rs:2:19
|
2 | String::new().lines().collect::<Vec<String>>();
| ------------- ^^^^^^^ `Iterator::Item` is `&str` here
| |
| this expression has type `String`
note: required by a bound in `collect`
--> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:1967:19
|
1967 | fn collect<B: FromIterator<Self::Item>>(self) -> B
| ^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `Iterator::collect`
help: consider changing this type to `&str`
|
2 - String::new().lines().collect::<Vec<String>>();
2 + String::new().lines().collect::<Vec<&str>>();
|
help: consider converting `&str` to `String` with `ToString::to_string`
```
### Rationale and extra context
Current suggestion produces invalid code and changes the semantics of the program.
### Other cases
```Rust
```
### Rust Version
```Shell
1.84.0-nightly (2024-11-21 b19329a37cedf2027517)
```
### Anything else?
_No response_
|
A-diagnostics,T-compiler
|
low
|
Critical
|
2,686,039,804 |
ui
|
[bug]: Text selection handles on iOS are not responsive for Input components in Dialog
|
### Describe the bug
When Text Input is used in modal Dialog, it is impossible to drag the text selection handles (left & right side, to extend / shorten the selection) on iOS.
The bug can be tested on the examples on https://ui.shadcn.com/docs/components/dialog - the "Custom close button" example contains a Share Link input where it's possible to select text by double tapping the word, but the selection cannot be customised by dragging.
I guess it has something to do with the pointer-events that the dialog manipulates, the z-index of the overlay, and the fixed dialog positioning, but unfortunately I'm not a skilled enough FE dev to find out.
### Affected component/components
Dialog
### How to reproduce
1. Go to https://ui.shadcn.com/docs/components/dialog
2. Open the "Custom close button" example
3. Select text by double tapping the word,
4. Try to change the selection range by dragging the handles - the selection is unresponsive
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
iOS 18.1
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues
|
bug
|
low
|
Critical
|
2,686,040,544 |
godot
|
kebab-case names are not translated properly into the other naming scheme options
|
### Tested versions
Reproducible in all Godot versions upwards of 897e2d9, which is part of v4.3.stable, where #78119 was merged and enabled kebab-case script names.
### System information
Not relevant. It happens on Windows, macOS and Linux
### Issue description
Since Godot version 897e2d9, it is possible to specify kebab-case as a standard for script names, but whenever those names have to be translated into other naming schemes like PascalCase, snake_case or camelCase, the translation fails and produces invalid names.
One example is the creation of human-readable names for the "Template" dropdown field in the "Attach Node Script" popup. Those are generated by capitalizing the template script's file name if the template doesn't explicitly specify the name in a "meta-name" comment. This works for all cases except kebab-case, where the first word of the generated name is capitalized but the rest stays kebab-case.
 
The second example is the translation of script file names into a script templates \_CLASS\_ variable. Here, the generated \_CLASS\_ name is correct for all script naming schemes except for kebab-case, where the generated \_CLASS\_ name is capitalized on the first word, and the rest is treated as snake_case
 
The generated class_names for the scripts in the screenshot above are as follows:
| Script name | class_name |
| - | - |
| TestPascalCase.gd | TestPascalCase |
| test_snake_case.gd |TestSnakeCase |
| testCamelCase.gd | TestCamelCase |
| test-kebab-case.gd | Test_kebab_case |
### Steps to reproduce
To check the first case with the human readable template names:
- open the attached project in any Godot version upwards of 897e2d9
- select the "Root" node in the SceneTree, attach a new script and open the "Templates" dropdown in the "Attach Node Script" popup
- observe the human readable names of the project templates
The second case with wrongly generated \_CLASS\_ names is a bit more work. The simplest method is to give all nodes under the "Root" node a new script, select a template which includes a \_CLASS\_ variable (all of the four embedded project templates do so), and then save the script with the same name as the node. The generated \_CLASS\_ variable will be based on the scripts file name.
- open the attached project in any Godot version upwards of 897e2d9
- select the "TestPascalCase" node, create a new script, select any of the project templates, save the script as "TestPascalCase.gd"
- select the "test_snake_case" node, create a new script, select any of the project templates, save the script as "test_snake_case.gd"
- select the "testCamelCase" node, create a new script, select any of the project templates, save the script as "testCamelCase.gd"
- select the "test-kebab-case" node, create a new script, select any of the project templates, save the script as "test-kebab-case.gd"
- open all four of the generated scripts in the editor and check the generated class_name variable
### Minimal reproduction project (MRP)
[test.zip](https://github.com/user-attachments/files/17880000/test.zip)
|
bug,topic:editor
|
low
|
Minor
|
2,686,045,332 |
angular
|
add default value to resource and rxResource
|
### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
I'm experimenting with the new resource and rxResource APIs, and one thing that is a bit of a hassle is that they add undefined to the value type.
For example, I find myself often needing to do `resource.value() ?? []`, because previously I would set the signal value to an empty array until the value was loaded.
### Proposed solution
is to add initial value to the [ResourceOptions](https://angular.dev/api/core/ResourceOptions)
points to be discussed
1. if `hasValue()` should be true or false if it's the initial value
2. when `resource.reload()` is called if it should return to the default value or not
### Alternatives considered
Keep using `?? initialValue` at every call site (contrast with the sketch below).
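A rough sketch of the difference between the current workaround and the proposed option; the `defaultValue` name is hypothetical (not an existing Angular API), and the injection context a real component would provide is omitted for brevity:

```ts
import { resource } from '@angular/core';

interface Todo { id: number; title: string; }

// Hypothetical loader; a real app would call an HTTP endpoint here.
async function fetchTodos(): Promise<Todo[]> {
  return [{ id: 1, title: 'example' }];
}

// Today: value() is typed as Todo[] | undefined, so every call site needs a fallback.
const todos = resource({ loader: () => fetchTodos() });
const list: Todo[] = todos.value() ?? [];

// Proposed (the defaultValue option below is hypothetical):
// const todos = resource({ loader: () => fetchTodos(), defaultValue: [] as Todo[] });
// const list: Todo[] = todos.value(); // undefined no longer in the type
```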
|
area: core,core: reactivity,cross-cutting: signals
|
medium
|
Critical
|
2,686,046,384 |
pytorch
|
Enhance Memory Timeline Export with Detailed Category-wise Data
|
### The feature, motivation and pitch
Summary:
The `export_memory_timeline` function currently does not provide detailed information in the generated JSON file, which makes it difficult to analyze memory usage in a more granular way. This PR proposes a rewritten version of the function that includes more detailed memory timeline data, grouped by memory usage categories.
Key Changes:
1. Grouped Memory Data by Category:
- Memory usage data is now grouped by categories, and each category contains a list of timestamps and the corresponding memory usage in GB.
- The structure of the JSON output is updated to reflect these changes, allowing for better clarity and insight into memory usage per category over time.
2. Category Name Handling:
- The category names are correctly handled to avoid errors during JSON serialization. Category names are extracted from the `_CATEGORY_TO_COLORS` mapping and are ensured to be represented as strings in the final JSON output.
3. Updated JSON Structure:
- The exported JSON now contains the following fields:
- `device`: The name of the device (e.g., `cuda:0`).
- `max_memory_allocated`: The maximum memory allocated, in GB.
- `max_memory_reserved`: The maximum memory reserved, in GB.
- `category_memory_timeline`: A dictionary containing memory usage data for each category, with timestamps and memory usage in GB.
Example of Updated JSON Output:
```json
{
"device": "cuda:0",
"max_memory_allocated": 1.23,
"max_memory_reserved": 1.56,
"category_memory_timeline": {
"Category1": [
{"time": 0.0, "memory_GB": 0.1},
{"time": 1.0, "memory_GB": 0.2},
{"time": 2.0, "memory_GB": 0.15}
],
"Category2": [
{"time": 0.0, "memory_GB": 0.05},
{"time": 1.0, "memory_GB": 0.1},
{"time": 2.0, "memory_GB": 0.2}
]
}
}
```
Impact:
- The existing functionality remains unchanged. Users can still use the original memory timeline plotting and exporting features.
- The JSON export functionality now includes more detailed and organized memory timeline data for better analysis.
### Alternatives
def export_memory_timeline(
    self, path, device, figsize=(20, 12), title=None
) -> None:
    import numpy as np
    import json

    mt = self._coalesce_timeline(device)
    times, sizes = np.array(mt[0]), np.array(mt[1])
    if sizes.ndim == 1:
        stacked = np.cumsum(sizes) / 1024**3
    else:
        stacked = np.cumsum(sizes, axis=1) / 1024**3
    max_memory_allocated = torch.cuda.max_memory_allocated(device)
    max_memory_reserved = torch.cuda.max_memory_reserved(device)

    category_memory_timeline = {}
    names = ["Unknown" if i is None else i.name for i in _CATEGORY_TO_COLORS]
    num = 0
    for category, color in _CATEGORY_TO_COLORS.items():
        name = names[num]
        num += 1
        i = _CATEGORY_TO_INDEX.get(category)
        if i is None:
            continue
        memory_usage = stacked[:, i] if stacked.ndim > 1 else stacked
        category_memory_timeline[name] = [
            {"time": t, "memory_GB": mem} for t, mem in zip(times / 1e3, memory_usage)
        ]

    timeline_data = {
        "device": device,
        "max_memory_allocated": max_memory_allocated / 1024**3,  # GB
        "max_memory_reserved": max_memory_reserved / 1024**3,  # GB
        "category_memory_timeline": category_memory_timeline
    }

    with open(path, "w") as f:
        json.dump(timeline_data, f, indent=4)
### Additional context
No
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise
|
oncall: profiler
|
low
|
Critical
|
2,686,046,916 |
tailwindcss
|
[v4] Tailwind CLI high CPU on bad css
|
**What version of Tailwind CSS are you using?**
4.0.0-beta.2
**What build tool (or framework if it abstracts the build tool) are you using?**
Tailwind CLI.
**What version of Node.js are you using?**
v22.11.0
**What browser are you using?**
Doesn't matter.
**What operating system are you using?**
Linux.
**Reproduction URL**
Steps in `README.md`:
https://github.com/dotfrag/tailwind-issue-repro
**Describe your issue**
Repro steps in repo README.
With the following CSS:
```css
@import 'tailwindcss';
@theme {
--font-sans: "Inter";
}
```
The node process idles at around 0.x% CPU.
Then if you introduce bad css:
```css
@import 'tailwindcss';
@theme {
--font-sans: "Inter", theme('fontFamily.sans');
}
```
The node process CPU usage skyrockets to 20%, causing CPU temperatures of 70 °C+.
The CPU usage does **not** go down when removing the bad code.
|
v4
|
low
|
Critical
|
2,686,106,574 |
godot
|
Godot4.4-dev5: Editor Crash on Project Open - NVIDIA GPU Crash - `rendering_device_driver_vulkan.cpp:5325`
|
### Tested versions
Not Reproducible in Godot 4.3 Stable
100% Reproducible in Godot 4.4-dev5
### System information
Godot v4.4.dev5 - Windows 10.0.22631 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 Ti Laptop GPU (NVIDIA; 32.0.15.6614) - 12th Gen Intel(R) Core(TM) i9-12900HK (20 threads)
### Issue description
Attempted to migrate a functioning project from Godot 4.3 stable to 4.4-dev5:
After opening the project, the editor crashes before it loads the main scene file.
I have tried replacing the main scene file it loads with an empty one (by manually editing project.godot) and it still crashes.
I have deleted the .godot folder forcing a complete reimport of all assets, and updated graphics drivers to the latest version.
The only thing that changed compared to an earlier reproduction test was the breadcrumb ID in the error output.
This is the command line I used to get this output logged to a text file, since running in console mode by default gave no info and closed immediately on crash:
`Godot_v4.4-dev5_win64_console.exe --debug --accurate-breadcrumbs --verbose --log-file testlog.txt --editor --path C:\gunlab`
Relevant section at the end of the log:
```
ERROR: Printing last known breadcrumbs in reverse order (last executed first).
at: print_lost_device_info (drivers/vulkan/rendering_device_driver_vulkan.cpp:5325)
ERROR: Searching last breadcrumb. We've sent up to ID: 118
ERROR: Last breadcrumb ID found: 114
ERROR: Last known breadcrumb: BLIT_PASS
ERROR: Last known breadcrumb: UI_PASS
ERROR: Last known breadcrumb: BLIT_PASS
ERROR: Last known breadcrumb: UI_PASS
ERROR: Last known breadcrumb: BLIT_PASS
ERROR: Last known breadcrumb: UI_PASS
ERROR: Last known breadcrumb: BLIT_PASS
ERROR: Last known breadcrumb: UI_PASS
================================================================
CrashHandlerException: Program crashed with signal 11
Engine version: Godot Engine v4.4.dev5.official (9e6098432aac35bae42c9089a29ba2a80320d823)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] error(-1): no debug info in PE/COFF executable
[2] error(-1): no debug info in PE/COFF executable
[3] error(-1): no debug info in PE/COFF executable
[4] error(-1): no debug info in PE/COFF executable
[5] error(-1): no debug info in PE/COFF executable
[6] error(-1): no debug info in PE/COFF executable
[7] error(-1): no debug info in PE/COFF executable
[8] error(-1): no debug info in PE/COFF executable
[9] error(-1): no debug info in PE/COFF executable
[10] error(-1): no debug info in PE/COFF executable
[11] error(-1): no debug info in PE/COFF executable
[12] error(-1): no debug info in PE/COFF executable
[13] error(-1): no debug info in PE/COFF executable
[14] error(-1): no debug info in PE/COFF executable
[15] error(-1): no debug info in PE/COFF executable
[16] error(-1): no debug info in PE/COFF executable
[17] error(-1): no debug info in PE/COFF executable
[18] error(-1): no debug info in PE/COFF executable
[19] error(-1): no debug info in PE/COFF executable
[20] error(-1): no debug info in PE/COFF executable
-- END OF BACKTRACE --
================================================================
```
### Steps to reproduce
Reproducible locally using the steps outlined in the issue description.
### Minimal reproduction project (MRP)
~~No MRP available, I test ported some smaller standalone projects using some of the assets and shaders and did not encounter this crash.~~
~~I am still attempting to reproduce and find an MRP case to upload.~~
Attached is a full verbose output log of the crash.
[testlog.txt](https://github.com/user-attachments/files/17880113/testlog.txt)
MRP: [mrpVulkanCrash_4.4-dev6.zip](https://github.com/user-attachments/files/18049321/mrpVulkanCrash_4.4-dev6.zip)
_Bugsquad edit: added MRP from [OP comment](https://github.com/godotengine/godot/issues/99587#issuecomment-2525257290)_
|
bug,topic:rendering,crash,regression
|
low
|
Critical
|
2,686,107,131 |
vscode
|
Editor GPU: Ignore token foreground color when there is a decoration with a foreground color
|
With https://github.com/microsoft/vscode/pull/234127 there is a new `charMetadata` that is passed along to the rasterizer and used as a key for the glyph:
https://github.com/microsoft/vscode/blob/9088a3747d6d1ec3c5d3a0c1149b51ac3843dc7b/src/vs/editor/browser/gpu/raster/glyphRasterizer.ts#L120-L124
https://github.com/microsoft/vscode/blob/9088a3747d6d1ec3c5d3a0c1149b51ac3843dc7b/src/vs/editor/browser/gpu/atlas/textureAtlasPage.ts#L92
An immediate problem with this is that the `tokenMetadata` color becomes redundant, since its foreground color is overridden by `charMetadata` (inline decorations). This means that for different tokens with the same decoration, or even just the same decoration color, the glyph will be duplicated in the atlas.
I think the best way forward here is to clear the foreground portion of `tokenMetadata` when there is a decoration foreground color, similar to what we do here:
https://github.com/microsoft/vscode/blob/9088a3747d6d1ec3c5d3a0c1149b51ac3843dc7b/src/vs/editor/browser/gpu/atlas/textureAtlas.ts#L109-L110
Note that I think this also applies to partially transparent colors, since the decoration color replaces the original color rather than layering on top of it.
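A hedged sketch of the masking idea, in case it helps: the mask constants and the `maskTokenForeground` helper below are placeholders I made up, not the actual `MetadataConsts` bit layout or an existing function in the GPU renderer.
```ts
// Sketch only: these masks are illustrative placeholders, not the real
// metadata layout used by the editor.
const TOKEN_FOREGROUND_MASK = 0x00ff8000;      // assumed position of the token foreground id
const DECORATION_FOREGROUND_MASK = 0x00ffffff; // assumed position of the decoration color

// Clear the token foreground bits when the decoration (charMetadata) already
// supplies a foreground color, so the glyph cache key no longer varies with
// the now-irrelevant token color.
function maskTokenForeground(tokenMetadata: number, charMetadata: number): number {
  const decorationHasForeground = (charMetadata & DECORATION_FOREGROUND_MASK) !== 0;
  return decorationHasForeground ? tokenMetadata & ~TOKEN_FOREGROUND_MASK : tokenMetadata;
}
```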
|
plan-item,editor-gpu
|
low
|
Minor
|
2,686,107,986 |
godot
|
Create a Plugin dialog can get lost behind its parent
|
### Tested versions
v4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 Ti (NVIDIA; 32.0.15.5612) - 12th Gen Intel(R) Core(TM) i9-12900KF (24 Threads)
### Issue description
If you open the Create a Plugin dialog via Project Settings -> Plugins -> Create New Plugin and then click on the Project Settings dialog, the Create a Plugin dialog is hidden behind it.
While in this state, the Project Settings dialog is unresponsive, though the Close and Create New Plugin buttons still show hover states. The mouse cursor also acts oddly: if you mouse over the dialog border it switches to a resize cursor and then stays that way even when not over anything that can be resized.
### Steps to reproduce
Open the Create a Plugin dialog via Project Settings -> Plugins -> Create New Plugin, then click on the Project Settings dialog.
### Minimal reproduction project (MRP)
No specific project needed.
|
bug,topic:editor,usability
|
low
|
Minor
|
2,686,118,059 |
vscode
|
Editor GPU: Opening a folded section double renders the line
|

|
bug,editor-gpu
|
low
|
Minor
|