id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---
2,762,998,096 | rust | Tracking Issue for ByteStr/ByteString | Feature gate: `#![feature(bstr)]`
This is a tracking issue for the ByteStr/ByteString types, which represent human-readable strings that are usually, but not always, UTF-8. Unlike `&str`/`String`, these types permit non-UTF-8 contents, making them suitable for user input, non-native filenames (as `Path` only supports native filenames), and other applications that need to round-trip whatever data the user provides.
This was approved in ACP https://github.com/rust-lang/libs-team/issues/502.
### Public API
```rust
// In core::bstr

#[repr(transparent)]
pub struct ByteStr(pub [u8]);

impl ByteStr {
    pub fn new<B: ?Sized + AsRef<[u8]>>(bytes: &B) -> &Self { ... }
}

impl Debug for ByteStr { ... }
impl Display for ByteStr { ... }
impl Deref for ByteStr { type Target = [u8]; ... }
impl DerefMut for ByteStr { ... }

// Other trait impls from bstr, including From impls

// In alloc::bstr

#[repr(transparent)]
pub struct ByteString(pub Vec<u8>);

impl Debug for ByteString { ... }
impl Display for ByteString { ... }
impl Deref for ByteString { type Target = Vec<u8>; ... }
impl DerefMut for ByteString { ... }

// Other trait impls from bstr, including From impls
```
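Not part of the proposed API itself, but a stable-Rust sketch of the motivation: since `ByteStr` is still feature-gated, this uses only `Vec<u8>` and the standard UTF-8 APIs to show why a type that tolerates non-UTF-8 bytes is useful.

```rust
fn main() {
    // 0xFF is never valid in UTF-8, so this "filename-like" input can't be a String.
    let raw: Vec<u8> = vec![b'f', b'o', b'o', 0xFF, b'!'];
    assert!(String::from_utf8(raw.clone()).is_err());

    // A byte-string type keeps the original bytes, so the data round-trips exactly...
    let round_tripped: Vec<u8> = raw.clone();
    assert_eq!(round_tripped, raw);

    // ...while display can still fall back to lossy UTF-8 for human readability.
    let shown = String::from_utf8_lossy(&raw);
    assert_eq!(shown, "foo\u{FFFD}!");
}
```

This is exactly the round-trip guarantee the paragraph above describes: the bytes are preserved, and only the rendering is lossy.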
### Steps / History
- [x] Implementation: #135073
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
### Unresolved Questions
- Should we call this `BStr`/`BString`, or `ByteStr`/`ByteString`? The former will be more familiar to users of the `bstr` crate in the ecosystem. The latter is more explicit, and avoids potential naming conflicts (making it easier to, for instance, add it to the prelude).
- Should the `Display` impl use the Unicode replacement character, or do escaping like the `Debug` impl? | T-libs-api,C-tracking-issue | low | Critical |
2,763,010,525 | vscode | Extension Bisect should not re-enable most extensions throughout the process | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
This is directly related to #111669 in which the OP describes how bisect is unable to find _any_ "bad" extensions if there is more than one.
According to the docs, [bisect re-enables the extensions it considers "fine"](https://code.visualstudio.com/blogs/2021/02/16/extension-bisect). This is problematic because if you have more than one "bad" extension, it will never be able to narrow down the problem to a single extension, and it will incorrectly tell you the problem may be with VS Code.
This could be fixed by bisect not re-enabling any extensions it has removed from the problem set until one of the "bad" ones are found. This would allow you to keep finding problematic extensions by repeatedly running the bisect tool until none remain, instead of the tool only working at all when there is _only_ one "bad" extension.
# Detailed Design
Here is the example from the docs on how bisect works now:
- Bisect divides 24 extensions into two halves of 12 extensions each, and **it disables all 12 extensions of the second half.**
- In this sample, **the 8th extension is the "bad" one**, so it is in the first half and not disabled. Things are still not working as we'd expect. Because there is still an issue, extension bisect repeats the process, dividing the first 12 extensions into two parts: **6 are enabled** and **6 are disabled**. **All other extensions are also re-enabled.**
...
That very last part is what is problematic. If we eliminate that step, the tool would be able to uncover any number of problematic extensions. I will give two examples of my own: one with one bad extension, and one with two. (I am not sure what bisect does when there are an odd number of extensions, so for my examples I will assume it will give the remainder to the upper half when partitioning an odd number)
- An enabled extension: …
- A disabled extension: X
- A "bad" extension: ☢️
## Example: 12 extensions with 1 bad extension
1. The bad extension is no. 3; bisect disables the upper half of all 12
| — | — | ☢️ | — | — | — | X | X | X | X | X | X |
|---|---|---|---|---|---|---|---|---|---|---|---|
2. There is still a problem, so bisect disables the upper half of the lower 6
| — | — | ☢️ | X | X | X | X | X | X | X | X | X |
|---|---|---|---|---|---|---|---|---|---|---|---|
3. There is still a problem, so bisect disables extensions 2 and 3
| — | X | X (☢️) | X | X | X | X | X | X | X | X | X |
|---|---|---|---|---|---|---|---|---|---|---|---|
4. Problem gone, so bisect disables the non-problematic extension (no. 1), and then re-enables the lower half of the previously disabled set (so, just the 2nd one)
| X | X | ☢️ | X | X | X | X | X | X | X | X | X |
|---|---|---|---|---|---|---|---|---|---|---|---|
5. Bisect can correctly identify no. 3 as the problematic extension
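To make the difference concrete, here is a small, hypothetical simulation of both policies (my sketch of the algorithm, not VS Code's actual implementation; the `reenableOthers` flag stands in for the current re-enabling behavior). With two bad extensions, the current policy converges on an innocent extension, while the proposed no-re-enable policy finds a genuinely bad one:

```typescript
// oracle: does the problem reproduce with this set of extensions enabled?
type Oracle = (enabled: Set<number>) => boolean;

function bisect(total: number, oracle: Oracle, reenableOthers: boolean): number {
  let candidates = Array.from({ length: total }, (_, i) => i);
  while (candidates.length > 1) {
    const half = candidates.slice(0, Math.ceil(candidates.length / 2));
    const enabled = new Set(half);
    if (reenableOthers) {
      // Current behavior: everything outside the candidate set is re-enabled.
      for (let i = 0; i < total; i++) {
        if (!candidates.includes(i)) enabled.add(i);
      }
    }
    candidates = oracle(enabled)
      ? half                          // problem persists: a bad one is in this half
      : candidates.slice(half.length); // problem gone: a bad one is in the other half
  }
  return candidates[0];
}

// Extensions 3 and 11 (0-based) are "bad": the problem reproduces iff one is enabled.
const bad = new Set([3, 11]);
const oracle: Oracle = (enabled) => [...bad].some((b) => enabled.has(b));

const proposed = bisect(12, oracle, false); // → 3, a genuinely bad extension
const current = bisect(12, oracle, true);   // → 0, an innocent extension
```

With a single bad extension both policies agree, so removing the re-enable step does not regress the one-bad-extension case above.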
## Example: 12 extensions with 2 bad extensions
1. The bad extensions are no. 4 and 12; bisect disables the upper half of all 12
| — | — | — | ☢️ | — | — | X | X | X | X | X | X (☢️) |
|---|---|---|---|---|---|---|---|---|---|---|---|
2. There is still a problem, so bisect disables the upper half of the lower 6
| — | — | — | X (☢️) | X | X | X | X | X | X | X | X (☢️) |
|---|---|---|---|---|---|---|---|---|---|---|---|
3. There is still a problem, so bisect disables the lower half of the lower 6 (the first 3). Now we are working with extensions 4-6. Bisect re-enables the lower half of this group, which is just no. 4
| X | X | X | ☢️ | X | X | X | X | X | X | X | X (☢️) |
|---|---|---|---|---|---|---|---|---|---|---|---|
4. A problem is found with only one extension enabled, so this round of bisecting completes with a "bad" extension identified. However, the user still experiences a problem, so they start a new bisect. Let us assume the user has removed the first bad extension, so there are now 11 extensions left, and the 11th one is the problematic one. Bisect disables the upper half of all 11 extensions (6-11)
| — | — | — | — | — | X | X | X | X | X | X (☢️) |
|---|---|---|---|---|---|---|---|---|---|---|
5. The user no longer experiences the problem, so bisect disables the unproblematic half entirely and re-enables the lower half of the remaining extensions. This repeats until the 10th extension is the only one still enabled, and there is still no problem.
| X | X | X | X | X | — | — | — | X | X | X (☢️) |
|---|---|---|---|---|---|---|---|---|---|---|
| X | X | X | X | X | X | X | X | — | X | X (☢️) |
|---|---|---|---|---|---|---|---|---|---|---|
| X | X | X | X | X | X | X | X | X | — | X (☢️) |
|---|---|---|---|---|---|---|---|---|---|---|
6. I assume it would finally enable the last extension to confirm that the user experiences a problem at all. The user does experience it with only the 11th extension enabled, so bisect identifies it as the problem.
| X | X | X | X | X | X | X | X | X | X | ☢️ |
|---|---|---|---|---|---|---|---|---|---|---|
---
Another scenario worth noting is that in the worst case, where the first and second extensions are the bad ones, the user will choose "I can reproduce" every single time. The current implementation takes this opportunity to tell the user that it could not detect a bad extension and that the problem might be with VS Code itself. In cases like this where we cannot be 100% sure that a singular extension is the problem, my new approach should probably include that same disclaimer when it identifies a potentially problematic extension.
While imperfect, this is still more useful than not identifying any extensions at all. I can think of several improvements that can be made as well; for example we could modify the algorithm to test both halves of a set of extensions for problems when a problem is encountered on one half and alert the user. This could save time by letting the user know that both halves of some set of extensions (which may be _all_ of them if it is the first try) reproduce the problem, and let them choose to continue bisecting or not. | feature-request,extensions | low | Major |
2,763,039,939 | transformers | How can I disable legacy processing in llava-next | ### System Info
4.47.1
### Who can help?
vision models: @amyeroberts, @qubvel
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/5cabc75b4bdb2e67935f7195f901afd150746eb3/src/transformers/models/llava_next/modeling_llava_next.py#L866-L916
### Sample script
```python
import requests
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

# parse_args and setup_model_with_compression come from the reporter's own script.
def main():
    args = parse_args()
    processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")
    model = LlavaNextForConditionalGeneration.from_pretrained(
        "llava-hf/llava-v1.6-mistral-7b-hf",
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
        attn_implementation="flash_attention_2",
    ).to("cuda:0")
    setup_model_with_compression(model, args)
    url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
    image = Image.open(requests.get(url, stream=True).raw)
    conversation = [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What is shown in this image?"},
            ],
        },
    ]
    prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
    inputs = processor(image, prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(**inputs, max_new_tokens=100)
    print(processor.decode(output[0], skip_special_tokens=True))
```
### Expected behavior
how does the legacy processing work? can I disable it ? | bug | low | Major |
2,763,067,126 | rust | `cannot assign to data in an index of HashMap<_, _>` diagnostic suggests solution that does not type check | ### Code
```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<bool, bool> = HashMap::default();
    map[&true] = false;
}
```
### Current output
```Shell
error[E0594]: cannot assign to data in an index of `HashMap<bool, bool>`
--> src/main.rs:5:5
|
5 | map[&true] = false;
| ^^^^^^^^^^^^^^^^^^ cannot assign
|
= help: trait `IndexMut` is required to modify indexed content, but it is not implemented for `HashMap<bool, bool>`
help: to modify a `HashMap<bool, bool>`, use `.get_mut()`, `.insert()` or the entry API
|
5 | map.insert(&true, false);
| ~~~~~~~~ ~ +
5 | map.get_mut(&true).map(|val| { *val = false; });
| ~~~~~~~~~ ~~~~~~~~~~~~~~~~~~ ++++
5 | let val = map.entry(&true).or_insert(false);
| +++++++++ ~~~~~~~ ~~~~~~~~~~~~ +
For more information about this error, try `rustc --explain E0594`.
```
### Desired output
```Shell
error[E0594]: cannot assign to data in an index of `HashMap<bool, bool>`
--> src/main.rs:5:5
|
5 | map[&true] = false;
| ^^^^^^^^^^^^^^^^^^ cannot assign
|
= help: trait `IndexMut` is required to modify indexed content, but it is not implemented for `HashMap<bool, bool>`
help: to modify a `HashMap<bool, bool>`, use `.get_mut()`, `.insert()` or the entry API
|
5 | map.insert(true, false);
| ~~~~~~~~ ~ +
5 | map.get_mut(&true).map(|val| { *val = false; });
| ~~~~~~~~~ ~~~~~~~~~~~~~~~~~~ ++++
5 | let val = map.entry(true).or_insert(false);
| +++++++++ ~~~~~~~ ~~~~~~~~~~~~ +
For more information about this error, try `rustc --explain E0594`.
```
### Rationale and extra context
`HashMap::insert` and `HashMap::entry` take owned key values.
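A quick illustration of why the current suggestion fails to type check (my sketch, not compiler output): `insert` and `entry` on a `HashMap<bool, bool>` expect an owned `bool` key, so passing `&true` is a type mismatch, while `get_mut` genuinely takes a borrowed key.

```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<bool, bool> = HashMap::default();

    // map.insert(&true, false);          // error[E0308]: expected `bool`, found `&bool`
    // map.entry(&true).or_insert(false); // same mismatch: `entry` takes an owned key
    map.insert(true, false); // OK: owned key
    *map.entry(true).or_insert(false) = false;

    // `get_mut` is the one suggestion that correctly takes `&bool`.
    if let Some(val) = map.get_mut(&true) {
        *val = false;
    }
    assert_eq!(map[&true], false); // read-only indexing works with a borrowed key
}
```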
### Other cases
```Rust
```
### Rust Version
```Shell
$ rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
### Anything else?
Similar fixes might be necessary for other container types in `std::collections` | A-diagnostics,T-compiler | low | Critical |
2,763,068,681 | create-react-app | Unable to Create a New React App Using npx create-react-app | #### Description
I've been facing issues when creating a new React app using the `npx create-react-app` command for the past 1 to 1.5 months. Despite trying multiple solutions, the problem persists, making it impossible to set up a new React project.
#### Steps to Reproduce
1. Run the command: `npx create-react-app my-app`.
2. Attempted the following flags to bypass errors:
- `--force`
- `--legacy-peer-deps`
3. Tried on different systems and environments:
- **Operating Systems**: Windows 10/11, macOS
- **Networks**: Airtel and BSNL ISPs
4. Tested on various Node.js versions (all managed with NVM):
- 16.x (LTS)
- 18.x (LTS)
- 20.x (LTS)
- 22.x (LTS)
#### Expected Behavior
After running the `npx create-react-app` command, a new React app should be successfully created.
#### Actual Behavior
The command fails to complete successfully. I've attached screenshots of the errors encountered below for you to look over.
#### Attempts to Fix
- Used `--force` and `--legacy-peer-deps` flags.
- Tried different versions of Node.js.
- Tested on different operating systems and networks.
#### Environment
- **OS**: Windows 10/11, macOS
- **Node.js**: 16, 18, 20, and 22 (LTS versions)
- **NPM**: Latest version (installed with Node.js)
- **React**: The `npx create-react-app` version is the default fetched by NPM.
#### Attachments
(Screenshots of the error messages)
<img width="1680" alt="Screenshot 2024-12-30 at 4 37 35 PM" src="https://github.com/user-attachments/assets/fd709768-dc82-42f5-8b19-18fae6b215e3" />
<img width="1680" alt="Screenshot 2024-12-30 at 4 39 03 PM" src="https://github.com/user-attachments/assets/98ef900b-8090-4bb6-a01a-d2590bb9c0e8" />
<img width="1680" alt="Screenshot 2024-12-30 at 4 41 21 PM" src="https://github.com/user-attachments/assets/472a8776-a971-4898-84d0-55f294bf0816" />
<img width="1680" alt="Screenshot 2024-12-30 at 4 44 08 PM" src="https://github.com/user-attachments/assets/d53c7f19-ea02-4dad-b770-86058ac8c788" />
<img width="1680" alt="Screenshot 2024-12-30 at 4 44 42 PM" src="https://github.com/user-attachments/assets/f5ae486c-3a67-4c24-9984-b7e685e5cac9" />
<img width="1680" alt="Screenshot 2024-12-30 at 4 45 54 PM" src="https://github.com/user-attachments/assets/f234c888-359d-474e-94da-47962d7edf5b" />
<img width="1680" alt="Screenshot 2024-12-30 at 4 46 57 PM" src="https://github.com/user-attachments/assets/c3c70970-3c98-4e48-80a4-1329b26ecc4f" />
<img width="1680" alt="Screenshot 2024-12-30 at 4 48 06 PM" src="https://github.com/user-attachments/assets/2e197727-6847-4d0f-ba0e-e9938085934a" />
#### Additional Information
This issue has persisted across multiple environments and configurations for over a month. Please let me know if further details or debugging information are needed to assist in resolving this issue. | needs triage,issue: bug report | medium | Critical |
2,763,068,795 | react-native | `View` inside `ScrollView` makes scroll-to-top when multiline input grows | ### Description
I use this component:
```tsx
import React, { forwardRef } from "react";
import { ScrollView, ScrollViewProps, View } from "react-native";

export type KeyboardAwareScrollViewProps = {
  /** The distance between keyboard and focused `TextInput` when keyboard is shown. Default is `0`. */
  bottomOffset?: number;
  /** Prevents automatic scrolling of the `ScrollView` when the keyboard gets hidden, maintaining the current screen position. Default is `false`. */
  disableScrollOnKeyboardHide?: boolean;
  /** Controls whether this `KeyboardAwareScrollView` instance should take effect. Default is `true` */
  enabled?: boolean;
  /** Adjusting the bottom spacing of KeyboardAwareScrollView. Default is `0` */
  extraKeyboardSpace?: number;
  /** Custom component for `ScrollView`. Default is `ScrollView` */
  ScrollViewComponent?: React.ComponentType<ScrollViewProps>;
} & ScrollViewProps;

const KeyboardAwareScrollView = forwardRef<
  ScrollView,
  React.PropsWithChildren<KeyboardAwareScrollViewProps>
>(({ children, ...rest }, ref) => {
  return (
    <>
      <ScrollView {...rest} scrollEventThrottle={16}>
        {children}
        <View style={{ height: 336, backgroundColor: "red" }} />
      </ScrollView>
    </>
  );
});

export default KeyboardAwareScrollView;
```
And I use it in the following code:
```tsx
import React, { useCallback, useEffect, useRef, useState } from "react";
import { Button, Platform, Switch, Text, View, TextInput, StyleSheet } from "react-native";

// Assumed path: the KeyboardAwareScrollView component defined above.
import KeyboardAwareScrollView from "./KeyboardAwareScrollView";

export default function AwareScrollView() {
  const [text, setText] = useState("");

  return (
    <View style={styles.main}>
      <KeyboardAwareScrollView>
        <View style={styles.contentContainer}>
          <TextInput
            style={styles.contentText}
            multiline
            value={text}
            onChangeText={(value) => setText(value)}
            scrollEnabled={false}
          />
        </View>
      </KeyboardAwareScrollView>
    </View>
  );
}

export const styles = StyleSheet.create({
  safeAreaView: {
    flex: 1,
  },
  main: {
    flex: 1,
    marginBottom: 0,
  },
  contentContainer: {
    padding: 15,
    flexGrow: 1,
  },
  contentText: {
    width: "100%",
    backgroundColor: "#bcbcbc",
    flex: 1,
    fontSize: 16,
    minHeight: 60,
    textAlignVertical: "top",
  },
});
```
### Steps to reproduce
1. Type a long message in multiline input (you can simply press Enter/move to the next line to speed up the process)
2. When TextInput gets obscured by keyboard scroll till the bottom to make sure the bottom of TextInput is visible
3. Start to type a long message (you can use predictive text to speed up the process)
4. When text goes to a new line `ScrollView` suddenly scrolls to top
Expected result is that `ScrollView` should remain its position.
### React Native Version
0.76.2
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1
CPU: (10) arm64 Apple M1 Pro
Memory: 118.81 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.18.0
path: ~/.nvm/versions/node/v20.18.0/bin/node
Yarn:
version: 1.22.22
path: ~/.nvm/versions/node/v20.18.0/bin/yarn
npm:
version: 10.8.2
path: ~/.nvm/versions/node/v20.18.0/bin/npm
Watchman:
version: 2024.10.21.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.14.3
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12483815
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.9
path: /usr/bin/javac
Ruby:
version: 2.7.6
path: /Users/kirylziusko/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.2
wanted: 0.76.2
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
I don't have any logs, but from what I debugged in Xcode, methods like `scrollToOffset`/`scrollToEnd`/`scrollViewShouldScrollToTop` are not getting called.
```
### Reproducer
https://github.com/kirillzyusko/react-native-keyboard-controller/blob/main/example/src/screens/Examples/AwareScrollView/index.tsx
### Screenshots and Videos
https://github.com/user-attachments/assets/48078f8b-a1cc-42c2-8637-deeaa5a28346 | Issue: Author Provided Repro,Component: ScrollView,Needs: Author Feedback,Newer Patch Available | low | Critical |
2,763,091,894 | flutter | Getting pthread_mutex_lock crashes on various devices in the Google Play Console | ### Steps to reproduce
```
(io.flutter.embedding.engine.renderer.SurfaceTextureSurfaceProducer.finalize+16)
#30 pc 0x0000000000256618 /system/lib64/libart.so (art::interpreter::Execute(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame&, art::JValue, bool) (.llvm.2851806063)+496)
#31 pc 0x000000000051d6a8 /system/lib64/libart.so (artQuickToInterpreterBridge+1032)
#32 pc 0x00000000005666fc /system/lib64/libart.so (art_quick_to_interpreter_bridge+92)
#33 pc 0x00000000001832d4 /system/framework/arm64/boot-core-libart.oat (java.lang.Daemons$FinalizerDaemon.doFinalize+100)
#34 pc 0x000000000018355c /system/framework/arm64/boot-core-libart.oat (java.lang.Daemons$FinalizerDaemon.runInternal+492)
#35 pc 0x000000000011185c /system/framework/arm64/boot-core-libart.oat (java.lang.Daemons$Daemon.run+76)
#36 pc 0x000000000025d2e8 /system/framework/arm64/boot.oat (java.lang.Thread.run+72)
#37 pc 0x000000000055d588 /system/lib64/libart.so (art_quick_invoke_stub+584)
#38 pc 0x00000000000cf740 /system/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+200)
#39 pc 0x0000000000463bd0 /system/lib64/libart.so (art::(anonymous namespace)::InvokeWithArgArray(art::ScopedObjectAccessAlreadyRunnable const&, art::ArtMethod*, art::(anonymous namespace)::ArgArray*, art::JValue*, char const*)+104)
#40 pc 0x0000000000464c98 /system/lib64/libart.so (art::InvokeVirtualOrInterfaceWithJValues(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jmethodID*, jvalue*)+424)
#41 pc 0x0000000000490010 /system/lib64/libart.so (art::Thread::CreateCallback(void*)+1120)
#42 pc 0x0000000000083824 /system/lib64/libc.so (__pthread_start(void*)+36)
#43 pc 0x000000000002340c /system/lib64/libc.so (__start_thread+68)
```
### Expected results
It should not cause this crash.
### Actual results
It's causing ANRs.
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[✓] Flutter (Channel stable, 3.24.5, on macOS 15.2 24C101 darwin-arm64, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0-rc4)
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.2)
[✓] VS Code (version 1.96.2)
[✓] Connected device (6 available)
[✓] Network resources
• No issues found!
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| c: crash,platform-android,engine,needs repro info,team-android | low | Critical |
2,763,100,074 | PowerToys | [Mouse Without Borders] Cannot Connect to Microsoft Surface Pro 8 | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Microsoft PowerToys is installed on the following 3 machines:
```
Device-Name: Gaming-PC
Computer-Model: Custom Built PC
Operating System: Windows 11 Pro 64-Bit, Version 24H2 (Build 26100.2605)
OS-Language: German
Security Software: Microsoft Defender (no other security software installed)
PowerToys Version: 0.87.1
PowerToys Mode: Always start as Admin
Device-Name: Surface-Pro
Computer-Model: Microsoft Surface Pro 8
Operating System: Windows 11 Home 64-Bit, Version 24H2 (Build 26100.2605)
OS-Language: German
Security Software: Microsoft Defender (no other security software installed)
PowerToys Version: 0.87.1
PowerToys Mode: Always start as Admin
Device-Name: Surface-GO
Computer-Model: Microsoft Surface Go
Operating System: Windows 10 Home 64-Bit, Version 22H2 (Build 19045.5247)
OS-Language: German
Security Software: Microsoft Defender (no other security software installed)
PowerToys Version: 0.87.1
PowerToys Mode: Always start as Admin
```
In "Mouse Without Borders" I generated a new security-key on the "Gaming-PC", saved it in a text file on OneDrive, opened the text file on both surface devices and tried to connect to my "Gaming-PC" using this security-key.
This works on my "Surface-GO", but not on my "Surface-Pro".
I also tried to connect to my "Surface-Pro" from the "Gaming-PC", but this is also not possible.
I also tried to connect when I start "Microsoft PowerToys" as user (not as Admin) on all machines, but this is also not working.
And yes, "Mouse Without Borders" is **ENABLED** on all machines.
### ✔️ Expected Behavior
Connecting to the "Surface-Pro" is working.
### ❌ Actual Behavior
Connecting to the "Surface-Pro" is **NOT** possible.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,763,143,431 | opencv | 3D Reconstruction Task: slow Aruco detection speed in video processing under low-light, high-resolution, small-target conditions | ### Describe the feature and motivation
In video processing tasks, we encountered a problem where the detection speed of Aruco was very slow under the conditions described in the title. Through testing under different lighting conditions, we found that while the recognition accuracy of Aruco remained high, the detection speed was significantly slowed down. After analyzing the detection principle of Aruco, I concluded that the main factor affecting the speed was the noise caused by low lighting conditions.
To address this issue, I believe a fast preprocessing function module can be introduced to process the images being detected, improving Aruco's detection speed. This would help meet the real-time requirements of video processing tasks, while also enhancing robustness for tasks under normal lighting conditions.
Through a series of experiments, I have found some solutions to address the performance issues with speed, and I would like to raise this issue to spark discussion within the community, as well as to improve the current methods.
**Solution 1: AI Denoising** Denoising the image can reduce the noise to some extent and thus speed up detection. However, applying an AI denoise model takes 1-2 seconds, which is still unacceptable in real-time video processing tasks.
**Solution 2: YOLO + Aruco** By training YOLO on a large dataset of Aruco markers, we can still achieve fast detection and recognition in low-light environments. However, this method has a high cost: it requires organizing the dataset, training YOLO, and extracting the four corner coordinates of the quadrilateral through detection. It also requires extensive custom modifications to the original Aruco functions to obtain the ordered sequence of corner coordinates. The detection time is around 100ms per frame. Since the implementation cost of the YOLO method is high and the OpenCV community primarily uses traditional image processing methods for Aruco, we are considering alternative solutions, with this one as a backup.
**Solution 3: Video Tracking Points** During video capture, I noticed that Aruco markers have a continuity property: a marker's position in the current frame does not move significantly by the next frame. Also, Aruco detection is a small-target problem, which means most of the other areas in the image are irrelevant and often filled with noise. By leveraging this continuity, we can restrict detection to certain areas. The green region represents the area detected in the current frame, and the red region extends outward from it to form the search area for the next frame. Shrinking the search area can significantly speed up Aruco code detection. We need to save the positions of the different Aruco markers from the previous frame and then define a small region for detection in the next frame, with appropriate boundary conditions.
**Solution 4: Canny edge detection algorithm combined with a dilation kernel** Solution 4 is based on improvements made to Solution 3. We found that both solutions focus on detecting areas of interest in the image and discarding most of the irrelevant regions to speed up Aruco detection. However, Solution 3 is relatively complex, involving read/write operations for some Aruco points. Additionally, video processing tasks often require multi-threading, and frames in the sequence might not be processed in the same thread. This requires further modifications to the thread handling code, giving rise to Solution 4.
We found that Aruco markers in images have clear boundary conditions. Based on this, we can use the Canny edge detection algorithm combined with a dilation kernel to expand the area of interest. However, it is important to note that this method works best in simple backgrounds; if the environment is too complex, the results may not be as good as those achieved with a YOLO-trained model. This is a key point I hope to discuss, as this acceleration method works well only in scenarios with less complex backgrounds. It meets the needs of my current project, but I invite the community to discuss and improve upon this approach.
### Additional context




| feature | low | Major |
2,763,172,795 | tensorflow | Custom Layer output shape isn't working with model.summary() | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18
### Custom code
Yes
### OS platform and distribution
Ubuntu 24.04 WSL
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
model.summary() output
**Current Behavior**
```
Model: "Q_KAN"
____________________________________________________________________________
Layer (type) Output Shape Param # Trainable
============================================================================
input_68 (InputLayer) [(None, 2)] 0 Y
DenseKAN (DenseQKan) None 18 Y
RescalePi (Rescale) None 0 N
============================================================================
Total params: 18 (144.00 Byte)
Trainable params: 18 (144.00 Byte)
Non-trainable params: 0 (0.00 Byte)
____________________________________________________________________________
```
**Expected Behavior**
```
Model: "Q_KAN"
____________________________________________________________________________
Layer (type) Output Shape Param # Trainable
============================================================================
input_68 (InputLayer) [(None, 2)] 0 Y
DenseKAN (DenseQKan) [(None, 10)] 18 Y
RescalePi (Rescale) [(None, 10)] 0 N
============================================================================
Total params: 18 (144.00 Byte)
Trainable params: 18 (144.00 Byte)
Non-trainable params: 0 (0.00 Byte)
____________________________________________________________________________
```
### Standalone code to reproduce the issue
**Custom layer code:**
```python
import numpy as np
import pennylane as qml
import tensorflow as tf

class Rescale(tf.keras.layers.Layer):
    def __init__(self, scale_limit=np.pi, **kwargs):
        super().__init__(trainable=False, **kwargs)
        self.scale_limit = scale_limit

    def call(self, inputs):
        s = tf.reduce_sum(inputs, axis=-1)
        return (inputs / s) * self.scale_limit

class DenseQKan(tf.keras.layers.Layer):
    def __init__(self, units: int, circuit: qml.QNode, layers: int, **kwargs):
        super().__init__(**kwargs)
        self.circuit = circuit
        self.qubits = len(circuit.device.wires)
        self.units = units
        self.qbatches = None
        self.layers = layers

    def build(self, input_shape):
        if input_shape[-1] > self.qubits:
            self.qbatches = np.ceil(input_shape[-1] / self.qubits).astype(np.int32)
        else:
            self.qbatches = 1
        self.layer_weights = []
        for u in range(self.units):
            self.layer_weights.append(self.add_weight(
                shape=(self.qbatches, input_shape[-1] // self.qbatches, self.layers),
                initializer=tf.keras.initializers.RandomUniform(minval=-np.pi, maxval=np.pi, seed=None),
                trainable=True))
        self.built = True

    # W = np.random.uniform(low=-np.pi, high=np.pi, size=(self.units, self.qbatches, self.qubits, self.layers))
    # @tf.function(reduce_retracing=True)
    def compute_output_shape(self, input_shape):
        print("Build Input Shape", input_shape)
        return (input_shape[0], self.units)

    def call(self, inputs):
        assert self.qbatches is not None
        splits = tf.split(inputs, self.qbatches, -1)
        out = []
        for u in range(self.units):
            unit_out = 0
            for qb in range(self.qbatches):
                qb_out = tf.reduce_sum(tf.stack(self.circuit(splits[qb], self.layer_weights[u][qb]), axis=-1), axis=-1)
                unit_out = unit_out + qb_out
            out.append(unit_out)
        out = tf.stack(out, axis=-1)
        return out
```
**Model definition code:**
```python
from tensorflow.keras import Input, Model  # assumed imports for this fragment

# def create_model(units,qubits,layers,circuit,input_shape=2):
inp = Input(shape=input_shape)
out = DenseQKan(units,circuit,layers,name="DenseKAN")(inp)
out = Rescale(name="RescalePi")(out)
model = Model(inputs=inp,outputs=out,name="Q_KAN")
model.summary(show_trainable=True)
```
### Relevant log output
_No response_ | stat:awaiting response,stale,comp:keras,TF 2.18 | low | Critical |
2,763,208,278 | ollama | Community Contribution: Open-Source Chinese Tutorial for Ollama | Hello, we'd like to contribute to the Ollama community by announcing the release of our open-source Chinese tutorial.
This tutorial aims to be comprehensive and easy to understand, covering:
- Ollama Introduction
- Ollama Installation and Configuration
- Custom Model Import
- Ollama REST API
- Using Ollama with LangChain
- Deployment of Ollama Visual Interfaces
- Application Examples
The repo is at: https://github.com/datawhalechina/handy-ollama
The tutorial is available at: https://datawhalechina.github.io/handy-ollama/
We would be happy to discuss the possibility of linking to this tutorial from the official documentation or resources to make it more accessible to Chinese users. Thank you for your consideration. | feature request | low | Minor |
2,763,214,108 | vscode | Support Line decoration editor offset retrieval | # Problem statement
Core contributions that are trying to render overlay widgets properly can currently rely only on line offsets, but those disregard decoration offsets.
Meaning that when decorations which contain text are rendered after the current content (e.g. via the inlineSuggest, but also other AI contributions), overlay widgets struggle to correctly render without overlaying the decoration (which might contain text that the user wants to read).
# Value statement
Allowing to get the decoration offset, allows to easily render any new overlay/absolutely positioned widgets, without hiding any interesting suggestions/decorations. This would allow for more flexibility with novel UX/UI interactions.
# Proposal
In the same way as the editor currently allows to retrieve an offset for a specific column, allow to get an offset for a decoration.
The decorations can be accessed in the `visible line` and can provide positional information, which can then be used to correctly calculate the *horizontal position* in the editor.
The returned value would be `left: number` for the right-most (last) decoration starting at the provided column and containing the expected className.
2,763,231,748 | deno | `deno install -g` doesn't use base of original imports | Version: Deno 2.1.4
Using Windows 11
I've posted a question on stackoverflow, at https://stackoverflow.com/questions/79317608/deno-2-configuration-file-not-detected-when-using-compile-command that may explain further
But, here's the gist. If I run `deno install -fgA src/bin/compile.js` with the current directory set to my project directory (~/vllu below), the files added to my ~/.deno/bin folder include the --no-config option, even though my project root contains a deno.jsonc file. FYI, my directory structure is:
```
~
vllu
deno.jsonc
src
lib
llutils.js
bin
compile.js
```
And I ALWAYS leave the current directory at my project root (i.e. ~/vllu)
The docs I see when running `deno compile --help` clearly state that the config file should be automatically detected and used. But that's not all. Since there is a --config option for specifying the config file to use, I tried using
`deno install -fgA --config ./deno.jsonc src/bin/compile.js`
which resulted in this odd behavior:
1) my config file was copied to the ~/.deno/bin directory, which is odd because the script being installed is NOT copied, but used directly, i.e. if I make changes to compile.js, the command would use those changes, but if I make changes to my deno.jsonc file, those wouldn't be used because the file is copied.
2) the copied file has the file extension .json even though my config file is named deno.jsonc. It is, in fact, not a valid JSON file since it contains comments and trailing commas. Attempts to load the file as a true JSON file are bound to fail.
Then, after executing the above install command, I found files `compile `and `compile.cmd` in my `~/.deno/bin` folder:
```
#!/bin/sh
# generated by deno install
deno "run" "--allow-all" "--config" "C:\Users\johnd\.deno\bin\.compile.deno.json" "file:///C:/Users/johnd/vllu/src/bin/compile.js" "$@"
```
```
% generated by deno install %
@deno "run" "--allow-all" "--config" "C:\Users\johnd\.deno\bin\.compile.deno.json" "file:///C:/Users/johnd/vllu/src/bin/compile.js" %*
```
I'm not sure why I need both, but anyway... when I try to run the command `compile llutils` (my compile script automatically finds the location of the src/lib/llutils.js file), I get the error:
```
$ compile llutils
error: Module not found "file:///C:/Users/johnd/.deno/bin/src/lib/llutils.js".
    at file:///C:/Users/johnd/vllu/src/bin/compile.js:7:9
```
The file path it's looking for is entirely wrong. It seems to be looking for src/lib/llutils.js in my ~/.deno/bin folder. FYI, the compile.js script is importing from `'@jdeighan/vllu/llutils.js'` and the config file contains:

```jsonc
"@jdeighan/vllu/": "./src/lib/",
```

in the imports section. I'm guessing that Deno changes the current directory, which would, I think, cause the bad file path. If Deno is going to change the working directory, it should modify the relative paths used in the config file.
Also, I tried changing the copied config file's extension to `.jsonc`, and also changing that file extension in both the files `compile` and `compile.cmd`, but that made no difference. Perhaps Deno loads it as a `.jsonc` file even if the extension is `.json`, which I find very confusing.
2,763,285,612 | PowerToys | Text Extractor not working when I have multiple monitors | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
TextExtractor
### Steps to reproduce
Text Extractor works fine on a single-monitor system. When I have 3 monitors connected, the overlay window (black screen) is showing on top of the window that I need to capture, and I am unable to capture anything. I have to move the window around to make it work.
Any ideas how to fix it? Thanks.
### ✔️ Expected Behavior
Text Extractor should work when 2 or more monitors are connected.
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,763,289,614 | ui | [feat]: Export useCarousel hook | ### Feature description
I think it would be nice to export the useCarousel hook directly to make it accessible.
### Affected component/components
Carousel
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,763,342,666 | tauri | [bug] User selection client rect | ### Describe the bug
This code gets the bounding rect of the user's selection in the browser. I also tested it in Chrome and Safari via Storybook, where it works fine:

```typescript
window.getSelection()?.getRangeAt(0)?.getBoundingClientRect();
```

But in Tauri the result is not as expected: the rect is offset by a certain distance compared to Chrome and Safari.
This is on Chrome with Storybook:
<img width="1512" alt="截屏2024-12-30 23 03 43" src="https://github.com/user-attachments/assets/8a765d30-09c7-491a-8ee0-26e45798c51f" />
This is on Safari with Storybook:
<img width="1512" alt="截屏2024-12-30 23 06 03" src="https://github.com/user-attachments/assets/95465fde-9365-44f3-99ce-8ba83fabe703" />
This is on Tauri:
<img width="1512" alt="截屏2024-12-30 23 06 37" src="https://github.com/user-attachments/assets/4fea7de8-d06d-430d-aad8-9231dc870b0e" />
### Reproduction
I tested on a new Vite project with Tauri; just write this:
```typescript
window.getSelection()?.getRangeAt(0)?.getBoundingClientRect();
```
and observe the difference between a normal browser and Tauri.
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.2.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.8.0
- pnpm: 9.10.0
- npm: 10.8.2
- bun: 1.1.42
- deno: deno 2.1.4
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- cargo-tauri 🦀: 1.0.5
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-os 🦀: 2.2.0
- @tauri-apps/plugin-os : not installed!
- tauri-plugin-log 🦀: 2.2.0
- @tauri-apps/plugin-log : not installed!
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,platform: macOS,status: needs triage | low | Critical |
2,763,348,025 | pytorch | [DTensor] Allow multiple dimensions to be sharded together (as if flattened) | ### 🚀 The feature, motivation and pitch
PyTorch's utilities for sequence parallelism seem to suppose that tensors will have two separate dimensions for batch (dim 0) and sequence (dim 1), and shard only along dim 1. However, if batch > 1, this means that each rank's shard will be non-contiguous. This is a problem because NCCL wants contiguous storages, hence we end up with extra kernels for (un-)compacting the data. You can clearly see them in this profiler trace obtained from such a job:

The simplest solution here, which wouldn't require changing user code, is for these tensors to be sharded across the _joint_ 0 and 1 dimensions, i.e., as if those two dimensions were flattened. In other terms, we'd need DTensor to support a placement type that looks like `Shard(dims=[0, 1])`.
### Alternatives
The only alternative I see is for users to flatten those tensors manually in their own model code. This is not too hard, but I claim it's undesirable, since it _worsens_ the user code (makes it more opaque) out of a concern for efficiency, whereas the aim of torch.compile and DTensors should be to deal with efficiency without the user's intervention.
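As a rough illustration of why joint sharding helps (numpy stands in for the underlying tensor storage here; `Shard(dims=[0, 1])` is the hypothetical placement discussed above, and the shapes are illustrative):

```python
import numpy as np

B, S, D, world_size = 2, 8, 4, 4
x = np.arange(B * S * D, dtype=np.float32).reshape(B, S, D)

# Shard(1): each rank keeps a slice along the sequence dim. With B > 1 this
# slice is strided, so NCCL needs an extra compaction copy before sending.
seq_shard = x[:, : S // world_size, :]

# Sharding over the *joint* (0, 1) dims: flatten first, then slice dim 0.
# The resulting shard is one contiguous chunk of memory -- no extra copy.
flat_shard = x.reshape(B * S, D)[: (B * S) // world_size]

print(seq_shard.flags["C_CONTIGUOUS"], flat_shard.flags["C_CONTIGUOUS"])  # False True
```

The same per-rank elements are exchanged either way; only the memory layout of each shard changes.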
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu | oncall: distributed,module: dtensor | low | Major |
2,763,371,408 | PowerToys | Critical: .NET Install links are changing | ### Description of the new feature / enhancement
We are currently making an unexpected change to the way that .NET installers and archives are distributed. This change may affect you and may require changes in your development, CI, and/or production infrastructure. We expect that most users will not be directly affected, however, it is critical that you validate if you are affected and to watch for downtime or other kinds of breakage.
The most up-to-date status is being maintained at [dotnet/core #9671](https://github.com/dotnet/core/issues/9671). Please look to that issue to stay current.
https://devblogs.microsoft.com/dotnet/critical-dotnet-install-links-are-changing/
| Priority-0,Area-Build | low | Minor |
2,763,389,838 | godot | LSP performance issues on "slower" devices | ### Tested versions
- Reproducable in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.26100 - Vulkan (Mobile) - integrated AMD Radeon(TM) Graphics (Advanced Micro Devices, Inc.; 31.0.21024.2004) - AMD Ryzen 7 7735U with Radeon Graphics (16 Threads)
### Issue description
Using the LSP server via NeoVim or VSCode on my laptop results in very slow responses (several seconds) when working with specific scripts in my project.
I have investigated the LSP client-side settings and could not find any differences between files that respond quickly and those with slow responses. Additionally, the project runs without any issues on my desktop PC. Running the LSP server on my desktop and connecting to it from my laptop also eliminates the problem.
File length does not seem to be a factor. Even after deleting the entire content of a problematic file, leaving only the class name and its extends declaration, the slow response persists. Similarly, the number of references to the class does not appear to influence performance. Certain classes consistently cause slow responses, while others resolve normally.
I have also noticed a significant increase in memory usage by Godot when editing files with slow responses—about 2–3 times the usual amount. This behavior does not occur on my desktop PC, even for the same files/classes.
Unfortunately, I don’t have any additional insights at this time, as I haven’t been able to identify a specific cause within the project, the files, or their references. However, it seems there might be a performance issue with the LSP server that the desktop's processing power overcomes but manifests as slow responses on the laptop.
If anyone has experienced a similar issue or can provide guidance on debugging the LSP server side, I would greatly appreciate your help.
Thanks in advance!
### Steps to reproduce
Unfortunately not yet.
### Minimal reproduction project (MRP)
Unfortunately not yet.
_Bugsquad Edit:_
- _MRP: [lsp_slowdown_large_scenes.zip](https://github.com/user-attachments/files/18301626/lsp_slowdown_large_scenes.zip)_ | topic:gdscript,topic:editor,performance | low | Critical |
2,763,425,093 | godot | UI issues using the Microsoft Surface | ### Tested versions
Using Godot v4.3 stable
### System information
Windows 11, Surface Pro 11th Edition
### Issue description
Hello! I got the new Surface Pro 11th Edition to work on the train, in hotels, or wherever life leads me.
There seem to be some UI problems: duplicated buttons and important parts moved out of the frame. It's not just a visual problem; it also blocks other UI (invisibly). Is that a known issue? Is it Surface-specific? I know a bit about computers (uhuuw....), but I'm not really sure where a Surface sits between tablet and PC. I already realized that there are, for example, weird problems with swiping in Slay the Spire, so it might be something Surface-specific... or maybe something connected to touch screens?
If you have a Surface and know your way around Godot, I would be happy about a fix.
Example of the bug:


### Steps to reproduce
Using the Microsoft Surface Pro 11th Edition, the error should instantly appear if you want to use Godot 4.3.
For me, the problem appears as soon as I want to create a project and switch the type (mobile etc.)
### Minimal reproduction project (MRP)
No project, but an engine-specific bug... it already happens outside of any project, as you see in the screenshot. But it also happens inside a project. It can block clicks and, for example, even deny access to other development types in the starter that way.
| bug,platform:windows,topic:gui | low | Critical |
2,763,437,747 | pytorch | Add a knob to control how many blocks are used by persistent matmul/attn kernels | ### 🚀 The feature, motivation and pitch
We train a transformer-style model using FSDP, and we have a very good overlap between the matmul kernels (from cuBLAS) and the NCCL operation in the background. However, when profiling, we have observed that the **matmuls take 2x as long** to complete when they are overlapped with a NCCL kernel!
We believe this is easily explained: we're running on H100 GPUs and, upon inspection, all the matmuls look like they are using "persistent" kernels. That is, they launch as many CUDA blocks as there are SMs on the GPU (i.e., 132) and each of these blocks will process several tiles in a row. What we're observing is thus a form of "wave quantization" where, due to NCCL occupying some SMs, not all blocks of the matmuls can be scheduled at once, thus breaking them into two waves, which thus take twice as long to complete.
Since NCCL only occupies ~10% of the SMs, it would be much more efficient if the matmuls tried to launch a number of blocks that corresponds to ~90% of the SMs. This would allow the two kernels to run simultaneously in a single wave, with the matmuls only being ~10% slower, not ~50%!
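To make the wave arithmetic concrete (132 SMs is the H100 figure from above; the NCCL SM count is an assumption standing in for "~10%"):

```python
import math

def waves(num_blocks: int, free_sms: int) -> int:
    """Number of waves a grid needs when only `free_sms` SMs are available."""
    return math.ceil(num_blocks / free_sms)

TOTAL_SMS = 132           # H100 SM count
NCCL_SMS = 14             # assumption: roughly 10% of SMs held by the NCCL kernel
free = TOTAL_SMS - NCCL_SMS

# Persistent kernel that always launches one block per SM:
print(waves(TOTAL_SMS, free))   # 2 -> the matmul takes ~2x as long
# Kernel sized to the SMs actually free (the requested knob):
print(waves(free, free))        # 1 -> only ~132/118 extra work per block (~12%)
```

This is the two-wave quantization visible in the profiler traces below.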
For that, however, we need PyTorch to add a new knob allowing us to control such a value, and to forward that knob when launching its cuBLAS kernels (and others).
### Alternatives
None. We couldn't find any environment variable provided by cuBLAS that allows us to override the number of blocks launched.
### Additional context
With longer NCCL kernels, matmuls take a long time:
<img width="1555" alt="Screenshot 2024-12-30 at 17 29 23" src="https://github.com/user-attachments/assets/d91d192e-16e9-4108-9d8e-5cb7caef80f6" />
With shorter NCCL kernels, the non-overlapped matmuls now take less time:
<img width="1439" alt="Screenshot 2024-12-30 at 17 29 42" src="https://github.com/user-attachments/assets/6e1fff67-b1a8-4b3b-a582-6648fc8b00bf" />
cc @ptrblck @msaroufim @eqy @csarofeen @xwang233 @jianyuh @nikitaved @pearu @mruberry @walterddr @Lezcano | module: cuda,triaged,module: cublas,module: linear algebra | low | Major |
2,763,447,812 | svelte | Migration guide: Mention `$state.snapshot` as potentially being required | ### Describe the problem
Third-party APIs (and some built-ins like `structuredClone`) may be unable to deal with state proxies, which can cause errors/confusion.
The fact that proxies are created by `$state` and how that matters is not explained. (There is only a single mention of proxies in the "Modern browser required" section.)
### Describe the proposed solution
- Explain that objects and arrays are proxied.
- Give an example where a proxy could cause an error and how `$state.snapshot` helps.
### Importance
nice to have | documentation | low | Critical |
2,763,461,431 | PowerToys | Always on Top - Excel Protected view stolen focus | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Always on Top
### Steps to reproduce
1. Open two instances of Microsoft Excel 2021 (no idea if other versions of Office have this behaviour), with at least one of them opened in protected view.
2. Enable Always on Top on one of the two instances
[PowerToysReport_2024-12-30-17-40-48.zip](https://github.com/user-attachments/files/18276391/PowerToysReport_2024-12-30-17-40-48.zip)
### ✔️ Expected Behavior
The non-pinned Excel instance should be interactable (i.e., the user should be able to move it, resize it, and so on).
### ❌ Actual Behavior
The instance pinned by Always on Top steals the focus from the other instance, rendering it impossible to move or resize.
If none of the instances are running in read/write mode, the behaviour doesn't occur.
The behaviour also doesn't occur with Word. I don't have the chance to test it with PowerPoint, but I suspect this issue only affects Excel.
Attaching a screen capture that illustrates this bug.
https://github.com/user-attachments/assets/6726f7e9-1a16-4ec0-9371-76eeb2846e1f
### Other Software
Microsoft Office 2021 | Issue-Bug,Needs-Triage | low | Critical |
2,763,480,272 | TypeScript | Allow dynamic usage of `import = require` syntax | ### 🔍 Search Terms
dynamic import, require, verbatimmodulesyntax
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
I propose that `import = require()` should be allowed anywhere a `require()` statement is allowed (under appropriate module settings).
```ts
function foo() {
import foo = require('./module.js');
}
```
### 📃 Motivating Example
I've been trying to convert a commonjs project to verbatimModuleSyntax. Currently dynamic JSON imports are written as
```ts
function foo() {
const json = await import('./foo.json');
}
```
which are rewritten as `Promise.resolve().then(() => require('./foo.json'));` by TS due to my module setting. The `import()` syntax currently has the advantage that TS statically analyzes and provides types for the imported JSON.
I'd like to convert the JSON import to a `require()` call, however, a bare `require()` loses static analysis. Therefore, I'd like to be able to use
```ts
function foo() {
import json = require('./foo.json');
}
```
in order to have type inference and static analysis of the json file (also to have it copied to the output build).
(using a node `import()`-from-CJS is not available to me at this time)
### 💻 Use Cases
1. What do you want to use this for? Dynamic `require()`s with static analysis, under `verbatimModuleSyntax`.
2. What shortcomings exist with current approaches? No static analysis of dynamic `require()`s.
3. What workarounds are you using in the meantime? Not using `verbatimModuleSyntax`.
***
Somewhat relates to https://github.com/microsoft/TypeScript/issues/60598 | Suggestion,Awaiting More Feedback | low | Major |
2,763,495,103 | pytorch | Return attention weights in scaled_dot_product_attention | ### 🚀 The feature, motivation and pitch
I'd like to reopen the request #119811, but for a special case, namely generative inference, where the Q tensor is very small (just a single token). The request is to return the attention weights along with the normal MHA output.
Why is this important? In order to implement KV cache eviction strategies, like heavy hitter oracle, these weights are needed.
The suggestion in the issue, to pass an identity as V, is not useful, because this tensor would be huge, defeating the whole purpose of efficient MHA in this case. It would be at least the square of the KV cache length. And the application is not just for visualization; it must run efficiently inside generation.
I dug a bit into your sources. It seems that some functions called by `scaled_dot_product_attention` return attn_weights as a second argument, for example `_scaled_dot_product_attention_math` in `aten/src/ATen/native/transformers/attention.cpp`. But most others do not. They return tuples with `(output, logsumexp, ...)`, and `scaled_dot_product_attention` returns `output`. I don't really know what `logsumexp` is, but it has the wrong shape; it scales with the size of Q.
Any hint for how to get this done would be appreciated, except for the "trick" to pass the identity as V.
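For concreteness, the quantity being requested is just the softmax row for the single query token, which is cheap to materialize at decode time, unlike the identity-as-V trick. A stdlib-only sketch of the math (plain lists, not the fused kernel; function name and shapes are my own):

```python
import math

def single_token_attention(q, K, V):
    """SDPA for one query token. q: list[d]; K, V: list[T] of list[d].

    Returns (output, weights); `weights` (shape T) is exactly what KV-cache
    eviction strategies like heavy-hitter oracle need.
    """
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
    m = max(scores)                           # max-subtraction for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    out = [sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))]
    return out, weights
```

The fused kernels already compute these weights internally; the request is only to expose them for the decode case where Q is a single token.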
### Alternatives
_No response_
### Additional context
_No response_ | triaged,module: sdpa | low | Minor |
2,763,495,538 | pytorch | DISABLED test_setting_meta_device_model_broadcasting_and_memory (__main__.TestStateDict) | Platforms: rocm
Started probably at https://github.com/pytorch/pytorch/pull/142845
https://hud.pytorch.org/hud/pytorch/pytorch/9d026000de01bbd4d5c97bdca88cc6228507617a/3?per_page=100&name_filter=distributed&mergeLF=true
https://github.com/pytorch/pytorch/actions/runs/12409302699/job/34672198799
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22distributed%2Fcheckpoint%2Ftest_state_dict.py%3A%3ATestStateDict%3A%3Atest_setting_meta_device_model_broadcasting_and_memory%22%5D)).
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | oncall: distributed,module: rocm,triaged,skipped | low | Critical |
2,763,501,670 | flutter | Android ANR when running on x86 device | ### Steps to reproduce
This ANR issue was reported by the Google Play pre-launch report for a production app built with Flutter 3.27.1.
Running on a device with the following specifications:
```
Model name: Small Desktop (x86) (virtual)
Android version: Android 12L (SDK 32)
Locale: en_CA
Screen size: 1366 x 768
Screen density (DPI): 160
RAM (total memory): -
OpenGL ES version: -
ABI: x86_64
CPU: -
```
### Expected results
The application should start normally.
### Actual results
ANR with the stacktrace linked below.
### Logs
<details open><summary>Logs</summary>
```console
"main" tid=1 Native
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x00000000004a4d2d /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__threading_support:335)
#04 pc 0x00000000004bd57d /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/fml/synchronization/waitable_event.cc:75)
#05 pc 0x000000000049cd98 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/shell/platform/android/platform_view_android.cc:155)
#06 pc 0x00000000003a03ab /apex/com.android.art/lib64/libart.so (art_quick_generic_jni_trampoline+219)
#07 pc 0x000000000038d053 /apex/com.android.art/lib64/libart.so (nterp_helper+5643)
#08 pc 0x0000000000b77c34 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/oat/x86_64/base.vdex (io.flutter.embedding.engine.FlutterJNI.onSurfaceCreated+24)
#09 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#10 pc 0x0000000000b7f0aa /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/oat/x86_64/base.vdex (io.flutter.embedding.engine.renderer.FlutterRenderer.startRenderingToSurface+34)
#11 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#12 pc 0x0000000000b6c708 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/oat/x86_64/base.vdex (io.flutter.embedding.android.FlutterSurfaceView.connectSurfaceToRenderer+44)
#13 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#14 pc 0x0000000000b6c5c8 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/oat/x86_64/base.vdex (io.flutter.embedding.android.FlutterSurfaceView.access$200)
#15 pc 0x000000000038ba80 /apex/com.android.art/lib64/libart.so (nterp_helper+56)
#16 pc 0x0000000000b6c32a /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/oat/x86_64/base.vdex (io.flutter.embedding.android.FlutterSurfaceView$1.surfaceCreated+46)
#17 pc 0x000000000038d6f9 /apex/com.android.art/lib64/libart.so (nterp_helper+7345)
#18 pc 0x000000000031f374 /system/framework/framework.jar (android.view.SurfaceView.updateSurface+984)
#19 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#20 pc 0x000000000031d738 /system/framework/framework.jar (android.view.SurfaceView.lambda$new$0$SurfaceView+36)
#21 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#22 pc 0x000000000031cb04 /system/framework/framework.jar (android.view.SurfaceView$$ExternalSyntheticLambda2.onPreDraw+4)
#23 pc 0x000000000038d63d /apex/com.android.art/lib64/libart.so (nterp_helper+7157)
#24 pc 0x0000000000346f28 /system/framework/framework.jar (android.view.ViewTreeObserver.dispatchOnPreDraw+56)
#25 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#26 pc 0x0000000000343c42 /system/framework/framework.jar (android.view.ViewRootImpl.performTraversals+5278)
#27 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#28 pc 0x000000000034000a /system/framework/framework.jar (android.view.ViewRootImpl.doTraversal+62)
#29 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#30 pc 0x0000000000339bf4 /system/framework/framework.jar (android.view.ViewRootImpl$TraversalRunnable.run+4)
#31 pc 0x000000000038d63d /apex/com.android.art/lib64/libart.so (nterp_helper+7157)
#32 pc 0x00000000002dc22c /system/framework/framework.jar (android.view.Choreographer$CallbackRecord.run+40)
#33 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#34 pc 0x00000000002dc96a /system/framework/framework.jar (android.view.Choreographer.doCallbacks+150)
#35 pc 0x000000000038d096 /apex/com.android.art/lib64/libart.so (nterp_helper+5710)
#36 pc 0x00000000002dcc58 /system/framework/framework.jar (android.view.Choreographer.doFrame+540)
#37 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#38 pc 0x00000000002dc3ae /system/framework/framework.jar (android.view.Choreographer$FrameDisplayEventReceiver.run+22)
#39 pc 0x0000000002080b06 /memfd:jit-zygote-cache (android.os.Handler.dispatchMessage+86)
#40 pc 0x000000000038c945 /apex/com.android.art/lib64/libart.so (nterp_helper+3837)
#41 pc 0x00000000004595f6 /system/framework/framework.jar (android.os.Looper.loopOnce+334)
#42 pc 0x000000000209d9e6 /memfd:jit-zygote-cache (android.os.Looper.loop+550)
#43 pc 0x000000000038baed /apex/com.android.art/lib64/libart.so (nterp_helper+165)
#44 pc 0x00000000001c8a1e /system/framework/framework.jar (android.app.ActivityThread.main+202)
#45 pc 0x00000000003953f6 /apex/com.android.art/lib64/libart.so (art_quick_invoke_static_stub+806)
#46 pc 0x000000000041da89 /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+233)
#47 pc 0x00000000008194c2 /apex/com.android.art/lib64/libart.so (_jobject* art::InvokeMethod<(art::PointerSize)8>(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jobject*, _jobject*, unsigned long)+1442)
#48 pc 0x0000000000772698 /apex/com.android.art/lib64/libart.so (art::Method_invoke(_JNIEnv*, _jobject*, _jobject*, _jobjectArray*)+56)
at io.flutter.embedding.engine.FlutterJNI.nativeSurfaceCreated (Native method)
at io.flutter.embedding.engine.FlutterJNI.onSurfaceCreated (FlutterJNI.java:617)
at io.flutter.embedding.engine.renderer.FlutterRenderer.startRenderingToSurface (FlutterRenderer.java:1087)
at io.flutter.embedding.android.FlutterSurfaceView.connectSurfaceToRenderer (FlutterSurfaceView.java:276)
at io.flutter.embedding.android.FlutterSurfaceView.access$200 (FlutterSurfaceView.java:36)
at io.flutter.embedding.android.FlutterSurfaceView$1.surfaceCreated (FlutterSurfaceView.java:59)
at android.view.SurfaceView.updateSurface (SurfaceView.java:1172)
at android.view.SurfaceView.lambda$new$0$SurfaceView (SurfaceView.java:162)
at android.view.SurfaceView$$ExternalSyntheticLambda2.onPreDraw (unavailable)
at android.view.ViewTreeObserver.dispatchOnPreDraw (ViewTreeObserver.java:1093)
at android.view.ViewRootImpl.performTraversals (ViewRootImpl.java:3362)
at android.view.ViewRootImpl.doTraversal (ViewRootImpl.java:2179)
at android.view.ViewRootImpl$TraversalRunnable.run (ViewRootImpl.java:8787)
at android.view.Choreographer$CallbackRecord.run (Choreographer.java:1037)
at android.view.Choreographer.doCallbacks (Choreographer.java:845)
at android.view.Choreographer.doFrame (Choreographer.java:780)
at android.view.Choreographer$FrameDisplayEventReceiver.run (Choreographer.java:1022)
at android.os.Handler.handleCallback (Handler.java:938)
at android.os.Handler.dispatchMessage (Handler.java:99)
at android.os.Looper.loopOnce (Looper.java:201)
at android.os.Looper.loop (Looper.java:288)
at android.app.ActivityThread.main (ActivityThread.java:7870)
at java.lang.reflect.Method.invoke (Native method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:548)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:1003)
"Signal Catcher" tid=5 Runnable
#00 pc 0x000000000073ccef /apex/com.android.art/lib64/libart.so (art::DumpNativeStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, int, BacktraceMap*, char const*, art::ArtMethod*, void*, bool)+127)
#01 pc 0x0000000000882530 /apex/com.android.art/lib64/libart.so (art::Thread::DumpStack(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool, BacktraceMap*, bool) const+368)
#02 pc 0x00000000008a32ba /apex/com.android.art/lib64/libart.so (art::DumpCheckpoint::Run(art::Thread*)+1082)
#03 pc 0x000000000089c16c /apex/com.android.art/lib64/libart.so (art::ThreadList::RunCheckpoint(art::Closure*, art::Closure*)+220)
#04 pc 0x000000000089b3db /apex/com.android.art/lib64/libart.so (art::ThreadList::Dump(std::__1::basic_ostream<char, std::__1::char_traits<char> >&, bool)+1723)
#05 pc 0x000000000089abef /apex/com.android.art/lib64/libart.so (art::ThreadList::DumpForSigQuit(std::__1::basic_ostream<char, std::__1::char_traits<char> >&)+1423)
#06 pc 0x0000000000835fa8 /apex/com.android.art/lib64/libart.so (art::Runtime::DumpForSigQuit(std::__1::basic_ostream<char, std::__1::char_traits<char> >&)+216)
#07 pc 0x000000000084c004 /apex/com.android.art/lib64/libart.so (art::SignalCatcher::HandleSigQuit()+1924)
#08 pc 0x000000000084ad75 /apex/com.android.art/lib64/libart.so (art::SignalCatcher::Run(void*)+341)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Metrics Background Reporting Thread" tid=4 Native
#00 pc 0x000000000005ad38 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+24)
#01 pc 0x0000000000426124 /apex/com.android.art/lib64/libart.so (art::ConditionVariable::TimedWait(art::Thread*, long, int)+148)
#02 pc 0x00000000006ffd2a /apex/com.android.art/lib64/libart.so (art::metrics::MetricsReporter::BackgroundThreadRun()+2634)
#03 pc 0x0000000000701f16 /apex/com.android.art/lib64/libart.so (void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (art::metrics::MetricsReporter::*)(), art::metrics::MetricsReporter*> >(void*)+54)
#04 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#05 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"ADB-JDWP Connection Control Thread" tid=6 Waiting
#00 pc 0x00000000000b3a5a /apex/com.android.runtime/lib64/bionic/libc.so (__ppoll+10)
#01 pc 0x000000000006a21a /apex/com.android.runtime/lib64/bionic/libc.so (poll+74)
#02 pc 0x000000000000a711 /apex/com.android.art/lib64/libadbconnection.so (adbconnection::AdbConnectionState::RunPollLoop(art::Thread*)+849)
#03 pc 0x0000000000008c01 /apex/com.android.art/lib64/libadbconnection.so (adbconnection::CallbackFunction(void*)+1425)
#04 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#05 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"perfetto_hprof_listener" tid=7 Native
#00 pc 0x00000000000b26f5 /apex/com.android.runtime/lib64/bionic/libc.so (read+5)
#01 pc 0x0000000000029790 /apex/com.android.art/lib64/libperfetto_hprof.so (void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, ArtPlugin_Initialize::$_33> >(void*)+288)
#02 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#03 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Binder:10035_1" tid=8 Native
#00 pc 0x00000000000b2997 /apex/com.android.runtime/lib64/bionic/libc.so (__ioctl+7)
#01 pc 0x0000000000067ad8 /apex/com.android.runtime/lib64/bionic/libc.so (ioctl+216)
#02 pc 0x0000000000058a9f /system/lib64/libbinder.so (android::IPCThreadState::talkWithDriver(bool)+319)
#03 pc 0x0000000000058d80 /system/lib64/libbinder.so (android::IPCThreadState::getAndExecuteCommand()+16)
#04 pc 0x000000000005982f /system/lib64/libbinder.so (android::IPCThreadState::joinThreadPool(bool)+63)
#05 pc 0x0000000000085857 /system/lib64/libbinder.so (android::PoolThread::threadLoop()+23)
#06 pc 0x0000000000013b69 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+313)
#07 pc 0x00000000000df2ac /system/lib64/libandroid_runtime.so (android::AndroidRuntime::javaThreadShell(void*)+140)
#08 pc 0x00000000000133d9 /system/lib64/libutils.so (thread_data_t::trampoline(thread_data_t const*)+425)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Jit thread pool worker thread 0" tid=9 Native
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x0000000000425d1e /apex/com.android.art/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+110)
#02 pc 0x00000000008a4dd7 /apex/com.android.art/lib64/libart.so (art::ThreadPool::GetTask(art::Thread*)+103)
#03 pc 0x00000000008a40f1 /apex/com.android.art/lib64/libart.so (art::ThreadPoolWorker::Run()+113)
#04 pc 0x00000000008a3be8 /apex/com.android.art/lib64/libart.so (art::ThreadPoolWorker::Callback(void*)+264)
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Binder:10035_2" tid=10 Native
#00 pc 0x00000000000b2997 /apex/com.android.runtime/lib64/bionic/libc.so (__ioctl+7)
#01 pc 0x0000000000067ad8 /apex/com.android.runtime/lib64/bionic/libc.so (ioctl+216)
#02 pc 0x0000000000058a9f /system/lib64/libbinder.so (android::IPCThreadState::talkWithDriver(bool)+319)
#03 pc 0x0000000000058d80 /system/lib64/libbinder.so (android::IPCThreadState::getAndExecuteCommand()+16)
#04 pc 0x000000000005982f /system/lib64/libbinder.so (android::IPCThreadState::joinThreadPool(bool)+63)
#05 pc 0x0000000000085857 /system/lib64/libbinder.so (android::PoolThread::threadLoop()+23)
#06 pc 0x0000000000013b69 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+313)
#07 pc 0x00000000000df2ac /system/lib64/libandroid_runtime.so (android::AndroidRuntime::javaThreadShell(void*)+140)
#08 pc 0x00000000000133d9 /system/lib64/libutils.so (thread_data_t::trampoline(thread_data_t const*)+425)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"FinalizerWatchdogDaemon" tid=11 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.Object.wait (Object.java:568)
at java.lang.Daemons$FinalizerWatchdogDaemon.sleepUntilNeeded (Daemons.java:341)
at java.lang.Daemons$FinalizerWatchdogDaemon.runInternal (Daemons.java:321)
at java.lang.Daemons$Daemon.run (Daemons.java:139)
at java.lang.Thread.run (Thread.java:920)
"FinalizerDaemon" tid=12 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.ref.ReferenceQueue.remove (ReferenceQueue.java:190)
at java.lang.ref.ReferenceQueue.remove (ReferenceQueue.java:211)
at java.lang.Daemons$FinalizerDaemon.runInternal (Daemons.java:273)
at java.lang.Daemons$Daemon.run (Daemons.java:139)
at java.lang.Thread.run (Thread.java:920)
"ReferenceQueueDaemon" tid=13 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.Object.wait (Object.java:568)
at java.lang.Daemons$ReferenceQueueDaemon.runInternal (Daemons.java:217)
at java.lang.Daemons$Daemon.run (Daemons.java:139)
at java.lang.Thread.run (Thread.java:920)
"HeapTaskDaemon" tid=14 Waiting
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x0000000000425d1e /apex/com.android.art/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+110)
#02 pc 0x00000000005719c1 /apex/com.android.art/lib64/libart.so (art::gc::TaskProcessor::GetTask(art::Thread*)+529)
#03 pc 0x0000000000572242 /apex/com.android.art/lib64/libart.so (art::gc::TaskProcessor::RunAllTasks(art::Thread*)+66)
at dalvik.system.VMRuntime.runHeapTasks (Native method)
at java.lang.Daemons$HeapTaskDaemon.runInternal (Daemons.java:531)
at java.lang.Daemons$Daemon.run (Daemons.java:139)
at java.lang.Thread.run (Thread.java:920)
"Binder:10035_3" tid=15 Native
#00 pc 0x00000000000b2997 /apex/com.android.runtime/lib64/bionic/libc.so (__ioctl+7)
#01 pc 0x0000000000067ad8 /apex/com.android.runtime/lib64/bionic/libc.so (ioctl+216)
#02 pc 0x0000000000058a9f /system/lib64/libbinder.so (android::IPCThreadState::talkWithDriver(bool)+319)
#03 pc 0x0000000000058d80 /system/lib64/libbinder.so (android::IPCThreadState::getAndExecuteCommand()+16)
#04 pc 0x000000000005982f /system/lib64/libbinder.so (android::IPCThreadState::joinThreadPool(bool)+63)
#05 pc 0x0000000000085857 /system/lib64/libbinder.so (android::PoolThread::threadLoop()+23)
#06 pc 0x0000000000013b69 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+313)
#07 pc 0x00000000000df2ac /system/lib64/libandroid_runtime.so (android::AndroidRuntime::javaThreadShell(void*)+140)
#08 pc 0x00000000000133d9 /system/lib64/libutils.so (thread_data_t::trampoline(thread_data_t const*)+425)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Profile Saver" tid=16 Native
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x0000000000425d1e /apex/com.android.art/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+110)
#02 pc 0x00000000005e787e /apex/com.android.art/lib64/libart.so (art::ProfileSaver::Run()+526)
#03 pc 0x00000000005edd7b /apex/com.android.art/lib64/libart.so (art::ProfileSaver::RunProfileSaverThread(void*)+171)
#04 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#05 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"pool-4-thread-1" tid=18 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at java.lang.Thread.run (Thread.java:920)
"Firebase Background Thread #0" tid=19 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.firebase.concurrent.CustomThreadFactory.lambda$newThread$0$com-google-firebase-concurrent-CustomThreadFactory (CustomThreadFactory.java:47)
at com.google.firebase.concurrent.CustomThreadFactory$$ExternalSyntheticLambda0.run (D8$$SyntheticClass)
at java.lang.Thread.run (Thread.java:920)
"Firebase Background Thread #1" tid=20 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.firebase.concurrent.CustomThreadFactory.lambda$newThread$0$com-google-firebase-concurrent-CustomThreadFactory (CustomThreadFactory.java:47)
at com.google.firebase.concurrent.CustomThreadFactory$$ExternalSyntheticLambda0.run (D8$$SyntheticClass)
at java.lang.Thread.run (Thread.java:920)
"Firebase Background Thread #2" tid=22 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.firebase.concurrent.CustomThreadFactory.lambda$newThread$0$com-google-firebase-concurrent-CustomThreadFactory (CustomThreadFactory.java:47)
at com.google.firebase.concurrent.CustomThreadFactory$$ExternalSyntheticLambda0.run (D8$$SyntheticClass)
at java.lang.Thread.run (Thread.java:920)
"Firebase Background Thread #3" tid=23 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await (AbstractQueuedSynchronizer.java:2067)
at java.util.concurrent.LinkedBlockingQueue.take (LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at com.google.firebase.concurrent.CustomThreadFactory.lambda$newThread$0$com-google-firebase-concurrent-CustomThreadFactory (CustomThreadFactory.java:47)
at com.google.firebase.concurrent.CustomThreadFactory$$ExternalSyntheticLambda0.run (D8$$SyntheticClass)
at java.lang.Thread.run (Thread.java:920)
"GmsDynamite" tid=24 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.Object.wait (Object.java:568)
at com.google.android.gms.dynamite.zza.run (com.google.android.gms:play-services-basement@@18.3.0:2)
"DefaultDispatcher-worker-1" tid=26 Timed Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.parkNanos (LockSupport.java:353)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.park (CoroutineScheduler.kt:838)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.tryPark (CoroutineScheduler.kt:783)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker (CoroutineScheduler.kt:731)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run (CoroutineScheduler.kt:684)
"queued-work-looper" tid=27 Native
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x000000000001809a /system/lib64/libutils.so (android::Looper::pollInner(int)+250)
#02 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#03 pc 0x0000000000169fa3 /system/lib64/libandroid_runtime.so (android::android_os_MessageQueue_nativePollOnce(_JNIEnv*, _jobject*, long, int)+35)
#04 pc 0x00000000003a03ab /apex/com.android.art/lib64/libart.so (art_quick_generic_jni_trampoline+219)
#05 pc 0x0000000002073363 /memfd:jit-zygote-cache (android.os.MessageQueue.next+291)
#06 pc 0x000000000038c945 /apex/com.android.art/lib64/libart.so (nterp_helper+3837)
#07 pc 0x00000000004594b4 /system/framework/framework.jar (android.os.Looper.loopOnce+12)
#08 pc 0x000000000209d9e6 /memfd:jit-zygote-cache (android.os.Looper.loop+550)
#09 pc 0x000000000038baed /apex/com.android.art/lib64/libart.so (nterp_helper+165)
#10 pc 0x000000000042f2d0 /system/framework/framework.jar (android.os.HandlerThread.run+56)
#11 pc 0x0000000000395094 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+756)
#12 pc 0x000000000041da7a /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+218)
#13 pc 0x000000000081aabe /apex/com.android.art/lib64/libart.so (art::JValue art::InvokeVirtualOrInterfaceWithJValues<art::ArtMethod*>(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, art::ArtMethod*, jvalue const*)+478)
#14 pc 0x000000000087a08f /apex/com.android.art/lib64/libart.so (art::Thread::CreateCallback(void*)+1343)
#15 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#16 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
at android.os.MessageQueue.nativePollOnce (Native method)
at android.os.MessageQueue.next (MessageQueue.java:335)
at android.os.Looper.loopOnce (Looper.java:161)
at android.os.Looper.loop (Looper.java:288)
at android.os.HandlerThread.run (HandlerThread.java:67)
"RenderThread" tid=29 Native
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x000000000001809a /system/lib64/libutils.so (android::Looper::pollInner(int)+250)
#02 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#03 pc 0x000000000052f9d5 /system/lib64/libhwui.so (android::uirenderer::ThreadBase::waitForWork()+133)
#04 pc 0x000000000052f837 /system/lib64/libhwui.so (android::uirenderer::renderthread::RenderThread::threadLoop()+87)
#05 pc 0x0000000000013b69 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+313)
#06 pc 0x00000000000133d9 /system/lib64/libutils.so (thread_data_t::trampoline(thread_data_t const*)+425)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"DefaultDispatcher-worker-2" tid=30 Timed Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.parkNanos (LockSupport.java:353)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.park (CoroutineScheduler.kt:838)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.tryPark (CoroutineScheduler.kt:783)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker (CoroutineScheduler.kt:731)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run (CoroutineScheduler.kt:684)
"OkHttp ConnectionPool" tid=32 Timed Waiting
at java.lang.Object.wait (Native method)
at com.android.okhttp.ConnectionPool$1.run (ConnectionPool.java:106)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at java.lang.Thread.run (Thread.java:920)
"Thread-8" tid=35 Native
#00 pc 0x000000000005ad38 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+24)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c691a /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_timedwait+90)
#03 pc 0x0000000000028dd5 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libffmpegkit.so (callbackThreadFunction+645)
#04 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#05 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Okio Watchdog" tid=36 Waiting
at java.lang.Object.wait (Native method)
at java.lang.Object.wait (Object.java:442)
at java.lang.Object.wait (Object.java:568)
at com.android.okhttp.okio.AsyncTimeout.awaitTimeout (AsyncTimeout.java:313)
at com.android.okhttp.okio.AsyncTimeout.access$000 (AsyncTimeout.java:42)
at com.android.okhttp.okio.AsyncTimeout$Watchdog.run (AsyncTimeout.java:288)
"1.ui" tid=37 Native
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x000000000001809a /system/lib64/libutils.so (android::Looper::pollInner(int)+250)
#02 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#03 pc 0x000000000001938f /system/lib64/libandroid.so (ALooper_pollOnce+95)
#04 pc 0x00000000004c04df /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/fml/platform/android/message_loop_android.cc:67)
#05 pc 0x00000000004be018 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/fml/message_loop_impl.cc:94)
#06 pc 0x00000000004bde26 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__functional/function.h:1187)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"Binder:10035_4" tid=38 Native
#00 pc 0x00000000000b2997 /apex/com.android.runtime/lib64/bionic/libc.so (__ioctl+7)
#01 pc 0x0000000000067ad8 /apex/com.android.runtime/lib64/bionic/libc.so (ioctl+216)
#02 pc 0x0000000000058a9f /system/lib64/libbinder.so (android::IPCThreadState::talkWithDriver(bool)+319)
#03 pc 0x0000000000058d80 /system/lib64/libbinder.so (android::IPCThreadState::getAndExecuteCommand()+16)
#04 pc 0x000000000005982f /system/lib64/libbinder.so (android::IPCThreadState::joinThreadPool(bool)+63)
#05 pc 0x0000000000085857 /system/lib64/libbinder.so (android::PoolThread::threadLoop()+23)
#06 pc 0x0000000000013b69 /system/lib64/libutils.so (android::Thread::_threadLoop(void*)+313)
#07 pc 0x00000000000df2ac /system/lib64/libandroid_runtime.so (android::AndroidRuntime::javaThreadShell(void*)+140)
#08 pc 0x00000000000133d9 /system/lib64/libutils.so (thread_data_t::trampoline(thread_data_t const*)+425)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"AsyncTask #1" tid=2 Waiting
at sun.misc.Unsafe.park (Native method)
at java.util.concurrent.locks.LockSupport.park (LockSupport.java:190)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill (SynchronousQueue.java:459)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer (SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.take (SynchronousQueue.java:920)
at java.util.concurrent.ThreadPoolExecutor.getTask (ThreadPoolExecutor.java:1092)
at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1152)
at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
at java.lang.Thread.run (Thread.java:920)
"re.***" tid=10042 Unknown
#00 pc 0x00000000000b3137 /apex/com.android.runtime/lib64/bionic/libc.so (nanosleep+7)
#01 pc 0x000000000006cfcb /apex/com.android.runtime/lib64/bionic/libc.so (sleep+43)
#02 pc 0x00000000005d4059 /apex/com.android.art/lib64/libart.so (art::jit::RunPollingThread(void*)+41)
#03 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#04 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"re.***" tid=10089 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x000000000000d0d0 /vendor/lib64/libandroidemu.so (android::base::guest::MessageChannelBase::beforeRead()+48)
#04 pc 0x000000000000e800 /vendor/lib64/libandroidemu.so (android::base::guest::WorkPoolThread::threadFunc()+144)
#05 pc 0x000000000000e769 /vendor/lib64/libandroidemu.so (std::__1::__function::__func<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'(), std::__1::allocator<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'()>, long ()>::operator()()+9)
#06 pc 0x000000000000d5df /vendor/lib64/libandroidemu.so (android::base::guest::Thread::thread_main(void*)+95)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"re.***" tid=10090 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x000000000000d0d0 /vendor/lib64/libandroidemu.so (android::base::guest::MessageChannelBase::beforeRead()+48)
#04 pc 0x000000000000e800 /vendor/lib64/libandroidemu.so (android::base::guest::WorkPoolThread::threadFunc()+144)
#05 pc 0x000000000000e769 /vendor/lib64/libandroidemu.so (std::__1::__function::__func<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'(), std::__1::allocator<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'()>, long ()>::operator()()+9)
#06 pc 0x000000000000d5df /vendor/lib64/libandroidemu.so (android::base::guest::Thread::thread_main(void*)+95)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"re.***" tid=10091 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x000000000000d0d0 /vendor/lib64/libandroidemu.so (android::base::guest::MessageChannelBase::beforeRead()+48)
#04 pc 0x000000000000e800 /vendor/lib64/libandroidemu.so (android::base::guest::WorkPoolThread::threadFunc()+144)
#05 pc 0x000000000000e769 /vendor/lib64/libandroidemu.so (std::__1::__function::__func<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'(), std::__1::allocator<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'()>, long ()>::operator()()+9)
#06 pc 0x000000000000d5df /vendor/lib64/libandroidemu.so (android::base::guest::Thread::thread_main(void*)+95)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"re.***" tid=10092 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x000000000000d0d0 /vendor/lib64/libandroidemu.so (android::base::guest::MessageChannelBase::beforeRead()+48)
#04 pc 0x000000000000e800 /vendor/lib64/libandroidemu.so (android::base::guest::WorkPoolThread::threadFunc()+144)
#05 pc 0x000000000000e769 /vendor/lib64/libandroidemu.so (std::__1::__function::__func<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'(), std::__1::allocator<android::base::guest::FunctorThread::FunctorThread<android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'(), void*>(android::base::guest::WorkPoolThread::WorkPoolThread()::'lambda'()&&, android::base::guest::ThreadFlags)::'lambda'()>, long ()>::operator()()+9)
#06 pc 0x000000000000d5df /vendor/lib64/libandroidemu.so (android::base::guest::Thread::thread_main(void*)+95)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"1.raster" tid=10099 Unknown
#00 pc 0x00000000000b26f7 /apex/com.android.runtime/lib64/bionic/libc.so (read+7)
#01 pc 0x0000000000011798 /vendor/lib64/libOpenglSystemCommon.so (QemuPipeStream::commitBufferAndReadFully(unsigned long, void*, unsigned long)+232)
#02 pc 0x000000000015a16a /vendor/lib64/libvulkan_enc.so (goldfish_vk::VulkanStreamGuest::read(void*, unsigned long)+74)
#03 pc 0x00000000001be1ff /vendor/lib64/libvulkan_enc.so (goldfish_vk::VkEncoder::vkGetSwapchainGrallocUsageANDROID(VkDevice_T*, VkFormat, unsigned int, int*, unsigned int)+255)
#04 pc 0x000000000026b278 /vendor/lib64/libvulkan_enc.so (goldfish_vk::entry_vkGetSwapchainGrallocUsageANDROID(VkDevice_T*, VkFormat, unsigned int, int*)+88)
#05 pc 0x0000000000024be9 /system/lib64/libvulkan.so (vulkan::driver::CreateSwapchainKHR(VkDevice_T*, VkSwapchainCreateInfoKHR const*, VkAllocationCallbacks const*, VkSwapchainKHR_T**)+1945)
#06 pc 0x00000000008a3d2f /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/vulkan-deps/vulkan-headers/src/include/vulkan/vulkan_funcs.hpp:8604)
#07 pc 0x00000000008a30ce /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/impeller/renderer/backend/vulkan/swapchain/khr/khr_swapchain_impl_vk.cc:118)
#08 pc 0x00000000004914e3 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/impeller/renderer/backend/vulkan/swapchain/khr/khr_swapchain_vk.cc:18)
#09 pc 0x000000000049a625 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/shell/platform/android/platform_view_android.cc:152)
#10 pc 0x00000000004bbf9a /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__functional/function.h:1187)
#11 pc 0x00000000004c03d1 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/fml/platform/android/message_loop_android.cc:91)
#12 pc 0x0000000000018395 /system/lib64/libutils.so (android::Looper::pollInner(int)+1013)
#13 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#14 pc 0x000000000001938f /system/lib64/libandroid.so (ALooper_pollOnce+95)
#15 pc 0x00000000004c04df /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/fml/platform/android/message_loop_android.cc:67)
#16 pc 0x00000000004be018 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/fml/message_loop_impl.cc:94)
#17 pc 0x00000000004bde26 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__functional/function.h:1187)
#18 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#19 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"1.io" tid=10100 Unknown
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x000000000001809a /system/lib64/libutils.so (android::Looper::pollInner(int)+250)
#02 pc 0x0000000000017f3e /system/lib64/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+126)
#03 pc 0x000000000001938f /system/lib64/libandroid.so (ALooper_pollOnce+95)
#04 pc 0x00000000004c04df /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/fml/platform/android/message_loop_android.cc:67)
#05 pc 0x00000000004be018 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/fml/message_loop_impl.cc:94)
#06 pc 0x00000000004bde26 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__functional/function.h:1187)
#07 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#08 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"io.worker.1" tid=10101 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x00000000004a4d2d /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__threading_support:335)
#04 pc 0x00000000004b8ea5 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__mutex_base:398)
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"io.worker.2" tid=10102 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x00000000004a4d2d /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__threading_support:335)
#04 pc 0x00000000004b8ea5 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__mutex_base:398)
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"dart:io EventHa" tid=10103 Unknown
#00 pc 0x00000000000b395a /apex/com.android.runtime/lib64/bionic/libc.so (__epoll_pwait+10)
#01 pc 0x0000000000901363 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/dart/runtime/bin/eventhandler_linux.cc:400)
#02 pc 0x0000000000935a9e /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/dart/runtime/bin/thread_linux.cc:58)
#03 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#04 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"io.worker.1" tid=10104 Unknown
#00 pc 0x00000000000b26f7 /apex/com.android.runtime/lib64/bionic/libc.so (read+7)
#01 pc 0x0000000000011798 /vendor/lib64/libOpenglSystemCommon.so (QemuPipeStream::commitBufferAndReadFully(unsigned long, void*, unsigned long)+232)
#02 pc 0x000000000015a16a /vendor/lib64/libvulkan_enc.so (goldfish_vk::VulkanStreamGuest::read(void*, unsigned long)+74)
#03 pc 0x00000000001a3939 /vendor/lib64/libvulkan_enc.so (goldfish_vk::VkEncoder::vkCreateRenderPass(VkDevice_T*, VkRenderPassCreateInfo const*, VkAllocationCallbacks const*, VkRenderPass_T**, unsigned int)+553)
#04 pc 0x000000000024d72b /vendor/lib64/libvulkan_enc.so (goldfish_vk::entry_vkCreateRenderPass(VkDevice_T*, VkRenderPassCreateInfo const*, VkAllocationCallbacks const*, VkRenderPass_T**)+91)
#05 pc 0x000000000089fbc0 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/vulkan-deps/vulkan-headers/src/include/vulkan/vulkan_funcs.hpp:4303)
#06 pc 0x000000000089d007 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/impeller/renderer/backend/vulkan/pipeline_vk.cc:155)
#07 pc 0x000000000089bfcd /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/impeller/renderer/backend/vulkan/pipeline_library_vk.cc:191)
#08 pc 0x00000000004b90c4 /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/fml/concurrent_message_loop.cc:101)
#09 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#10 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"IplrVkFenceWait" tid=10106 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x00000000004a4d2d /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__threading_support:335)
#04 pc 0x00000000008991de /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__mutex_base:398)
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
"IplrVkResMgr" tid=10107 Unknown
#00 pc 0x000000000005ad36 /apex/com.android.runtime/lib64/bionic/libc.so (syscall+22)
#01 pc 0x000000000005f362 /apex/com.android.runtime/lib64/bionic/libc.so (__futex_wait_ex(void volatile*, bool, int, bool, timespec const*)+146)
#02 pc 0x00000000000c6892 /apex/com.android.runtime/lib64/bionic/libc.so (pthread_cond_wait+50)
#03 pc 0x00000000004a4d2d /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__threading_support:335)
#04 pc 0x00000000008a172d /data/app/~~FpH_FumQGqONBEksRkKgXA==/***-GiZF6zc2PtPvzrjB0RcLlQ==/split_config.x86_64.apk!libflutter.so (out/ci/android_release_x64/../../../flutter/third_party/libcxx/include/__mutex_base:398)
#05 pc 0x00000000000c753a /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+58)
#06 pc 0x000000000005fcc7 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+55)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 13.6.6 22G630 darwin-arm64, locale en-CA)
• Flutter version 3.27.1 on channel stable at ****
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (2 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at ****
• Platform android-35, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3.1.1)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 83.0.4
• Dart plugin version 243.23177
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 13.6.6 22G630 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 13.6.6 22G630 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,e: device-specific,platform-android,a: production,P3,needs repro info,team-engine,triaged-engine | low | Major |
2,763,519,908 | go | x/tools/gopls: out-of-memory failure on Windows | ```
#!stacks
"runtime.bgsweep" && "runtime.newArenaMayUnlock:+6" && "runtime.throw"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
I suspect this is another manifestation of #70445, but I thought I should report it since the call stack is completely different.
This stack `F860lA` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-12-28.json):
- `crash/crash`
- [`runtime.throw:+9`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/panic.go;l=1067)
- [`runtime.newArenaMayUnlock:+6`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/mheap.go;l=2498)
- [`runtime.newMarkBits:+22`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/mheap.go;l=2418)
- [`runtime.(*sweepLocked).sweep:+185`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/mgcsweep.go;l=683)
- [`runtime.sweepone:+37`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/mgcsweep.go;l=389)
- [`runtime.bgsweep:+28`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/mgcsweep.go;l=298)
- [`runtime.gcenable.gowrap1:+0`](https://cs.opensource.google/go/go/+/go1.23.4:src/runtime/mgc.go;l=204)
- `runtime.goexit:+0`
```
golang.org/x/tools/[email protected] go1.23.4 windows/amd64 vscode (1)
```
| NeedsInvestigation,gopls,Tools,gopls/telemetry-wins | low | Critical |
2,763,527,615 | godot | Script editor fails to reload closed scripts after externally modified | ### Tested versions
- Reproducible in v4.4.dev7.official [46c8f8c5c]
### System information
Godot v4.4.dev7 - Windows 11 (build 22631) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 threads)
### Issue description
When a script is edited externally, it correctly reloads in the script editor if the script is open. Otherwise, the edit appears to be undetected, and the wrong script is displayed.
This greatly affects teams that frequently pull script changes, since an editor restart is currently required after every pull (not good). It can also be reproduced manually with two editor processes running, or by editing a script externally.
The scripts are only out of date within the editor itself. Running the project will use the updated changes, even when the script editor shows content that conflicts with what is expected.
Video demonstration (using two editors):
https://github.com/user-attachments/assets/7d6aacb5-9ec8-48b3-9151-74cd79d6f093
### Steps to reproduce
1. Run the editor, and leave a script closed.
2. Edit that script externally by any means.
3. Focus the editor, and it will not pick up the script changes.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing | low | Minor |
2,763,554,852 | transformers | 8bits GPTQ quantization output | ### System Info
Hi, I noticed that with 8-bit quantization using GPTQConfig, the model inference generally produces really bad results, with outputs that often don't make sense. Could this be an engineering issue with GPTQ, or is this typical behavior for GPTQ when using 8-bit quantization? Thank you in advance!
### Who can help?
@SunMarc @ArthurZucker
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below) - NQOpen
### Reproduction
1. By using Qwen2.5 `Qwen/Qwen2.5-7B-Instruct-GPTQ-Int8`
2. By using GPTQConfig on `meta-llama/Llama-3.1-8B-Instruct`
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model = "meta-llama/Llama-3.1-8B-Instruct"  # same model as in step 2
quantization = GPTQConfig(
    bits=8,
    group_size=128,
    dataset="c4",
    desc_act=False,
)
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForCausalLM.from_pretrained(model, quantization_config=quantization, device_map="auto")
```
### Expected behavior
1. The model tends to generate unrelated outputs, such as `the !!!!!!!!!!!!!!!!!.....`; the generated text does not make sense at all.
2,763,573,404 | go | crypto/tls: map boringssl errors to go errors in BoGo shim config | The BoGo shim config has a field (ErrorMap) that allows mapping between BoringSSL style error strings to local error strings, which we should populate.
We currently don't map these, which can lead to failures being masked (see [#70915](https://go.dev/issue/70915)). | NeedsFix | low | Critical |
2,763,591,156 | PowerToys | FancyZone - Layout Type Mismatch: Columns Saved as Grid | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
1. Open the FancyZones editor.
2. Select or create a layout of type columns.
3. Save the layout.
4. Check the JSON file for custom layouts at `%UserProfile%\AppData\Local\Microsoft\PowerToys\FancyZones\custom-layouts.json`.
### ✔️ Expected Behavior
The layout of type *columns* should be saved as columns in the type field of the JSON file
### ❌ Actual Behavior
The layout of type columns is being saved as grid in the type field of the JSON file.
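As a quick way to confirm the mismatch, a small helper like the sketch below can list the saved type of each custom layout; note that the JSON keys (`custom-layouts`, `name`, `type`) are assumptions based on the file described above:

```python
def layout_types(data):
    """Map each saved custom layout's name to its saved "type" value (schema keys assumed)."""
    return {layout.get("name"): layout.get("type") for layout in data.get("custom-layouts", [])}

# The shape reported in this issue: a layout created as columns but saved with type "grid".
sample = {"custom-layouts": [{"name": "My columns layout", "type": "grid"}]}
print(layout_types(sample))  # {'My columns layout': 'grid'}
```

To run it against the real file, load `%UserProfile%\AppData\Local\Microsoft\PowerToys\FancyZones\custom-layouts.json` with `json.load` first.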
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,763,592,257 | storybook | [Bug]: favicon doesn't work for dev server | ### Describe the bug
For the dev server (aka localhost), the favicon does not appear and the link in the HTML header is invalid.
### Reproduction link
N/A
### Reproduction steps
Start the dev server.
### System
```bash
Storybook Environment Info:
System:
OS: Windows 11 10.0.26100
CPU: (8) x64 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
Binaries:
Node: 20.11.1 - C:\Program Files\Node.js\node.EXE
npm: 10.8.2 - C:\Program Files\Node.js\npm.CMD <----- active
pnpm: 9.14.4 - C:\Program Files\Node.js\pnpm.CMD
Browsers:
Edge: Chromium (131.0.2903.86)
npmPackages:
@storybook/addon-actions: 8.4.2 => 8.4.2
@storybook/addon-controls: 8.4.2 => 8.4.2
@storybook/addon-docs: 8.4.2 => 8.4.2
@storybook/addon-links: 8.4.2 => 8.4.2
@storybook/manager-api: 8.4.2 => 8.4.2
@storybook/react: 8.4.2 => 8.4.2
@storybook/react-vite: 8.4.2 => 8.4.2
storybook: 8.4.2 => 8.4.2
```
### Additional context
For a build, the favicon works correctly. | bug,good first issue,help wanted,sev:S3 | low | Critical |
2,763,593,409 | godot | Partial Classes Are Not Found When Not Defined in the Main Assembly | # Partial Classes Are Not Found When Defined Outside the Main Assembly
## Description
When working with partial classes in Godot projects, I encountered an issue where partial classes are not recognized if they are defined in assemblies other than the main assembly. This behavior disrupts workflows that rely on modular design and separate assemblies for organizational purposes.
## Expected Behavior
Godot should seamlessly recognize and compile the partial class, regardless of whether its definitions are in the main assembly or an external assembly. This would allow for proper modular development.
## Actual Behavior
Godot fails to recognize the partial class defined outside the main assembly. This results in errors indicating missing definitions, even though the class is defined correctly in an external assembly.
## Impact
- This limitation forces developers to place all partial class definitions in the same assembly, leading to less modular code and increased complexity in large projects.
- It restricts the ability to use libraries or shared code effectively.
## Additional Context
This issue might be linked to assembly resolution and Godot's build pipeline not accounting for cross-assembly partial class definitions. Similar issues have been reported, such as:
- [#77675](https://github.com/godotengine/godot/issues/77675)
- [#75352](https://github.com/godotengine/godot/issues/75352)
These discussions highlight related problems with assembly dependencies and type recognition. Resolving this would greatly enhance Godot's flexibility and support for more complex project structures.
## Environment
- **Godot Version:** Godot 4.3
- **OS/Platform:** Windows 11 Home 23H2 22631.4602
- **Mono/.NET Version:** .NET 7.0
### Issue description
When working with partial classes in Godot projects, I encountered an issue where partial classes are not recognized if they are defined in assemblies other than the main assembly. This bug disrupts workflows that rely on modular design and separate assemblies for organizational purposes.
### Steps to reproduce
1. Create a Godot project.
2. Define a partial class that inherits from `Godot.Node` outside the main assembly (e.g., `MyItemView.cs`).
3. Create another `Node` class in the main assembly that exports a `MyItemView` field and assign it in the editor.
4. Attempt to read the field of type `MyItemView`.
2,763,606,306 | go | x/tools/gopls: IExportShallow returns unexpected error | ```
#!stacks
"bug.Reportf" && "cache.(*typeCheckBatch).checkPackageForImport.func3:+3"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
Not much to go on here....
```go
exportData, err := gcimporter.IExportShallow(b.fset, pkg, bug.Reportf)
if err != nil {
bug.Reportf("exporting package %v: %v", ph.mp.ID, err) // <---
return
}
```
This stack `jE2A4w` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-12-29.json):
- `gopls/bug`
- [`golang.org/x/tools/gopls/internal/util/bug.report:+35`](https://cs.opensource.google/go/x/tools/+/gopls/v0.15.3:gopls/internal/util/bug/bug.go;l=109)
- [`golang.org/x/tools/gopls/internal/util/bug.Reportf:+1`](https://cs.opensource.google/go/x/tools/+/gopls/v0.15.3:gopls/internal/util/bug/bug.go;l=54)
- [`golang.org/x/tools/gopls/internal/cache.(*typeCheckBatch).checkPackageForImport.func3:+3`](https://cs.opensource.google/go/x/tools/+/gopls/v0.15.3:gopls/internal/cache/check.go;l=737)
- `runtime.goexit:+0`
```
golang.org/x/tools/[email protected] go1.22.4 darwin/arm64 other (1)
```
| NeedsInvestigation,gopls,Tools,gopls/telemetry-wins | low | Critical |
2,763,641,884 | rust | rustdoc search: for argument based searches, deprioritize associated items | Inspired by discussion on https://github.com/rust-lang/rust/pull/131806, opening an issue so it can be discussed separately.
currently, both "In Parameters" and "Type ->" searches are dominated by methods on that type (and if the aforementioned PR is merged, fields will also clutter such searches)
however, associated items for a type can easily be viewed simply by going to that type's page, so arguably this form of search should prioritize free functions and methods of *other* types, which are harder to find.
this could be considered the dual of https://github.com/rust-lang/rust/issues/134935 | A-type-based-search,A-rustdoc-search,T-rustdoc-frontend | low | Minor |
2,763,664,004 | pytorch | torch.utils.flop_counter.FlopCounterMode | I found this class because it was referenced in the llama_recipes repo.
My question is whether this definition counts one addition and one multiplication and 2 FLOPs, or, if that's counted as 1 FLOP?
When reporting on GPU hardware, it's common to count the above as two flops.
But when reporting on models, it's common to count the above as one flop... the more accurate term would have been "MAC" - Multiply-Accumulate. But I believe the model literature just started calling it a "flop".
https://github.com/facebookresearch/fvcore/issues/69
This source strongly suggests that the reported value matches HW flops (a single multiply followed by a single add counts as two flops), since the number of multiplications required for a matmul of two matrices of shapes MxK and KxN is MxKxN, and the number of additions is Mx(K-1)xN, commonly approximated to MxKxN, yielding 2xMxKxN.
https://github.com/pytorch/pytorch/blob/baee623691a38433d10843d5bb9bc0ef6a0feeef/torch/utils/flop_counter.py#L55C1-L64C25
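To make the two conventions concrete, here is a small arithmetic sketch in plain Python (independent of `FlopCounterMode` itself) for a matmul of shapes MxK and KxN:

```python
# Scalar-operation counts for C = A @ B, with A of shape (M, K) and B of shape (K, N).
M, K, N = 4, 3, 5

mults = M * K * N        # one multiply per (i, j, k) triple
adds = M * (K - 1) * N   # summing K products per output element takes K - 1 adds
macs = M * K * N         # "model" convention: one multiply-accumulate per triple

hw_flops = mults + adds  # "hardware" convention, exactly M * N * (2 * K - 1)
approx = 2 * M * K * N   # the common 2*M*K*N approximation of the hardware count

print(mults, adds, macs, hw_flops, approx)  # 60 40 60 100 120
```

If `FlopCounterMode` reports a number near `2*M*K*N` for such a matmul, it is using the hardware convention; if it reports near `M*K*N`, it is counting MACs.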
| triaged,module: flop counter | low | Minor |
2,763,666,546 | godot | Non-fatal error when launching dotnet project on iOS due to missing scripts | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - macOS 13.7.2 - GLES3 (Compatibility) - AMD Radeon Pro 560 OpenGL Engine - Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz (8 Threads)
### Issue description
When exporting a dotnet project to iOS and selecting the "Export selected scenes (and dependencies)" option, dependencies of the C# scripts are not included in the exported .pck file, which causes the following errors to be printed:
```
ERROR: Cannot open file 'res://AbstractCustomResource.cs'.
at: read_all_file_utf8 (modules/mono/utils/string_utils.cpp:152)
ERROR: Failed to read file: 'res://AbstractCustomResource.cs'.
at: load_source_code (modules/mono/csharp_script.cpp:2721)
ERROR: Cannot load C# script file 'res://AbstractCustomResource.cs'.
at: load (modules/mono/csharp_script.cpp:2797)
ERROR: Failed loading resource: res://AbstractCustomResource.cs. Make sure resources have been imported by opening the project in the editor at least once.
at: _load (core/io/resource_loader.cpp:284)
ERROR: Cannot load script for type 'AssemblyNotExported.AbstractCustomResource'. Path: 'res://AbstractCustomResource.cs'.
at: void Godot.Bridge.ScriptManagerBridge.GetOrLoadOrCreateScriptForType(System.Type, Godot.NativeInterop.godot_ref*) (:0)
ERROR: Cannot open file 'res://CustomResource.cs'.
at: read_all_file_utf8 (modules/mono/utils/string_utils.cpp:152)
ERROR: Failed to read file: 'res://CustomResource.cs'.
at: load_source_code (modules/mono/csharp_script.cpp:2721)
ERROR: Cannot load C# script file 'res://CustomResource.cs'.
at: load (modules/mono/csharp_script.cpp:2797)
ERROR: Failed loading resource: res://CustomResource.cs. Make sure resources have been imported by opening the project in the editor at least once.
at: _load (core/io/resource_loader.cpp:284)
ERROR: Cannot load script for type 'AssemblyNotExported.CustomResource'. Path: 'res://CustomResource.cs'.
at: void Godot.Bridge.ScriptManagerBridge.GetOrLoadOrCreateScriptForType(System.Type, Godot.NativeInterop.godot_ref*) (:0)
ERROR: Cannot open file 'res://AbstractCustomResource.cs'.
at: read_all_file_utf8 (modules/mono/utils/string_utils.cpp:152)
ERROR: Failed to read file: 'res://AbstractCustomResource.cs'.
at: load_source_code (modules/mono/csharp_script.cpp:2721)
ERROR: Cannot load C# script file 'res://AbstractCustomResource.cs'.
at: load (modules/mono/csharp_script.cpp:2797)
ERROR: Failed loading resource: res://AbstractCustomResource.cs. Make sure resources have been imported by opening the project in the editor at least once.
at: _load (core/io/resource_loader.cpp:284)
ERROR: Cannot load script for type 'AssemblyNotExported.AbstractCustomResource'. Path: 'res://AbstractCustomResource.cs'.
at: void Godot.Bridge.ScriptManagerBridge.GetOrLoadOrCreateScriptForType(System.Type, Godot.NativeInterop.godot_ref*) (:0)
```
Note: despite the logged errors, the game works as expected and the logic defined in the "missing" scripts is executed correctly.
- Why are the error messages logged?
- Why does the engine try to read those files from "res://" if they should be part of the assembly?
**Side question**: while troubleshooting this issue, I assumed that the ".dll" file should be loaded at startup, as described in [Opening PCK files at runtime](https://docs.godotengine.org/en/4.3/tutorials/export/exporting_pcks.html#opening-pck-files-at-runtime). However, when examining the exported ".pck" file, I did not find the "ProjectName.dll" file there.
- Where can I find the main project assembly in the ".pck" file, and do I need it?
- In the future, say when adding a DLC to the project, where can I find information on how to export a dotnet project so that the exported pack contains a ".dll" file to be loaded?
### Steps to reproduce
- download MRP
- add ios export template
- export with provided export_presets.cfg
- run using xcode
- observe the errors
### Minimal reproduction project (MRP)
[assembly-not-exported.zip](https://github.com/user-attachments/files/18277523/assembly-not-exported.zip)
| bug,topic:export | low | Critical |
2,763,709,597 | flutter | "Check Flakiness Status of a Test" Dashboard is broken | [The "Check Flakiness Status of a Test" dashboard](https://data.corp.google.com/sites/dash_infra_metrics_datasite/flutter_check_test_flakiness_status_dashboard/) is no longer populating the flaky rates for various pools. Checking the staging pool test flake rate is (one way) how we decide to turn "bringup" tests back on.
The flakiness bot has been [creating new issues](https://github.com/flutter/flutter/issues/160840) as recently as 5 days ago, so this is possibly a dashboard issue, or the flakiness detection broke in the last 5 days. | team-infra | low | Critical |
2,763,691,036 | material-ui | [Dialog] Focus lost on non-modal dialog after interacting with a full screen dialog | ### Steps to reproduce
Steps:
1. Open this link to live example: https://codesandbox.io/p/sandbox/exciting-banzai-m76rln
2. The chat dialog is open by default. Open the full screen dialog by clicking on the button below the app bar.
3. Notice that the chat bot window can still be used and a message can be typed in the textfield.
4. Click on something in the full screen dialog, like the dropdown or list item.
5. You can't change the focus back to the chat bot text field anymore.
### Current behavior
The chat bot text field isn't set to focused anymore once you interact with the full screen dialog.
### Expected behavior
The chat bot text field should still be able to be focused.
### Context
We've found that adding the `disableEnforceFocus` prop on the full screen dialog obviously works (as many others have pointed out): it allows the chat bot text field to be focused no matter what you do in the background. However, we have two questions related to this:
1. Why does this work in the very beginning before you interact with something on the full screen dialog? Is this a bug of some sort?
2. We have a lot of dialogs and drawers and other things in our UI. Is the only option to add this prop, `disableEnforceFocus`, for each one or is there something we can just set on the chat bot type dialog instead to make it easier?
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 15.2
Binaries:
Node: 20.12.2 - /usr/local/bin/node
npm: 10.5.0 - /usr/local/bin/npm
pnpm: Not Found
Browsers:
Chrome: 131.0.6778.205
Edge: Not Found
Safari: 18.2
npmPackages:
@emotion/react: ^11.14.0 => 11.14.0
@emotion/styled: ^11.14.0 => 11.14.0
@mui/base: 5.0.0-beta.68
@mui/core-downloads-tracker: 6.3.0
@mui/lab: ^6.0.0-beta.21 => 6.0.0-beta.21
@mui/material: ^6.2.1 => 6.3.0
@mui/private-theming: 6.3.0
@mui/styled-engine: 6.3.0
@mui/system: 6.3.0
@mui/types: 7.2.20
@mui/utils: 6.2.1
@mui/x-data-grid: 7.23.3
@mui/x-data-grid-pro: ^7.21.0 => 7.23.3
@mui/x-date-pickers: 7.23.3
@mui/x-date-pickers-pro: ^7.21.0 => 7.23.3
@mui/x-internals: 7.23.0
@mui/x-license: ^7.21.0 => 7.23.2
@types/react: ^18.3.17 => 18.3.17
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
typescript: ^5.6.3 => 5.7.2
```
</details>
**Search keywords**: non-modal dialog, chatbot, disableEnforceFocus | accessibility,support: question,waiting for 👍,component: dialog,component: modal | low | Critical |
2,763,705,437 | godot | Imported audio `loop_offset` is ignored by `AudioStreamPlayer` in `PLAYBACK_TYPE_SAMPLE` (web) | ### Tested versions
- Reproducible in v4.3.stable.flathub [77dcf97d8]
### System information
Godot v4.3.stable (77dcf97d8) - Freedesktop SDK 24.08 (Flatpak runtime) - X11 - GLES3 (Compatibility) - Mesa Intel(R) HD Graphics 5500 (BDW GT2) - Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz (4 Threads)
### Issue description
I've imported a `.ogg` audio with looping enabled, and it needs a loop offset.
But this offset is ignored on the Web platform, i.e. when playback type is Sample, it's looping to zero (beginning of music).
It loops to the expected point on web if I force playback type to Stream.
Is it just not supported, or is it a bug? Docs say:
> [AudioEffect](https://docs.godotengine.org/en/stable/classes/class_audioeffect.html#class-audioeffect)s are not supported when playback is considered as a sample.
But I think this is different? I would think if looping is supported at all, then it should be trivial to support the offset.
### Steps to reproduce
Just import some music with loop enabled and a noticeable loop offset.
Then run the exported web project while keeping playback type the default in the `AudioStreamPlayer`.
If it makes any difference, I used `.ogg` music, an autoload node for the audio player, and called `play()` from code. And the loop point is only slightly offset from the start (3.057 seconds).
### Minimal reproduction project (MRP)
N/A | bug,platform:web,topic:audio | low | Critical |
2,763,709,597 | godot | C# Library import issues for GPUParticles3D/CPUParticles3D etc | ### Tested versions
Discovered in Godot v4.3.stable.mono.official [77dcf97d8]
Other versions unknown.
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - integrated AMD Radeon(TM) Graphics (Advanced Micro Devices, Inc.; 31.0.21910.5) - AMD Ryzen 5 5500U with Radeon Graphics (12 Threads)
### Issue description
Greetings!
There seems to be a mismatch between documentation and actual design when it comes to GPUParticles3D and all related nodes. In the documentation it is listed as "GPUParticles3D", but in GodotSharp.xml (_root_/GodotSharp/Api/Release/GodotSharp.xml) it is listed as "GpuParticles3D".
Thus, when trying to use GPUParticles3D, one gets the error "The type or namespace name 'CPUParticles3D' could not be found (are you missing a using directive or an assembly reference?)" because GPUParticles3D doesn't actually exist, only GpuParticles3D exists.
Of course, newly armed with this knowledge I can easily continue my work, but this was a stumbling block for me and I'm sure it will be for many others unless the documentation matches the xml file, regardless of which side yields to the other.
TL;DR:
Please either change _root_/GodotSharp/Api/Release/GodotSharp.xml to match "GPUParticles3D" and "CPUParticles3D" etc, or change the documentation and syntax highlighting to match the xml file's GpuParticles3D and CpuParticles3D.
So, it's not a Godot breaker, but it is a rough edge that I'd like to see patched up!
Cheers!
### Steps to reproduce
Add GPUParticles3D into your code, at any point, and compile for immediate results.
I first discovered it with
GPUParticles3D dustParticles = new GPUParticles3D();
### Minimal reproduction project (MRP)
N/A | enhancement,discussion,documentation,topic:dotnet | low | Critical |
2,763,715,345 | excalidraw | Feature: Squared Paper | It would be nice if there was an option to make the Background like squared Paper, for School related work.
For example, writing math equations is much more convenient on checkered paper than on a plain white canvas
2,763,725,351 | rust | #[thread_local] + #[no_mangle] produces a "symbol already defined" error on Windows targets | ```rust
#![feature(thread_local)]
#[no_mangle]
#[thread_local]
pub static FOO: u32 = 3;
```
Does not compile on x86_64-pc-windows-msvc and x86_64-pc-windows-gnu, due to:
```
error: symbol `FOO` is already defined
--> lib.rs:5:1
|
5 | pub static FOO: u32 = 3;
| ^^^^^^^^^^^^^^^^^^^
error: aborting due to 1 previous error
```
Found this while working on https://github.com/rust-lang/rust/pull/134777 | O-windows,T-compiler,A-thread-locals,C-bug,F-thread_local | low | Critical |
2,763,726,644 | pytorch | [dynamo][guards][feature] Do not realize LazyVariableTracker on `isinstance` checks | ### 🐛 Describe the bug
Today, calling `isinstance` on LazyVariableTracker realizes the VT, inserting the guards. In many cases, this guard insertion is accidental and not really required for program correctness.
I am not sure how to do this exhaustively. Maybe we can look at the value of the LazyVariableTracker and support isinstance checks for a few common VTs.
### Error logs
_No response_
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,763,733,391 | ollama | Incorrect NUMA detection logic, fails for AMD Threadripper 1950X | ### What is the issue?
On my AMD Threadripper 1950X CPU with NUMA mode enabled in the BIOS, ollama does not detect that I am running on a NUMA system due to flawed logic in its detection code here: https://github.com/ollama/ollama/blob/459d822b5188dba051e21dfd15b6552543a4bbcf/discover/cpu_common.go#L10-L24
I can "trick" ollama into detecting NUMA by setting up fake information in `/sys/devices/system/cpu/cpu*/topology/physical_package_id` using `overlayfs`, which gives me a ~20% speedup for CPU-only eval-rate (tested with gemma2:27b).
The problem in the logic is that it counts how many physical CPU packages are in the system, but my system has a single CPU package containing 2 dies each with their own memory controller.
A naïve fix would be to look at `die_id` rather than `physical_package_id`: this would work for me but I fear there may exist other hardware which has multiple dies sharing a single memory controller. Also even on my system I can disable NUMA in the BIOS so that memory access appears to be uniform - under the hood this interleaves memory access across both NUMA nodes. So in this mode looking at `die_id` would give the wrong answer.
A better fix would be to look at the actual NUMA node information presented by the kernel under `/sys/devices/system/node`, e.g. on my system the file `/sys/devices/system/node/online` contains `0-1` whereas on a uniform memory system it contains `0`.
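To illustrate the proposed check (this is only a Python sketch of the logic — ollama's discovery code is Go, and these helper names are hypothetical): the kernel's node-list format is a comma-separated list of IDs and ranges, so detecting NUMA reduces to counting the online nodes.

```python
def parse_node_list(s):
    """Parse a kernel node-list string such as "0", "0-1", or "0,2-3"
    (the format of /sys/devices/system/node/online) into a set of node IDs."""
    nodes = set()
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            nodes.update(range(int(lo), int(hi) + 1))
        else:
            nodes.add(int(part))
    return nodes

def is_numa(node_list_contents):
    # More than one online NUMA node means memory access is non-uniform.
    return len(parse_node_list(node_list_contents)) > 1
```

On my system the file contains `0-1`, so `is_numa` would be true, while a uniform-memory system (or NUMA disabled in the BIOS, where the firmware interleaves across nodes) reports `0` and would correctly be treated as non-NUMA.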
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.4 | bug | low | Minor |
2,763,736,629 | flutter | Linux and Windows flutter_packaging is marked bringup | Linux and Windows flutter_packaging in .ci.yaml is:
https://github.com/flutter/flutter/blob/3762f2e9731ce97808fb2d78805f6712be83654d/.ci.yaml#L6925-L6932
1. It's labeled bringup I believe due to https://github.com/flutter/flutter/issues/139597 which isn't happening any more.
2. It's not visible on the [beta](https://flutter-dashboard.appspot.com/#/build?repo=flutter&branch=beta&showStaging=true&showBringup=true) or [stable](https://flutter-dashboard.appspot.com/#/build?repo=flutter&branch=stable&showStaging=true&showBringup=true) dashboards even though it appears that's where it should be running based on enabled_branches?
3. I don't have permission to see the [luci console](https://ci.chromium.org/ui/p/flutter/builders/staging/Linux%20flutter_packaging) so maybe this is all working?
Is the right thing to do remove `bringup` for these two builders? | team-infra | low | Minor |
2,763,740,914 | vscode | Configure coverage testing profile | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
The coverage testing profile should be configurable in the same manner as other kinds of testing profiles. From an outside perspective this seems like it should be a simple matter of adding a menu to the coverage run button.

Rationale: When I execute `go test` there's a `-coverpkg` flag I can pass to control the scope of what code gets instrumented for coverage. In some cases it's natural to show coverage for the entire project, where in other cases it's natural to show coverage only for a single package. | bug,testing | low | Minor |
2,763,742,173 | PowerToys | [Advanced paste] new feature request - Other/Custom AI Provider Selection in "AI Prompt" | ### Description of the new feature / enhancement
When “AI Prompt” is enabled, it would be beneficial to be able to select which AI provider to use. This would allow users to choose from providers such as OpenAI, Gemini, Claude, Mistral, or a custom OpenAI endpoint.
### Scenario when this would be used?
When working as a freelancer, some of my customers provide specific AI endpoints that have been validated by their security teams. Currently, users have to manually copy and paste these endpoints into the respective AI provider interfaces. Being able to configure the AI provider directly within the application would save a significant number of intermediate steps and improve efficiency.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,763,746,920 | godot | [4.4.dev7] Clicking on Play Animation in the Animation tab frequently freezes the engine | ### Tested versions
- Reproducable in all version from 4.4.dev7 to 4.0.stable on Windows
- Did not test if reproducible before 4.0.stable
### System information
Godot v4.4.dev7 - Windows 11 (build 26100) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 Laptop GPU (NVIDIA; 32.0.15.6603) - 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz (16 threads)
### Issue description
Clicking on the Play Animation button frequently causes Godot to stop responding. The freeze happens roughly 0.6–0.9 seconds after the animation begins playing.

### Steps to reproduce
1. Create a new scene with an Animation Player
2. Add a new animation to the player
3. Click on Play Animation and wait for the animation to finish playing
4. If Godot didn't freeze while the animation was playing, repeatedly perform step 3
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing,topic:animation | low | Minor |
2,763,747,904 | flutter | Permission 'cloudkms.cryptoKeyVersions.useToDecrypt' denied on resource 'projects/flutter-infra-staging/locations/global/keyRings/luci/cryptoKeys/flutter-infra' | `Linux packages_autoroller` is marked flaky as of https://github.com/flutter/flutter/issues/160473 which is pointing to a flaking gradle issue.
However this build is newly flaking (seen in [Linux%20packages_autoroller/3144](https://ci.chromium.org/ui/p/flutter/builders/staging/Linux%20packages_autoroller/3144/overview)) with a `PermissionDenied` error for a cloudkms crypto key.
```
[E2024-12-22T11:06:10.784181-08:00 14938 0 decrypt.go:37] Error while making request
[E2024-12-22T11:06:10.784257-08:00 14938 0 common.go:325] Error while executing command
{"error":"rpc error: code = PermissionDenied desc = Permission 'cloudkms.cryptoKeyVersions.useToDecrypt' denied on resource 'projects/flutter-infra-staging/locations/global/keyRings/luci/cryptoKeys/flutter-infra' (or it may not exist)."}
```
https://ci.chromium.org/ui/p/flutter/builders/prod/Linux%20packages_autoroller/12884/overview
See recipe at https://cs.opensource.google/flutter/recipes/+/master:recipe_modules/kms/api.py;l=32;drc=a73eacefcd83b580fedfa6b30be6b8451087cbb1 | team-infra | low | Critical |
2,763,749,536 | go | x/tools/gopls: PathEnclosingInterval is inappropriate for single-point LSP queries | The handlers for most gopls RPCs start by computing the node in the syntax tree that is identified by the selection.
An [unfortunate](https://github.com/microsoft/language-server-protocol/issues/377) limitation of LSP is that in most cases the RPC request indicates the selection by only a single point, the cursor position, losing valuable information. Indicating the selection by '[' and ']' and the cursor by '⁁', in this example:
[break]⁁ label
the `break` keyword is selected, but the server can tell only that the cursor is after `break`. In:
[break ]⁁label
the selection is the `break` keyword plus the following space, but the server knows only that the cursor is before `label`. And in:
[break]⁁
the selection is the `break` keyword alone, but the server gets only the position after the 'k'.
Gopls makes extensive use of [astutil.PathEnclosingInterval](https://pkg.go.dev/golang.org/x/tools/go/ast/astutil#PathEnclosingInterval) for inferring the syntax node from a given selection (start, end). When invoked on an empty selection (end=start), which is the common case in gopls because of the limitation described above, it behaves as if invoked on the interval (start, start+1). That means that it operates
- in the first example on the space between `break` and `label`, and returns the `BranchStmt` (a reasonable answer);
- in the second case on the first "l" of `label`, returning the `Ident` (again reasonable); but
- in the third on the newline following `break`, returning the enclosing `BlockStmt`, which is not helpful.
It is the wrong tool for the job.
Until such time as LSP furnishes the server with both the start and end points of the selection (which may be a long wait), I propose that we introduce a new function for all single-position queries in gopls that gives the right answers in all three cases (respectively: BranchStmt, Ident, BranchStmt).
@madelinekalil
| gopls | low | Minor |
2,763,749,590 | next.js | NextJS is reporting Node warnings as errors instead of warnings in the browser | ### Link to the code that reproduces this issue
https://github.com/olivierr91/next-console-error-bug-repro
### To Reproduce
1. Start the application in development (next dev)
2. Navigate to the page in the browser (http://localhost:3000)
3. The following server error is thrown in the browser:
```
(node:33588) Warning: Setting the NODE_TLS_REJECT_UNAUTHORIZED environment variable to '0' makes TLS connections and HTTPS requests insecure by disabling certificate verification.
(Use `node --trace-warnings ...` to show where the warning was created)
```

### Current vs. Expected behavior
Node warnings like this should not throw errors in the browser. We should be able to change the value of NODE_TLS_REJECT_UNAUTHORIZED without Next throwing errors and inconveniencing the developer on every page load.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Enterprise
Available memory (MB): 65376
Available CPU cores: 16
Binaries:
Node: 22.12.0
npm: 10.9.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.1.1-canary.23
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
Issue is new in v15 and was not present in v14 | Runtime | low | Critical |
2,763,759,376 | flutter | Function.apply() named parameters fail when compiled to Wasm | ### Steps to reproduce
Use two different ways of setting container width and height:
1. Literal symbols with #width and #height.
2. Symbol('width') and Symbol('height').
3. Run the code on a Wasm web build.
Wasm doesn't seem to support non-literal symbols (`Symbol('…')`) for named parameters.
Is this working as intended?
### Expected results
Both types of symbol Function.apply() works.
Two red containers should be rendered.
On non-Wasm web builds and other platforms, both methods work.
### Actual results
The second container fails on Wasm.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MaterialApp(
home: Scaffold(
body: Center(
child: DemoWidget(),
),
),
));
}
class DemoWidget extends StatelessWidget {
const DemoWidget({super.key});
@override
Widget build(BuildContext context) {
return Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
// 1) Direct usage with literal named parameters (#width, #height)
createContainerWithDirectSymbols(),
const SizedBox(height: 20),
// 2) Using Function.apply with Symbol-based named parameters
createContainerWithDynamicNamedParameters(),
],
);
}
Widget createContainerWithDirectSymbols() {
try {
return Function.apply(
Container.new,
[],
{
#width: 150.0,
#height: 50.0,
#color: Colors.red,
},
) as Widget;
} catch (e) {
return Text('Error: $e');
}
}
Widget createContainerWithDynamicNamedParameters() {
try {
return Function.apply(
Container.new,
[],
{
Symbol('width'): 150.0,
Symbol('height'): 50.0,
Symbol('color'): Colors.red,
},
) as Widget;
} catch (e) {
return Text('Error: $e');
}
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="732" alt="image" src="https://github.com/user-attachments/assets/f151fd51-9626-41e5-916a-b442bfc3ace4" />
There should be two Red containers. However, the dynamic named parameters fail on Wasm so the second container isn't visible.
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel beta, 3.28.0-0.1.pre, on macOS 15.1.1 24B2091 darwin-arm64, locale en-US)
• Flutter version 3.28.0-0.1.pre on channel beta at /Users/ray/Development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 3e493a3e4d (3 weeks ago), 2024-12-12 05:59:24 +0900
• Engine revision 2ba456fd7f
• Dart version 3.7.0 (build 3.7.0-209.1.beta)
• DevTools version 2.41.0
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/ray/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
This is the JDK bundled with the latest Android Studio installation on this machine.
To manually set the JDK path, use: `flutter config --jdk-dir="path/to/jdk"`.
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
! CocoaPods 1.15.2 out of date (1.16.2 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.96.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B2091 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B2091 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
• All expected network resources are available.
```
</details>
| engine,platform-web,has reproducible steps,e: wasm,team-web,found in release: 3.27,found in release: 3.28 | low | Critical |
2,763,773,905 | deno | `deno compile` should not surface graph errors for modules specified on `--include` | https://github.com/denoland/deno/issues/17994#issuecomment-2565983918 | bug,compile | low | Critical |
2,763,826,792 | next.js | Cannot rewrite to external URL when the basePath option defined in next.config.js contains a hyphen | ### Link to the code that reproduces this issue
https://github.com/massaynus/next-rewrite-issue
### To Reproduce
1. start app in dev mode
2. open network tab in dev tools and access the app in `http://localhost:3000/mj-builder`
3. notice how the rewrite which has the app basePath and locale in it fails to execute the rewrite even if the url is computed correctly
### Current vs. Expected behavior
If the `basePath` in next.config.mjs is set to `/app`, for example, it works fine; it should behave the same for a `basePath` containing a hyphen, like the value `mj-builder` shown in the example repo.
### Provide environment information
```bash
/bin/sh: 1: yarn: not found
/bin/sh: 1: pnpm: not found
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
Available memory (MB): 31943
Available CPU cores: 20
Binaries:
Node: 20.17.0
npm: 11.0.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.1.3 // Latest available version is detected (15.1.3).
eslint-config-next: 15.1.0
react: 18.2.0
react-dom: 18.2.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Internationalization (i18n), Middleware, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Other (Deployed)
### Additional context
I first discovered the issue on v15.1.0, then tested on v15.1.3 and also canary to confirm it.
The issue seems to be reproducible everywhere! | Middleware,Internationalization (i18n),Runtime | low | Major |
2,763,827,602 | godot | PopupMenu with prefer_native_menu=true locks focus on last focused node on MacOS | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - macOS 15.1.1 - Vulkan (Forward+) - dedicated AMD Radeon Pro 5500M - Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz (16 Threads)
### Issue description
When setting up a custom context menu that opens a PopUpMenu with prefer_native_menu set to true on right click over a button, after the PopUp is displayed and then hidden — whether a PopUpMenu item was selected or not — the focus becomes permanently locked to the last control the mouse was on top of.
The issue does not happen when prefer_native_menu=false. I tested on Windows and a friend tested on Linux; the issue does not happen there, so it appears to be exclusive to macOS.
### Steps to reproduce
- Create an empty project
- Add some button
- Add a PopUpMenu with prefer_native_menu set to true
- Add a script with an override to the _gui_input method
- In the same method, when a right-click event is received, clear the PopUpMenu, add an item, and call the popup method
- Run the project
- Right click on the button, dismiss the popup or select an item and try to press another button present in the scene
Also here is a video better explaining the issue.
https://github.com/user-attachments/assets/e2e43e56-2df9-49a8-9e52-0ab2b9f3f637
### Minimal reproduction project (MRP)
[bug-popup-menu.zip](https://github.com/user-attachments/files/18278468/bug-popup-menu.zip)
| bug,platform:macos,topic:gui | low | Critical |
2,763,829,505 | godot | Cannot compile godot project after moving godot scripts to new assembly | ### Tested versions
Reproducible in All versions. Especially 4.3 and 4.2
### System information
All
### Issue description
Godot does not compile the application if Godot scripts are moved into a second assembly.
How do we organize code for a larger project or make multi-project shared code if all scripts are forced to be in the top-level assembly?
### Steps to reproduce
Take a multi-assembly godot project.
Then move scripts to another assembly.
### Minimal reproduction project (MRP)
N/A | enhancement,discussion,topic:dotnet | low | Minor |
2,763,846,659 | terminal | Allow Fixing of Row/Col number, resize window only changes font size. | ### Description of the new feature
Allow setting the number of rows and columns to fixed numbers.
If the user then adjusts the size of the window, font size is adjusted instead of adding more rows or columns.
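Purely as a sketch of how such a mode could compute the font size (the function name and the cell aspect ratio are assumptions for illustration, not anything Windows Terminal implements): with the grid fixed, the font size is just the largest cell size for which the requested columns and rows still fit in the window.

```python
def fit_font_size(window_w, window_h, cols, rows, cell_aspect=0.5):
    """Largest cell height (in px, a stand-in for font size) such that a
    fixed cols x rows grid fits the window. Cell width is assumed to be
    cell_aspect * cell height, as for a typical monospace font."""
    by_height = window_h / rows
    by_width = window_w / (cols * cell_aspect)
    # Whichever dimension is tighter limits the font size.
    return min(by_height, by_width)
```

Resizing the window would then recompute this value instead of changing the row/column count.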
### Proposed technical implementation details
_No response_ | Issue-Feature,Help Wanted,Area-UserInterface,Product-Terminal,Needs-Tag-Fix | low | Major |
2,763,862,031 | rust | Tracking Issue for `-Z link-directives` | This is a tracking issue for the unstable rustc flag `-Z link-directives` where
> `-Zlink-directives=no` will ignore `#[link]` directives while compiling a crate, so nothing is emitted into the crate's metadata. The assumption is that the build system already knows about the crate's native dependencies and can provide them at link time without these directives.
>
> *PR description of https://github.com/rust-lang/rust/pull/107675#issue-1571163041*
### About tracking issues
Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [x] ~~Open an MCP and have it accepted.~~ At the time of impl, there was no such requirement.
- [x] Implementation
- https://github.com/rust-lang/rust/pull/107675
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
### Unresolved Questions
- Missing unstable flag documentation?
- What does this unstable rustc flag `-Z link-directives` do, exactly?
- How does it relate to the stable `#[link]` attribute?
- What does this mean for the [`#[link]` attribute][link-attr] (which is a stable attribute)?
- How does this unstable flag relate to `-Z link-native-libraries`?
### Implementation history
- Initial implementation PR: https://github.com/rust-lang/rust/pull/107675
### Related links
- https://github.com/rust-lang/rust/issues/70093
- https://github.com/rust-lang/rust/pull/70095
- https://github.com/rust-lang/rust/issues/134948
[link-attr]: https://doc.rust-lang.org/reference/items/external-blocks.html#the-link-attribute | A-linkage,T-compiler,C-tracking-issue,A-CLI | low | Critical |
2,763,864,471 | rust | Tracking Issue for `-Z link-native-libraries` | This is a tracking issue for the unstable rustc flag `-Z link-native-libraries`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
- [x] ~~Open an MCP and have it accepted.~~ At the time of impl, there was no such requirement.
- [x] Implementation
- https://github.com/rust-lang/rust/pull/70095
- [ ] Adjust documentation ([see instructions on rustc-dev-guide][doc-guide])
- https://github.com/rust-lang/rust/pull/116213
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
### Unresolved Questions
- What does this unstable rustc flag `-Z link-native-libraries` do, exactly?
- How does it relate to the stable `#[link]` attribute?
- What does this mean for the [`#[link]` attribute][link-attr] (which is a stable attribute)?
- How does this unstable flag relate to `-Z link-directives`?
> > Seems fine to me. Perhaps we should proactively remove the other Zlink-native-libraries flag as a "failed experiment"?
>
> I think we still need -Zlink-native-libraries to deal with prebuilt rlibs - ie, the sysroot libraries.
> https://github.com/rust-lang/rust/pull/107675#issuecomment-1489041015
### Implementation history
- https://github.com/rust-lang/rust/pull/70095
- https://github.com/rust-lang/rust/pull/116213
### Related links
- https://github.com/rust-lang/rust/issues/70093
- https://github.com/rust-lang/rust/pull/70095
- https://github.com/rust-lang/rust/issues/134947
[link-attr]: https://doc.rust-lang.org/reference/items/external-blocks.html#the-link-attribute | A-linkage,T-compiler,C-tracking-issue,A-CLI | low | Critical |
2,763,867,681 | PowerToys | OLED Protection feature. | ### Description of the new feature / enhancement
#000000 overlay with opacity setting on top of each window until you hover on them or focus on them. Press a hotkey to show without overlay for set number of seconds or hold hotkey instead. Same thing goes for the taskbar so no need to set it to auto-hide.
An addon to FancyZones or a completely new feature.
### Scenario when this would be used?
OLED monitors are becoming more and more common and naturally the number of people buying them goes up, but they do still currently have issues with burn-in if you don't take care of them, especially for people with productivity workloads.
Much better than having to minimize them every time. This will definitely reduce burn-in to a significant degree.
### Supporting information
https://www.rtings.com/tv/tests/longevity-burn-in-test-updates-and-results
Thank you! | Needs-Triage | low | Minor |
2,763,876,198 | neovim | Already-typed prefix is deleted when invoking omnifunc with completeopt set to menuone,longest | ### Problem
When using `completeopt=menuone,longest`, triggering omnifunc with `<C-X C-O>` will delete the already-typed prefix if there are multiple matches. It should not delete any characters at all.
If I edit a Python file and try to type `staticmethod` by typing `sta` then pressing `<C-X C-O>`, `sta` is deleted and the completion popup is shown for prefix `sta`. If I type more characters before I invoke omnifunc, it will complete properly instead of deleting. It seems to be inconsistent how many characters are needed.
If I disable `longest` and just set `completeopt` to `menuone`, omnifunc in the Python file no longer causes the already-typed prefix to be deleted (but I lose the functionality of `longest`).
I bisected and found commit 025c87441502cf570bad7b71f40bc6fe88989297 does not exhibit this bug, but it looks like there has been a lot of change to the LSP code since then so there may be versions in between that don't exhibit it either.
### Steps to reproduce
My `minimal.lua`, from the template, and run with `nvim --clean -u minimal.lua`:
```
for name, url in pairs {
'https://github.com/neovim/nvim-lspconfig/',
} do
local install_path = vim.fn.fnamemodify('nvim_issue/' .. name, ':p')
if vim.fn.isdirectory(install_path) == 0 then
vim.fn.system { 'git', 'clone', '--depth=1', url, install_path }
end
vim.opt.runtimepath:append(install_path)
end
vim.o.completeopt = "longest,menuone"
local on_attach = function(client, bufnr)
vim.bo.omnifunc = "v:lua.vim.lsp.omnifunc"
end
local nvim_lsp = require('lspconfig')
nvim_lsp.pyright.setup({ on_attach = on_attach })
```
I have built neovim from source using a vanilla `make install` to a custom prefix:
```
git clone --bare --recurse-submodules https://github.com/neovim/neovim /path/to/dir
cd /path/to/dir
make CMAKE_BUILD_TYPE=Release CMAKE_INSTALL_PREFIX="/path/to/install" install
```
### Expected behavior
If I edit a Python file and try to type `staticmethod` by typing `sta` then pressing `<C-X C-O>`, `sta` should be preserved and the completion popup should be shown for prefix `sta`.
### Nvim version (nvim -v)
commit e9c077d197a80a2ecd858821b18d0be3e3eb6d0b
### Vim (not Nvim) behaves the same?
Can't reproduce, LSP related.
### Operating system/version
Debian Trixie
### Terminal name/version
foot 1.18.1
### $TERM environment variable
xterm-256color
### Installation
Build from repo | bug,lsp,completion | low | Critical |
2,763,883,449 | pytorch | [RFC] Add CPP Grouped GEMM Template for Inductor CPU | ### 🚀 The feature, motivation and pitch
## Motivation
Grouped GEMM is a common pattern in modeling. For example, in the `LlamaMLP` module (https://github.com/huggingface/transformers/blob/d5aebc64653d09660818109f2fac55b5e1031023/src/transformers/models/llama/modeling_llama.py#L187-L188), the `gate_proj` and `up_proj` layers have the same dimensions and share the same activation. After `gate_proj`, an activation function is applied, and the result of the activation is multiplied by the output of `up_proj` to compute the final output. Fusing the `gate_proj` and `up_proj` layers into a Grouped GEMM improves memory locality when applying the activation and multiplication operations. In this RFC, we propose an approach to implement this Grouped GEMM optimization.
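As a plain-Python illustration of the math being fused (a naive reference sketch of the pattern, not Inductor's generated code — the helper names are made up): both GEMMs consume the same activation, and the epilogue `silu(gate) * up` can run while the results are still hot in cache.

```python
import math

def matmul(x, w):
    # Naive reference GEMM: x is m x k, w is k x n (lists of lists).
    m, k, n = len(x), len(w), len(w[0])
    return [[sum(x[i][p] * w[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def silu(v):
    return v / (1.0 + math.exp(-v))

def grouped_mlp(x, w_gate, w_up):
    # Both GEMMs share the activation `x`; fusing them means the
    # silu + multiply epilogue is applied right after the GEMM tiles
    # are produced, instead of in separate memory-bound passes.
    gate = matmul(x, w_gate)
    up = matmul(x, w_up)
    return [[silu(g) * u for g, u in zip(g_row, u_row)]
            for g_row, u_row in zip(gate, up)]
```

The proposal below generates a single templated kernel for this fused computation rather than two standalone GEMMs plus pointwise ops.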
## Approaches
We propose to implement the Grouped GEMM optimization with a CPP Template, as it is flexible enough to support varying GEMM counts and different epilogue fusions. Here is the proposed design of the key components.
### Pattern Matcher
We introduce a `grouped_gemm_pass` to find the pattern of an anchor node (the activation shared by the Grouped GEMM) and a group of GEMMs. This pattern is replaced with the `grouped_gemm_lowering` lowering function and further lowered into the GEMM Template.
We also evaluated `MultiOutputPattern` to enable the pattern matcher and fusion in post-grad fusion passes. The current limitation is that `MultiOutputPattern` requires a fixed number of output nodes when defining the pattern.
### Inductor Lowering
After lowering into the Grouped GEMM Template, most of the flow is the same as for the standard template. The only extension is that the Grouped GEMM Template may have multiple output nodes. We define the template node with `MultiOutputLayout` and multiple output buffers with `MultiOutput` (each corresponding to one GEMM output).
### Inductor Scheduler Nodes Fusions
In the scheduler node fusion phase,
* Firstly, we fuse the template node (layout of `MultiOutputLayout`) and each GEMM output (`MultiOutput`) into a `FusedSchedulerNode`.
* Then, we further fuse this `FusedSchedulerNode` with its epilogues, e.g. `silu`, `mul`, `relu`.
After this phase, we have the `FusedSchedulerNode` with the Grouped GEMM and its epilogues. Next, code generation is done in the CPP backend via the CPP Grouped GEMM Template.
### CPP Grouped GEMM Template
We define a CPP Grouped GEMM Template which extends current CPP GEMM Template implementation with:
* Flexible number of GEMMs
* Each GEMM can have independent or shared activations
* Each GEMM can have a unique weight, but all weights share the same sizes
* Each GEMM can have a unique bias or None
* Each GEMM has its own epilogues
Specifically, we introduce a `CppGroupedGemmTemplate` class that inherits from `CppGemmTemplate`. Key methods, such as `add_choices` and `render`, are overridden to support the aforementioned features.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Minor |
2,763,883,695 | next.js | Turbopack error when using roboto flex font | ### Link to the code that reproduces this issue
https://github.com/Arctomachine/next-font-roboto-flex-bug
### To Reproduce
Start dev server and open page in browser
### Current vs. Expected behavior
Current: gives a 500 error when the following is present. The error happens when the constant is assigned. Other fonts from the Roboto family work.
```js
import { Roboto_Flex } from 'next/font/google'
const robotoFlex = Roboto_Flex({
subsets: ['latin'],
})
```
Expected: works with this font
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 32720
Available CPU cores: 12
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.1.1-canary.23 // Latest available version is detected (15.1.1-canary.23).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Font (next/font), Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | Font (next/font),Turbopack | low | Critical |
2,763,893,224 | flutter | SelectionArea should expose a selectionHeightStyle option | ### Use case
For a `SelectionArea` with nested `Text` that defines a sufficiently large `height` value, the selected region contains gaps that do not account for the height, as seen in the example below:
```dart
Text(
"Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.",
style: TextStyle(height: 1.5),
),
```
<img width="537" alt="selection_area" src="https://github.com/user-attachments/assets/ee7a4ec1-a9fe-43d3-a25d-8d650fbf5fad" />
### Proposal
The reason for this is that `SelectionArea` does not expose a `selectionHeightStyle` property that allows a user to set `BoxHeightStyle` to an alternate value than the default.
This is provided for `SelectableText` and works as expected, but there is no analogous option in `SelectionArea`.
```dart
SelectableText(
"Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.",
style: TextStyle(height: 1.5),
selectionHeightStyle: ui.BoxHeightStyle.includeLineSpacingTop,
),
```
<img width="538" alt="selectable_text" src="https://github.com/user-attachments/assets/fc902516-f3ae-4e83-8ccf-b7a8d0a137d7" />
------
It would be nice to be able to set this once in `SelectionArea` instead of needing to use `SelectableText`. Further, it is preferable to use `SelectionArea` in a web app (to be able to select by dragging across the entire page), and `SelectableText` may not even be an option.
### Flutter Version
```
Flutter 3.27.1 • channel stable • https://github.com/flutter/flutter.git
Framework • revision 17025dd882 (2 weeks ago) • 2024-12-17 03:23:09 +0900
Engine • revision cb4b5fff73
Tools • Dart 3.6.0 • DevTools 2.40.2
```
### Related Issues:
There are two somewhat related issues/comments:
- https://github.com/flutter/flutter/issues/104429
- https://github.com/flutter/flutter/issues/137817#issuecomment-2010414947
| c: new feature,framework,c: proposal,P3,f: selection,team-framework,triaged-framework | low | Minor |
2,763,894,248 | ui | [bug]: After the mobile sidebar is opened and the page is redirected, the style="pointer-events: none;" in the body affects the click events on the redirected page. | ### Describe the bug
On a mobile page, after opening the sidebar and navigating via a link, the `style="pointer-events: none;"` on the body is not reset after the redirect, which blocks click events on the destination page.
### Affected component/components
Sidebar
### How to reproduce
1. `NavUser.tsx`
```tsx
<SidebarMenu>
<DropdownMenuGroup>
<DropdownMenuItem onClick={() => router.push('/')}>
<Undo2 />
Return to Home Page
</DropdownMenuItem>
</DropdownMenuGroup>
</SidebarMenu>
```
2. After the redirect, the body is left with:
`<body style='pointer-events:none;'></body>`
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
macos
chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,763,902,375 | TypeScript | Incorrect Formatting on `if (a) try {} finally {}` | ### 🔎 Search Terms
format, formatter, finally
### 🕗 Version & Regression Information
- This is the behavior in every version I tried (it didn't change between TS3.3 and TS5.8).
### ⏯ Playground Link
[Playground Link](https://www.typescriptlang.org/play/?ts=5.7.2#code/JYMwBAFALgTgrgUwJRlgTzAbwFBj2AegLADozsBfMEYAOwEMAbRjHfQ4sky7IA)
### 💻 Code
```ts
if (true) try {
// ...
} finally {
// ...
}
```
Click "Format Document" from the context menu or via keyboard shortcut (opt+shift+f).
### 🙁 Actual behavior
The format result is
```ts
if (true) try {
// ...
} finally {
// ...
}
```
### 🙂 Expected behavior
Do not add indentation to the finally block.
### Additional information about the issue
Note that `if (a) try {} catch {}` works fine. Maybe there's something related. [Playground Link](https://www.typescriptlang.org/play/?ts=5.7.2#code/JYMwBAFALgTgrgUwJRlgTzAbwFBj2AegLADozsBfMAYwEMpqALLXfI08qkYAO1oBt+GHPlGFiZEqzwVsQA)
Related #3817 | Bug,Help Wanted,Domain: Formatter | low | Minor |
2,763,915,936 | pytorch | [inductor] [cuda] [fake tensor] `ConvTranspose` behave differently when Input type and weight type are not the same | ### 🐛 Describe the bug
**symptom**:
trigger condition 1: only the input tensor is moved to CUDA; `model.cuda()` is not called
trigger condition 2: the `padding` param is necessary; otherwise, inductor will also raise the error.
**device**: `cuda` only
**exposed area**: `ConvTranspose1d`, `ConvTranspose2d`, `ConvTranspose3d`
```python
import torch
class Model(torch.nn.Module):
def __init__(self, dim):
super().__init__()
self.conv_t = eval(f"torch.nn.ConvTranspose{dim}d(1, 1, kernel_size=(2,) * {dim}, padding=(1,) * {dim})") # trigger condition
def forward(self, x):
x = self.conv_t(x)
return x
def run_test(dim, mode):
x = torch.randn(*([1] * (dim + 2))).cuda() # trigger condition
inputs = [x]
model = Model(dim)
if mode == "inductor":
model = torch.compile(model)
try:
output = model(*inputs)
print(f"success on {mode}: {output}")
except Exception as e:
print(e)
run_test(1, "eager")
run_test(1, "inductor")
run_test(2, "eager")
run_test(2, "inductor")
run_test(3, "eager")
run_test(3, "inductor")
```
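For reference, the output length of a transposed convolution follows the standard formula from the `ConvTranspose` docs; with `L_in = 1`, `kernel_size = 2`, `padding = 1` it evaluates to 0, which matches the empty tensors inductor returns — so the divergence is that eager raises on the device mismatch while inductor silently produces a (validly shaped) empty result. A quick stdlib-only check:

```python
def conv_transpose_len(l_in, kernel_size, stride=1, padding=0,
                       output_padding=0, dilation=1):
    # Output-length formula from the ConvTransposeNd documentation.
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# The repro's configuration: size-1 spatial dims, kernel 2, padding 1.
print(conv_transpose_len(1, 2, padding=1))  # -> 0
# Without the padding trigger condition the output would be non-empty:
print(conv_transpose_len(1, 2))             # -> 2
```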
### Error logs
```
Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
success on inductor: tensor([], device='cuda:0', size=(1, 1, 0), grad_fn=<CompiledFunctionBackward>)
Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
success on inductor: tensor([], device='cuda:0', size=(1, 1, 0, 0),
grad_fn=<CompiledFunctionBackward>)
Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
success on inductor: tensor([], device='cuda:0', size=(1, 1, 0, 0, 0),
grad_fn=<CompiledFunctionBackward>)
```
### Versions
PyTorch version: 20241230
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @nairbv @mruberry @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,module: type promotion,oncall: pt2,module: inductor | low | Critical |
2,763,938,010 | tauri | [bug] Testing docs example doesn't compile when command accepts AppHandle | ### Describe the bug
I am attempting to write tests for my command code, based on the example in the [docs](https://docs.rs/tauri/latest/tauri/test/fn.assert_ipc_response.html#examples). However, when attempting to [pass an AppHandle to the command function](https://tauri.app/develop/calling-rust/#accessing-an-apphandle-in-commands), I get an error "the trait bound `AppHandle: CommandArg<'_, R>` is not satisfied".
### Reproduction
Slightly modified from the docs:
```rust
// Prevents additional console window on Windows in release, DO NOT REMOVE!!
#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]
// Note accessing the AppHandle here!
// Adding this is what's causing the problem.
#[tauri::command]
fn greet(_app: tauri::AppHandle, name: &str) -> String {
format!("Hello, {}! You've been greeted from Rust!", name)
}
fn create_app<R: tauri::Runtime>(mut builder: tauri::Builder<R>) -> tauri::App<R> {
builder
.setup(|app| {
// do something
Ok(())
})
.invoke_handler(tauri::generate_handler![greet])
// remove the string argument on your app
.build(tauri::generate_context!())
.expect("failed to build app")
}
fn main() {
tauri::Builder::default()
.invoke_handler(tauri::generate_handler![greet])
.run(tauri::generate_context!())
.unwrap();
}
#[cfg(test)]
mod tests {
#[test]
fn something() {
let data = r#"{"name": "the test"}"#;
let app = super::create_app(tauri::test::mock_builder());
let window = tauri::WebviewWindowBuilder::new(&app, "main", Default::default())
.build()
.unwrap();
// do something with the app and window
// in this case we'll run the my_cmd command with no arguments
tauri::test::assert_ipc_response(
&window,
tauri::webview::InvokeRequest {
cmd: "greet".into(),
callback: tauri::ipc::CallbackFn(0),
error: tauri::ipc::CallbackFn(1),
url: "http://tauri.localhost".parse().unwrap(),
body: tauri::ipc::InvokeBody::default(),
headers: Default::default(),
invoke_key: tauri::test::INVOKE_KEY.to_string(),
},
Ok("Hello, the test! You've been greeted from Rust!"),
);
}
}
```
### Expected behavior
Code should compile and successfully run tests.
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.1.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.17.0
- pnpm: 9.15.0
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.0.6
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.46.3
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-store 🦀: 2.2.0
- @tauri-apps/plugin-store : 2.2.0
- tauri-plugin-shell 🦀: 2.2.0
- @tauri-apps/plugin-shell : 2.2.0
- tauri-plugin-dialog 🦀: 2.2.0
- @tauri-apps/plugin-dialog : 2.2.0
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs : 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
  - bundler: Vite
```
### Stack trace
```text
error[E0277]: the trait bound `AppHandle: CommandArg<'_, R>` is not satisfied
--> src/welcome/mod.rs:20:1
|
20 | #[tauri::command]
| ^^^^^^^^^^^^^^^^^ the trait `serde::Deserialize<'_>` is not implemented for `AppHandle`, which is required by `AppHandle: CommandArg<'_, R>`
|
::: src/startup.rs:9:25
|
9 | .invoke_handler(tauri::generate_handler![
| _________________________-
10 | | welcome::get_saw_welcome_screen,
11 | | welcome::set_saw_welcome_screen
12 | | ])
| |_________- in this macro invocation
|
= help: the following other types implement trait `serde::Deserialize<'de>`:
`&'a [u8]` implements `serde::Deserialize<'de>`
`&'a serde_json::value::RawValue` implements `serde::Deserialize<'de>`
`&'a std::path::Path` implements `serde::Deserialize<'de>`
`&'a str` implements `serde::Deserialize<'de>`
`&'p jsonptr::pointer::Pointer` implements `serde::Deserialize<'de>`
`()` implements `serde::Deserialize<'de>`
`(T,)` implements `serde::Deserialize<'de>`
`(T0, T1)` implements `serde::Deserialize<'de>`
and 352 others
= note: required for `AppHandle` to implement `CommandArg<'_, R>`
= note: this error originates in the macro `welcome::__cmd__set_saw_welcome_screen` which comes from the expansion of the macro `tauri::generate_handler` (in Nightly builds, run with -Z macro-backtrace for more info)
```
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,763,961,894 | electron | [Bug]: tray not showing up over fullscreen window when clicked on tray icon from a secondary display | > ### Electron Version
> 33.2.1
>
> ### What operating system(s) are you using?
> macOS
>
> ### Operating System Version
> macOS Sonoma 15.1.1
>
> ### What arch are you using?
> arm64 (including Apple Silicon)
>
> ### Last Known Working Electron version
> No response
>
> ### Update
> The tray now shows up when clicked on the tray icon of the secondary display, though not visible on fullscreen apps.
>
> Shows on empty workspace
> 
>
> Shows over non-fullscreen window
> 
>
> Not visible over the same app in fullscreen mode
> 
>
_Originally posted by @Hackerbee in [#43489](https://github.com/electron/electron/issues/43489#issuecomment-2566137038)_ | platform/macOS,bug :beetle:,33-x-y | low | Critical |
2,763,999,273 | godot | Moving mesh files with embedded textures doesn't clear files created in the original location | ### Tested versions
- Reproducible in 4.3 stable
### System information
Godot v4.3.stable - Fedora Linux 41 (Workstation Edition) - Wayland - Vulkan (Forward+) - integrated Intel(R) UHD Graphics (TGL GT1) - 11th Gen Intel(R) Core(TM) i5-11400H @ 2.70GHz (12 Threads)
### Issue description
When a mesh containing embedded textures is imported Godot automatically creates files for textures in that mesh. However, when that mesh is then moved to another directory, the automatically created textures will stay around (with new ones being created in the new directory).
That makes it quite easy to pollute your project with spurious files.
### Steps to reproduce
In editor, drag and drop `anvil.glb` from the `source_dir` to the `target_dir`.
The original texture will remain in `source_dir`.
### Minimal reproduction project (MRP)
[repro.zip](https://github.com/user-attachments/files/18279587/repro.zip)
| bug,topic:import | low | Minor |
2,764,080,451 | transformers | "Is it possible for Hugging Face to implement a chat model for quick information retrieval similar to vLLM?" | ### Feature request

### Motivation
Quick and convenient information retrieval.
### Your contribution
I don't have relevant code for this feature request. | Feature request | low | Minor |
2,764,092,538 | react | [React 19] Bug: optimisticState is updated with both new state and optimisticValue in useOptimistic | Using the React example given for the `useOptimistic` hook in the docs, an update to the state passed to the hook should directly reset the optimisticState in one render.
Instead:
1. It calls the updaterFn first, updating the optimisticState with both the new state and the optimisticValue during one render cycle, then resets the optimisticState with the new state in another render cycle.
2. As a result, both the new state and the optimisticValue are rendered. Tried this at 20x CPU slowdown.
https://github.com/user-attachments/assets/6abfdd9a-c2ea-4776-8df4-48ce2e3eb41d
**React version**: 19
**Steps To Reproduce**
1. Throttle CPU to 20x slowdown
2. Type some input in text box and press submit
3. See that the optimistic state is displayed and then it is pushed down for an instant along with the new state
4. Check console logs
<img width="1728" alt="Screen Shot 2024-12-31 at 1 42 21 PM" src="https://github.com/user-attachments/assets/10c655e8-154c-4813-9f68-eb78e4e1e57e" />
**Link to code example:**
https://codesandbox.io/p/sandbox/react-dev-forked-gmlxnr?file=%2Fsrc%2FApp.js&workspaceId=ws_QiCvK4c476hege6EDsXfpC
**The current behavior**
When new state is passed to useOptimistic,
1. It updates optimisticState by calling updaterFn with new state and optimisticValue
2. Component is rendered using this optimisticState (incorrect one)
3. Another update to optimisticState is done by resetting it to the new state
4. Component is rendered using this new optimisticState (correct one)
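The four-step sequence above can be contrasted with a plain-JavaScript model (illustrative names only, not React's internals) of what the docs imply the semantics should be: the optimistic view is derived from the base state plus pending optimistic values, and committing the real state clears the pending values, so both copies are never visible together:

```javascript
// Minimal model of the expected useOptimistic behavior: when the real
// state lands, the optimistic view is recomputed from the new base state
// alone -- leftover optimistic values are dropped, never combined with it.
function makeOptimistic(updaterFn) {
  let base = [];
  let pending = []; // optimistic values not yet confirmed by real state
  return {
    addOptimistic(value) { pending.push(value); },
    commit(newBase) {
      base = newBase;   // the real update lands...
      pending = [];     // ...and stale optimistic values are dropped
    },
    view() { return pending.reduce(updaterFn, base); },
  };
}

const messages = makeOptimistic((state, text) =>
  [...state, { text, sending: true }]);

messages.addOptimistic("hi");
console.log(messages.view()); // [ { text: 'hi', sending: true } ]
messages.commit([{ text: "hi", sending: false }]);
console.log(messages.view()); // [ { text: 'hi', sending: false } ]
```

In this model there is no intermediate render where the updater is applied on top of the new state, which is the incorrect step observed in the console logs.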
**The expected behavior**
When new state is passed to useOptimistic, it should directly reset the optimisticState without calling the updaterFn. | React 19 | low | Critical |
2,764,134,144 | flutter | Line indentation and paragraph spacing | ### Use case
I would like to render poetic text where a soft-wrap results in greater indentation than the first line. For example, the word "inheritance" in the image below is more indented. Additionally, different lines have different indentation styles:
<img width="333" alt="Screenshot 2024-12-31 at 16 34 38" src="https://github.com/user-attachments/assets/30a3bfd9-301b-4934-a03f-d2f6471ca051" />
It would be possible to achieve an initial indent with a `WidgetSpan` wrapping a `SizedBox`. However, I know of no way to accomplish the soft-wrap indentation in Flutter besides measuring and laying out every word myself.
Another need is to be able to adjust the spacing between paragraphs. Using different Text widgets results in not being able to select text across widgets. Note in the following example, the paragraph spacing is less that it would be if you simply used two newline characters to separate the paragraphs:
<img width="363" alt="Screenshot 2024-12-31 at 16 55 48" src="https://github.com/user-attachments/assets/7a01173c-147d-4ac6-b38f-a1c13d66a2bb" />
Related issues: #12255, #128218
### Proposal
I would like a solution similar to what is available in Android or iOS development.
In Android, indentation is possible with [`LeadingMarginSpan`](https://developer.android.com/reference/android/text/style/LeadingMarginSpan) as shown in the following image ([source](https://stackoverflow.com/a/42058964/3681880)):

In iOS, both indentation and paragraph spacing are possible with [`NSMutableParagraphStyle`](https://developer.apple.com/documentation/uikit/nsmutableparagraphstyle).
An API similar to the iOS solution would be great. | framework,a: typography,c: proposal,P3,team-engine,triaged-engine | low | Minor |
2,764,139,679 | pytorch | torch.cuda.empty_cache() causes extra memory usage on 'cuda:0' | ### 🐛 Describe the bug
# Issue Description:
When utilizing PyTorch with a specific CUDA device (in this case, 'cuda:8'), calling `torch.cuda.empty_cache()` unexpectedly results in additional memory allocation on 'cuda:0', approximately 255MB. This behavior is contrary to expectations, as the operation should ideally only affect the memory cache of the specified device ('cuda:8') and not impact other CUDA devices.
# Code
```python
import numpy as np
import torch
a = np.ones(100)
b = torch.tensor(a).to('cuda:8')
torch.cuda.empty_cache()
# The behavior up to this point is normal: only GPU memory on cuda:8 is used.
del b
torch.cuda.empty_cache()  # At this point, about 255MB of GPU memory on cuda:0 is occupied.
```
# Implications in Multi-GPU Clusters:
This characteristic/bug can pose significant challenges in multi-GPU clusters, especially in shared environments among multiple users. The unintended memory allocation on 'cuda:0' can lead to its memory being exhausted, thereby preventing all users from performing their tasks effectively.
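A mitigation that is often used for this class of problem (assuming the extra ~255MB is a CUDA context being initialized on physical device 0) is to hide the other GPUs from the process before `torch` is imported; the single visible GPU is then renumbered to `cuda:0` inside the process, so no context can be created on the real device 0 by accident. A minimal sketch:

```python
import os

# Must be set before `import torch` (or any other CUDA initialization)
# so the process can only ever create a context on the intended GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "8"

# From here on, code would address the GPU as 'cuda:0', e.g.:
#   import torch
#   b = torch.tensor([1.0]).to("cuda:0")
print(os.environ["CUDA_VISIBLE_DEVICES"])  # -> 8
```

This is a workaround, not a fix: `empty_cache()` should arguably only touch devices that already have an active context.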
# Environment Details:
- **NVIDIA-SMI Version**: 550.54.14
- **Driver Version**: 550.54.14
- **CUDA Version**: 12.4
- **Hardware**: NVIDIA GeForce RTX 3090
### Versions
PyTorch version: 2.4.1.post303
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Anaconda gcc) 11.2.0
Clang version: Could not collect
CMake version: version 3.30.0-rc4
Libc version: glibc-2.35
Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
GPU 4: NVIDIA GeForce RTX 3090
GPU 5: NVIDIA GeForce RTX 3090
GPU 6: NVIDIA GeForce RTX 3090
GPU 7: NVIDIA GeForce RTX 3090
GPU 8: NVIDIA GeForce RTX 3090
GPU 9: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.54.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 104
On-line CPU(s) list: 0-103
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 26
Socket(s): 2
Stepping: 7
CPU max MHz: 3800.0000
CPU min MHz: 1200.0000
BogoMIPS: 5000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.6 MiB (52 instances)
L1i cache: 1.6 MiB (52 instances)
L2 cache: 52 MiB (52 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-25,52-77
NUMA node1 CPU(s): 26-51,78-103
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] gpytorch==1.13
[pip3] numpy==1.26.4
[pip3] numpy-groupies==0.10.2
[pip3] numpyro==0.13.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] nvidia-nvjitlink-cu12==12.3.101
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-ignite==0.5.0.post2
[pip3] pytorch-lightning==2.2.0.post0
[pip3] torch==2.4.1.post303
[pip3] torch-geometric==2.6.1
[pip3] torchaudio==2.4.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.19.1
[pip3] triton==2.2.0
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.3.101 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] cudnn 9.3.0.75 cuda11.8 nvidia
[conda] gpytorch 1.13 pypi_0 pypi
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.4.107 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libmagma 2.8.0 hfdb99dd_0 conda-forge
[conda] libmagma_sparse 2.8.0 h9ddd185_0 conda-forge
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] libopenvino-pytorch-frontend 2024.4.0 h5888daf_2 conda-forge
[conda] libtorch 2.4.1 cuda118_h232d35b_303 conda-forge
[conda] mkl 2024.1.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310ha3dbc2a_1 conda-forge
[conda] mkl_random 1.2.5 py310hbd113e2_1 conda-forge
[conda] nccl 2.23.4.1 h03a54cd_2 conda-forge
[conda] numpy 1.25.2 pypi_0 pypi
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] numpy-groupies 0.10.2 pypi_0 pypi
[conda] numpyro 0.13.2 pyhd8ed1ab_0 conda-forge
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.19.3 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.3.101 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pyg 2.6.1 py310_torch_2.4.0_cu121 pyg
[conda] pytorch 2.4.1 cuda118_py310h8b36b8a_303 conda-forge
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-ignite 0.5.0.post2 pypi_0 pypi
[conda] pytorch-lightning 2.2.0.post0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.1 py310_cu121 pytorch
[conda] torchmetrics 1.3.1 pypi_0 pypi
[conda] torchtriton 2.2.0 py310 pytorch
[conda] torchvision 0.19.1 py310_cu121 pytorch
[conda] triton 2.2.0 pypi_0 pypi
cc @ptrblck @msaroufim @eqy | module: cuda,triaged,module: CUDACachingAllocator | low | Critical |
2,764,143,587 | svelte | Provide a 'reset' utility for function-based Springs | ### Describe the problem
The new Spring class is great, and in particular the fact that `Spring.of()` can be used in conjunction with `spring.set()` is super useful.
However, after using `.set()`, the spring stays at the provided value until reactivity causes its initial function to be reevaluated. This is ***expected*** and not a problem at all, but it's just undesirable in some cases, so I was wondering if DX could be improved a bit.
Consider something like:
```js
// a, b, c and d change rarely
let a = $state(2);
let b = $state(4);
let c = $state(8);
let d = $state(16);
const spring = Spring.of(() => a * b * c * d);
function setSpringValue(x) {
spring.set(x);
}
```
The spring will remain at the value passed as `x` until either `a`, `b`, `c` or `d` change, which could take a long time (or could happen literally immediately, making the whole thing a bit useless).
If we want the spring to go back to `a * b * c * d` after reaching `x`, one way to do it is:
```js
// ...
let calculatedTarget = $derived(a * b * c * d);
const spring = Spring.of(() => calculatedTarget);
function setSpringValue(x) {
  spring.set(x).then(() => spring.set(calculatedTarget));
}
```
Again, this is **fine**, but can just turn into a lot of boilerplate in real-world situations with lots of spring-based UIs.
### Describe the proposed solution
To go along with the `set()` feature, it would be nice to have a way to force a reevaluation of the function initially provided to the Spring.
Something like `spring.reset()` could make things a little cleaner imo, instructing the spring to re-run its initial function without having to create a `$derived`.
```diff
- let calculatedTarget = $derived(a * b * c * d);
- const spring = Spring.of(() => calculatedTarget);
- function setSpringValue(x) {
- spring.set(x).then(() => spring.set(calculatedTarget));
- }
+ const spring = Spring.of(() => a * b * c * d);
+ function setSpringValue(x) {
+ spring.set(x).then(() => spring.reset());
+ }
```
And additionally, maybe an option could be passed to `.set()`, to make this process even easier?
```js
// default behavior:
spring.set(x, { reevaluate: 'reactive' });
// stop responding to reactivity until `x` is reached, then reevaluate the initial fn
spring.set(x, { reevaluate: 'after' });
// responds to reactivity throughout animation towards `x` as usual, and upon reaching `x` reevaluate the initial fn
spring.set(x, { reevaluate: 'reactiveAfter' });
// stops reactivity entirely until manually re-enabled with spring.reset()
spring.set(x, { reevaluate: 'never' });
```
Conversely, this would also allow for more control in situations where `a`, `b`, `c` and `d` change a *lot*, and we just want the spring to chill for a bit.
### Importance
would make my life easier | transition/animation | low | Minor |
2,764,153,106 | godot | Support way to create Vulkan surface under Freedreno/Mobile Vulkan that does not expect X11 | ### Tested versions
4.3
### System information
ARM64 Android with Godot 4.3 under a PRoot environment and a valid X11 desktop session, using Termux and Termux-X11
### Issue description
I have an Android gaming tablet with a Snapdragon 8 Gen 3 and 16GB of RAM, and although I know that there is an Android version of the Godot Editor, I mainly program in C#, and the Android editor does not contain the dotnet compiler, so I'm forced to go with the Termux + Debian PRoot + Termux-X11 setup: I install both VSCode and the dotnet compiler in it, get a Linux desktop environment running, and then run the real Linux Godot Editor for ARM64 at its full potential.
I can confirm that I can run the dotnet compiler and build a full-fledged C# project after setting the environment variable `DOTNET_GCHeapHardLimit=1C0000000`, [which is a known bug since dotnet 7](https://github.com/dotnet/runtime/issues/85556). I have even tested it with NativeAOT, and I have confirmed that my GPU driver is working correctly by using Freedreno.
Note that I cannot use the Android-native Vulkan driver, because Android vendors usually implement a subset of Vulkan in a non-conformant and not-so-correct way, so I'm forced to use Freedreno, which takes a small performance hit but should guarantee better Vulkan conformance, given that Mesa follows the standard strictly.
I tried to launch the Godot Editor under my environment, but I noticed that Vulkan failed to initialize, and so did the other rendering drivers (both GL and GLES failed as well).
I'm getting very close to making it run, so I did some diagnostics. My speculation is that https://github.com/godotengine/godot/blob/fafc07335bdecacd96b548c4119fbe1f47ee5866/platform/linuxbsd/x11/rendering_context_driver_vulkan_x11.cpp#L50-L51 is the culprit. My guess is that `vkCreateXlibSurfaceKHR` failed and returned the equivalent of "not implemented", and that `vkCreateAndroidSurfaceKHR` is expected to be used instead. This could be a vendor-specific issue, as indeed Qualcomm wouldn't expect people to run X11 under Android.
Strangely, I have also tested the whole thing again, but this time using LLVMPipe as the Mesa driver. This means Vulkan, OpenGL and GLES are now all technically software rendering. Everything works, including mouse and sound (although I haven't tested packaging), but obviously that wouldn't be very efficient, since everything graphics-related is processed by the CPU, and we can't just let the Adreno GPU sit idle. ~~I suspect the Vulkan backend still doesn't work~~ (actually, that is lavapipe), but both the GL and GLES backends should technically work, given Mesa and LLVMPipe are doing all the heavy lifting, and there is an automatic backend selection to either one of the two that works.
So now we have two problems:
1. Is there any way to bypass `vkCreateXlibSurfaceKHR` and find a more compatible fallback?
2. Can the Godot Android Editor include C# support as well? I can run dotnet inside Termux already, so given the right dotnet SDK home configuration, the editor could technically spawn the dotnet process using a syscall (heck, maybe `system(3)` would work as well). How to get language servers to work in the Android Editor is another issue, since we cannot use VSCode
### Steps to reproduce
1. Install a Termux instance and a Linux desktop environment with GPU acceleration under PRoot. I recommend using [this](https://github.com/sabamdarif/termux-desktop) and Snapdragon devices, as they have an open-source GPU driver that is usable in userspace.
2. Install Termux-X11
3. Run Termux-X11 and the Termux X11 session together so that the X11 surface is valid, and confirm that the GPU is working by running vkcube or glxgears
4. Install dotnet for ARM64 under that Termux session, and make sure `DOTNET_GCHeapHardLimit=1C0000000` is set in all shell profiles
5. Download Godot editor for Linux ARM64 and run it inside the Termux X11 session with GPU acceleration
### More details
https://github.com/sabamdarif/termux-desktop/issues/56 | enhancement,topic:rendering,topic:porting | low | Critical |
2,764,167,241 | ant-design | Allow adding badges next to Typography text | ### What problem does this feature solve?
I have to add `marginBottom: 19` to keep the badge and text vertically aligned. Ideally, I shouldn't have to do this.
```
<Space style={{ marginBottom: 19 }}>
<Typography.Title style={{ marginBottom: 0 }}>
{this.state.linkInfo?.title}
</Typography.Title>
<Tag color="red">
{this.state.linkInfo?.is_tracking_pixel_link
? 'Tracking Pixel'
: 'Link'}
</Tag>
</Space>
```
### What does the proposed API look like?
```
<Space>
<Typography.Title>
{this.state.linkInfo?.title}
</Typography.Title>
<Tag color="red">
{this.state.linkInfo?.is_tracking_pixel_link
? 'Tracking Pixel'
: 'Link'}
</Tag>
</Space>
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,764,185,700 | PowerToys | Windows don't remember where I put them | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
I used the Launch editor to create a workspace, opened and positioned all my apps, and saved it. It works great. I saved a shortcut to my taskbar. Scenario 1 works as expected.
- Scenario 1 is as follows:
- Turn on the computer and click the icon in the taskbar; all apps open and go to where I set them up. Use the computer all day. Close all windows and put the computer to sleep. The next day, wake the computer and click the icon in the taskbar; all apps open as expected, and all of them are on the monitors where I originally set them up.
- Scenario 2 doesn't work as I expect.
- Scenario 2 is as follows:
- Turn on the computer and click the icon in the taskbar; all apps open and go to where I set them up. Use the computer all day. Move a bunch of the windows (there was glare coming in my window from the sun, so I moved the apps to a different monitor so I could see the screen). Close all windows and put the computer to sleep. The next day, wake the computer and click the icon in the taskbar; **all the correct apps open as expected, BUT they all go to where I left them the day before, NOT to where I set them up in the editor.**
### ✔️ Expected Behavior
I expected the windows to always go to where I set them up in Workspaces, even though I may have moved them throughout the day.
### ❌ Actual Behavior
Windows went to where I last used them.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response,Product-Workspaces | low | Major |
2,764,215,829 | transformers | Support SDPA & Flash Attention 2 for LayoutLMv3 | ### Feature request
Support SDPA & Flash Attention 2 for LayoutLMv3.
### Motivation
Speed up document transformer training and inference.
### Your contribution
I can add it. | Feature request | low | Minor |
2,764,234,748 | react-native | onResponderRelease on View is called one time when releasing two fingers or more at exactly the same time. | ### Description
Code:
```
<View
style={styles.container}
onStartShouldSetResponder={() => true}
onMoveShouldSetResponder={() => true}
onResponderStart={addFingerOnScreen}
onResponderMove={moveFingersOnScreen}
onResponderRelease={releaseFingerFromScreen}>
</View>
```
Actions:
- I pressed the screen (on the view) with two fingers
- I released the screen with my two fingers at exactly the same time
Result:
onResponderRelease is called one time with `touches` length 1 and `changedTouches` length 1.
Expected:
onResponderRelease is called two times: the first time with `touches` length 1 and `changedTouches` length 1, and the second time with `touches` length 0 and `changedTouches` length 1.
OR
onResponderRelease is called one time, with `touches` length 0 and `changedTouches` length 2.
### Steps to reproduce
- I pressed the screen (on the view) with two fingers
- I released the screen with my two fingers at exactly the same time
### React Native Version
0.76.5
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 15.2
CPU: (8) arm64 Apple M1
Memory: 324.31 MB / 8.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.12.2
path: ~/.nvm/versions/node/v20.12.2/bin/node
Yarn: Not Found
npm:
version: 10.5.0
path: ~/.nvm/versions/node/v20.12.2/bin/npm
Watchman:
version: 2024.12.02.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.14.3
path: /Users/Modis/.rvm/gems/ruby-2.7.6/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK: Not Found
IDEs:
Android Studio: 2023.2 AI-232.10227.8.2321.11479570
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 2.7.6
path: /Users/Modis/.rvm/rubies/ruby-2.7.6/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
-
```
### Reproducer
https://github.com/mohamed-amine-haine/responder-release-bug
### Screenshots and Videos
_No response_ | Issue: Author Provided Repro,API: Easing | low | Critical |
2,764,250,563 | PowerToys | Translate | ### Description of the new feature / enhancement
An option like Text Extractor that allows you to translate text selected from an image, with the ability to choose the source language and, if possible, the target language.
### Scenario when this would be used?
This would be useful for the many apps that are only available in English or other languages.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,764,269,550 | godot | Mouse cursor invisible in code editor | ### Tested versions
- reproducible in v4.4.dev7.official [46c8f8c5c]
- somewhat reproducible in dev6, dev5, dev4, dev3, dev2, dev1 (see below)
- not reproducible in v4.3.stable.official [77dcf97d8]
Behavior in 4.4 dev5-dev1 is inconsistent, the cursor sporadically disappears when moving the mouse. It most consistently disappears when entering the script editor for the first time after switching to the Script tab.
On dev7 it is much easier to reproduce in a blank project, and on one project I'm working on it's pretty much consistent.
### System information
Godot v4.4.dev7 - Windows 11 (build 22631) - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 3050 Ti Laptop GPU (NVIDIA; 32.0.15.6603) - 12th Gen Intel(R) Core(TM) i5-12500H (16 threads)
### Issue description
The mouse cursor becomes invisible when it is inside the script editor. Clicking on code and tooltips on hover still work. Opening the right-click menu causes the cursor to become visible again. The mouse cursor doesn't become invisible over the sidebar, but it does become invisible hovering over the text fields in the search fields.
For what it's worth, any running scene with a TextEdit node displays the same behavior. As well as any text field in the inspector.
### Steps to reproduce
Open the script editor, create a new script, place the cursor over the text field and notice it becomes invisible.
### Minimal reproduction project (MRP)
Apart from reproducing it in the editor, you can also check the following project:
[invisible-cursor-test.zip](https://github.com/user-attachments/files/18281098/invisible-cursor-test.zip)
| bug,platform:windows,topic:editor,regression | low | Minor |
2,764,280,966 | rust | Built-in attributes are treated differently vs prelude attributes, unstable built-in attributes can name-collide with stable macro, and built-in attributes can break back-compat | Example breakage: [Broken build after updating: `coverage` is ambiguous; ambiguous because of a name conflict with a builtin attribute](https://github.com/rust-lang/rust/issues/121157)
Example code:
```rs
macro_rules! coverage {
() => {
/* .. */
};
}
pub(crate) use coverage; // `use` here becomes ambiguous
```
> `test` is similar to a proc-macro, which is exposed via the prelude. It is not a "built-in" attribute.
>
> The reference hasn't really been updated from when that changed. The sub-namespace section also probably should be clearer on what it means to shadow. I also don't have a good explanation why a prelude attribute is treated differently from a built-in one.
>
_Originally posted by @ehuss in [#121157](https://github.com/rust-lang/rust/issues/121157#issuecomment-1952988065)_
This is an interesting problem that has three aspects:
1. (T-compiler) Built-in attributes like `#[coverage(..)]` are handled differently versus prelude attributes like `#[test]`, including name resolution.
2. (T-compiler) Current feature-gating of *unstable* built-in attributes is insufficient: adding a new unstable built-in attribute gated behind a feature gate (e.g. `#[coverage]`) can still break stable code without any feature gates (e.g. `use` of a user-defined macro of the same name as the newly added built-in attribute).
3. (T-compiler, T-lang) Stabilization of a built-in attribute can break backwards compatibility: old code can be broken by addition of a new built-in attribute.
It might be tricky (or impossible) to change; I mostly opened this issue for awareness. | A-attributes,A-resolve,A-stability,T-lang,T-compiler,C-discussion,I-lang-radar | low | Critical |
2,764,285,045 | transformers | LayerDrop broken in various Flax models (Whisper/BART/more...) | ### System Info
- `transformers` version: 4.44.2
- Platform: Linux-6.6.56+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (False)
- Tensorflow version (GPU?): 2.17.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.8.4 (cpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.26
- Using distributed or parallel set-up in script?: N/A
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Train a FlaxWhisperForConditionalGeneration model with encoder/decoder layerdrop activated.
```python
from transformers import FlaxWhisperForConditionalGeneration
import numpy as np
model = FlaxWhisperForConditionalGeneration.from_pretrained('openai/whisper-tiny')
model.config.encoder_layerdrop = 1.0
model.encode(np.random.rand(1, 80, 3000), train=True)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-14-4ffcfa8247ef> in <cell line: 6>()
4 model = FlaxWhisperForConditionalGeneration.from_pretrained('openai/whisper-tiny')
5 model.config.encoder_layerdrop = 1.0
----> 6 model.encode(np.random.rand(1, 80, 3000), train=True)
/usr/local/lib/python3.10/dist-packages/transformers/models/whisper/modeling_flax_whisper.py in encode(self, input_features, attention_mask, output_attentions, output_hidden_states, return_dict, train, params, dropout_rng, **kwargs)
1006 return encode_module(input_features, **kwargs)
1007
-> 1008 return self.module.apply(
1009 {"params": params or self.params},
1010 input_features=jnp.array(input_features, dtype="f4"),
[... skipping hidden 4 frame]
/usr/local/lib/python3.10/dist-packages/transformers/models/whisper/modeling_flax_whisper.py in _encoder_forward(module, input_features, **kwargs)
1004 def _encoder_forward(module, input_features, **kwargs):
1005 encode_module = module._get_encoder_module()
-> 1006 return encode_module(input_features, **kwargs)
1007
1008 return self.module.apply(
[... skipping hidden 2 frame]
/usr/local/lib/python3.10/dist-packages/transformers/models/whisper/modeling_flax_whisper.py in __call__(self, input_features, output_attentions, output_hidden_states, return_dict, deterministic)
709 )
710
--> 711 last_hidden_states = outputs[0]
712 last_hidden_states = self.layer_norm(last_hidden_states)
713
/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py in __getitem__(self, k)
431 return inner_dict[k]
432 else:
--> 433 return self.to_tuple()[k]
434
435 def __setattr__(self, name, value):
IndexError: tuple index out of range
```
### Expected behavior
I'm using FlaxWhisperForConditionalGeneration, but I see the same code in a bunch of models.
Here, `hidden_states` is set to `None` if the layer is dropped, causing the error.
https://github.com/huggingface/transformers/blob/d5aebc64653d09660818109f2fac55b5e1031023/src/transformers/models/whisper/modeling_flax_whisper.py#L442-L453
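As an aside, here is a stdlib-only analogy (no JAX involved; the names are illustrative, not from the repository) of why a Python-level `random.uniform(0, 1)` evaluated during tracing gets baked into the compiled function, so every later call drops the same layers:

```python
import random

def build_compiled_step(num_layers, layerdrop):
    # Python-level control flow runs once, while the function is built
    # (the analogue of JAX tracing); the draws are not repeated per call.
    keep = [random.uniform(0, 1) >= layerdrop for _ in range(num_layers)]

    def step(x):
        # The keep/drop pattern is baked in and never changes between calls.
        for kept in keep:
            if kept:
                x = x + 1  # stand-in for "apply this layer"
        return x

    return step

random.seed(0)
step = build_compiled_step(num_layers=4, layerdrop=0.5)
first, second = step(0.0), step(0.0)
assert first == second  # the same layers are dropped on every invocation
```

Under `jax.jit`, the same thing happens to any Python-side randomness: to make the decision vary per step, the draw would need to come from a `jax.random` key threaded through each call, so it becomes part of the traced computation.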
After fixing that, I also noticed that `dropout_probability = random.uniform(0, 1)` is only run during tracing, so looping a compiled training step will always drop the same layers. | bug | low | Critical |
2,764,342,295 | yt-dlp | [ie/tv2] metadata and playback JSON URLs return HTTP Error 404: Not Found | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Germany
### Provide a description that is worded well enough to be understood
Using even the `_TESTS` URLs, the TV2 extractor returns 404 on either the metadata JSON or the playback (or rather [sic] "playabck") JSON URLs.
Using the first `_TESTS` URL http://www.tv2.no/v/1791207/ passes the metadata download, but fails the playback download. If I use the page URL it redirects to in the browser, https://www.tv2.no/video/nyheter/her-kolliderer-romsonden-med-asteroiden/20083301, it fails the metadata already. (Note that the redirect has a different `video_id`.)
I did notice, when using the browser and checking what it does in its debug console, that some metadata containing the M3U8 URL is downloaded from `https://tv2news.play.cf.eu-north-1-prod.vmnd.tv/api/v2/asset/{video_id}/play?contentType=application%2Fx-mpegurl`, but as a POST request, which I have only managed to reproduce using curl so far, not in yt-dlp's Python code.
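For what it's worth, a minimal stdlib sketch of how that POST could be expressed in Python (the request is constructed but deliberately not sent; the JSON body is an assumption — it reuses the device payload the existing extractor sends to the old endpoint, per the traceback below — and may not be what this endpoint expects):

```python
import json
import urllib.request

video_id = '20083301'  # video id from the redirected page URL above
url = ('https://tv2news.play.cf.eu-north-1-prod.vmnd.tv/api/v2/asset/'
       f'{video_id}/play?contentType=application%2Fx-mpegurl')
# Assumption: the endpoint accepts the same device payload the current
# extractor already POSTs to api.sumo.tv2.no.
payload = {'device': {'id': '1-1-1', 'name': 'Nettleser (HTML)'}}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={'content-type': 'application/json'},
    method='POST',  # the browser issues a POST here, not a GET
)
assert req.get_method() == 'POST'
# urllib.request.urlopen(req)  # deliberately not executed in this sketch
```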
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'http://www.tv2.no/v/1791207/']
[debug] User config "/home/barsnick/.config/yt-dlp/config": ['-o', '%(title).200B [%(extractor)s %(id)s].%(ext)s', '--parse-metadata', 'title:\\d.+(views|reactions) \\| (?P<title>.+)']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [65cf46cdd] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: 0b6b7742c
[debug] Python 3.13.1 (CPython x86_64 64bit) - Linux-6.12.6-200.fc41.x86_64-x86_64-with-glibc2.40 (OpenSSL 3.2.2 4 Jun 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.0.2 (setts), ffprobe 7.0.2, phantomjs 1.9.8
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2023.05.07, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.46.1, urllib3-1.26.20, websockets-13.0.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Plugin directories: ['/home/barsnick/.config/yt-dlp/plugins/barsnick/yt_dlp_plugins']
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[debug] Using fake IP 84.210.172.100 (NO) as X-Forwarded-For
[TV2] Extracting URL: http://www.tv2.no/v/1791207/
[TV2] 1791207: Downloading metadata JSON
[TV2] 1791207: Downloading playabck JSON
ERROR: [TV2] 1791207: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: Not Found>)
File "/home/barsnick/Development/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/home/barsnick/Development/yt-dlp/yt_dlp/extractor/tv2.py", line 55, in _real_extract
data = self._download_json(f'https://api.sumo.tv2.no/play/{video_id}?stream={protocol}',
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
video_id, 'Downloading playabck JSON',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
headers={'content-type': 'application/json'},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
data=b'{"device":{"id":"1-1-1","name":"Nettleser (HTML)"}}')['playback']
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/barsnick/Development/yt-dlp/yt_dlp/extractor/common.py", line 1152, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
File "/home/barsnick/Development/yt-dlp/yt_dlp/extractor/common.py", line 1112, in download_handle
res = self._download_webpage_handle(
url_or_request, video_id, note=note, errnote=errnote, fatal=fatal, encoding=encoding,
data=data, headers=headers, query=query, expected_status=expected_status,
impersonate=impersonate, require_impersonation=require_impersonation)
File "/home/barsnick/Development/yt-dlp/yt_dlp/extractor/common.py", line 962, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data,
headers=headers, query=query, expected_status=expected_status,
impersonate=impersonate, require_impersonation=require_impersonation)
File "/home/barsnick/Development/yt-dlp/yt_dlp/extractor/common.py", line 911, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/home/barsnick/Development/yt-dlp/yt_dlp/extractor/common.py", line 898, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query, extensions))
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/barsnick/Development/yt-dlp/yt_dlp/YoutubeDL.py", line 4172, in urlopen
return self._request_director.send(req)
~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/home/barsnick/Development/yt-dlp/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
File "/home/barsnick/Development/yt-dlp/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
File "/home/barsnick/Development/yt-dlp/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
~~~~~~~~~~^^^^^^^^^
File "/home/barsnick/Development/yt-dlp/yt_dlp/networking/_requests.py", line 365, in _send
raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 404: Not Found
```
| site-bug | low | Critical |
2,764,425,084 | pytorch | Training fails with Torch 2.1.0 on Nvidia Jetpack 5.1.2 | ### 🐛 Describe the bug
Hello
We are trying to run a training on Nvidia Jetson devices with compute capabilities 7.2 and 8.7.
The system properties are as follows:
```
Python 3.8
Torch 2.1.0
Torchvision 0.16.2
CUDA 11.4
Nvidia Jetpack 5.1.2
Ubuntu 20.04
```
At the beginning of a simple MNIST training run, while executing `loss.backward()`, we get the error below:
```
File "/mnt/nvme/.venvs/venv3_8/lib/python3.8/site-packages/torch/autograd/__init__.py", line 204, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Event device type CUDA does not match blocking stream's device type CPU.
```
The error occurs when we use an environment in which Torch is added using the [.whl from Jetson Zoo](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048).
Even though we built our own torch .whl according to the [build file in Jetson containers](https://github.com/dusty-nv/jetson-containers/blob/master/packages/pytorch/build.sh), we get the same error.
When we use the same scripts to build torch for CUDA 12.2 and run the same simple MNIST training, we do not get the error.
I appreciate any help.
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+git7bcf7da
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.31
Python version: 3.8.10 (default, Nov 7 2024, 13:10:47) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.120-tegra-aarch64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 3
Vendor ID: ARM
Model: 1
Model name: ARMv8 Processor rev 1 (v8l)
Stepping: r0p1
CPU max MHz: 2201.6001
CPU min MHz: 115.2000
BogoMIPS: 62.50
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 3 MiB
L3 cache: 6 MiB
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, but not BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp uscat ilrcpc flagm
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] onnx==1.17.0
[pip3] torch==2.1.0a0+git7bcf7da
[pip3] torchaudio==2.1.0+6ea1133
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.16.2+c6f3977
[conda] Could not collect
cc @ptrblck @puririshi98 | triaged,module: jetson | low | Critical |
2,764,523,363 | ui | [feat]: Flex Component | ### Feature description
An easy-to-use Flex component where users can set `vertical`, `wrap`, `justify`, `align`, `flex`, `gap`, and `component`, as in other UI libraries, particularly Ant Design.
| Property | Description |
| --------- | ----------- |
| vertical | Whether the flex direction is vertical (uses `flex-direction: column`) |
| wrap | Whether elements are displayed on a single line or wrapped onto multiple lines |
| justify | Sets the alignment of elements along the main axis |
| align | Sets the alignment of elements along the cross axis |
| flex | The `flex` CSS shorthand property |
| gap | Sets the gap between items |
| component | Custom element type |
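A minimal sketch of how such props could map to inline flex styles (the helper name and mapping below are hypothetical illustrations, not an existing shadcn/ui API):

```typescript
// Hypothetical prop-to-style mapping for a Flex component (illustration only).
type FlexProps = {
  vertical?: boolean;
  wrap?: boolean;
  justify?: string;
  align?: string;
  flex?: string;
  gap?: string | number;
};

function flexStyle(props: FlexProps): Record<string, string> {
  const style: Record<string, string> = { display: "flex" };
  // `vertical` toggles the main axis, per the table above.
  style["flex-direction"] = props.vertical ? "column" : "row";
  if (props.wrap) style["flex-wrap"] = "wrap";
  if (props.justify) style["justify-content"] = props.justify;
  if (props.align) style["align-items"] = props.align;
  if (props.flex) style["flex"] = props.flex;
  if (props.gap !== undefined)
    style["gap"] = typeof props.gap === "number" ? `${props.gap}px` : props.gap;
  return style;
}
```

The actual component would spread these styles (or equivalent utility classes) onto the element chosen via `component`.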
### Affected component/components
Flex
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,764,529,833 | godot | High CPU usage when editor window has focus | ### Tested versions
Reproducible in v4.4.dev7.official [46c8f8c5c], 4.3 stable seems ok
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 3 monitors - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 (Advanced Micro Devices, Inc.; 32.0.12019.1028) - 12th Gen Intel(R) Core(TM) i3-12100F (8 threads)
### Issue description
When the editor window has focus but is idle, total editor CPU usage stays around 4% (around 30% of a single core). Profiling a local build of the editor shows time being spent in `OS_Windows::add_frame_delay` and `OS_Windows::get_ticks_usec`, which may be related to https://github.com/godotengine/godot/pull/99833. If so, this looks similar to #99593, just not as bad; I think the PR included changes to reduce CPU usage, but there's still a noticeable jump from previous versions. For comparison, 4.3 stable usually stays under 0.1% for me.
CPU usage actually seems to drop a bit when the editor is doing more. Enabling `interface/editor/update_continuously` reduces CPU usage from around 4% to 1.5%, but increases GPU usage to around 5.5%. Running a CPUParticles2D emission in the editor has a similar effect, as does simply panning around in the 2D scene view.
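For intuition only (this is not Godot code): a frame delay implemented as a spin on the clock burns CPU for the whole wait, while a sleep-based delay mostly does not, which would match a profile dominated by `add_frame_delay`/`get_ticks_usec`:

```python
import time


def busy_wait(seconds: float) -> None:
    # Spin on the clock, analogous to repeatedly polling get_ticks_usec().
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass


def sleep_wait(seconds: float) -> None:
    # Yield to the OS scheduler instead of spinning.
    time.sleep(seconds)


def cpu_cost(fn, seconds: float) -> float:
    # CPU time actually consumed by the delay, not wall-clock time.
    start = time.process_time()
    fn(seconds)
    return time.process_time() - start
```

On a typical machine `cpu_cost(busy_wait, 0.05)` is close to the full 50 ms, while `cpu_cost(sleep_wait, 0.05)` is near zero.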
### Steps to reproduce
Seems to happen in any project, or even the Project Manager window - focus the editor window and check CPU usage in task manager
### Minimal reproduction project (MRP)
N/A, see above | bug,needs testing,topic:gui,performance | low | Major |
2,764,532,408 | flutter | Allow creating "darwin" (shared codebase for ios/macos) plugins directly in `flutter create` | ### Use case
Right now, when you create a darwin plugin, you have to start with an ios or macos template and then follow the [docs](https://docs.flutter.dev/packages-and-plugins/developing-packages#shared-ios-and-macos-implementations). There are clear differences between the macos and ios templates. Which one is preferred to start with? Why not just add a darwin platform?
### Proposal
Simply make 'darwin' a valid platform option and create a darwin project directly from CLI.
```sh
flutter create --template=plugin --platforms="darwin" my_plugin
``` | platform-ios,tool,platform-mac,c: proposal,team-ecosystem,P2,a: plugins,triaged-ecosystem | low | Minor |
2,764,560,558 | flutter | [Support] Creating `.pkg` file for MacOS release build along with a `.app` file to allow publishing to App Store | ### Use case
Today, Flutter only builds a local MacOS app in `release` mode, which is very different from building a releasable version of the app.
The only way to release a MacOS app today is to archive the project using Xcode, which creates a `.pkg` file to be uploaded.
The command `flutter build macos --release` is therefore useless for releasing the app on the store, unfortunately.
The problem with that is that we can't obfuscate MacOS apps: since we need to build them through Xcode, we can't use the `--obfuscate` flag of the flutter command.
### Proposal
It would be nice if `flutter build macos --release` also created a `.pkg` file in addition to the `.app`.
This way, we could use Apple's Transporter app to upload the binary directly to the App Store without rebuilding with Xcode.
2,764,575,663 | angular | Show deprecated template items in IDE | ### Which @angular/* package(s) are relevant/related to the feature request?
language-service
### Description
This issue was raised in the [vscode-ng-language-service](https://github.com/angular/vscode-ng-language-service) repo by [@mr1008](https://github.com/Mr1008), i.e. https://github.com/angular/vscode-ng-language-service/issues/1995; however, I think it would be implemented in the base language service, which is in this repo.
Description copied from https://github.com/angular/vscode-ng-language-service/issues/1995,
> # 🚀 feature request
> ### Description
> Currently there is no way of marking components as deprecated. It would be cool if template editor supported standard way how deprecated code should be marked in TS/JS, using JSDoc [@deprecated](https://github.com/deprecated) feature. This would help working in enterprise grade environment when there are multiple components and some of these should no longer be used. Currently it is very easy to forget/overlook such usage. Displaying it somehow in editor in straightforward way would help a lot.
>
> ### Feature Type
> Display somehow that component is deprecated.
>
> ### Describe the solution you'd like
> I believe the standard way of displaying it would be fine, similar to what vscode does for TS/JS code. Currently WebStorm supports it, as shown in the screenshot attached to the original issue.
>
> It supports the standard way of marking elements as deprecated, using the JSDoc `@deprecated` tag. For example:
>
> ```ts
> /**
>  * @deprecated DO NOT USE THIS TABLE.
>  */
> @Component({
>   selector: 'app-table-old',
>   templateUrl: './old-table.component.html',
>   styleUrls: ['./old-table.component.scss']
> })
> ```
FYI @Mr1008
### Proposed solution
This could be implemented as a rule that provides diagnostics with deprecated tags, which are then shown in the IDE, similar to what happens in TypeScript files.
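As a rough illustration (a hypothetical helper, not the actual language-service API), such a rule would essentially need to pull the message out of the `@deprecated` JSDoc tag on the component class:

```typescript
// Hypothetical sketch: extract a deprecation message from a JSDoc comment,
// the way a language-service rule might flag usages in templates.
function deprecationMessage(jsdoc: string): string | null {
  // Capture the rest of the line after the @deprecated tag.
  const match = jsdoc.match(/@deprecated\b([^\r\n]*)/);
  if (!match) return null;
  // Strip a trailing comment terminator for one-line comments.
  const msg = match[1].replace(/\*\/\s*$/, "").trim();
  return msg.length > 0 ? msg : "deprecated";
}
```

In practice the language service would read the tag from the TypeScript symbol of the component rather than re-parsing comment text, and attach a deprecation-tagged diagnostic to each template usage.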
### Alternatives considered
No alternatives in VS Code.
The functionality seems to be available in WebStorm; it sounds like they have a forked or extended version of the language service. It would be good to have this in the base language service so that more IDEs can offer it. | area: language-service | low | Major |
2,764,575,796 | pytorch | [ONNX] Documentation describe the metadata stored in exported models | null | module: onnx,triaged | low | Minor |
2,764,582,923 | rust | ICE when using a generic const expression in a where clause | ### Disclaimer
I know the `generic_const_exprs` feature is incomplete and don't expect it to fully work; this report is just in case it's a new issue and turns out to be helpful.
### Code
```Rust
#![feature(generic_const_exprs)]
pub struct Struct<const N: usize>;
impl<const N: usize> Struct<N> {
pub const OK: usize = 0;
}
fn main() {
function::<0>();
}
fn function<const NUM_CARDS: usize>()
where
[(); Struct::<{ NUM_CARDS + 0 }>::OK]:,
{
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (7f75bfa1a 2024-12-30)
binary: rustc
commit-hash: 7f75bfa1ad4e9a9d33a179a90603001515e91991
commit-date: 2024-12-30
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.6
```
### Error output
```
thread 'rustc query cycle handler' panicked at compiler/rustc_query_system/src/query/job.rs:504:9:
deadlock detected as we're unable to find a query cycle to break
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
❯ RUST_BACKTRACE=full cargo run
Compiling ice v0.1.0 (/home/user/.config/rebos/files/common/home/user/Code/rust/ice)
warning: the feature `generic_const_exprs` is incomplete and may not be safe to use and/or cause compiler crashes
--> src/main.rs:1:12
|
1 | #![feature(generic_const_exprs)]
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #76560 <https://github.com/rust-lang/rust/issues/76560> for more information
= note: `#[warn(incomplete_features)]` on by default
thread 'rustc query cycle handler' panicked at compiler/rustc_query_system/src/query/job.rs:504:9:
deadlock detected as we're unable to find a query cycle to break
current query map:
{"reason":"compiler-message","package_id":"path+file:///home/user/.config/rebos/files/common/home/user/Code/rust/ice#0.1.0","manifest_path":"/home/user/.config/rebos/files/common/home/user/Code/rust/ice/Cargo.toml","target":{"kind":["bin"],"crate_types":["bin"],"name":"ice","src_path":"/home/user/.config/rebos/files/common/home/user/Code/rust/ice/src/main.rs","edition":"2024","doc":true,"doctest":false,"test":true},"message":{}}
stack backtrace:
0: 0x767883ccccea - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hfeb90fd9b5398c4f
1: 0x767884413466 - core::fmt::write::hd020802d4198e71b
2: 0x76788531abd1 - std::io::Write::write_fmt::h7e2e8e4cf92a8d65
3: 0x767883cccb42 - std::sys::backtrace::BacktraceLock::print::h66b4c03e5039d072
4: 0x767883ccf049 - std::panicking::default_hook::{{closure}}::hcc157a01562efcba
5: 0x767883ccee92 - std::panicking::default_hook::h21db3d7d04627650
6: 0x767882e3a688 - std[cb5a7f0ea0430aa5]::panicking::update_hook::<alloc[dd9cf49c51639ce8]::boxed::Box<rustc_driver_impl[a708e1f659847ef0]::install_ice_hook::{closure#1}>>::{closure#0}
7: 0x767883ccf803 - std::panicking::rust_panic_with_hook::h6f76ca5eeec77396
8: 0x767883ccf4fa - std::panicking::begin_panic_handler::{{closure}}::hf108fef1bd19d641
9: 0x767883ccd199 - std::sys::backtrace::__rust_end_short_backtrace::heff692e62eddffc0
10: 0x767883ccf1bd - rust_begin_unwind
11: 0x76788099d390 - core::panicking::panic_fmt::h98a3fbf157fea8f4
12: 0x7678838790da - rustc_query_system[7637a88347177bdb]::query::job::break_query_cycles
13: 0x767882e30f3c - std[cb5a7f0ea0430aa5]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[5a45d25d69207022]::util::run_in_thread_pool_with_globals<rustc_interface[5a45d25d69207022]::interface::run_compiler<(), rustc_driver_impl[a708e1f659847ef0]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#2}::{closure#1}, ()>
14: 0x767882e3f744 - <<std[cb5a7f0ea0430aa5]::thread::Builder>::spawn_unchecked_<rustc_interface[5a45d25d69207022]::util::run_in_thread_pool_with_globals<rustc_interface[5a45d25d69207022]::interface::run_compiler<(), rustc_driver_impl[a708e1f659847ef0]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#2}::{closure#1}, ()>::{closure#1} as core[2b031b7237be44de]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
15: 0x767885259381 - std::sys::pal::unix::thread::Thread::new::thread_start::h0f9dd4118f064405
16: 0x76787f6a0386 - <unknown>
17: 0x76787f721b0c - <unknown>
18: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/user/.config/rebos/files/common/home/user/Code/rust/ice/rustc-ice-2024-12-31T17_57_52-210002.txt` to your bug report
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED] -Z threads=12 -Z macro-backtrace -C target-cpu=native -C linker=clang -C link-arg=-fuse-ld=/usr/bin/mold -Z threads=12 -Z macro-backtrace -C target-cpu=native -C linker=clang -C link-arg=-fuse-ld=/usr/bin/mold
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
query cycle handler thread panicked, aborting process
warning: `ice` (bin "ice") generated 1 warning
error: could not compile `ice` (bin "ice"); 1 warning emitted
Caused by:
process didn't exit successfully: `/home/user/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/rustc --crate-name ice --edition=2024 src/main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=111 --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 --check-cfg 'cfg(docsrs,test)' --check-cfg 'cfg(feature, values())' -C metadata=d5f9e422892f9ee1 -C extra-filename=-cd55e940cca72f2d --out-dir /home/user/.config/rebos/files/common/home/user/Code/rust/ice/target/debug/deps -C incremental=/home/user/.config/rebos/files/common/home/user/Code/rust/ice/target/debug/incremental -L dependency=/home/user/.config/rebos/files/common/home/user/Code/rust/ice/target/debug/deps -Zthreads=12 -Zmacro-backtrace -Ctarget-cpu=native -Clinker=clang -Clink-arg=-fuse-ld=/usr/bin/mold -Zthreads=12 -Zmacro-backtrace -Ctarget-cpu=native -Clinker=clang -Clink-arg=-fuse-ld=/usr/bin/mold` (signal: 6, SIGABRT: process abort signal)
```
</p>
</details>
| T-compiler,C-bug,WG-compiler-parallel,F-generic_const_exprs,requires-incomplete-features,I-cycle | low | Critical |
2,764,601,195 | pytorch | cpp_extension.py expects an integer on CUDA_ARCH, failing with Grace Hopper. | ### 🐛 Describe the bug
Grace Hopper reports its compute capability as 9.0a, not 9.0, and `cpp_extension.py` fails during auto-detection because it expects the second part of the arch string to be an integer.
The current workaround is to set `TORCH_CUDA_ARCH_LIST="9.0a"` while building it.
```
torch/utils/cpp_extension.py",
line 1972, in _get_cuda_arch_flags
supported_sm = [int(arch.split('_')[1])
^^^^^^^^^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: '90a'
```
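A hypothetical, more tolerant parse (an illustration, not the actual fix) would keep only the leading digits, so suffixed capabilities like `90a` are accepted:

```python
import re


def parse_sm(arch: str) -> int:
    # Accept "sm_90" as well as suffixed forms like "sm_90a",
    # keeping only the leading digits of the capability.
    digits = re.match(r"\d+", arch.split("_")[1])
    if digits is None:
        raise ValueError(f"unrecognized arch: {arch}")
    return int(digits.group())
```

With this, `parse_sm("sm_90a")` yields `90` instead of raising `ValueError`.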
### Versions
2.5.1
cc @malfet @zou3519 @xmfan @ptrblck @msaroufim @eqy | module: cpp-extensions,module: cuda,triaged | low | Critical |
2,764,620,698 | TypeScript | Type parameter not usable as type argument in identical type; union in constraint becomes intersection | ### 🔎 Search Terms
indexed access, constraints, generic, unions, ts2344
### 🕗 Version & Regression Information
- This changed between versions 3.3 and 3.5 (likely #30769)
### ⏯ Playground Link
[Playground link](https://www.typescriptlang.org/play/#code/JYOwLgpgTgZghgYwgAgGIHt0B4AqyIAekIAJgM7IDeyBAXMmWFKAObIC+ANMgKr5ERSFABR4APlRr0ARHGkcAlAG0A5ARUBdAHxUAUMgPIAXvQzYc3Hlt3tddoA)
### 💻 Code
```ts
interface Foo<T extends { x: string }, U extends (T | { x: "a" })['x']> {
z: Foo<T, U> // error!
// ~ Type 'U' does not satisfy the constraint 'T["x"] & "a"'. 🙃
}
```
### 🙁 Actual behavior
The `U` type parameter is rejected as a type argument with a TS2344 error about how it doesn't satisfy the constraint. The constraint seems to have shifted from a union to an intersection, even though it comes from an identical place.
### 🙂 Expected behavior
The `U` type parameter should be accepted as a type argument.
### Additional information about the issue
Distilled from [SO question](https://stackoverflow.com/q/79320473/2887218).
I expect this is a consequence of #30769 as per https://github.com/microsoft/TypeScript/issues/31731#issuecomment-498465358, but it is at least somewhat surprising that this should happen when the types involved are identical. One could rewrite the constraint to `U extends T["x"] | "a"`, of course, but this is distilled from the above SO question which is presumably distilled from some use case. What's happening here, exactly, and is it intended, a design limitation, or a bona fide bug? | Bug,Help Wanted | low | Critical |
2,764,634,467 | PowerToys | Workspaces CLI options don't expand environment variables | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Workspaces
### Steps to reproduce
1. Save a workspaces layout with explorer
2. Edit CLI options to add `%windir%` or any other environment variable.
3. Launch workspace.
### ✔️ Expected Behavior
Explorer is expected to open at the folder the environment variable points to.
### ❌ Actual Behavior
Explorer opens to the default folder.
If the environment variable is replaced with the full path `C:\windows`, it does open the correct path.
Using the PowerShell style `$env:windir` also does not work.
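On Windows the launcher could expand these with the Win32 `ExpandEnvironmentStringsW` API before invoking the app; a portable sketch of the same idea (hypothetical, not PowerToys code) looks like:

```cpp
#include <cstdlib>
#include <string>

// Expand %VAR% occurrences using the process environment, leaving
// unknown variables untouched (mirrors cmd.exe behavior).
std::string expand_percent_vars(const std::string& input) {
    std::string out;
    size_t i = 0;
    while (i < input.size()) {
        if (input[i] == '%') {
            size_t end = input.find('%', i + 1);
            if (end != std::string::npos) {
                std::string name = input.substr(i + 1, end - i - 1);
                if (const char* value = std::getenv(name.c_str())) {
                    out += value;
                    i = end + 1;
                    continue;
                }
            }
        }
        out += input[i++];
    }
    return out;
}
```

Running this expansion over the saved CLI options before launching would make `%windir%`-style arguments behave as users expect.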
### Other Software
Windows 10 Pro 22H2 | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,764,640,378 | rust | SIGSEGV when compiling with 1.83.0 |
### Code
Unfortunately, this is just something I experienced when compiling i3status-rs; I don't have a minimal code example and have not been able to reproduce it in subsequent clean/build attempts, so I understand if this is closed as no-repro, but I figured I'd share what I saw in case it sparks something. The steps I took:
1. Clone https://github.com/greshake/i3status-rust
2. Install rust 1.83.0 via `rustup`
3. `cargo install --path . --locked`
1. At this point, I received `Backtrace 1` (below).
4. `cargo install --path . --locked`
1. At this point, I received `Backtrace 2` (below).
5. Retry 4 several times. Each time, received `Backtrace 2`
6. Install rust 1.82.0 via `rustup`
7. `cargo install --path . --locked` - This was successful
8. Install rust 1.84.0-beta5 via `rustup`
9. `cargo install --path . --locked` - This was successful
10. Switch back to rust 1.83.0
11. `cargo install --path . --locked` - This was successful
Again, I understand if this isn't enough info to successfully find a bug, but unfortunately it's all I've got. Every attempt I've made to repro since the initial backtraces has failed (ie, compiled successfully), including fully cleaning the repo and compiling completely from scratch.
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 19.1.1
```
### Error output
Full backtrace below; I don't have any more than that.
<details><summary><strong>Backtrace 1 (Initial Build):</strong></summary>
<p>
```
error: rustc interrupted by SIGSEGV, printing backtrace
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x3520f13)[0x7f135b720f13]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f1358019520]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvXNtCsfp4j0w2N6Xi_19rustc_mir_transform15mentioned_itemsNtB2_14MentionedItemsNtNtB4_12pass_manager7MirPass8run_pass+0xcc3)[0x7f135a3e6433]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvCsfp4j0w2N6Xi_19rustc_mir_transform13optimized_mir+0x289)[0x7f135d30a649]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5108369)[0x7f135d308369]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x4c24b2d)[0x7f135ce24b2d]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x4c241b3)[0x7f135ce241b3]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvNtCsfp4j0w2N6Xi_19rustc_mir_transform18cross_crate_inline21cross_crate_inlinable+0x556)[0x7f135a27a9c6]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5048b75)[0x7f135d248b75]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x50478ca)[0x7f135d2478ca]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x50474ca)[0x7f135d2474ca]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x57b5d27)[0x7f135d9b5d27]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x57b48bc)[0x7f135d9b48bc]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvXNtCsfp4j0w2N6Xi_19rustc_mir_transform6inlineNtB2_6InlineNtNtB4_12pass_manager7MirPass8run_pass+0x26b)[0x7f135d9b3d2d]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x4c0b3dd)[0x7f135ce0b3dd]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvCsfp4j0w2N6Xi_19rustc_mir_transform13optimized_mir+0x63a)[0x7f135d30a9fa]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5108369)[0x7f135d308369]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x4c24b2d)[0x7f135ce24b2d]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x4c241b3)[0x7f135ce241b3]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvNtCsfp4j0w2N6Xi_19rustc_mir_transform18cross_crate_inline21cross_crate_inlinable+0x556)[0x7f135a27a9c6]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5048b75)[0x7f135d248b75]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x50478ca)[0x7f135d2478ca]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x50474ca)[0x7f135d2474ca]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvNtCs7RnDN1nbn6O_12rustc_passes9reachable13reachable_set+0x8c7)[0x7f135d2ebf87]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x59f5234)[0x7f135dbf5234]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x59f41ea)[0x7f135dbf41ea]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x59f3a9e)[0x7f135dbf3a9e]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5cd2e2a)[0x7f135ded2e2a]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvNtNtCse0EV7zojsSO_14rustc_metadata5rmeta7encoder15encode_metadata+0xc0a)[0x7f135decc59a]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvNtCse0EV7zojsSO_14rustc_metadata2fs25encode_and_write_metadata+0x2df)[0x7f135debdb33]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(_RNvMs3_NtCsbJfLK9LPwAh_15rustc_interface7queriesNtB5_6Linker24codegen_and_build_linker+0x106)[0x7f135debc9d6]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x59621d4)[0x7f135db621d4]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x59533d9)[0x7f135db533d9]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5a22fac)[0x7f135dc22fac]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5a23a6b)[0x7f135dc23a6b]
/lib/x86_64-linux-gnu/libc.so.6(+0x94ac3)[0x7f135806bac3]
/lib/x86_64-linux-gnu/libc.so.6(+0x126850)[0x7f13580fd850]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=16777216
note: backtrace dumped due to SIGSEGV! resuming signal
Compiling neli-proc-macros v0.1.3
error: could not compile `ring` (lib)
Caused by:
process didn't exit successfully: `/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name ring --edition=2021 /home/fred/.cargo/registry/src/index.crates.io-6f17d22bba15001f/ring-0.17.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=200 --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C linker-plugin-lto --cfg 'feature="alloc"' --cfg 'feature="default"' --cfg 'feature="dev_urandom_fallback"' --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values("alloc", "default", "dev_urandom_fallback", "less-safe-getrandom-custom-or-rdrand", "slow_tests", "std", "test_logging", "unstable-testing-arm-no-hw", "unstable-testing-arm-no-neon", "wasm32_unknown_unknown_js"))' -C metadata=8508e9072b9ffe5d -C extra-filename=-8508e9072b9ffe5d --out-dir /home/fred/.dotfiles/config/i3status-rust/target/release/deps -C strip=debuginfo -L dependency=/home/fred/.dotfiles/config/i3status-rust/target/release/deps --extern cfg_if=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libcfg_if-2c8890ec7a4be5be.rmeta --extern getrandom=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libgetrandom-12e2d95801b19b20.rmeta --extern spin=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libspin-ca0362b34280e78b.rmeta --extern untrusted=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libuntrusted-f24073a689d23cf0.rmeta --cap-lints allow -L native=/home/fred/.dotfiles/config/i3status-rust/target/release/build/ring-2cba77927bd52185/out -l static=ring_core_0_17_8_ -l static=ring_core_0_17_8_test` (signal: 11, SIGSEGV: invalid memory reference)
```
</p>
</details>
<details><summary><strong>Backtrace 2 (Subsequent Builds):</strong></summary>
<p>
```
error: rustc interrupted by SIGSEGV, printing backtrace
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x3520f13)[0x7f4586b20f13]
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f4583419520]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x527f1d0)[0x7f458887f1d0]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x527eb3a)[0x7f458887eb3a]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5ac15d3)[0x7f45890c15d3]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5ac453a)[0x7f45890c453a]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x59626f6)[0x7f4588f626f6]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x59533d9)[0x7f4588f533d9]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5a22fac)[0x7f4589022fac]
/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/librustc_driver-a1396821e0813435.so(+0x5a23a6b)[0x7f4589023a6b]
/lib/x86_64-linux-gnu/libc.so.6(+0x94ac3)[0x7f458346bac3]
/lib/x86_64-linux-gnu/libc.so.6(+0x126850)[0x7f45834fd850]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=16777216
error: could not compile `i3status-rs` (lib)
Caused by:
process didn't exit successfully: `/home/fred/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name i3status_rs --edition=2021 src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=200 --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C linker-plugin-lto --cfg 'feature="default"' --cfg 'feature="libpulse-binding"' --cfg 'feature="pulseaudio"' --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values("debug_borders", "default", "glob", "icu_calendar", "libpulse-binding", "maildir", "notmuch", "pipewire", "pulseaudio"))' -C metadata=0d08f14994b5b789 -C extra-filename=-0d08f14994b5b789 --out-dir /home/fred/.dotfiles/config/i3status-rust/target/release/deps -C strip=debuginfo -L dependency=/home/fred/.dotfiles/config/i3status-rust/target/release/deps --extern async_trait=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libasync_trait-0c5c751912fae825.so --extern backon=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libbackon-5340a4af2e6c14f6.rmeta --extern base64=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libbase64-51993054c6d47196.rmeta --extern calibright=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libcalibright-92355b3d5a75f536.rmeta --extern chrono=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libchrono-97c00698bfd90220.rmeta --extern chrono_tz=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libchrono_tz-6e5bbd3294d6e9e2.rmeta --extern clap=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libclap-7b58538aafd9a985.rmeta --extern debounced=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libdebounced-1e8bd888db949001.rmeta --extern dirs=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libdirs-296c6706a2a168ce.rmeta --extern env_logger=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libenv_logger-d1ffd9b7da638f4a.rmeta --extern 
futures=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libfutures-ed0fb5c7df476adc.rmeta --extern hyper=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libhyper-754668fd2d512189.rmeta --extern iana_time_zone=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libiana_time_zone-5df8ef486e0df093.rmeta --extern icalendar=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libicalendar-af0b60e52dedad87.rmeta --extern indexmap=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libindexmap-7c29f0542c8225ba.rmeta --extern inotify=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libinotify-ec2aebac7e113e11.rmeta --extern itertools=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libitertools-7af15bed5b8dc417.rmeta --extern libc=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/liblibc-2df993e0e790f285.rmeta --extern libpulse_binding=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/liblibpulse_binding-6e302c8339f5e080.rmeta --extern log=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/liblog-6c7f331056a1030d.rmeta --extern neli=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libneli-e4f798467d99b39d.rmeta --extern neli_wifi=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libneli_wifi-f368ea2626c0e3c3.rmeta --extern nix=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libnix-0e3c0132322f5a07.rmeta --extern nom=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libnom-a3ca2c125b1329b2.rmeta --extern oauth2=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/liboauth2-56bbf9b705eabb48.rmeta --extern quick_xml=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libquick_xml-417e090c6a80988f.rmeta --extern regex=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libregex-50002f2c598c4f0a.rmeta --extern 
reqwest=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libreqwest-b0824417c3bce2c3.rmeta --extern sensors=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libsensors-bac09d8b5a7ae532.rmeta --extern serde=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libserde-297e925250158b93.rmeta --extern serde_json=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libserde_json-3494bd55cb87f634.rmeta --extern shellexpand=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libshellexpand-cd9a1620efd215b0.rmeta --extern signal_hook=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libsignal_hook-b56e1ca05c9a3bb9.rmeta --extern signal_hook_tokio=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libsignal_hook_tokio-07c5e1341c4dd0a8.rmeta --extern smart_default=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libsmart_default-803a1d2fcfc60d24.so --extern sunrise=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libsunrise-2707b3ea79515516.rmeta --extern swayipc_async=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libswayipc_async-f74fd13b6f85aada.rmeta --extern thiserror=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libthiserror-0628287c6bb7f62a.rmeta --extern tokio=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libtokio-42a0bc4e326b3a95.rmeta --extern toml=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libtoml-17fcfcde14ad74ef.rmeta --extern unicode_segmentation=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libunicode_segmentation-57627c62d1bc1eeb.rmeta --extern wayrs_client=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libwayrs_client-f40084c9b1a36d9d.rmeta --extern wayrs_protocols=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libwayrs_protocols-00819333c1a50da0.rmeta --extern 
zbus=/home/fred/.dotfiles/config/i3status-rust/target/release/deps/libzbus-4e0e8210e028033b.rmeta -L native=/home/fred/.dotfiles/config/i3status-rust/target/release/build/ring-2cba77927bd52185/out` (signal: 11, SIGSEGV: invalid memory reference)
error: failed to compile `i3status-rs v0.33.2 (/home/fred/.dotfiles/config/i3status-rust)`, intermediate artifacts can be found at `/home/fred/.dotfiles/config/i3status-rust/target`.
```
</p>
</details>
| I-crash,S-needs-repro,C-defective-hardware | low | Critical |
2,764,652,740 | vscode | Incorrect terminal.creationOptions.shellPath on window reload |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.94.0
- OS Version: macOS Sonoma 14.4, Windows 11
Steps to Reproduce:
1. Create multiple terminals with different shells
2. Log their `terminal.creationOptions.shellPath`, which should be correct (or sometimes unpopulated)
3. Reload the window; the existing terminals are restored
4. Log their `terminal.creationOptions.shellPath` again; every restored terminal now reports the shell path of the default shell

This is particularly bad on Windows, where a PowerShell terminal may be reported as using a `bash.exe` shell path, or vice versa.
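The logging from steps 2 and 4 can be sketched as a self-contained mock. This is not the real extension-host API surface — `TerminalLike` here only imitates the documented `Terminal.creationOptions.shellPath` shape, and the paths and the `reportShellPaths` helper are illustrative:

```typescript
// Hypothetical mock of the vscode.Terminal shape relevant to this bug.
interface TerminalLike {
  name: string;
  creationOptions: { shellPath?: string };
}

// Step 2/4 logging helper: report each terminal's claimed shell path.
function reportShellPaths(terminals: TerminalLike[]): string[] {
  return terminals.map(
    (t) => `${t.name}: ${t.creationOptions.shellPath ?? "(unpopulated)"}`
  );
}

// Before reload: distinct shells are reported correctly.
const beforeReload: TerminalLike[] = [
  { name: "bash", creationOptions: { shellPath: "/bin/bash" } },
  { name: "zsh", creationOptions: { shellPath: "/bin/zsh" } },
];
console.log(reportShellPaths(beforeReload));
// → ["bash: /bin/bash", "zsh: /bin/zsh"]

// After reload: the bug makes every restored terminal report the
// default shell's path instead of its own.
const afterReload: TerminalLike[] = beforeReload.map((t) => ({
  name: t.name,
  creationOptions: { shellPath: "/bin/bash" }, // default shell wins
}));
console.log(reportShellPaths(afterReload));
// → ["bash: /bin/bash", "zsh: /bin/bash"]
```

In a real extension, the equivalent check would iterate `vscode.window.terminals` and log each terminal's `creationOptions` before and after running `Developer: Reload Window`.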
| confirmation-pending,terminal-persistence | low | Critical |