id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,696,061,565 | rust | DefaultMetadataLoader was changed to private, making it difficult to print out metadata | Hi,
A recent commit [bdacdfe](https://github.com/rust-lang/rust/commit/bdacdfe95f17188f5b047e0330e3e47f993a2016#diff-6bbf6edd930f26d3b180eb5c250b0e48d8f3c5eb474e3274909ef6ae6f0d1e61) made the `DefaultMetadataLoader` struct private to the crate.
This makes it impossible to print out the crate metadata using the given functions unless I define my own struct which implements the `MetadataLoader` trait. I'm currently just trying to understand the metadata file better, and am working with an older version of the Rust compiler so that I can access this type. I'm wondering why this type was made private.
Here is the code I'm using which works with an older version of the compiler:
```rust
// NOTE: imports added for context -- crate paths are approximate and depend
// on the rustc version; `DefaultMetadataLoader` lived in `rustc_metadata`
// before the linked commit made it crate-private.
use std::io;
use std::path::Path;

use rustc_metadata::locator;
use rustc_span::edition::Edition;
use rustc_target::spec::{Target, TargetTriple};

fn main() {
    let path = std::env::args().nth(1).expect("no path given");
    let path = Path::new(&path);
    let triple = TargetTriple::from_triple("x86_64-apple-darwin");
    let target = Target::expect_builtin(&triple);
    let mut stdout = io::stdout();
    rustc_span::create_session_globals_then(Edition::Edition2018, || {
        let _ = locator::list_file_metadata(&target, path, &DefaultMetadataLoader, &mut stdout);
    });
}
``` | T-compiler,C-feature-request | low | Minor |
2,696,066,724 | svelte | a11y: Non-interactive element `<label>` cannot have interactive role 'tab' | ### Describe the bug
This is a false positive because:
- `label` elements are interactive content: https://html.spec.whatwg.org/multipage/dom.html#interactive-content
- it's the only way to do things progressively
### Reproduction
```svelte
<label role="tab">
tab <input type="checkbox" />
</label>
```
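As a workaround until the warning is fixed, the check can presumably be silenced per element with a `svelte-ignore` comment (the exact rule name is an assumption here and varies by Svelte version; Svelte 5 uses underscores, Svelte 4 uses hyphens):

```svelte
<!-- svelte-ignore a11y_no_noninteractive_element_to_interactive_role -->
<label role="tab">
	tab <input type="checkbox" />
</label>
```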
### Logs
_No response_
### System Info
```shell
none
```
### Severity
annoyance | a11y | low | Critical |
2,696,078,253 | pytorch | misaligned address without persistent reductions from test_selecsls42b_misaligned_address | ```
CUDA_LAUNCH_BLOCKING=1 TORCHINDUCTOR_PERSISTENT_REDUCTIONS=0 pytest test/inductor/test_cuda_repro.py -k test_selecsls42b_misaligned_address
```
Fails with:
```
RuntimeError: Triton Error [CUDA]: misaligned address
```
The failing kernel code is:
```py
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
@triton_heuristics.reduction(
    size_hints=[1024, 128],
    reduction_hint=ReductionHint.INNER,
    filename=__file__,
    triton_meta={'signature': {'in_ptr0': '*i1', 'in_ptr1': '*fp32', 'in_ptr2': '*fp16', 'in_ptr3': '*fp32', 'in_ptr4': '*fp32', 'out_ptr0': '*fp32', 'out_ptr1': '*fp32', 'xnumel': 'i32', 'rnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, cc=86, major=8, regs_per_multiprocessor=65536, max_threads_per_multi_processor=1536, multi_processor_count=82, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1, 2, 3, 4, 5, 6, 7, 8), equal_to_1=())]},
    inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_red_fused_convert_element_type_div_mul_sub_sum_where_0', 'mutated_arg_names': [], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 7, 'num_reduction': 2, 'backend_hash': '9EFB47F39A9F0EC578D4E029967452062C70DD8478024529C6C592E836AC91B7', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': False, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False}
)
@triton.jit
def triton_red_fused_convert_element_type_div_mul_sub_sum_where_0(in_ptr0, in_ptr1, in_ptr2, in_ptr3, in_ptr4, out_ptr0, out_ptr1, xnumel, rnumel, XBLOCK : tl.constexpr, RBLOCK : tl.constexpr):
    xnumel = 1024
    rnumel = 128
    xoffset = tl.program_id(0) * XBLOCK
    xindex = xoffset + tl.arange(0, XBLOCK)[:, None]
    xmask = xindex < xnumel
    rbase = tl.arange(0, RBLOCK)[None, :]
    x0 = xindex
    block_ptr0 = tl.make_block_ptr(in_ptr0, shape=[1024, 8, 16], strides=[16, 16384, 1], block_shape=[XBLOCK, ((15 + RBLOCK) // 16), ((16) * ((16) <= (RBLOCK)) + (RBLOCK) * ((RBLOCK) < (16)))], order=[2, 1, 0], offsets=[xoffset, 0, 0])
    tmp1 = tl.load(in_ptr1 + (0))
    tmp2 = tl.broadcast_to(tmp1, [XBLOCK, RBLOCK])
    block_ptr1 = tl.make_block_ptr(in_ptr2, shape=[1024, 8], strides=[1, 1024], block_shape=[XBLOCK, ((15 + RBLOCK) // 16)], order=[1, 0], offsets=[xoffset, 0])
    block_ptr2 = tl.make_block_ptr(in_ptr3, shape=[1024, 8, 16], strides=[16, 16384, 1], block_shape=[XBLOCK, ((15 + RBLOCK) // 16), ((16) * ((16) <= (RBLOCK)) + (RBLOCK) * ((RBLOCK) < (16)))], order=[2, 1, 0], offsets=[xoffset, 0, 0])
    tmp10 = tl.load(tl.make_block_ptr(in_ptr4, shape=[1024], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), boundary_check=[0], eviction_policy='evict_last')[:, None]
    _tmp14 = tl.full([XBLOCK, RBLOCK], 0, tl.float32)
    block_ptr3 = tl.make_block_ptr(in_ptr0, shape=[1024, 8, 16], strides=[16, 16384, 1], block_shape=[XBLOCK, ((15 + RBLOCK) // 16), ((16) * ((16) <= (RBLOCK)) + (RBLOCK) * ((RBLOCK) < (16)))], order=[2, 1, 0], offsets=[xoffset, 0, 0])
    block_ptr4 = tl.make_block_ptr(in_ptr2, shape=[1024, 8], strides=[1, 1024], block_shape=[XBLOCK, ((15 + RBLOCK) // 16)], order=[1, 0], offsets=[xoffset, 0])
    _tmp22 = tl.full([XBLOCK, RBLOCK], 0, tl.float32)
    for roffset in range(0, rnumel, RBLOCK):
        rindex = roffset + rbase
        rmask = rindex < rnumel
        r1 = rindex % 16
        r2 = (rindex // 16)
        tmp0 = tl.reshape(tl.broadcast_to(tl.load(block_ptr0, boundary_check=[0, 1], padding_option='zero', eviction_policy='evict_last')[:, :, None, :], [XBLOCK, ((15 + RBLOCK) // 16), ((1) * ((1) <= (((15 + RBLOCK) // 16))) + (((15 + RBLOCK) // 16)) * ((((15 + RBLOCK) // 16)) < (1))), ((16) * ((16) <= (RBLOCK)) + (RBLOCK) * ((RBLOCK) < (16)))]), [XBLOCK, RBLOCK]).to(tl.int1)
        block_ptr0 = tl.advance(block_ptr0, [0, (RBLOCK // 16), RBLOCK % 16])
        tmp4 = tl.reshape(tl.broadcast_to(tl.load(block_ptr1, boundary_check=[0, 1], padding_option='zero', eviction_policy='evict_last')[:, :, None, None], [XBLOCK, ((15 + RBLOCK) // 16), ((1) * ((1) <= (((15 + RBLOCK) // 16))) + (((15 + RBLOCK) // 16)) * ((((15 + RBLOCK) // 16)) < (1))), ((16) * ((16) <= (RBLOCK)) + (RBLOCK) * ((RBLOCK) < (16)))]), [XBLOCK, RBLOCK]).to(tl.float32)
        block_ptr1 = tl.advance(block_ptr1, [0, (RBLOCK // 16)])
        tmp9 = tl.reshape(tl.broadcast_to(tl.load(block_ptr2, boundary_check=[0, 1], padding_option='zero', eviction_policy='evict_first')[:, :, None, :], [XBLOCK, ((15 + RBLOCK) // 16), ((1) * ((1) <= (((15 + RBLOCK) // 16))) + (((15 + RBLOCK) // 16)) * ((((15 + RBLOCK) // 16)) < (1))), ((16) * ((16) <= (RBLOCK)) + (RBLOCK) * ((RBLOCK) < (16)))]), [XBLOCK, RBLOCK])
        block_ptr2 = tl.advance(block_ptr2, [0, (RBLOCK // 16), RBLOCK % 16])
        tmp16 = tl.reshape(tl.broadcast_to(tl.load(block_ptr3, boundary_check=[0, 1], padding_option='zero', eviction_policy='evict_first')[:, :, None, :], [XBLOCK, ((15 + RBLOCK) // 16), ((1) * ((1) <= (((15 + RBLOCK) // 16))) + (((15 + RBLOCK) // 16)) * ((((15 + RBLOCK) // 16)) < (1))), ((16) * ((16) <= (RBLOCK)) + (RBLOCK) * ((RBLOCK) < (16)))]), [XBLOCK, RBLOCK]).to(tl.int1)
        block_ptr3 = tl.advance(block_ptr3, [0, (RBLOCK // 16), RBLOCK % 16])
        tmp17 = tl.reshape(tl.broadcast_to(tl.load(block_ptr4, boundary_check=[0, 1], padding_option='zero', eviction_policy='evict_last')[:, :, None, None], [XBLOCK, ((15 + RBLOCK) // 16), ((1) * ((1) <= (((15 + RBLOCK) // 16))) + (((15 + RBLOCK) // 16)) * ((((15 + RBLOCK) // 16)) < (1))), ((16) * ((16) <= (RBLOCK)) + (RBLOCK) * ((RBLOCK) < (16)))]), [XBLOCK, RBLOCK]).to(tl.float32)
        block_ptr4 = tl.advance(block_ptr4, [0, (RBLOCK // 16)])
        tmp3 = tmp2.to(tl.float32)
        tmp5 = 0.0625
        tmp6 = tmp4 * tmp5
        tmp7 = tl.where(tmp0, tmp3, tmp6)
        tmp8 = tmp7.to(tl.float32)
        tmp11 = tmp9 - tmp10
        tmp12 = tmp8 * tmp11
        tmp13 = tl.broadcast_to(tmp12, [XBLOCK, RBLOCK])
        tmp15 = _tmp14 + tmp13
        _tmp14 = tl.where(rmask & xmask, tmp15, _tmp14)
        tmp18 = tmp17 * tmp5
        tmp19 = tl.where(tmp16, tmp3, tmp18)
        tmp20 = tmp19.to(tl.float32)
        tmp21 = tl.broadcast_to(tmp20, [XBLOCK, RBLOCK])
        tmp23 = _tmp22 + tmp21
        _tmp22 = tl.where(rmask & xmask, tmp23, _tmp22)
    tmp14 = tl.sum(_tmp14, 1)[:, None]
    tmp22 = tl.sum(_tmp22, 1)[:, None]
    tl.store(tl.make_block_ptr(out_ptr0, shape=[1024], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), tl.reshape(tmp14, [XBLOCK]).to(tl.float32), boundary_check=[0])
    tl.store(tl.make_block_ptr(out_ptr1, shape=[1024], strides=[1], block_shape=[XBLOCK], order=[0], offsets=[xoffset]), tl.reshape(tmp22, [XBLOCK]).to(tl.float32), boundary_check=[0])
```
However if you set `TORCHINDUCTOR_PERSISTENT_REDUCTIONS=1` it passes, which results in removing the `for` loop from the generated code.
I suspect this is due to a Triton bug and may be related to https://github.com/triton-lang/triton/issues/2836
@manman-ren @plotfi @bertmaher is this something your team could look at?
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10 | triaged,oncall: pt2,module: inductor,upstream triton | low | Critical |
2,696,117,196 | react | [Compiler Bug]: False positive `Ref values (the `current` property) may not be accessed during render` for non-jsx code | ### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAHRFMCAEcA2AlggHYAuGA3GiTYQLYAOEMZOwOAVmAB44C+OAGYwI9HBhgIAhnDIB6bjwC0MKOQYIqdJizYcsCAEoIhA4aPGSZc7STgQSYNgGEI+fNMbYcAXhwAFMA0OHgAFoT4ACZStCT8AJR+AHzsIXiOzhnk0oQkCDAmZv6GRQEkUB4J1CShUmSwtfIAVDgAAgD6HQAKAKpGAKJdOM2KvAEYUYQAbhgANGm1oThSQsjZZLn5haZz6aFwEdGx6Yk1-DUIPMys7DhuHl4+FyD8QA
### Repro steps
The following code gives a false positive error:
```js
const Collapse = ({
  children
}) => {
  const containerRef = useRef(null);
  return /* @__PURE__ */ jsx("div", {
    ref: containerRef,
    children
  });
};
```
but this one does not:
```ts
export const Collapse: FC<{
  children: ReactNode
}> = ({ children }) => {
  const containerRef = useRef<HTMLDivElement>(null!)
  return <div ref={containerRef}>{children}</div>
}
```
### How often does this bug happen?
Every time
### What version of React are you using?
from playground
### What version of React Compiler are you using?
from playground | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,696,130,693 | flutter | [DisplayList] DisplayListBuilder does not compute correct effective color for DlColorSource objects | (Discovered while deleting unused DlColorColorSource)
When a ColorSource is applied to a paint, it is modulated by the alpha of the color property. The code in DisplayListBuilder that computes an example color to determine if an operation is a NOP does not take this modulation into account:
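For illustration, the intended modulation can be sketched as follows (a hypothetical helper written for this report, not actual engine code; it assumes standard rounded 8-bit alpha scaling):

```python
def modulated_alpha(source_argb: int, paint_alpha: int) -> int:
    """Sketch: scale the color source's sampled alpha by the paint
    color's alpha (both 0..255), keeping the RGB channels unchanged."""
    src_a = (source_argb >> 24) & 0xFF
    out_a = (src_a * paint_alpha + 127) // 255  # rounded 8-bit multiply
    return (out_a << 24) | (source_argb & 0x00FFFFFF)

# A paint alpha of 0 makes the effective color fully transparent, so the
# operation should be treated as a NOP -- the case the builder misses.
assert modulated_alpha(0xFF112233, 0) >> 24 == 0
assert modulated_alpha(0xFF112233, 255) == 0xFF112233
```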
https://github.com/flutter/engine/blob/9975c85013db35dd311ac27b09c932157ec881ea/display_list/dl_builder.cc#L1959 | engine,P2,team-engine,triaged-engine | low | Minor |
2,696,198,389 | vscode | VSCode will silently never auto-update if it is not explicitly moved from the "Downloads" directory to "Application" |
Does this issue occur when all extensions are disabled?: Yes/No
Related issue: https://github.com/microsoft/pylance-release/issues/6710#issuecomment-2501907909
- VS Code Version: 1.89 -> 1.95
- OS Version: macOS 14.5
Steps to Reproduce:
1. Install VS Code, but forget to move it from "Downloads" to "Applications".
2. VS Code will never update automatically, making it fall behind over time.
3. Trying to update manually runs into an error (see below).
4. Eventually, extensions installed on a remote SSH server (which uses the same version as your local VS Code) will start subtly failing; sometimes the server simply fails to boot. For example: https://github.com/microsoft/pylance-release/issues/6710
The current error message:
> Cannot update while running on a read-only volume. The application is on a read-only volume. Please move the application and try again. If you're on macOS Sierra or later, you'll need to move the application out of the Downloads directory. This might mean the application was put on quarantine by macOS. See [this link](https://github.com/microsoft/vscode/issues/7426#issuecomment-425093469) for more information.
What needs to be done, specifically, is to move the app from "Downloads" to "Applications".
# Suggestion
It would be good to update the error message to explicitly say:
> *Move the file "Visual Studio Code" from "Downloads" to "Applications"*
The link could be updated to a more relevant link, which explains how to "unquarantine" VSCode: https://stackoverflow.com/a/65422671/13837091
That error message could appear as a popup if VS Code tries to automatically update but fails. Right now, the popup only appears AFTER you try to manually update with `Code: Check for updates...`. | bug,install-update,macos | low | Critical |
2,696,225,195 | PowerToys | Switch display apps | ### Switch Apps in displays
Suppose that I have:
Applications A, B and C currently on my display number 2;
Applications X and Y currently on my display number 1 (main);
My displays sit side by side, number 1 in front of me and number 2 to its left.
For some reason, I want to focus on applications A, B and/or C, which are on display 2. After about an hour of this, some neck pain sets in, so I would like to swap the apps:
Applications A, B and C are now on my display 1;
Applications X and Y are now on my display 2.
### Scenario when this would be used?
Suppose that I'm working remotely (RDP) on a machine shown on my secondary display in fullscreen mode. This makes me look slightly to the left for hours at a time, which can be annoying, especially for my neck.
So, I would like to swap apps between displays automatically, without needing to move them one by one.
In this scenario, my RDP session would now be in front of me, which is more comfortable for my head, neck and eyes.
As soon as I decide to unfocus the app, I could swap the apps back to their original state.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,696,234,116 | TypeScript | Some Playwright types now reported as `any` after upgrading to 5.7.2 | ### 🔎 Search Terms
I manually scrolled through all issues reported since the release of 5.7.2.
### 🕗 Version & Regression Information
- This changed between versions 5.6.3 and 5.7.2
### ⏯ Playground Link
https://github.com/mrmckeb/pw-repro
### 💻 Code
Minimal repro available here:
https://github.com/mrmckeb/pw-repro
The Playwright team are also able to reproduce:
https://github.com/microsoft/playwright/issues/33763
The affected code (from the repro) is:
```ts
import { test as base, expect } from "@playwright/test";
const test = base.extend({
  // baseURL, page, and use are now `any`
  page: async ({ baseURL, page }, use) => {
    await page.goto(baseURL);
    await use(page);
  },
});
```
### 🙁 Actual behavior
Types that were previously available are now typed as `any`. JSDoc continues to work as expected.
### 🙂 Expected behavior
Types behave as they did in TypeScript 5.6.3.
### Additional information about the issue
We found this issue when upgrading from 5.6.3 to 5.7.2. I couldn't find a related issue here, so raised it at the Playwright repo to at least verify that they could reproduce and that they didn't think it was an issue with their types.
The Playwright team also confirmed that they have 71 new errors across 41 files when moving to TypeScript 5.7.2. | Help Wanted,Possible Improvement | low | Critical |
2,696,239,734 | pytorch | Clarify that `compile(module)` only affects the `forward` method. | ### 📚 The doc issue
The following code silently does not trigger compilation, apparently because `forward` is not called. One can see this by using a debug flag or passing an illegal configuration like `backend="cudagraphs", mode="reduce-overhead"` to `compile`.
Certain models like Normalizing Flows and distribution-like modules might be used without calling `forward`.
```python
import os
os.environ["TORCH_LOGS"] = "dynamo"
import torch
from torch import Tensor, nn
class Flow(nn.Module):
    def forward(self, x: Tensor) -> Tensor:
        return x + 1.0

    def inverse(self, x: Tensor) -> Tensor:
        return x - 1.0
m = Flow().to(device="cuda")
optimized: Flow = torch.compile(m) # type: ignore
x = torch.rand(1, 2, 3, 4).to(device="cuda")
y1 = m.inverse(x)
y2 = optimized.inverse(x)
assert torch.equal(y1, y2)
print("No compilation until here.")
optimized(x) # <- triggers compilation of `forward`
```
### Suggest a potential alternative/fix
Clarify that when using `torch.compile(module)`, it is only triggered upon calling `forward`, and that only `forward` gets compiled. The current documentation is imprecise:
#### https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html
> In the same fashion, when compiling a module all sub-modules and methods within it, that are not in a skip list, are also compiled.
- Apparently, not "all" methods within the module get compiled, but only the ones `forward` depends on.
- Secondly, assume `MyModel.forward` calls `MyModel.helper_fn`. After compilation, if I call `helper_fn` in a standalone manner, do I get a compiled version? Or does it only compile it in an "inlined" manner?
> When you use `torch.compile`, the compiler will try to recursively compile every function call inside the target function or module inside the target function or module that is not in a skip list (such as built-ins, some functions in the torch.* namespace).
This should probably read *"inside the target module's forward method"* instead.
#### https://pytorch.org/docs/stable/generated/torch.compile.html
This should also mention that compilation only applies to forward.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @chauhang @penguinwu | module: docs,triaged,oncall: pt2 | low | Critical |
2,696,260,121 | react-native | FlatList scrollToEnd not working on iOS when items width changes (works on Android/Web) | ### Description
I want to automatically call scrollToEnd() when changing FlatList item widths from a smaller to a larger number. However, this does not behave correctly on iOS despite working on Android and Web. It seems as if the list on iOS scrolls to the end position calculated from the previous offsets. Adding a CTA to re-fire 'Scroll to end' works as expected on all platforms.
### Steps to reproduce
1. Open the snack on an iOS device/simulator: https://snack.expo.dev/@omniphx/scroll-to-end-when-width-changes
2. Press "Set item width to 100"
3. Press "Set item width to 200"
Expected behavior is for FlatList to scroll to the end after changing the item width from 100 to 200. The logic for this is inside a useEffect that fires after the width has changed. This works on Android/web but does not scroll to the correct end location when running on iOS.
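A possible workaround sketch (untested; `listRef`, `data`, and `renderItem` are placeholders of mine): trigger the scroll from `onContentSizeChange`, which fires after layout reflects the resized items, instead of from a `useEffect` keyed on the width:

```jsx
<FlatList
  ref={listRef}
  horizontal
  data={data}
  renderItem={renderItem}
  // Fires after the new item widths have been laid out, so the computed
  // end offset should be up to date on iOS as well.
  onContentSizeChange={() => listRef.current?.scrollToEnd({ animated: false })}
/>
```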
### React Native Version
0.75.4 (Where issue was originally discovered, but was able to reproduce with a snack on Expo SDK 52/RN v0.76)
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
  OS: macOS 14.7.1
  CPU: (10) arm64 Apple M2 Pro
  Memory: 130.67 MB / 32.00 GB
  Shell:
    version: "5.9"
    path: /bin/zsh
Binaries:
  Node:
    version: 20.11.0
    path: ~/.nvm/versions/node/v20.11.0/bin/node
  Yarn:
    version: 1.22.21
    path: ~/.nvm/versions/node/v20.11.0/bin/yarn
  npm:
    version: 10.2.4
    path: ~/.nvm/versions/node/v20.11.0/bin/npm
  Watchman:
    version: 2024.11.04.00
    path: /opt/homebrew/bin/watchman
Managers:
  CocoaPods:
    version: 1.16.2
    path: /Users/MITCHMX20/.rbenv/shims/pod
SDKs:
  iOS SDK:
    Platforms:
      - DriverKit 24.1
      - iOS 18.1
      - macOS 15.1
      - tvOS 18.1
      - visionOS 2.1
      - watchOS 11.1
  Android SDK: Not Found
IDEs:
  Android Studio: 2024.2 AI-242.23339.11.2421.12550806
  Xcode:
    version: 16.1/16B40
    path: /usr/bin/xcodebuild
Languages:
  Java:
    version: 17.0.10
    path: /usr/bin/javac
  Ruby:
    version: 2.7.6
    path: /Users/MITCHMX20/.rbenv/shims/ruby
npmPackages:
  "@react-native-community/cli":
    installed: 14.1.1
    wanted: 14.1.1
  react:
    installed: 18.3.1
    wanted: 18.3.1
  react-native:
    installed: 0.75.4
    wanted: 0.75.4
  react-native-macos: Not Found
npmGlobalPackages:
  "*react-native*": Not Found
Android:
  hermesEnabled: true
  newArchEnabled: false
iOS:
  hermesEnabled: true
  newArchEnabled: false
```
### Stacktrace or Logs
```text
n/a
```
### Reproducer
https://snack.expo.dev/@omniphx/scroll-to-end-when-width-changes
### Screenshots and Videos
https://github.com/user-attachments/assets/fd01dce6-cd90-4aa9-9036-37a37667011f | Platform: iOS,Platform: Android,Component: FlatList,Needs: Triage :mag: | low | Minor |
2,696,266,613 | angular | Promise API for resource | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
Ideally, resource would expose a promise as part of its public API that:
* Would be pending while the resource is loading
* Would resolve when the status is loaded
* Would reject with the error
It would be a fit to use in async calls or route resolvers.
### Proposed solution
It could be a new method named `resolve()` or similar.
### Alternatives considered
Using effects to watch the state and generate a promise. | area: core,needs: clarification,core: reactivity | low | Critical |
2,696,423,830 | pytorch | Support reductions in FlexAttention's score_mod/mask_mod | ### 🚀 The feature, motivation and pitch
This issue is (essentially) requesting reductions in FlexAttention's score_mod/mask_mod: https://github.com/pytorch/pytorch/issues/140232
This isn't necessarily trivial (or even always possible!), since reductions are relatively heavyweight operations. But for small reductions it might be feasible.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @drisspg @yanboliang @BoyuanFeng | feature,triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Minor |
2,696,501,284 | godot | Godot v4.3 Editor - Windows 11 - Crash when editing any project | ### Tested versions
I only tried Godot v4.3.
### System information
Windows 11 Pro 23H2 - NVIDIA RTX 3070 Ti - Meta Quest 2
### Issue description
I am experiencing a problem where the GODOT Editor crashes when I try to edit a project.
I open the **Godot XR Tools Demo** and after about 30 seconds, the editor just crashes. I can run it but cannot open the project in the editor.
I updated my NVIDIA drivers to the latest which is currently v566.14. I have an NVIDIA RTX 3070 Ti and a Meta Quest 2 headset.
Windows events viewer shows the following error:
Faulting application name: Godot_v4.3-stable_win64.exe, version: 4.3.0.0, time stamp: 0x66bd392d
Faulting module name: D3D12Core.dll, version: 10.0.22621.4391, time stamp: 0x3ffb1bfa
Exception code: 0xc0000005
Fault offset: 0x00000000000b59a2
Faulting process id: 0x0x60D4
Faulting application start time: 0x0x1DB405BF676B4B3
Faulting application path: G:\GODOT Game Engine\Godot_v4.3-stable_win64.exe
Faulting module path: C:\Windows\system32\D3D12Core.dll
Report Id: 224a335a-1182-4c7a-8135-cf4b02771d6b
Faulting package full name:
Faulting package-relative application ID:
### Steps to reproduce
Download the **Godot XR Tools Demo** or **Godot XR Template** in the Asset Library and edit it. The editor will crash after a few seconds.
### Minimal reproduction project (MRP)
See above | bug,topic:editor,needs testing,crash | low | Critical |
2,696,532,195 | pytorch | [AOTI] KeyError: 'val' from n.meta["val"] when wrapping tensor with an object class | ### 🐛 Describe the bug
Hi. Sorry for this hacky code. Error message seems similar to that of https://github.com/pytorch/pytorch/issues/140592
I think, the motivation is to not use nn.Parameter, but wrap around in a different class.
repro:
```python
class FakeParam(object):
    def __init__(
        self,
        data: torch.Tensor,
    ):
        self.data = data


class FakeLinear(torch.nn.Module):
    def __init__(
        self,
        input_size,
        output_size,
    ):
        super().__init__()
        self.param = FakeParam(
            torch.empty(
                output_size,
                input_size,
                device=torch.device("cuda"),
                dtype=torch.float32,
            ),
        )

    def forward(self, input_):
        return torch.matmul(input_, self.param.data)


model = FakeLinear(512, 512).cuda()
input = (torch.rand(512, 512, dtype=torch.float32, device="cuda"),)

# sanity check
_ = model(*input)
# sanity check 2
_ = torch.compile(model, fullgraph=True)(*input)

ep = torch.export.export(model, input, strict=False)
path = torch._inductor.aot_compile(ep.module(), input)
aot_model = torch._export.aot_load(path, device="cuda")
aot_model(*input)
print("done")
```
warnings before error:
```
/torch/export/_unlift.py:75: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
    getattr_node = gm.graph.get_attr(lifted_node)
/torch/fx/graph.py:1800: UserWarning: Node lifted_tensor_0 target lifted_tensor_0 lifted_tensor_0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
    warnings.warn(
```
error:
```
/torch/fx/node.py", line 894, in <listcomp>
    return immutable_list([map_aggregate(elem, fn) for elem in a])
/torch/fx/node.py", line 907, in map_aggregate
    return fn(a)
/torch/fx/node.py", line 881, in <lambda>
    return map_aggregate(a, lambda x: fn(x) if isinstance(x, Node) else x)
/torch/_inductor/pattern_matcher.py", line 1269, in <lambda>
    [match.kwargs[name] for name in argnames], lambda n: n.meta["val"]
KeyError: 'val'
### Versions
trunk
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | oncall: pt2,oncall: export,module: aotinductor | low | Critical |
2,696,556,422 | go | proposal: crypto/tls: support AEGIS (for TLS 1.3) | ### Proposal Details
Resurrecting #58724 in a new proposal.
Voting for publication of AEGIS as an RFC closes December 12: https://mailarchive.ietf.org/arch/msg/cfrg/0T3m_Pageq7PIukaiG3Nxx4ENCM/
```
Dear CFRG participants,
This message is starting a 3-week RGLC on
draft-irtf-cfrg-aegis-aead-13 ("The AEGIS Family of Authenticated
Encryption Algorithms") that will end on December 12th 2024. If you've read
the document and think that it is ready (or not ready) for publication as
an RFC, please send a message in reply to this email or directly
to CFRG chairs ([[email protected]](mailto:[email protected])) If you have detailed comments, these
would also be very helpful at this point.
We've got a review of the draft from Scott Fluhrer (on behalf of the Crypto
Review Panel):
https://mailarchive.ietf.org/arch/msg/cfrg/ikGi2zb6CmWyQIVhuo8QR_GrOsA/
The comments were addressed in version -13 of the draft.
Thank you,
Stanislav, for CFRG chairs
```
cc @jedisct1 | Proposal,Proposal-Crypto | low | Major |
2,696,567,701 | godot | Android ANR in Godot.getRotatedValues | ### Tested versions
Reported to Google Play Console from Godot 4.3
### System information
Android(9,10) - Godot 4.3.stable - compatibility
### Issue description
We've received a few ANR reports in Google Play Console with this callstack:
```
"main" tid=1 Native
#00 pc 0x000000000001f0ac /system/lib64/libc.so (syscall+28)
#01 pc 0x00000000000d7224 /system/lib64/libart.so (art::ConditionVariable::WaitHoldingLocks(art::Thread*)+148)
#02 pc 0x000000000051bd30 /system/lib64/libart.so (art::GoToRunnable(art::Thread*) (.llvm.<US_SOCIAL_SECURITY_NUMBER>)+480)
#03 pc 0x000000000051bb0c /system/lib64/libart.so (art::JniMethodEnd(unsigned int, art::Thread*)+28)
at android.os.BinderProxy.transactNative (Native method)
at android.os.BinderProxy.transact (Binder.java:1129)
at android.os.ServiceManagerProxy.getService (ServiceManagerNative.java:125)
at android.os.ServiceManager.rawGetService (ServiceManager.java:253)
at android.os.ServiceManager.getService (ServiceManager.java:124)
at android.hardware.display.DisplayManagerGlobal.translateAppSpaceDisplayIdBasedOnCurrentActivity (DisplayManagerGlobal.java:135)
at android.hardware.display.DisplayManagerGlobal.getDisplayInfo (DisplayManagerGlobal.java:181)
at android.view.Display.updateDisplayInfoLocked (Display.java:1071)
at android.view.Display.getRotation (Display.java:750)
at org.godotengine.godot.Godot.getRotatedValues (Godot.kt:866)
at org.godotengine.godot.Godot.onSensorChanged (Godot.kt:896)
at android.hardware.SystemSensorManager$SensorEventQueue.dispatchSensorEvent (SystemSensorManager.java:833)
at android.os.MessageQueue.nativePollOnce (Native method)
at android.os.MessageQueue.next (MessageQueue.java:326)
at android.os.Looper.loop (Looper.java:160)
at android.app.ActivityThread.main (ActivityThread.java:6975)
at java.lang.reflect.Method.invoke (Native method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:493)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:865)
```
Apparently this line is the trigger of the ANR:
`when (windowManager.defaultDisplay.rotation) {`
Maybe it would be better to listen for screen rotation changes and cache the value in a variable, rather than calling `windowManager.defaultDisplay.rotation` every time any sensor changes.
Also, we don't use sensors anyway and would like to just turn off listening for them to save some CPU and battery. See proposal: https://github.com/godotengine/godot-proposals/discussions/7645#discussioncomment-11389311
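A sketch of that caching idea (hypothetical and untested, written for this report rather than taken from Godot.kt; `registerDisplayListener` would be called once during setup):

```kotlin
// Hypothetical sketch: query the rotation only when the display changes,
// instead of on every sensor event.
private var cachedRotation: Int = Surface.ROTATION_0

private val displayListener = object : DisplayManager.DisplayListener {
    override fun onDisplayAdded(displayId: Int) {}
    override fun onDisplayRemoved(displayId: Int) {}
    override fun onDisplayChanged(displayId: Int) {
        // The Binder call to DisplayManager now happens only on actual
        // display changes, off the sensor hot path.
        cachedRotation = windowManager.defaultDisplay.rotation
    }
}

// In onSensorChanged / getRotatedValues, read the cached value:
// when (cachedRotation) { ... }
```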
### Steps to reproduce
I haven't been able to reproduce
### Minimal reproduction project (MRP)
I think this could happen in any Godot android project because godot is always listening for sensor changes on android. | bug,platform:android,needs testing,crash | low | Minor |
2,696,568,479 | PowerToys | Websites randomly don't load when using PowerToys | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
I only have Awake and Fancy Zones enabled.
This issue can be hard to reproduce due to its randomness. When I'm on a web browser (doesn't matter which one. Edge, Firefox, Chrome, I've tried them all) websites randomly don't load or they misbehave.
It is unclear to me what part of PowerToys is causing this issue. However, once the issue begins it persists until after a reboot. Then it may return shortly thereafter or go away for a while before occurring again.
I am certain that PowerToys is to blame because when I uninstall it and restart, the issue goes away permanently.
### ✔️ Expected Behavior
The expected behavior is for PowerToys not to interfere with internet access at all.
### ❌ Actual Behavior
Google search results always seem to load. But trying to go to a website by clicking on it can result in this message (taken from Firefox):

Refreshing the page repeatedly will eventually get the website to load. I suspect the failure to load is related to what is actually being loaded on the page. Meaning that this issue may not occur with certain websites. Sometimes this issue causes websites to not load properly or not at all.
One example, of the webpage misbehaving is that it affects our ticketing system. Any .jpg or images in email signatures are not displayed while everything else on the page is displayed properly. This also applies to any images sent in attachments to the email tickets.
This issue extends beyond the web browsers and affects internet access as a whole. Connections through RDP, Windows cloud PCs, TeamViewer, etc. will randomly fail to connect. Attempting several times in a row can eventually get these to work as well.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,696,592,015 | pytorch | DISABLED test_threading (__main__.TestWithNCCL) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_threading&suite=TestWithNCCL&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/33563396152).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 9 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_threading`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 597, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 837, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 891, in _check_return_codes
raise RuntimeError(
RuntimeError: Process 1 terminated or timed out after 300.0387041568756 seconds
```
</details>
Test file path: `distributed/test_c10d_functional_native.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/distributed/test_c10d_functional_native.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr | oncall: distributed,module: flaky-tests,skipped,module: unknown | low | Critical |
2,696,603,848 | pytorch | Torch Compile Breaks Compiled Chunked Cross Entropy on Torch Nightly | ### 🐛 Describe the bug
I've found a problem when compiling @Chillee's chunked cross-entropy implementation - see below:
```python
import torch
import torch.nn as nn
class ChunkedCE(torch.autograd.Function):
@staticmethod
def forward(ctx, _input, weight, target, bias=None, compiled=True):
CHUNK_SIZE = 1024 # Reduced for gradcheck
def compute_loss(input_chunk, weight, bias, target):
logits = torch.addmm(bias, input_chunk, weight.t())
logits = logits.float()
loss = torch.nn.functional.cross_entropy(logits, target)
return loss
grad_weight = torch.zeros_like(weight)
grad_inputs = []
grad_bias = torch.zeros_like(bias)
loss_acc = torch.zeros((), device=_input.device)
chunks = _input.shape[0] // CHUNK_SIZE
def accumulate_chunk(input_chunk, target_chunk):
(chunk_grad_input, chunk_grad_weight, chunk_grad_bias), chunk_loss = (
torch.func.grad_and_value(
compute_loss, argnums=(0, 1, 2)
)(input_chunk, weight, bias, target_chunk)
)
grad_weight.add_(chunk_grad_weight)
grad_bias.add_(chunk_grad_bias)
loss_acc.add_(chunk_loss)
return chunk_grad_input
if compiled:
accumulate_chunk = torch.compile(accumulate_chunk)
input_chunks = torch.chunk(_input, chunks=chunks, dim=0)
target_chunks = torch.chunk(target, chunks=chunks, dim=0)
for input_chunk, target_chunk in zip(input_chunks, target_chunks):
chunk_grad_input = accumulate_chunk(input_chunk, target_chunk)
grad_inputs.append(chunk_grad_input)
ctx.save_for_backward(
torch.cat(grad_inputs, dim=0) / chunks,
grad_weight / chunks,
grad_bias / chunks,
)
return loss_acc / chunks
@staticmethod
def backward(ctx, grad_output):
grad_input, grad_weight, grad_bias = ctx.saved_tensors
return (grad_input, grad_weight, None, grad_bias, None)
torch.set_default_device("cuda")
torch.set_float32_matmul_precision("medium")
chunked_cross_entropy = ChunkedCE.apply
compiled_chunked_cross_entropy = torch.compile(chunked_cross_entropy)
B, T, D, V = 16, 128, 384, 267_735
model = nn.Linear(D, V, device="cuda")
x = torch.randn(B * T, D, requires_grad=True, device="cuda")
weight = torch.randn(V, D, requires_grad=True, device="cuda")
bias = torch.randn(V, requires_grad=True, device="cuda")
label = torch.randint(0, V, (B * T,), device="cuda")
compiled_x = x.detach().clone().requires_grad_(True)
compiled_weight = weight.detach().clone().requires_grad_(True)
compiled_bias = bias.detach().clone().requires_grad_(True)
out_compiled = compiled_chunked_cross_entropy(
compiled_x.view(-1, D), compiled_weight, label.view(-1), compiled_bias
)
out_compiled.backward()
out = chunked_cross_entropy(
x.view(-1, D), weight, label.view(-1), bias
)
out.backward()
print(x.grad)
print(compiled_x.grad)
```
You'll see by inspection that the compiled gradient is zero, which does not match the non-fully-compiled version. I have confirmed this does not happen on stable.
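The accumulation pattern in the repro can be sketched in plain Python, ignoring autograd: per-chunk mean losses are summed and then divided by the chunk count, which recovers the full-batch mean when the chunks are equal-sized (the function name and inputs below are hypothetical, chosen only to mirror the `loss_acc / chunks` logic above).

```python
def chunked_mean(xs, chunk_size):
    # Split into equal-sized chunks, as torch.chunk does in the repro.
    chunks = [xs[i:i + chunk_size] for i in range(0, len(xs), chunk_size)]
    # Accumulate each chunk's mean (a stand-in for the per-chunk loss) ...
    loss_acc = sum(sum(c) / len(c) for c in chunks)
    # ... then divide by the number of chunks, mirroring `loss_acc / chunks`.
    return loss_acc / len(chunks)

print(chunked_mean(list(range(8)), 2))  # 3.5, same as the full-batch mean
```

Any numerical divergence between the compiled and eager paths therefore points at the compiled gradient computation itself, not at this accumulation arithmetic.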
### Versions
'2.6.0.dev20241126+cu124'
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu | high priority,triaged,oncall: pt2 | low | Critical |
2,696,634,225 | pytorch | Relaxing Export IR node.meta requirements | ### 🐛 Describe the bug
https://github.com/pytorch/pytorch/blob/8c8a484d7257140284b50ed963c5afaefd029283/docs/source/export.ir_spec.rst?plain=1#L222-L225
Specifically the `only having the following metadata fields` part.
We have an anti-pattern in Dynamo, where people have been shoving arbitrary fields directly onto the node object that would otherwise belong in the meta dict. Export also relies on some of those fields, e.g. _dynamo_source https://github.com/pytorch/pytorch/blob/8c8a484d7257140284b50ed963c5afaefd029283/torch/_dynamo/eval_frame.py#L1112-L1115
This is typically available in node.meta["grapharg"].source for placeholder nodes, but export is filtering out node.meta fields before control is returned to the user. I would argue that the node.meta restriction is a bit too strict given the workaround of relying on dynamically set node attributes. An alternative could be to trim the node.meta at the very end of export, if that's feasible.
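The "trim at the very end" alternative could look like the sketch below. The allow-list contents are an assumption for illustration, not the actual Export IR field list, and `trim_node_meta` is a hypothetical helper, not an existing API.

```python
# Hypothetical allow-list of Export IR metadata fields (assumption).
ALLOWED_META_FIELDS = {"val", "stack_trace", "nn_module_stack", "source_fn_stack"}

def trim_node_meta(meta):
    # Drop Dynamo-internal entries (e.g. "grapharg") only at the very end
    # of export, after internal consumers like _dynamo_source have read them.
    return {k: v for k, v in meta.items() if k in ALLOWED_META_FIELDS}

print(trim_node_meta({"val": 1, "grapharg": object()}))  # {'val': 1}
```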
### Error logs
_No response_
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,696,642,572 | pytorch | xpu: LayerNorm kernel produces different results on XPU than CPU/CUDA | With:
* PyTorch: https://github.com/pytorch/pytorch/commit/f2d388eddd25b5ab5b909a8b4cbf20ff34c6f3f2
* https://github.com/intel/torch-xpu-ops/commit/15f6d654685a98b6bbf6b76ee094258b16dd21ca
* https://github.com/huggingface/transformers/commit/1f6b423f0ce0c6e7afd300b59e5fd1a816e4896c
* https://github.com/huggingface/accelerate/commit/e11d3ceff3a49378796cdff5b466586d877d5c60
Running the Huggingface `hiera` model with a non-default `initializer_range=1e-10` (the default is `0.02`, which works) on the PyTorch XPU backend (I used an Intel PVC platform) fails - the output tensor contains `NaN` values. This behavior was first found when running the following Huggingface Transformers tests for the `hiera` model:
```
$ cat spec.py
import torch
DEVICE_NAME = 'xpu'
MANUAL_SEED_FN = torch.xpu.manual_seed
EMPTY_CACHE_FN = torch.xpu.empty_cache
DEVICE_COUNT_FN = torch.xpu.device_count
$ TRANSFORMERS_TEST_DEVICE_SPEC=spec.py python3 -m pytest --pspec tests/models/hiera/ -k torch_fx
...
FAILED tests/models/hiera/test_modeling_hiera.py::
Here we also overwrite some of the tests of test_modeling_common.py, as Hiera does not use input_ids, inputs_embeds,
attention_mask and seq_length.
::test_torch_fx - AssertionError: False is not true : traced 0th output doesn't match model 0th output for <class...
FAILED tests/models/hiera/test_modeling_hiera.py::
Here we also overwrite some of the tests of test_modeling_common.py, as Hiera does not use input_ids, inputs_embeds,
attention_mask and seq_length.
::test_torch_fx_output_loss - AssertionError: False is not true : traced 0th output doesn't match model 0th output for <class...
================================ 2 failed, 160 deselected in 2.32s =================================
```
Instead of running Huggingface Transformers, it's possible to reproduce the issue on the following script which is my attempt to extract essential behavior from HF test:
```
import copy
import random
import torch
from transformers import HieraConfig, HieraModel, PretrainedConfig
config = HieraConfig(
embed_dim=8,
image_size=[64, 64],
patch_stride=[4, 4],
patch_size=[7, 7],
patch_padding=[3, 3],
masked_unit_size=[8, 8],
mlp_ratio=1.0,
num_channels=3,
depths=[1, 1, 1, 1],
num_heads=[1, 1, 1, 1],
embed_dim_multiplier=2.0,
hidden_act='gelu',
decoder_hidden_size=2,
decoder_depth=1,
decoder_num_heads=1,
initializer_range=0.02)
setattr(config, "initializer_range", 1e-10) # NOTE THIS LINE !!!
model = HieraModel(config).to("xpu")
model.eval()
def floats_tensor(shape, scale=1.0, rng=None, name=None):
"""Creates a random float32 tensor"""
if rng is None:
rng = random.Random()
total_dims = 1
for dim in shape:
total_dims *= dim
values = []
for _ in range(total_dims):
values.append(rng.random() * scale)
return torch.tensor(data=values, dtype=torch.float, device="xpu").view(shape).contiguous()
inputs = {"pixel_values": floats_tensor([13, 3, 64, 64])}
model.config.use_cache = False
with torch.no_grad():
outputs = model(**inputs)
print(outputs)
```
Issue appears when `initializer_range=1e-10`. That's what HF Transformers are doing in the tests (see [tests/test_modeling_common.py#L149](https://github.com/huggingface/transformers/blob/6c3f168b36882f0beebaa9121eafa1928ba29633/tests/test_modeling_common.py#L149)). Output on XPU will be:
```
HieraModelOutput(last_hidden_state=tensor([[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]],
...
nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]],
device='xpu:0'), bool_masked_pos=None, ids_restore=None, hidden_states=None, attentions=None, reshaped_hidden_states=None)
```
And running with the default `initializer_range=0.02` on XPU you will get reasonable results (the named HF hiera tests will pass with this value, but other hiera tests will start to fail):
```
HieraModelOutput(last_hidden_state=tensor([[[ 0.1861, -0.1193, 0.2276, ..., 0.1105, 0.1467, -0.0326],
[ 0.2299, -0.0409, 0.1859, ..., 0.1770, 0.0710, 0.0243],
[ 0.2048, -0.1447, 0.1758, ..., 0.1698, 0.0585, -0.0254],
[ 0.1917, -0.0898, 0.2141, ..., 0.1629, 0.0251, 0.0306]],
...
5.1592e-01, 2.1765e-01, 2.0610e+00, 1.5068e+00, -1.6449e+00,
1.5775e-01, 7.1410e-01, -4.9712e-01, -8.2570e-01]], device='xpu:0'), bool_masked_pos=None, ids_restore=None, hidden_states=None, attentions=None, reshaped_hidden_states=None)
```
**Note that CUDA does handle `initializer_range=1e-10` in a different way** and does not return `Nan` tensors. However, what CUDA returns is suspiciously "good":
```
HieraModelOutput(last_hidden_state=tensor([[[3.0000e-10, 3.0000e-10, 3.0000e-10, ..., 3.0000e-10,
3.0000e-10, 3.0000e-10],
[3.0000e-10, 3.0000e-10, 3.0000e-10, ..., 3.0000e-10,
3.0000e-10, 3.0000e-10],
...
1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10,
1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10,
1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10,
1.0000e-10, 1.0000e-10, 1.0000e-10, 1.0000e-10]], device='cuda:0'), bool_masked_pos=None, ids_restore=None, hidden_states=None, attentions=None, reshaped_hidden_states=None)
```
And for the complete picture, this is what CUDA returns with `initializer_range=0.02`:
```
HieraModelOutput(last_hidden_state=tensor([[[-0.0146, 0.1575, 0.0880, ..., 0.1405, 0.0805, -0.1740],
[ 0.0672, 0.1291, 0.0963, ..., 0.1333, 0.1244, -0.0825],
[-0.0050, 0.1595, 0.0767, ..., 0.1392, 0.1436, -0.1975],
[ 0.0101, 0.1367, 0.0705, ..., 0.1299, 0.1178, -0.1880]],
...
-2.5323e-01, 9.3338e-01, -5.0501e-01, -4.9314e-01, 7.3834e-01,
9.5067e-02, 1.7623e-01, 1.8487e+00, -1.5834e-01, 9.8299e-01,
1.8513e+00, 1.1275e+00, 1.8372e+00, 1.0047e+00, 1.9499e-01,
-1.7089e+00, 4.8383e-01, 1.8169e-01, -2.4395e+00]], device='cuda:0'), bool_masked_pos=None, ids_restore=None, hidden_states=None, attentions=None, reshaped_hidden_states=None)
```
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5 @xytintel @faaany
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,696,769,733 | PowerToys | Add a shortcut key for refreshing the Mouse Without Borders connection | ### Description of the new feature / enhancement
Currently:
 Mouse Without Borders has no icon
 double-click
 click icon
 click button
These are all the steps needed to refresh the connection.
Feature:
We need a shortcut key!
### Scenario when this would be used?
When the connection is closed
### Supporting information
A suggested shortcut key for refreshing the Mouse Without Borders connection is Ctrl+Alt+R | Needs-Triage | low | Minor |
2,696,681,606 | godot | [C#] Signals created with `GodotObject.Connect()` aren't on the duplicated node when using `Node.Duplicate()` | ### Tested versions
Reproducible in: v4.3.stable.mono.official [77dcf97d8], v4.1.1.stable.mono.official [bd6af8e0e], v4.4.dev5.mono.official [9e6098432]
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1070 (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz (8 Threads)
### Issue description
I'm using GodotObject.Connect() to connect a signal to a button (including "ConnectFlags.Persist"), then using Node.Duplicate() to create a duplicate; however, the duplicate doesn't have the signal connected.
### Steps to reproduce
```cs
[Export] VBoxContainer container;
[Export] Button button;
public override void _Ready()
{
button.Connect("pressed", Callable.From(delegate { GD.Print("pressed"); }), (uint)ConnectFlags.Persist);
Node newButton = button.Duplicate();
container.AddChild(newButton);
}
```
When I run the project, pressing the original button will print "pressed" to the console, and the duplicated button does nothing.
### Minimal reproduction project (MRP)
[connect-signal-persist.zip](https://github.com/user-attachments/files/17927495/connect-signal-persist.zip)
| bug,needs testing,topic:dotnet | low | Minor |
2,696,690,746 | vscode | Unable to create Notebook viewzones before the ViewModel is attached (i.e. before notebook list view is setup) | * When trying to add viewzones to notebooks in notebook editor contribution, the list view may not have been initialized yet.
* However the viewzone API requires the list view to be initialized, as we might be passing the index position for viewzones.
As a result things fall over.
| bug,notebook,notebook-layout | low | Minor |
2,696,769,733 | vscode | Extension host (Remote) terminated unexpectedly with code null | ```shell
2024-11-27 10:46:37.994 [info] [remote-connection][ExtensionHost][f2585…][reconnect] resolving connection...
2024-11-27 10:46:37.994 [info] [remote-connection][ExtensionHost][f2585…][reconnect] connecting to Managed(1)...
2024-11-27 10:46:37.994 [info] Creating a socket (renderer-ExtensionHost-f25853eb-b91d-4428-b925-75a70d8daf7f)...
2024-11-27 10:46:37.998 [info] Creating a socket (renderer-ExtensionHost-f25853eb-b91d-4428-b925-75a70d8daf7f) was successful after 4 ms.
2024-11-27 10:46:38.004 [info] [remote-connection][ExtensionHost][f2585…][reconnect] reconnected!
2024-11-27 10:46:58.010 [info] [remote-connection][ExtensionHost][f2585…][reconnect] received socket timeout event (unacknowledgedMsgCount: 359, timeSinceOldestUnacknowledgedMsg: 20004, timeSinceLastReceivedSomeData: 20004).
2024-11-27 10:46:58.010 [info] [remote-connection][ExtensionHost][f2585…][reconnect] starting reconnecting loop. You can get more information with the trace log level.
2024-11-27 10:46:58.010 [info] [remote-connection][ExtensionHost][f2585…][reconnect] resolving connection...
2024-11-27 10:46:58.010 [info] [remote-connection][ExtensionHost][f2585…][reconnect] connecting to Managed(1)...
2024-11-27 10:46:58.010 [info] Creating a socket (renderer-ExtensionHost-f25853eb-b91d-4428-b925-75a70d8daf7f)...
2024-11-27 10:46:58.015 [info] Creating a socket (renderer-ExtensionHost-f25853eb-b91d-4428-b925-75a70d8daf7f) was successful after 5 ms.
2024-11-27 10:46:58.025 [error] [remote-connection][ExtensionHost][f2585…][reconnect][Managed(1)] received error control message when negotiating connection. Error:
2024-11-27 10:46:58.025 [error] Error: Connection error: Unknown reconnection token (seen before)
at Dsi (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:2525:39384)
at vZ.value (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:2525:31527)
at x.B (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:30:747)
at x.fire (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:30:965)
at D6.fire (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:630:10348)
at rJi.A (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:630:13507)
at vZ.value (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:630:12874)
at x.B (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:30:747)
at x.fire (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:30:965)
at s5e.acceptChunk (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:630:8135)
2024-11-27 10:46:58.025 [error] [remote-connection][ExtensionHost][f2585…][reconnect] A permanent error occurred in the reconnecting loop! Will give up now! Error:
2024-11-27 10:46:58.025 [error] Error: Connection error: Unknown reconnection token (seen before)
at Dsi (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:2525:39384)
at vZ.value (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:2525:31527)
at x.B (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:30:747)
at x.fire (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:30:965)
at D6.fire (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:630:10348)
at rJi.A (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:630:13507)
at vZ.value (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:630:12874)
at x.B (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:30:747)
at x.fire (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:30:965)
at s5e.acceptChunk (vscode-file://vscode-app/c:/Users/ljc01/AppData/Local/Programs/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:630:8135)
2024-11-27 10:46:58.083 [error] Extension host (Remote) terminated unexpectedly with code null.
2024-11-27 10:46:58.083 [error] Extension host (Remote) terminated unexpectedly. The following extensions were running: jinliming2.vscode-go-template, mechatroner.rainbow-csv, vscode.emmet, golang.go, r3inbowari.gomodexplorer, vscode.git-base, andyyaldoo.vscode-json, mhutchie.git-graph, vscode.git, vscode.github, ms-vscode.makefile-tools, vscode.debug-auto-launch, vscode.merge-conflict, eamodio.gitlens, formulahendry.code-runner, GitHub.copilot, GitHub.copilot-chat, Gruntfuggly.todo-tree, ms-vsliveshare.vsliveshare
2024-11-27 10:46:58.084 [info] Automatically restarting the remote extension host.
``` | remote | low | Critical |
2,696,781,474 | tauri | [feat] Get bundle tools from local path | ### Describe the problem
I cannot build the app because I cannot access the network.
### Describe the solution you'd like
1. Release these resources with stable tauri.
2. Support an option like `--localpath` to copy local files from.
3. Both above.
### Alternatives considered
1. A mirror: I searched and found the env vars `TAURI_BUNDLER_TOOLS_GITHUB_MIRROR` and `TAURI_BUNDLER_TOOLS_GITHUB_MIRROR_TEMPLATE`, but I'd prefer a local file, which is more stable.
### Additional context
_No response_ | type: feature request,scope: bundler | low | Major |
2,696,782,759 | angular | docs: Version dropdown on v18.angular.dev redirects to incorrect URL and lacks clarity in version listing | ### Describe the problem that you experienced
The problem is with the version dropdown on https://v18.angular.dev/.
When selecting v18 from the dropdown, it incorrectly redirects users to https://angular.dev/ instead of staying on https://v18.angular.dev/.
Additionally, the dropdown lists versions from v2 to v18 and next, but it doesn't clarify the stable version or provide explicit links for versions beyond v18. This behavior causes confusion for users seeking version-specific documentation.
### Enter the URL of the topic with the problem
https://v18.angular.dev
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
I would like the following improvements to address the problem:
1. **Correct Redirection**: Selecting v18 from the version dropdown on https://v18.angular.dev/ should not redirect to https://angular.dev/. Instead, it should stay on https://v18.angular.dev/.
2. **Stable Version Clarity**: The dropdown should include an option labeled stable, which links to the latest stable documentation version, currently https://angular.dev/.
3. **Comprehensive Version Listing**: The dropdown should explicitly list all versions, including those beyond v18 (e.g., v19, v20) as they are released, for easy navigation without ambiguity.
These changes will ensure a more intuitive and seamless user experience when navigating between Angular versions.
### Add a screenshot if that helps illustrate the problem

### If this problem caused an exception or error, please paste it here
```true
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
```true
```
### Provide any additional information here in as much as detail as you can
```true
``` | help wanted,good first issue,area: docs-infra | low | Critical |
2,696,936,851 | langchain | GraphCypherQAChain Issue with Node Names Containing Spaces | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.chains import GraphCypherQAChain
chain = GraphCypherQAChain.from_llm(graph=graph, llm=llm, verbose=True, allow_dangerous_requests=True)
response = chain.invoke({"query": "What is Lana Fathima's current medical condition?"})
### Error Message and Stack Trace (if applicable)
{code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'Condition': expected a parameter, '&', ')', ':', 'WHERE', '{' or '|' (line 2, column 70 (offset: 76))
"MATCH (p:Person {id: 'Lana Fathima'})-[:DIAGNOSED_WITH]->(mc:Medical Condition)"
### Description
If the node label contains a space, the generated Cypher query should escape it with backticks, e.g.
MATCH (p:Person {id: 'Lana Fathima'})-[:DIAGNOSED_WITH]->(mc:`Medical Condition`)
Instead, it generates the Cypher query "MATCH (p:Person {id: 'Lana Fathima'})-[:DIAGNOSED_WITH]->(mc:Medical Condition)",
hence the syntax error is thrown.
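A minimal helper (hypothetical, not part of LangChain's API) sketching the backtick escaping the generated query would need; the identifier check is a simplification of Neo4j's naming rules:

```python
def escape_label(label):
    # Neo4j requires backtick-quoting for labels that are not plain
    # identifiers (e.g. labels containing spaces).
    if label.replace("_", "").isalnum():
        return label
    return "`" + label + "`"

match = f"MATCH (p:Person)-[:DIAGNOSED_WITH]->(mc:{escape_label('Medical Condition')})"
print(match)  # MATCH (p:Person)-[:DIAGNOSED_WITH]->(mc:`Medical Condition`)
```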
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri Jul 12 20:45:27 UTC 2024
> Python Version: 3.9.20 (main, Sep 9 2024, 00:00:00)
[GCC 11.5.0 20240719 (Red Hat 11.5.0-2)]
Package Information
-------------------
> langchain_core: 0.3.18
> langchain: 0.3.7
> langchain_community: 0.3.7
> langsmith: 0.1.143
> langchain_experimental: 0.3.3
> langchain_huggingface: 0.1.2
> langchain_milvus: 0.1.7
> langchain_ollama: 0.2.0
> langchain_openai: 0.2.9
> langchain_text_splitters: 0.3.2
> langchainhub: 0.1.21
> langgraph: 0.2.52
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.0
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> huggingface-hub: 0.26.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.4
> langgraph-sdk: 0.1.36
> numpy: 1.26.4
> ollama: 0.3.3
> openai: 1.54.4
> orjson: 3.10.11
> packaging: 24.2
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> pymilvus: 2.4.9
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.3.1
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tiktoken: 0.8.0
> tokenizers: 0.20.3
> transformers: 4.46.3
> types-requests: 2.32.0.20241016
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,696,939,032 | kubernetes | [InPlacePodVerticalScaling]kubelet sometimes set `.status.resize` incorrectly | ### What happened?
In a cluster with InPlacePodVerticalScaling enabled, if I resize resources, I watch `.status.resize` and `.status.containerStatuses[x].resources` to track the resize progress.
I have encountered some corner cases that are difficult to consistently reproduce:
1. The user changes the CPU request from 200m to 100m
2. Kubelet sets `.status.resize` to `InProgress`
3. Kubelet sets `.status.resize` to nil and sets `.status.containerStatuses[x].resources` to 100m
4. Kubelet sets `.status.resize` back to `InProgress` and sets `.status.containerStatuses[x].resources` back to 200m
5. Finally, Kubelet sets `.status.resize` to nil and sets `.status.containerStatuses[x].resources` to 100m again
### What did you expect to happen?
Under normal circumstances, steps 4 and 5 should not take place.
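The anomaly in the sequence above can be expressed as a simple check over (resize, cpu-request) status transitions; the function name and tuples below are illustrative only, mirroring the numbered steps.

```python
def regressed(transitions):
    # A resize regresses if .status.resize flips back to "InProgress"
    # after it has already been cleared (i.e. the resize completed).
    seen_complete = False
    for resize, _cpu in transitions:
        if resize is None:
            seen_complete = True
        elif seen_complete:
            return True
    return False

expected = [("InProgress", "200m"), (None, "100m")]             # steps 2-3
observed = expected + [("InProgress", "200m"), (None, "100m")]  # plus steps 4-5
print(regressed(expected), regressed(observed))  # False True
```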
I have discovered some relevant information in the Kubelet logs.
// step 2
```log
Nov 12 10:15:53 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:53.062599 12114 status_manager.go:874] "Patch status for pod" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn" podUID="1cb463b9-8fbe-4d1f-a8ac-277d981684cd" patch="{\"metadata\":{\"uid\":\"1cb463b9-8fbe-4d1f-a8ac-277d981684cd\"},\"status\":{\"containerStatuses\":[{\"allocatedResources\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"},\"containerID\":\"containerd://f674d36be2e4a13a47a66a1763e1d5bf315e08dba3c7d36e63169b9e1fefe8cb\",\"image\":\"registry.cn-hangzhou.aliyuncs.com/abner1/nginx:alpine\",\"imageID\":\"registry.cn-hangzhou.aliyuncs.com/abner1/nginx@sha256:e3c23d48c0a8ae0021a66c65fd9218608572e2746e6b923b9ddbcb89f29ef128\",\"lastState\":{},\"name\":\"nginx\",\"ready\":true,\"resources\":{\"limits\":{\"cpu\":\"1\",\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"200m\",\"memory\":\"200Mi\"}},\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2024-11-12T02:15:50Z\"}}}],\"resize\":\"InProgress\"}}"
Nov 12 10:15:53 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:53.062635 12114 status_manager.go:883] "Status for pod updated successfully" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn" statusVersion=3 status={"phase":"Running","conditions":[{"type":"KruisePodReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"InPlaceUpdateReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"PodReadyToStartContainers","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"}],"hostIP":"172.21.154.57","hostIPs":[{"ip":"172.21.154.57"}],"podIP":"172.21.154.96","podIPs":[{"ip":"172.21.154.96"}],"startTime":"2024-11-12T02:15:50Z","containerStatuses":[{"name":"nginx","state":{"running":{"startedAt":"2024-11-12T02:15:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.cn-hangzhou.aliyuncs.com/abner1/nginx:alpine","imageID":"registry.cn-hangzhou.aliyuncs.com/abner1/nginx@sha256:e3c23d48c0a8ae0021a66c65fd9218608572e2746e6b923b9ddbcb89f29ef128","containerID":"containerd://f674d36be2e4a13a47a66a1763e1d5bf315e08dba3c7d36e63169b9e1fefe8cb","started":true,"allocatedResources":{"cpu":"100m","memory":"100Mi"},"resources":{"limits":{"cpu":"1","memory":"1Gi"},"requests":{"cpu":"200m","memory":"200Mi"}}}],"qosClass":"Burstable","resize":"InProgress"}
```
// step 3
```
Nov 12 10:15:54 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:54.056885 12114 kuberuntime_manager.go:1051] "computePodActions got for pod" podActions="KillPod: false, CreateSandbox: false, UpdatePodResources: false, Attempt: 0, InitContainersToStart: [], ContainersToStart: [], EphemeralContainersToStart: [],ContainersToUpdate: map[], ContainersToKill: map[]" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn"
Nov 12 10:15:54 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:54.067151 12114 status_manager.go:874] "Patch status for pod" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn" podUID="1cb463b9-8fbe-4d1f-a8ac-277d981684cd" patch="{\"metadata\":{\"uid\":\"1cb463b9-8fbe-4d1f-a8ac-277d981684cd\"},\"status\":{\"containerStatuses\":[{\"allocatedResources\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"},\"containerID\":\"containerd://f674d36be2e4a13a47a66a1763e1d5bf315e08dba3c7d36e63169b9e1fefe8cb\",\"image\":\"registry.cn-hangzhou.aliyuncs.com/abner1/nginx:alpine\",\"imageID\":\"registry.cn-hangzhou.aliyuncs.com/abner1/nginx@sha256:e3c23d48c0a8ae0021a66c65fd9218608572e2746e6b923b9ddbcb89f29ef128\",\"lastState\":{},\"name\":\"nginx\",\"ready\":true,\"resources\":{\"limits\":{\"cpu\":\"800m\",\"memory\":\"800Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"}},\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2024-11-12T02:15:50Z\"}}}],\"resize\":null}}"
Nov 12 10:15:54 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:54.067191 12114 status_manager.go:883] "Status for pod updated successfully" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn" statusVersion=4 status={"phase":"Running","conditions":[{"type":"KruisePodReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"InPlaceUpdateReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"PodReadyToStartContainers","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"}],"hostIP":"172.21.154.57","hostIPs":[{"ip":"172.21.154.57"}],"podIP":"172.21.154.96","podIPs":[{"ip":"172.21.154.96"}],"startTime":"2024-11-12T02:15:50Z","containerStatuses":[{"name":"nginx","state":{"running":{"startedAt":"2024-11-12T02:15:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.cn-hangzhou.aliyuncs.com/abner1/nginx:alpine","imageID":"registry.cn-hangzhou.aliyuncs.com/abner1/nginx@sha256:e3c23d48c0a8ae0021a66c65fd9218608572e2746e6b923b9ddbcb89f29ef128","containerID":"containerd://f674d36be2e4a13a47a66a1763e1d5bf315e08dba3c7d36e63169b9e1fefe8cb","started":true,"allocatedResources":{"cpu":"100m","memory":"100Mi"},"resources":{"limits":{"cpu":"800m","memory":"800Mi"},"requests":{"cpu":"100m","memory":"100Mi"}}}],"qosClass":"Burstable"}
```
// step 4
```
Nov 12 10:15:54 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:54.081966 12114 status_manager.go:874] "Patch status for pod" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn" podUID="1cb463b9-8fbe-4d1f-a8ac-277d981684cd" patch="{\"metadata\":{\"uid\":\"1cb463b9-8fbe-4d1f-a8ac-277d981684cd\"},\"status\":{\"containerStatuses\":[{\"allocatedResources\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"},\"containerID\":\"containerd://f674d36be2e4a13a47a66a1763e1d5bf315e08dba3c7d36e63169b9e1fefe8cb\",\"image\":\"registry.cn-hangzhou.aliyuncs.com/abner1/nginx:alpine\",\"imageID\":\"registry.cn-hangzhou.aliyuncs.com/abner1/nginx@sha256:e3c23d48c0a8ae0021a66c65fd9218608572e2746e6b923b9ddbcb89f29ef128\",\"lastState\":{},\"name\":\"nginx\",\"ready\":true,\"resources\":{\"limits\":{\"cpu\":\"1\",\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"200m\",\"memory\":\"200Mi\"}},\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2024-11-12T02:15:50Z\"}}}],\"resize\":\"InProgress\"}}"
Nov 12 10:15:54 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:54.081998 12114 status_manager.go:883] "Status for pod updated successfully" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn" statusVersion=5 status={"phase":"Running","conditions":[{"type":"KruisePodReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"InPlaceUpdateReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"PodReadyToStartContainers","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"}],"hostIP":"172.21.154.57","hostIPs":[{"ip":"172.21.154.57"}],"podIP":"172.21.154.96","podIPs":[{"ip":"172.21.154.96"}],"startTime":"2024-11-12T02:15:50Z","containerStatuses":[{"name":"nginx","state":{"running":{"startedAt":"2024-11-12T02:15:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.cn-hangzhou.aliyuncs.com/abner1/nginx:alpine","imageID":"registry.cn-hangzhou.aliyuncs.com/abner1/nginx@sha256:e3c23d48c0a8ae0021a66c65fd9218608572e2746e6b923b9ddbcb89f29ef128","containerID":"containerd://f674d36be2e4a13a47a66a1763e1d5bf315e08dba3c7d36e63169b9e1fefe8cb","started":true,"allocatedResources":{"cpu":"100m","memory":"100Mi"},"resources":{"limits":{"cpu":"1","memory":"1Gi"},"requests":{"cpu":"200m","memory":"200Mi"}}}],"qosClass":"Burstable","resize":"InProgress"}
```
// step 5
```
Nov 12 10:15:55 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:55.052256 12114 kuberuntime_manager.go:1051] "computePodActions got for pod" podActions="KillPod: false, CreateSandbox: false, UpdatePodResources: false, Attempt: 0, InitContainersToStart: [], ContainersToStart: [], EphemeralContainersToStart: [],ContainersToUpdate: map[], ContainersToKill: map[]" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn"
Nov 12 10:15:55 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:55.067675 12114 status_manager.go:874] "Patch status for pod" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn" podUID="1cb463b9-8fbe-4d1f-a8ac-277d981684cd" patch="{\"metadata\":{\"uid\":\"1cb463b9-8fbe-4d1f-a8ac-277d981684cd\"},\"status\":{\"containerStatuses\":[{\"allocatedResources\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"},\"containerID\":\"containerd://f674d36be2e4a13a47a66a1763e1d5bf315e08dba3c7d36e63169b9e1fefe8cb\",\"image\":\"registry.cn-hangzhou.aliyuncs.com/abner1/nginx:alpine\",\"imageID\":\"registry.cn-hangzhou.aliyuncs.com/abner1/nginx@sha256:e3c23d48c0a8ae0021a66c65fd9218608572e2746e6b923b9ddbcb89f29ef128\",\"lastState\":{},\"name\":\"nginx\",\"ready\":true,\"resources\":{\"limits\":{\"cpu\":\"800m\",\"memory\":\"800Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"100Mi\"}},\"restartCount\":0,\"started\":true,\"state\":{\"running\":{\"startedAt\":\"2024-11-12T02:15:50Z\"}}}],\"resize\":null}}"
Nov 12 10:15:55 iZbp12y2uyns2wwzue952tZ kubelet[12114]: I1112 10:15:55.067746 12114 status_manager.go:883] "Status for pod updated successfully" pod="e2e-tests-inplace-vpa-dn82w/clone-plf57qc7jx-csxgn" statusVersion=6 status={"phase":"Running","conditions":[{"type":"KruisePodReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"InPlaceUpdateReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"PodReadyToStartContainers","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:51Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-11-12T02:15:50Z"}],"hostIP":"172.21.154.57","hostIPs":[{"ip":"172.21.154.57"}],"podIP":"172.21.154.96","podIPs":[{"ip":"172.21.154.96"}],"startTime":"2024-11-12T02:15:50Z","containerStatuses":[{"name":"nginx","state":{"running":{"startedAt":"2024-11-12T02:15:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.cn-hangzhou.aliyuncs.com/abner1/nginx:alpine","imageID":"registry.cn-hangzhou.aliyuncs.com/abner1/nginx@sha256:e3c23d48c0a8ae0021a66c65fd9218608572e2746e6b923b9ddbcb89f29ef128","containerID":"containerd://f674d36be2e4a13a47a66a1763e1d5bf315e08dba3c7d36e63169b9e1fefe8cb","started":true,"allocatedResources":{"cpu":"100m","memory":"100Mi"},"resources":{"limits":{"cpu":"800m","memory":"800Mi"},"requests":{"cpu":"100m","memory":"100Mi"}}}],"qosClass":"Burstable"}
```
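To make the resource flip-flop in steps 3–5 easier to see, the `resources` reported by each status patch can be pulled out with a quick script. This is only an illustration: the two patch strings below are abbreviated excerpts of the kubelet log lines above, reduced to the fields being compared.

```python
import json

# Two of the status patches from the kubelet logs above (abbreviated to the
# fields we care about; the real patches carry more keys).
patches = [
    '{"status":{"containerStatuses":[{"name":"nginx","resources":{"limits":{"cpu":"800m","memory":"800Mi"}}}],"resize":null}}',
    '{"status":{"containerStatuses":[{"name":"nginx","resources":{"limits":{"cpu":"1","memory":"1Gi"}}}],"resize":"InProgress"}}',
]

def reported_limits(patch_json):
    """Return (limits, resize) as reported by one status patch."""
    status = json.loads(patch_json)["status"]
    limits = status["containerStatuses"][0]["resources"]["limits"]
    return limits, status.get("resize")

for p in patches:
    print(reported_limits(p))
    # -> ({'cpu': '800m', 'memory': '800Mi'}, None)
    # -> ({'cpu': '1', 'memory': '1Gi'}, 'InProgress')
```

Running this over the full sequence of patches shows the limits oscillating between `800m/800Mi` and `1/1Gi` while `resize` toggles between `null` and `InProgress`.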
### How can we reproduce it (as minimally and precisely as possible)?
I am unable to identify a consistent method to reproduce this issue.
This is an intermittent case.
| kind/bug,sig/node,triage/needs-information,needs-triage | low | Major |
2,696,952,074 | kubernetes | is disk block io support? | ### What would you like to be added?
As we know, cgroup v2 supports the following block I/O configuration (shown here in OCI runtime-spec `blockIO` form):
```json
"blockIO": {
"weight": 10,
"leafWeight": 10,
"weightDevice": [
{
"major": 8,
"minor": 0,
"weight": 500,
"leafWeight": 300
},
{
"major": 8,
"minor": 16,
"weight": 500
}
],
"throttleReadBpsDevice": [
{
"major": 8,
"minor": 0,
"rate": 600
}
],
"throttleWriteIOPSDevice": [
{
"major": 8,
"minor": 16,
"rate": 300
}
]
}
```
I want to add disk I/O limits for the devices a pod mounts, but this is not supported.
Here is my idea for how to support it:
1. Use containerd as the CRI runtime, with NRI enabled.
2. Add annotations to the pod that describe the desired disk I/O settings.
3. Write a custom NRI plugin that translates those annotations into the `blockIO` part of the OCI runtime spec.
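The annotation-to-spec translation in step 3 could look like the sketch below (language-agnostic; Python for brevity). The annotation key `example.io/block-io` and its payload layout are purely illustrative — they are not an existing Kubernetes or NRI convention — while the output fields follow the OCI runtime-spec `blockIO` shape shown above.

```python
import json

# Hypothetical annotation key and payload format -- illustrative only.
ANNOTATION_KEY = "example.io/block-io"

def block_io_from_annotations(annotations):
    """Translate a pod annotation into an OCI runtime-spec blockIO fragment."""
    raw = annotations.get(ANNOTATION_KEY)
    if raw is None:
        return None
    spec = json.loads(raw)
    block_io = {}
    if "weight" in spec:
        block_io["weight"] = int(spec["weight"])
    # e.g. {"device": "8:0", "rate": 600} -> one throttleReadBpsDevice entry
    for ann_field, oci_field in [
        ("readBps", "throttleReadBpsDevice"),
        ("writeBps", "throttleWriteBpsDevice"),
        ("readIops", "throttleReadIOPSDevice"),
        ("writeIops", "throttleWriteIOPSDevice"),
    ]:
        for entry in spec.get(ann_field, []):
            major, minor = (int(x) for x in entry["device"].split(":"))
            block_io.setdefault(oci_field, []).append(
                {"major": major, "minor": minor, "rate": int(entry["rate"])}
            )
    return block_io or None

annotations = {ANNOTATION_KEY: json.dumps(
    {"weight": 500, "readBps": [{"device": "8:0", "rate": 600}]}
)}
print(block_io_from_annotations(annotations))
# -> {'weight': 500, 'throttleReadBpsDevice': [{'major': 8, 'minor': 0, 'rate': 600}]}
```

A real NRI plugin would run this kind of mapping in its container-creation hook and return the fragment as a container adjustment.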
### Why is this needed?
If some pods on the same node are I/O busy, the performance of the other pods will be affected. | kind/support,sig/node,needs-triage | low | Major |
2,696,959,690 | go | x/tools/go/packages: TestConfigDir/Modules failures | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/go/packages" && test == "TestConfigDir/Modules"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730195721421005729)):
=== RUN TestConfigDir/Modules
=== PAUSE TestConfigDir/Modules
=== CONT TestConfigDir/Modules
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=off GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/b go list -e -f {{context.ReleaseTags}} -- unsafe
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/b go list -f "{{context.GOARCH}} {{context.Compiler}}" -- unsafe
invoke.go:205: 44.839994ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/b go list -f "{{context.GOARCH}} {{context.Compiler}}" -- unsafe
invoke.go:205: 109.735521ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=off GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/b go list -e -f {{context.ReleaseTags}} -- unsafe
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/b go list -e -json=Name,ImportPath,Error,Dir,GoFiles,IgnoredGoFiles,IgnoredOtherFiles,CFiles,CgoFiles,CXXFiles,MFiles,HFiles,FFiles,SFiles,SwigFiles,SwigCXXFiles,SysoFiles,TestGoFiles,XTestGoFiles,CompiledGoFiles,Export,DepOnly,Imports,ImportMap,TestImports,XTestImports,Module -compiled=true -test=true -export=true -deps=true -find=false -pgo=off -- golang.org/fake/a
invoke.go:205: 331.929773ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/b go list -e -json=Name,ImportPath,Error,Dir,GoFiles,IgnoredGoFiles,IgnoredOtherFiles,CFiles,CgoFiles,CXXFiles,MFiles,HFiles,FFiles,SFiles,SwigFiles,SwigCXXFiles,SysoFiles,TestGoFiles,XTestGoFiles,CompiledGoFiles,Export,DepOnly,Imports,ImportMap,TestImports,XTestImports,Module -compiled=true -test=true -export=true -deps=true -find=false -pgo=off -- golang.org/fake/a
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=off GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/b go list -e -f {{context.ReleaseTags}} -- unsafe
...
invoke.go:205: 298.264074ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=off GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/a go list -e -f {{context.ReleaseTags}} -- unsafe
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/a go list -e -json=Name,ImportPath,Error,Dir,GoFiles,IgnoredGoFiles,IgnoredOtherFiles,CFiles,CgoFiles,CXXFiles,MFiles,HFiles,FFiles,SFiles,SwigFiles,SwigCXXFiles,SysoFiles,TestGoFiles,XTestGoFiles,CompiledGoFiles,Export,DepOnly,Imports,ImportMap,TestImports,XTestImports,Module -compiled=true -test=true -export=true -deps=true -find=false -pgo=off -- ./a
invoke.go:205: 116.291411ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/a go list -e -json=Name,ImportPath,Error,Dir,GoFiles,IgnoredGoFiles,IgnoredOtherFiles,CFiles,CgoFiles,CXXFiles,MFiles,HFiles,FFiles,SFiles,SwigFiles,SwigCXXFiles,SysoFiles,TestGoFiles,XTestGoFiles,CompiledGoFiles,Export,DepOnly,Imports,ImportMap,TestImports,XTestImports,Module -compiled=true -test=true -export=true -deps=true -find=false -pgo=off -- ./a
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=off GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/a go list -e -f {{context.ReleaseTags}} -- unsafe
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/a go list -f "{{context.GOARCH}} {{context.Compiler}}" -- unsafe
invoke.go:205: 746.937702ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=off GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/a go list -e -f {{context.ReleaseTags}} -- unsafe
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/a go list -e -json=Name,ImportPath,Error,Dir,GoFiles,IgnoredGoFiles,IgnoredOtherFiles,CFiles,CgoFiles,CXXFiles,MFiles,HFiles,FFiles,SFiles,SwigFiles,SwigCXXFiles,SysoFiles,TestGoFiles,XTestGoFiles,CompiledGoFiles,Export,DepOnly,Imports,ImportMap,TestImports,XTestImports,Module -compiled=true -test=true -export=true -deps=true -find=false -pgo=off -- ./b
invoke.go:205: 580.889406ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/a go list -f "{{context.GOARCH}} {{context.Compiler}}" -- unsafe
invoke.go:205: 913.050791ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestConfigDir_Modules2760053777/fake/a go list -e -json=Name,ImportPath,Error,Dir,GoFiles,IgnoredGoFiles,IgnoredOtherFiles,CFiles,CgoFiles,CXXFiles,MFiles,HFiles,FFiles,SFiles,SwigFiles,SwigCXXFiles,SysoFiles,TestGoFiles,XTestGoFiles,CompiledGoFiles,Export,DepOnly,Imports,ImportMap,TestImports,XTestImports,Module -compiled=true -test=true -export=true -deps=true -find=false -pgo=off -- ./b
--- FAIL: TestConfigDir/Modules (9.69s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,Tools | low | Critical |
2,696,983,425 | deno | Streams: writableStreamDefaultControllerGetChunkSize undefined strategy case slightly broken | See https://github.com/whatwg/streams/pull/1333.
Per https://github.com/denoland/deno/blob/4700f12ddc5427797333278d2a1d3a8e1195ebfc/ext/web/06_streams.js#L4445, Deno has the same bug the reference implementation had: it catches the exception caused by trying to call `undefined` as a function and then attempts to error the stream with it.
I don't believe this is observable, since the stream is already erroring, so the "TypeError: cannot call undefined" will not actually be exposed to the web developer.
However, it's probably best to fix this. | web | low | Critical |
2,697,011,372 | vscode | Toggle comment doesn't work on new PHP file | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3
- OS Version: Windows 11
Steps to Reproduce:
1. open VSCode
2. create PHP file (don't type anything inside)
3. press ctrl+/ (`editor.action.commentLine`)
Expected:
The line to be commented
Actual:
nothing

Type anything inside the file.
Now Ctrl+/ works.
It used to work in VS Code 1.75.1 but broke in 1.76.0.
This does not affect any other language. | bug,editor-comments | low | Critical |
2,697,055,021 | PowerToys | Switch all `[DataTestMethod]` to `[TestMethod]` | See https://github.com/microsoft/testfx/issues/4166
| Needs-Triage | low | Minor |
2,697,092,970 | vscode | Support workspace-folder matching for `files.exclude` in user settings | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
# Background
Currently, `files.exclude` in user settings matches files under every opened workspace folder, and an absolute path cannot be used to target one specific workspace folder. For example:
```json
{
"files.exclude": {
"pnpm-lock.yaml": true
}
}
```
This user setting will exclude all `pnpm-lock.yaml` files in every workspace.
Why not use workspace settings? Because `.vscode` isn't git-ignored in my project, changes to `files.exclude` there are extremely prone to causing merge conflicts.
# Expected
Like this:
```json
{
"files.exclude": {
"pnpm-lock.yaml": {
"when": {
"workspaceFolder.name": "<specified-name>"
}
}
}
}
```
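For illustration, the `when` clause proposed above could behave like this sketch. The matching semantics and key names here are only illustrative, mirroring the JSON shape shown, not any existing VS Code API:

```python
import fnmatch

def is_excluded(rel_path, workspace_name, excludes):
    """Illustrative semantics for a per-workspace-folder `files.exclude` rule."""
    for pattern, rule in excludes.items():
        if not fnmatch.fnmatch(rel_path, pattern):
            continue
        if rule is True:          # plain rule: exclude everywhere, as today
            return True
        if isinstance(rule, dict):
            cond = rule.get("when", {})
            wanted = cond.get("workspaceFolder.name")
            # no condition -> always exclude; condition -> only that folder
            if wanted is None or wanted == workspace_name:
                return True
    return False

excludes = {"pnpm-lock.yaml": {"when": {"workspaceFolder.name": "my-app"}}}
print(is_excluded("pnpm-lock.yaml", "my-app", excludes))  # True
print(is_excluded("pnpm-lock.yaml", "other", excludes))   # False
```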
Or support absolute path matching:
```json
{
"files.exclude": {
"/absolute/path/to/workspace-folder/pnpm-lock.yaml": true
}
}
``` | feature-request,file-explorer | low | Minor |
2,697,214,409 | rust | Deprecate `std::time::Instant::saturating_duration_since()`? | As previously suggested here: https://github.com/rust-lang/rust/pull/84448#issuecomment-944421516
Starting with #89926, [`std::time::Instant::duration_since()`](https://doc.rust-lang.org/1.82.0/std/time/struct.Instant.html#method.duration_since) effectively does the same thing as the later-introduced [`std::time::Instant::saturating_duration_since()`](https://doc.rust-lang.org/1.82.0/std/time/struct.Instant.html#method.saturating_duration_since) (they are now literally identical). Given that this possibly breaking change has already landed, I propose deprecating and subsequently removing the latter function, as it currently creates a false ambiguity.
The only objection I see is that the current state of affairs allows bringing back `panic!` in the future, as mentioned in a comment here:
https://doc.rust-lang.org/1.82.0/src/std/time.rs.html#143-144 | T-libs-api,T-libs,C-discussion | low | Minor |
2,697,277,776 | tensorflow | Very serious! Using this method will definitely result in memory leaks, I hope you can provide support | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18
### Custom code
Yes
### OS platform and distribution
ubuntu 2.2 or mac m1
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I have tried various methods, but memory definitely leaks; it seems memory release cannot keep up with allocation. The logs show periodic memory reclamation, but usage still trends clearly upward as time goes on.
### Standalone code to reproduce the issue
```python
import gc
import keras
import numpy as np
import psutil
from keras.optimizers import Adam
from keras.layers import Dense, Dropout, Input, LSTM
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
import time
import json
num_samples = 6
num_features = 3
num_classes = 4
epochs = 50
batch_size = 2
identifier = "test_model"
num_iterations = 500
def build_model(X, num_classes):
model = Sequential()
model.add(Input(shape=(X.shape[1], X.shape[2])))
model.add(LSTM(16, return_sequences=True))
model.add(LSTM(16))
model.add(Dropout(0.4))
model.add(Dense(8, activation='tanh'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['categorical_accuracy'])
return model
data_X = np.random.rand(num_samples, num_features)
data_Y = np.random.randint(0, num_classes, size=(num_samples, 1))
data_Y = np.eye(num_classes)[data_Y.flatten()]
print(type(data_X))
scaler = MinMaxScaler()
data_X_scaled = scaler.fit_transform(data_X)
train_X, test_X, train_Y, test_Y = train_test_split(data_X_scaled, data_Y, train_size=0.6, random_state=42)
train_X = np.expand_dims(train_X, axis=1)
test_X = np.expand_dims(test_X, axis=1)
for iteration in range(num_iterations):
tf.keras.backend.clear_session()
tf.compat.v1.reset_default_graph()
model = build_model(train_X, num_classes)
model_name = f"model_{iteration}"
early_stopping = EarlyStopping(monitor='loss', patience=10, verbose=0, restore_best_weights=True)
print(f"Iteration {iteration + 1}/{num_iterations}")
process = psutil.Process()
mem_info = process.memory_info()
print(f"start Current memory usage: {mem_info.rss / (1024 * 1024):.2f} MB") # RSS - Resident Set Size
try:
history = model.fit(train_X, train_Y, epochs=epochs, batch_size=batch_size, shuffle=True,
validation_data=(test_X, test_Y), verbose=0)
print(f"Training model: {model.name}")
del model
tf.keras.backend.clear_session()
gc.collect()
except Exception as e:
print("err:", e)
finally:
process = psutil.Process()
mem_info = process.memory_info()
print(f"end Current memory usage: {mem_info.rss / (1024 * 1024):.2f} MB") # RSS - Resident Set Size
print("end!")
```
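Not a fix for the underlying leak, but a common way to keep long sweeps like this bounded is to run each training iteration in a child process, so the OS reclaims all of the child's memory when it exits. A minimal sketch of that pattern follows — `train_once` is a stand-in for one `build_model()`/`model.fit()` round from the repro above (TensorFlow would be imported inside it), and the helper name `run_isolated` is purely illustrative:

```python
import multiprocessing as mp

def train_once(iteration):
    # Stand-in for one build_model()/model.fit() iteration from the repro.
    # Importing tensorflow *inside* this function confines all of its
    # allocations to the child process.
    return f"model_{iteration} trained"

def run_isolated(fn, *args):
    """Run fn(*args) in a child process; its memory is freed when it exits."""
    # 'fork' keeps this sketch self-contained; with real TensorFlow you would
    # normally use the 'spawn' start method to avoid inheriting runtime state.
    ctx = mp.get_context("fork")
    parent_conn, child_conn = ctx.Pipe()

    def worker(conn):
        conn.send(fn(*args))  # only small picklable results cross back
        conn.close()

    p = ctx.Process(target=worker, args=(child_conn,))
    p.start()
    result = parent_conn.recv()
    p.join()
    return result

for iteration in range(3):
    print(run_isolated(train_once, iteration))
```

With this pattern the parent's RSS stays flat across iterations regardless of what the TensorFlow runtime caches internally, at the cost of per-iteration process startup.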
### Relevant log output
```shell
Iteration 1/500
start Current memory usage: 450.69 MB
Training model: sequential
end Current memory usage: 524.41 MB
Iteration 2/500
start Current memory usage: 524.52 MB
Training model: sequential
end Current memory usage: 564.97 MB
Iteration 3/500
start Current memory usage: 564.98 MB
Training model: sequential
end Current memory usage: 598.00 MB
Iteration 4/500
start Current memory usage: 598.03 MB
Training model: sequential
end Current memory usage: 624.69 MB
Iteration 5/500
start Current memory usage: 624.69 MB
Training model: sequential
end Current memory usage: 653.89 MB
Iteration 6/500
start Current memory usage: 653.91 MB
Training model: sequential
end Current memory usage: 679.45 MB
Iteration 7/500
start Current memory usage: 679.45 MB
Training model: sequential
end Current memory usage: 701.59 MB
Iteration 8/500
start Current memory usage: 701.59 MB
Training model: sequential
end Current memory usage: 726.83 MB
Iteration 9/500
start Current memory usage: 726.84 MB
Training model: sequential
end Current memory usage: 749.56 MB
Iteration 10/500
start Current memory usage: 749.56 MB
Training model: sequential
end Current memory usage: 782.56 MB
Iteration 11/500
start Current memory usage: 782.56 MB
Training model: sequential
end Current memory usage: 805.92 MB
Iteration 12/500
start Current memory usage: 805.92 MB
Training model: sequential
end Current memory usage: 833.17 MB
Iteration 13/500
start Current memory usage: 833.17 MB
Training model: sequential
end Current memory usage: 852.84 MB
Iteration 14/500
start Current memory usage: 852.84 MB
Training model: sequential
end Current memory usage: 875.05 MB
Iteration 15/500
start Current memory usage: 875.06 MB
Training model: sequential
end Current memory usage: 901.56 MB
Iteration 16/500
start Current memory usage: 901.56 MB
Training model: sequential
end Current memory usage: 930.62 MB
Iteration 17/500
start Current memory usage: 705.70 MB
Training model: sequential
end Current memory usage: 762.64 MB
Iteration 18/500
start Current memory usage: 762.70 MB
Training model: sequential
end Current memory usage: 798.06 MB
Iteration 19/500
start Current memory usage: 798.17 MB
Training model: sequential
end Current memory usage: 824.98 MB
Iteration 20/500
start Current memory usage: 824.98 MB
Training model: sequential
end Current memory usage: 850.34 MB
Iteration 21/500
start Current memory usage: 850.42 MB
Training model: sequential
end Current memory usage: 876.81 MB
Iteration 22/500
start Current memory usage: 876.81 MB
Training model: sequential
end Current memory usage: 904.02 MB
Iteration 23/500
start Current memory usage: 904.08 MB
Training model: sequential
end Current memory usage: 929.70 MB
Iteration 24/500
start Current memory usage: 929.73 MB
Training model: sequential
end Current memory usage: 952.33 MB
Iteration 25/500
start Current memory usage: 952.34 MB
Training model: sequential
end Current memory usage: 952.28 MB
Iteration 26/500
start Current memory usage: 952.47 MB
Training model: sequential
end Current memory usage: 980.39 MB
Iteration 27/500
start Current memory usage: 978.78 MB
Training model: sequential
end Current memory usage: 999.02 MB
Iteration 28/500
start Current memory usage: 999.05 MB
Training model: sequential
end Current memory usage: 1023.50 MB
Iteration 29/500
start Current memory usage: 1023.53 MB
Training model: sequential
end Current memory usage: 1047.80 MB
Iteration 30/500
start Current memory usage: 1047.83 MB
Training model: sequential
end Current memory usage: 1068.88 MB
Iteration 31/500
start Current memory usage: 1068.94 MB
Training model: sequential
end Current memory usage: 1095.78 MB
Iteration 32/500
start Current memory usage: 1095.78 MB
Training model: sequential
end Current memory usage: 1119.03 MB
Iteration 33/500
start Current memory usage: 1119.03 MB
Training model: sequential
end Current memory usage: 1039.41 MB
Iteration 34/500
start Current memory usage: 1022.78 MB
Training model: sequential
end Current memory usage: 1040.88 MB
Iteration 35/500
start Current memory usage: 1040.70 MB
Training model: sequential
end Current memory usage: 1054.58 MB
Iteration 36/500
start Current memory usage: 1054.58 MB
Training model: sequential
end Current memory usage: 1076.16 MB
Iteration 37/500
start Current memory usage: 1076.19 MB
Training model: sequential
end Current memory usage: 1097.02 MB
Iteration 38/500
start Current memory usage: 1097.03 MB
Training model: sequential
end Current memory usage: 1113.70 MB
Iteration 39/500
start Current memory usage: 1114.12 MB
Training model: sequential
end Current memory usage: 1140.30 MB
Iteration 40/500
start Current memory usage: 1140.33 MB
Training model: sequential
end Current memory usage: 1163.81 MB
Iteration 41/500
start Current memory usage: 1163.86 MB
Training model: sequential
end Current memory usage: 1195.83 MB
Iteration 42/500
start Current memory usage: 1195.83 MB
Training model: sequential
end Current memory usage: 1221.53 MB
Iteration 43/500
start Current memory usage: 1221.55 MB
Training model: sequential
end Current memory usage: 1231.09 MB
Iteration 44/500
start Current memory usage: 1231.14 MB
Training model: sequential
end Current memory usage: 1245.78 MB
Iteration 45/500
start Current memory usage: 1199.55 MB
Training model: sequential
end Current memory usage: 1221.59 MB
Iteration 46/500
start Current memory usage: 1221.59 MB
Training model: sequential
end Current memory usage: 1249.11 MB
Iteration 47/500
start Current memory usage: 1249.22 MB
Training model: sequential
end Current memory usage: 1275.50 MB
Iteration 48/500
start Current memory usage: 1259.83 MB
Training model: sequential
end Current memory usage: 1290.91 MB
Iteration 49/500
start Current memory usage: 1285.67 MB
Training model: sequential
end Current memory usage: 1296.75 MB
Iteration 50/500
start Current memory usage: 1296.75 MB
Training model: sequential
end Current memory usage: 1306.59 MB
Iteration 51/500
start Current memory usage: 1306.59 MB
Training model: sequential
end Current memory usage: 1287.53 MB
Iteration 52/500
start Current memory usage: 1287.53 MB
Training model: sequential
end Current memory usage: 1297.23 MB
Iteration 53/500
start Current memory usage: 1297.25 MB
Training model: sequential
end Current memory usage: 1285.45 MB
Iteration 54/500
start Current memory usage: 1285.45 MB
Training model: sequential
end Current memory usage: 1290.36 MB
Iteration 55/500
start Current memory usage: 1282.14 MB
Training model: sequential
end Current memory usage: 1302.14 MB
Iteration 56/500
start Current memory usage: 1302.14 MB
Training model: sequential
end Current memory usage: 1287.70 MB
Iteration 57/500
start Current memory usage: 1287.75 MB
Training model: sequential
end Current memory usage: 1282.77 MB
Iteration 58/500
start Current memory usage: 1271.38 MB
Training model: sequential
end Current memory usage: 1232.14 MB
Iteration 59/500
start Current memory usage: 1212.70 MB
Training model: sequential
end Current memory usage: 1201.16 MB
Iteration 60/500
start Current memory usage: 1200.53 MB
Training model: sequential
end Current memory usage: 1169.45 MB
Iteration 61/500
start Current memory usage: 1169.45 MB
Training model: sequential
end Current memory usage: 1209.73 MB
Iteration 62/500
start Current memory usage: 1207.19 MB
Training model: sequential
end Current memory usage: 1226.28 MB
Iteration 63/500
start Current memory usage: 1226.28 MB
Training model: sequential
end Current memory usage: 1231.45 MB
Iteration 64/500
start Current memory usage: 1210.11 MB
Training model: sequential
end Current memory usage: 1176.00 MB
Iteration 65/500
start Current memory usage: 1173.97 MB
Training model: sequential
end Current memory usage: 1201.42 MB
Iteration 66/500
start Current memory usage: 1201.42 MB
Training model: sequential
end Current memory usage: 1223.94 MB
Iteration 67/500
start Current memory usage: 1222.50 MB
Training model: sequential
end Current memory usage: 1229.80 MB
Iteration 68/500
start Current memory usage: 1227.14 MB
Training model: sequential
end Current memory usage: 1219.02 MB
Iteration 69/500
start Current memory usage: 1210.48 MB
Training model: sequential
end Current memory usage: 1247.17 MB
Iteration 70/500
start Current memory usage: 1245.94 MB
Training model: sequential
end Current memory usage: 1259.84 MB
Iteration 71/500
start Current memory usage: 1259.86 MB
Training model: sequential
end Current memory usage: 1286.39 MB
Iteration 72/500
start Current memory usage: 1286.53 MB
Training model: sequential
end Current memory usage: 1316.52 MB
Iteration 73/500
start Current memory usage: 1311.53 MB
Training model: sequential
end Current memory usage: 1338.72 MB
Iteration 74/500
start Current memory usage: 1338.75 MB
Training model: sequential
end Current memory usage: 1348.45 MB
Iteration 75/500
start Current memory usage: 1338.30 MB
Training model: sequential
end Current memory usage: 1354.97 MB
Iteration 76/500
start Current memory usage: 1353.83 MB
Training model: sequential
end Current memory usage: 1385.67 MB
Iteration 77/500
start Current memory usage: 1385.69 MB
Training model: sequential
end Current memory usage: 1408.83 MB
Iteration 78/500
start Current memory usage: 1408.88 MB
Training model: sequential
end Current memory usage: 1430.91 MB
Iteration 79/500
start Current memory usage: 1430.94 MB
Training model: sequential
end Current memory usage: 1443.62 MB
Iteration 80/500
start Current memory usage: 1428.00 MB
Training model: sequential
end Current memory usage: 1436.50 MB
Iteration 81/500
start Current memory usage: 1436.64 MB
Training model: sequential
end Current memory usage: 1454.66 MB
Iteration 82/500
start Current memory usage: 1440.91 MB
Training model: sequential
end Current memory usage: 1461.81 MB
Iteration 83/500
start Current memory usage: 1460.47 MB
Training model: sequential
end Current memory usage: 1481.19 MB
Iteration 84/500
start Current memory usage: 1481.19 MB
Training model: sequential
end Current memory usage: 1477.84 MB
Iteration 85/500
start Current memory usage: 1477.84 MB
Training model: sequential
end Current memory usage: 1493.55 MB
Iteration 86/500
start Current memory usage: 1493.58 MB
Training model: sequential
end Current memory usage: 1509.50 MB
Iteration 87/500
start Current memory usage: 1509.50 MB
Training model: sequential
end Current memory usage: 1543.94 MB
Iteration 88/500
start Current memory usage: 1542.83 MB
Training model: sequential
end Current memory usage: 1516.17 MB
Iteration 89/500
start Current memory usage: 1516.20 MB
Training model: sequential
end Current memory usage: 1470.17 MB
Iteration 90/500
start Current memory usage: 1470.22 MB
Training model: sequential
end Current memory usage: 1443.72 MB
Iteration 91/500
start Current memory usage: 1444.36 MB
Training model: sequential
end Current memory usage: 1486.23 MB
Iteration 92/500
start Current memory usage: 1476.41 MB
Training model: sequential
end Current memory usage: 1524.97 MB
Iteration 93/500
start Current memory usage: 1524.97 MB
Training model: sequential
end Current memory usage: 1534.94 MB
Iteration 94/500
start Current memory usage: 1551.98 MB
Training model: sequential
end Current memory usage: 1853.48 MB
Iteration 95/500
start Current memory usage: 1853.48 MB
Training model: sequential
end Current memory usage: 1790.12 MB
Iteration 96/500
start Current memory usage: 1792.27 MB
Training model: sequential
end Current memory usage: 1883.20 MB
Iteration 97/500
start Current memory usage: 1879.05 MB
Training model: sequential
end Current memory usage: 1759.69 MB
Iteration 98/500
start Current memory usage: 1669.66 MB
Training model: sequential
end Current memory usage: 1596.77 MB
Iteration 99/500
start Current memory usage: 1597.12 MB
Training model: sequential
end Current memory usage: 1568.83 MB
Iteration 100/500
start Current memory usage: 1532.98 MB
Training model: sequential
end Current memory usage: 1516.75 MB
Iteration 101/500
start Current memory usage: 1465.98 MB
Training model: sequential
end Current memory usage: 1486.66 MB
Iteration 102/500
start Current memory usage: 1483.34 MB
Training model: sequential
end Current memory usage: 1523.19 MB
Iteration 103/500
start Current memory usage: 1523.14 MB
Training model: sequential
end Current memory usage: 1532.77 MB
Iteration 104/500
start Current memory usage: 1531.14 MB
Training model: sequential
end Current memory usage: 1561.78 MB
Iteration 105/500
start Current memory usage: 1555.67 MB
Training model: sequential
end Current memory usage: 1586.70 MB
Iteration 106/500
start Current memory usage: 1586.75 MB
Training model: sequential
end Current memory usage: 1608.41 MB
Iteration 107/500
start Current memory usage: 1603.81 MB
Training model: sequential
end Current memory usage: 1629.00 MB
Iteration 108/500
start Current memory usage: 1629.05 MB
Training model: sequential
end Current memory usage: 1609.25 MB
Iteration 109/500
start Current memory usage: 1609.31 MB
Training model: sequential
end Current memory usage: 1630.09 MB
Iteration 110/500
start Current memory usage: 1629.20 MB
Training model: sequential
end Current memory usage: 1638.66 MB
Iteration 111/500
start Current memory usage: 1620.30 MB
Training model: sequential
end Current memory usage: 1642.81 MB
Iteration 112/500
start Current memory usage: 1642.94 MB
Training model: sequential
end Current memory usage: 1659.45 MB
Iteration 113/500
start Current memory usage: 1655.17 MB
Training model: sequential
end Current memory usage: 1687.80 MB
Iteration 114/500
start Current memory usage: 1673.33 MB
Training model: sequential
end Current memory usage: 1705.94 MB
Iteration 115/500
start Current memory usage: 1699.95 MB
Training model: sequential
end Current memory usage: 1708.22 MB
Iteration 116/500
start Current memory usage: 1707.88 MB
Training model: sequential
end Current memory usage: 1648.23 MB
Iteration 117/500
start Current memory usage: 1634.03 MB
Training model: sequential
end Current memory usage: 1670.97 MB
Iteration 118/500
start Current memory usage: 1671.97 MB
Training model: sequential
end Current memory usage: 1649.69 MB
Iteration 119/500
start Current memory usage: 1645.14 MB
Training model: sequential
end Current memory usage: 1698.64 MB
Iteration 120/500
start Current memory usage: 1699.69 MB
Training model: sequential
end Current memory usage: 1737.67 MB
Iteration 121/500
start Current memory usage: 1737.67 MB
Training model: sequential
end Current memory usage: 1738.05 MB
Iteration 122/500
start Current memory usage: 1721.47 MB
Training model: sequential
end Current memory usage: 1730.64 MB
Iteration 123/500
start Current memory usage: 1729.53 MB
Training model: sequential
end Current memory usage: 1766.12 MB
Iteration 124/500
start Current memory usage: 1761.22 MB
Training model: sequential
end Current memory usage: 1796.58 MB
Iteration 125/500
start Current memory usage: 1796.73 MB
Training model: sequential
end Current memory usage: 1709.02 MB
Iteration 126/500
start Current memory usage: 1721.67 MB
Training model: sequential
end Current memory usage: 1771.50 MB
Iteration 127/500
start Current memory usage: 1771.50 MB
Training model: sequential
end Current memory usage: 1777.38 MB
Iteration 128/500
start Current memory usage: 1757.58 MB
Training model: sequential
end Current memory usage: 1806.50 MB
Iteration 129/500
start Current memory usage: 1758.81 MB
Training model: sequential
end Current memory usage: 1812.45 MB
Iteration 130/500
start Current memory usage: 1812.86 MB
Training model: sequential
end Current memory usage: 1811.14 MB
Iteration 131/500
start Current memory usage: 1799.61 MB
Training model: sequential
end Current memory usage: 1835.33 MB
Iteration 132/500
start Current memory usage: 1716.38 MB
Training model: sequential
end Current memory usage: 1759.75 MB
Iteration 133/500
start Current memory usage: 1752.44 MB
Training model: sequential
end Current memory usage: 1818.41 MB
Iteration 134/500
start Current memory usage: 1811.42 MB
Training model: sequential
end Current memory usage: 1853.58 MB
Iteration 135/500
start Current memory usage: 1853.70 MB
Training model: sequential
end Current memory usage: 1858.50 MB
Iteration 136/500
start Current memory usage: 1858.56 MB
Training model: sequential
end Current memory usage: 1874.84 MB
Iteration 137/500
start Current memory usage: 1862.92 MB
Training model: sequential
end Current memory usage: 1768.23 MB
Iteration 138/500
start Current memory usage: 1762.73 MB
Training model: sequential
end Current memory usage: 1843.39 MB
Iteration 139/500
start Current memory usage: 1843.52 MB
Training model: sequential
end Current memory usage: 1885.88 MB
Iteration 140/500
start Current memory usage: 1885.95 MB
Training model: sequential
end Current memory usage: 1924.86 MB
Iteration 141/500
start Current memory usage: 1925.05 MB
Training model: sequential
end Current memory usage: 1946.80 MB
Iteration 142/500
start Current memory usage: 1946.69 MB
Training model: sequential
end Current memory usage: 1977.53 MB
Iteration 143/500
start Current memory usage: 1974.27 MB
Training model: sequential
end Current memory usage: 1995.17 MB
Iteration 144/500
start Current memory usage: 1992.41 MB
Training model: sequential
end Current memory usage: 1984.45 MB
Iteration 145/500
start Current memory usage: 1963.42 MB
Training model: sequential
end Current memory usage: 1947.31 MB
Iteration 146/500
start Current memory usage: 1944.47 MB
Training model: sequential
end Current memory usage: 1996.00 MB
Iteration 147/500
start Current memory usage: 1996.08 MB
Training model: sequential
end Current memory usage: 2008.41 MB
Iteration 148/500
start Current memory usage: 1999.69 MB
Training model: sequential
end Current memory usage: 1951.30 MB
Iteration 149/500
start Current memory usage: 1942.98 MB
Training model: sequential
end Current memory usage: 1992.28 MB
Iteration 150/500
start Current memory usage: 1982.86 MB
Training model: sequential
end Current memory usage: 2008.83 MB
Iteration 151/500
start Current memory usage: 2008.83 MB
Training model: sequential
end Current memory usage: 1946.42 MB
Iteration 152/500
start Current memory usage: 1946.92 MB
Training model: sequential
end Current memory usage: 1992.48 MB
Iteration 153/500
start Current memory usage: 1979.52 MB
Training model: sequential
end Current memory usage: 2035.66 MB
Iteration 154/500
start Current memory usage: 2023.91 MB
Training model: sequential
end Current memory usage: 2030.31 MB
Iteration 155/500
start Current memory usage: 1974.39 MB
Training model: sequential
end Current memory usage: 2029.30 MB
Iteration 156/500
start Current memory usage: 1997.42 MB
Training model: sequential
end Current memory usage: 2000.31 MB
Iteration 157/500
start Current memory usage: 1964.38 MB
Training model: sequential
end Current memory usage: 1979.45 MB
Iteration 158/500
start Current memory usage: 1973.12 MB
Training model: sequential
```
| stat:awaiting tensorflower,comp:runtime,type:performance,TF 2.18 | low | Critical |
2,697,279,599 | ant-design | Can the className injected for components via ConfigProvider also support the component's popupClassName? | ### What problem does this feature solve?
At the moment, the className injected for a component via ConfigProvider is only added to the component itself. For a component like Select, however, the dropdown list it carries is rendered detached from the Select element, so the injected className cannot override the dropdown popup's styles; you have to pass popupClassName to the Select separately to override the dropdown list's styles.
### What does the proposed API look like?
ConfigProvider should support injecting popupClassName into components, so that developers can override the dropdown list's styles.
| Inactive,👷🏻♂️ Someone working on it | low | Minor |
2,697,381,115 | angular | ResourceStatus of the resource should be string literal instead of enum | ### Which @angular/* package(s) are relevant/related to the feature request?
core
### Description
Enums in TS are not great. Even though there are use cases where they are appropriate, most of the time string literals do the job better. I believe `ResourceStatus` should be a string literal union.
Keeping `ResourceStatus` as an `enum` will result in increased boilerplate without any added benefit:
Example using enum:
```ts
import { ResourceStatus } from '@angular/core'; // this import could be removed with string literals
@Component({
template: `
<!-- this could simply compare against a string literal without any loss of strict type checking -->
@if (countriesResource.status() === status.Resolved) {
...
}
`,
})
export class AppComponent {
readonly #http = inject(HttpClient);
  status = ResourceStatus; // this property could be removed with string literals
countriesResource = rxResource({
loader: () =>
this.#http.get<Country[]>('https://restcountries.com/v3.1/all'),
});
}
```
Example using string literal:
```ts
@Component({
template: `
@if (countriesResource.status() === 'resolved') {
...
}
`,
})
export class AppComponent {
readonly #http = inject(HttpClient);
countriesResource = rxResource({
loader: () =>
this.#http.get<Country[]>('https://restcountries.com/v3.1/all'),
});
}
```
There is already confusion about how to check the value of the status in the template [here](https://github.com/angular/angular/issues/58854#issuecomment-2498674168)
### Proposed solution
```ts
export type ResourceStatus =
| 'idle'
| 'error'
| 'loading'
| 'reloading'
| 'resolved'
| 'local';
```
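To illustrate the benefit, here is a sketch of my own (the `statusLabel` function is hypothetical, not an Angular API) showing that the proposed string-literal union keeps full exhaustiveness checking in plain TypeScript, with no enum import and no extra class property:

```typescript
type ResourceStatus =
  | 'idle'
  | 'error'
  | 'loading'
  | 'reloading'
  | 'resolved'
  | 'local';

// A switch over the union is still exhaustively checked: adding a new
// literal to ResourceStatus makes this function fail to compile until
// the new case is handled.
function statusLabel(status: ResourceStatus): string {
  switch (status) {
    case 'idle':
    case 'local':
      return 'waiting';
    case 'loading':
    case 'reloading':
      return 'busy';
    case 'resolved':
      return 'done';
    case 'error':
      return 'failed';
  }
}

console.log(statusLabel('resolved')); // prints "done"
```

So strict type checking is preserved while template comparisons shrink to `status() === 'resolved'`.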
### Alternatives considered
none | area: core,core: reactivity,cross-cutting: signals | low | Critical |
2,697,522,332 | go | x/net: incorrect parsing of rt_msghdr errno on Darwin | ### Go version
x/net v0.31.0
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/spike/Library/Caches/go-build'
GOENV='/Users/spike/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/spike/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/spike/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/[email protected]/1.21.8/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/opt/homebrew/Cellar/[email protected]/1.21.8/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.21.8'
GCCGO='gccgo'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/53/zffdtv3x7lg_pyhrk85p7_nw0000gn/T/go-build2233440794=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I read the source and spotted an error.
### What did you see happen?
I believe the parsing of route messages in https://go.googlesource.com/net/+/refs/tags/v0.31.0/route/route_classic.go is incorrect for Darwin (and presumably other BSD variants). In particular, it parses the errno from bytes `28:32` of the header.
However, the header (https://github.com/apple-oss-distributions/xnu/blob/33de042d024d46de5ff4e89f2471de6608e37fa4/bsd/net/route.h#L158) looks like:
```
struct rt_msghdr {
u_short rtm_msglen; /* to skip over non-understood messages */
u_char rtm_version; /* future binary compatibility */
u_char rtm_type; /* message type */
u_short rtm_index; /* index for associated ifp */
int rtm_flags; /* flags, incl. kern & message, e.g. DONE */
int rtm_addrs; /* bitmask identifying sockaddrs in msg */
pid_t rtm_pid; /* identify sender */
int rtm_seq; /* for sender to identify action */
int rtm_errno; /* why failed */
int rtm_use; /* from rtentry */
u_int32_t rtm_inits; /* which metrics we are initializing */
struct rt_metrics rtm_rmx; /* metrics themselves */
};
```
`errno` comes immediately after `rtm_seq`, which is sliced as `20:24`, meaning the errno should be sliced as `24:28`.
### What did you expect to see?
errno should be parsed from bytes `24:28` on Darwin | NeedsInvestigation | low | Critical |
2,697,531,288 | pytorch | [inductor][cpu] AMP CPP wrapper performance regression in 2024-11-25 nightly release | ### 🐛 Describe the bug
<p>amp static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>huggingface</td>
<td>MegatronBertForQuestionAnswering</td>
<td>multiple</td>
<td>8</td>
<td>2.056239</td>
<td>0.167259388</td>
<td>0.34392527672173206</td>
<td>26.978458</td>
<td>8</td>
<td>2.679464</td>
<td>0.129058642</td>
<td>0.34580798512788796</td>
<td>25.750905</td>
<td>0.77</td>
<td>1.01</td>
<td>0.77</td>
<td>0.95</td>
</tr>
</tbody>
</table>
<p>amp dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>hf_Bert_large</td>
<td>multiple</td>
<td>1</td>
<td>1.033177</td>
<td>0.084021655</td>
<td>0.086809241447935</td>
<td>26.319139</td>
<td>1</td>
<td>2.77511</td>
<td>0.033001139</td>
<td>0.09158179085029</td>
<td>25.424551</td>
<td>0.37</td>
<td>1.05</td>
<td>0.39</td>
<td>0.97</td>
</tr>
<tr>
<td>huggingface</td>
<td>MegatronBertForQuestionAnswering</td>
<td>multiple</td>
<td>8</td>
<td>2.103844</td>
<td>0.164229288</td>
<td>0.345512802183072</td>
<td>38.84587</td>
<td>8</td>
<td>2.653344</td>
<td>0.12995433399999998</td>
<td>0.34481355239289596</td>
<td>37.212765</td>
<td>0.79</td>
<td>1.0</td>
<td>0.79</td>
<td>0.96</td>
</tr>
</tbody>
</table>
the last good commit: 2fc692b3dd42bf92c4f92dcec862bae7ae1c7995
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance huggingface MegatronBertForQuestionAnswering amp first static cpp
Testing with cpp wrapper.
Testing with inductor.
multi-threads testing....
/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
loading model: 0it [00:03, ?it/s]
cpu eval MegatronBertForQuestionAnswering
running benchmark: 100%|██████████████████████████████████████████████████████████████████| 50/50 [00:23<00:00, 2.13it/s]
2.632x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,MegatronBertForQuestionAnswering,8,2.632258,128.201069,28.769934,0.936615,1302.990029,1391.169126,726,1,0,0,0,0,0
```
the bad commit: 819b0ebd944c5fca4188541adcbd53f87a2ce534
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance huggingface MegatronBertForQuestionAnswering amp first static cpp
Testing with cpp wrapper.
Testing with inductor.
multi-threads testing....
/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
loading model: 0it [00:03, ?it/s]
cpu eval MegatronBertForQuestionAnswering
running benchmark: 100%|██████████████████████████████████████████████████████████████████| 50/50 [00:25<00:00, 1.98it/s]
1.996x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,MegatronBertForQuestionAnswering,8,1.995862,168.099244,29.314211,0.726404,1303.114138,1793.925120,726,1,0,0,0,0,0
```
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>766a5e3a</td>
<td>main</td>
<td>766a5e3a</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>c513f01516673898d551818c8ca6085cf07e4006</td>
<td>main</td>
<td>2fc692b3dd42bf92c4f92dcec862bae7ae1c7995</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.5.0a0+332760d</td>
<td>main</td>
<td>2.5.0a0+332760d</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance huggingface MegatronBertForQuestionAnswering amp first static cpp
Suspected guilty commit: https://github.com/pytorch/pytorch/commit/819b0ebd944c5fca4188541adcbd53f87a2ce534
[huggingface-MegatronBertForQuestionAnswering-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/17930843/huggingface-MegatronBertForQuestionAnswering-inference-amp-static-cpp-multiple-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129 | oncall: pt2,oncall: cpu inductor | low | Critical |
2,697,532,571 | rust | Missed optimization when looping over bytes of a value | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code, which contains three functions that check whether all the bits of a `u64` are ones:
```rust
#[no_mangle]
fn ne_bytes(input: u64) -> bool {
let bytes = input.to_ne_bytes();
bytes.iter().all(|x| *x == !0)
}
#[no_mangle]
fn black_box_ne_bytes(input: u64) -> bool {
let bytes = input.to_ne_bytes();
let bytes = std::hint::black_box(bytes);
bytes.iter().all(|x| *x == !0)
}
#[no_mangle]
fn direct(input: u64) -> bool {
input == !0
}
```
I expected to see this happen: `ne_bytes()` should be optimized to the same thing as `direct()`, while `black_box_ne_bytes()` should be optimized slightly worse
Instead, this happened: I got the following assembly, where `ne_bytes()` is somehow optimized worse than `black_box_ne_bytes()`
```asm
ne_bytes:
mov rax, rdi
not rax
shl rax, 8
sete cl
shr rdi, 56
cmp edi, 255
setae al
and al, cl
ret
black_box_ne_bytes:
mov qword ptr [rsp - 8], rdi
lea rax, [rsp - 8]
cmp qword ptr [rsp - 8], -1
sete al
ret
direct:
cmp rdi, -1
sete al
ret
```
[Godbolt](https://godbolt.org/z/31nvsa3Ye)
### Meta
Reproducible on godbolt with stable `rustc 1.82.0 (f6e511eec 2024-10-15)` and nightly `rustc 1.85.0-nightly (7db7489f9 2024-11-25)` | A-LLVM,T-compiler,llvm-fixed-upstream,C-optimization | low | Critical |
2,697,627,246 | PowerToys | Give the option to omit minimized windows when cycling through windows of the same zone | ### Description of the new feature / enhancement
When the user presses `win+pgup/pgdn` on the currently focused window, which belongs to the same zone as other windows, it would be nice to be able to omit any minimized windows (of the same zone) from the cycle.
### Scenario when this would be used?
I often want to switch quickly between two central windows that I use frequently and that need more space, so they can't be placed next to each other at the same time. Meanwhile, there might be other windows in the same zone that are minimized at the time. Those should be allowed to stay minimized!
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,697,635,067 | three.js | WebGPURenderer: Models with morphs are less performant | ### Description
I'm trying to improve the rendering performance of [three-vrm](https://github.com/pixiv/three-vrm) on WebGPURenderer.
During my profiling, I found that models with morphs have much worse GPU performance than the same model without morphs.
### Reproduction steps
1. Load a GLTF model with morphs
2. Render
3. See the value `renderer.info.render.timestamp`
### Code
```js
const loader = new GLTFLoader();
const gltf = await loader.loadAsync(url); // a model with morphs
scene.add(gltf.scene);
```
### Live example
Here is my fixture.
This loads the same model in four different variants (without skins and morphs, with morphs only, with skins only, and with both) and measures runtime CPU/GPU performance. Each model is rendered for 100 frames, and the profiling is repeated for five sets.
On the "result" row of the textarea, the second column indicates the model name, the third is the mean CPU time, and the fourth is the mean GPU time.
I used `renderer.info.render.timestamp` to profile the GPU performance on WebGPU; I'm not sure how reliable this value is.
https://three-webgpu-profile-morphs.glitch.me/
https://glitch.com/edit/#!/three-webgpu-profile-morphs
Here is the result on my machine (Alienware x16 r1, Intel i9-13900HK, NVIDIA RTX 4070 Laptop):
model | webgl cpu (ms) | webgpu cpu (ms) | webgl gpu (ms) | webgpu gpu (ms)
-- | -- | -- | -- | --
AvatarSample_B_.glb | 0.5560000007 | 0.7353999989 | 0.160837632 | 0.52953088
AvatarSample_B_morph.glb | 0.4741999991 | 0.7295999993 | 0.08183808 | 1.796866048
AvatarSample_B_skin.glb | 0.5308000017 | 0.7617999989 | 0.151005184 | 0.383254528
AvatarSample_B_both.glb | 0.6412000004 | 0.8019999999 | 0.12335872 | 1.027211264
### Screenshots
_No response_
### Version
r170
### Device
Desktop
### Browser
Chrome
### OS
Windows | WebGPU,Needs Investigation | low | Major |
2,697,636,974 | react-native | flexBasis Not Reflecting Updated State Value in Style | ### Description
When dynamically updating a state variable (`flexBasisValue`) with percentage values and using it in the `style` prop of components, the updated value is not reflected visually. This issue occurs in the latest React Native (`0.76.2`) but works as expected in React Native `0.74.5`.
### Steps to reproduce
1. `npm i`
2. `npm start -c`
### React Native Version
0.76.2
### Affected Platforms
Runtime - Android, Runtime - iOS
### Areas
Other (please specify)
### Output of `npx react-native info`
```text
for react-native version: 0.74.5
System:
OS: macOS 14.4
CPU: (8) arm64 Apple M1
Memory: 575.59 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.10.0
path: ~/.nvm/versions/node/v20.10.0/bin/node
Yarn:
version: 1.22.21
path: /opt/homebrew/bin/yarn
npm:
version: 10.2.3
path: ~/.nvm/versions/node/v20.10.0/bin/npm
Watchman:
version: 2024.08.26.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/rajatchaudhary/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.21829.142.2421.12409432
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 3.1.2
path: /Users/rajatchaudhary/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.5
wanted: 0.74.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
info React Native v0.76.3 is now available (your project is running on v0.74.5).
info Changelog: https://github.com/facebook/react-native/releases/tag/v0.76.3
info Diff: https://react-native-community.github.io/upgrade-helper/?from=0.74.5
info For more info, check out "https://reactnative.dev/docs/upgrading?os=macos".
for React Native version : 0.76.2
System:
OS: macOS 14.4
CPU: (8) arm64 Apple M1
Memory: 79.78 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.10.0
path: ~/.nvm/versions/node/v20.10.0/bin/node
Yarn:
version: 1.22.21
path: /opt/homebrew/bin/yarn
npm:
version: 10.2.3
path: ~/.nvm/versions/node/v20.10.0/bin/npm
Watchman:
version: 2024.08.26.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/rajatchaudhary/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.21829.142.2421.12409432
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 3.1.2
path: /Users/rajatchaudhary/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.2
wanted: latest
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.2
wanted: 0.76.2
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
info React Native v0.76.3 is now available (your project is running on v0.76.2).
info Changelog: https://github.com/facebook/react-native/releases/tag/v0.76.3
info Diff: https://react-native-community.github.io/upgrade-helper/?from=0.76.2&to=0.76.3
info For more info, check out "https://reactnative.dev/docs/upgrading?os=macos".
```
### Stacktrace or Logs
```text
NA
```
### Reproducer
for RN 0.74.5 - https://github.com/rajat693/expo51-flexbasis, for RN 0.76.2 - https://github.com/rajat693/expo52-flexbasis
### Screenshots and Videos
_No response_ | Impact: Regression,Type: New Architecture,Impact: Bug,0.76 | low | Major |
2,697,641,318 | ollama | Ddos of parsing markdown in frontend & images | ### What is the issue?
If the frontend converts the markdown string into HTML elements on every streamed token, it requests the same images over and over again — each new token restarts the download of every image in the message.
So markdown conversion should be batched, or done on the backend, to avoid DDoS-like load caused by a buggy JavaScript version.
A rate limiter should also be applied on your image server; if this gets triggered, the load on third-party servers can be amplified very easily — YouTube is one example.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
0.4.1 | bug | low | Minor |
2,697,709,265 | flutter | App Crash when hover or drag select ellipsis word | ### Steps to reproduce
1. Create a demo app with `flutter create my_app`
2. Add a `SelectionArea` which contains an ellipsized `Text`, like this:
```dart
const SelectionArea(
child: SizedBox(
width: 114,
child: Text(
'www.myapp.com',
overflow: TextOverflow.ellipsis,
),
)),
```
3. Run or debug, then hover over or drag-select the ellipsized text — the app crashes
4. This also crashes on the newest Flutter SDK, 3.24.5
### Expected results
The app should not crash.
### Actual results
The app crashes.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// TRY THIS: Try running your application with "flutter run". You'll see
// the application has a blue toolbar. Then, without quitting the app,
// try changing the seedColor in the colorScheme below to Colors.green
// and then invoke "hot reload" (save your changes or press the "hot
// reload" button in a Flutter-supported IDE, or press "r" if you used
// the command line to start the app).
//
// Notice that the counter didn't reset back to zero; the application
// state is not lost during the reload. To reset the state, use hot
// restart instead.
//
// This works for code too, not just values: Most code changes can be
// tested with just a hot reload.
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
_counter++;
});
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// TRY THIS: Try changing the color here to a specific color (to
// Colors.amber, perhaps?) and trigger a hot reload to see the AppBar
// change color while the other colors stay the same.
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
//
// TRY THIS: Invoke "debug painting" (choose the "Toggle Debug Paint"
// action in the IDE, or press "p" in the console), to see the
// wireframe for each widget.
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const SelectionArea(
child: SizedBox(
width: 114,
child: Text(
'www.myapp.com',
overflow: TextOverflow.ellipsis,
),
)),
const SizedBox(height: 20),
const Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
-------------------------------------
Translated Report (Full Report Below)
-------------------------------------
Process: my_app [5481]
Path: /Users/USER/*/my_app.app/Contents/MacOS/my_app
Identifier: com.example.myApp
Version: 1.0.0 (1)
Code Type: X86-64 (Native)
Parent Process: launchd [1]
User ID: 501
Date/Time: 2024-11-27 16:47:07.9150 +0800
OS Version: macOS 12.6.8 (21G725)
Report Version: 12
Bridge OS Version: 7.6 (20P6072)
Anonymous UUID: EFB8A0A0-E255-EC6B-974D-FED822D3D81D
Sleep/Wake UUID: 405D25A8-A0AB-4A91-B782-51DC47F39DC8
Time Awake Since Boot: 280000 seconds
Time Since Wake: 9651 seconds
System Integrity Protection: enabled
Crashed Thread: 5 io.flutter.ui
Exception Type: EXC_BAD_INSTRUCTION (SIGILL)
Exception Codes: 0x0000000000000001, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Termination Reason: Namespace SIGNAL, Code 4 Illegal instruction: 4
Terminating Process: exc handler [5481]
Thread 0:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x7ff81df8496a mach_msg_trap + 10
1 libsystem_kernel.dylib 0x7ff81df84cd8 mach_msg + 56
2 CoreFoundation 0x7ff81e08834d __CFRunLoopServiceMachPort + 319
3 CoreFoundation 0x7ff81e0869d8 __CFRunLoopRun + 1276
4 CoreFoundation 0x7ff81e085e1c CFRunLoopRunSpecific + 562
5 HIToolbox 0x7ff826d335e6 RunCurrentEventLoopInMode + 292
6 HIToolbox 0x7ff826d3334a ReceiveNextEventCommon + 594
7 HIToolbox 0x7ff826d330e5 _BlockUntilNextEventMatchingListInModeWithFilter + 70
8 AppKit 0x7ff820ac0f6d _DPSNextEvent + 927
9 AppKit 0x7ff820abf62a -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1394
10 AppKit 0x7ff820ab1cd9 -[NSApplication run] + 586
11 AppKit 0x7ff820a85c57 NSApplicationMain + 817
12 my_app 0x10ab1a649 main + 9 (AppDelegate.swift:5)
13 dyld 0x11a9d952e start + 462
Thread 1:
0 libsystem_pthread.dylib 0x7ff81dfbcf48 start_wqthread + 0
Thread 2:
0 libsystem_pthread.dylib 0x7ff81dfbcf48 start_wqthread + 0
Thread 3:
0 libsystem_pthread.dylib 0x7ff81dfbcf48 start_wqthread + 0
Thread 4:
0 libsystem_pthread.dylib 0x7ff81dfbcf48 start_wqthread + 0
Thread 5 Crashed:: io.flutter.ui
0 FlutterMacOS 0x10e57d550 std::_fl::__function::__func<skia::textlayout::TextLine::getGlyphPositionAtCoordinate(float)::$_0::operator()(skia::textlayout::Run const*, float, skia::textlayout::SkRange<unsigned long>, float*) const::'lambda'(skia::textlayout::SkRange<unsigned long>, skia::textlayout::TextStyle const&, skia::textlayout::TextLine::ClipContext const&), std::_fl::allocator<skia::textlayout::TextLine::getGlyphPositionAtCoordinate(float)::$_0::operator()(skia::textlayout::Run const*, float, skia::textlayout::SkRange<unsigned long>, float*) const::'lambda'(skia::textlayout::SkRange<unsigned long>, skia::textlayout::TextStyle const&, skia::textlayout::TextLine::ClipContext const&)>, void (skia::textlayout::SkRange<unsigned long>, skia::textlayout::TextStyle const&, skia::textlayout::TextLine::ClipContext const&)>::operator()(skia::textlayout::SkRange<unsigned long>&&, skia::textlayout::TextStyle const&, skia::textlayout::TextLine::ClipContext const&) + 1264
1 FlutterMacOS 0x10e578262 skia::textlayout::TextLine::iterateThroughSingleRunByStyles(skia::textlayout::TextLine::TextAdjustment, skia::textlayout::Run const*, float, skia::textlayout::SkRange<unsigned long>, skia::textlayout::StyleType, std::_fl::function<void (skia::textlayout::SkRange<unsigned long>, skia::textlayout::TextStyle const&, skia::textlayout::TextLine::ClipContext const&)> const&) const + 1778
2 FlutterMacOS 0x10e57cea5 std::_fl::__function::__func<skia::textlayout::TextLine::getGlyphPositionAtCoordinate(float)::$_0, std::_fl::allocator<skia::textlayout::TextLine::getGlyphPositionAtCoordinate(float)::$_0>, bool (skia::textlayout::Run const*, float, skia::textlayout::SkRange<unsigned long>, float*)>::operator()(skia::textlayout::Run const*&&, float&&, skia::textlayout::SkRange<unsigned long>&&, float*&&) + 181
3 FlutterMacOS 0x10e57561b skia::textlayout::TextLine::iterateThroughVisualRuns(bool, std::_fl::function<bool (skia::textlayout::Run const*, float, skia::textlayout::SkRange<unsigned long>, float*)> const&) const + 875
4 FlutterMacOS 0x10e57898f skia::textlayout::TextLine::getGlyphPositionAtCoordinate(float) + 175
5 FlutterMacOS 0x10e56f693 skia::textlayout::ParagraphImpl::getClosestUTF16GlyphInfoAt(float, float, skia::textlayout::Paragraph::GlyphInfo*) + 227
6 FlutterMacOS 0x10e6e4f56 flutter::Paragraph::getClosestGlyphInfo(double, double, _Dart_Handle*) const + 54
7 ??? 0x119506abb ???
8 ??? 0x122ef8046 ???
9 ??? 0x122ef7eb2 ???
10 ??? 0x122ef7d04 ???
11 ??? 0x122ef7929 ???
12 ??? 0x122ef7388 ???
13 ??? 0x122ed14d2 ???
14 ??? 0x122ee5cb8 ???
15 ??? 0x122ee6572 ???
16 ??? 0x122ef7195 ???
17 ??? 0x122ee5cb8 ???
18 ??? 0x122ed14d2 ???
19 ??? 0x122ee5cb8 ???
20 ??? 0x122ee6572 ???
21 ??? 0x122ee5cb8 ???
22 ??? 0x122ed14d2 ???
23 ??? 0x122ef066c ???
24 ??? 0x122ee7a1a ???
25 ??? 0x122ef03d6 ???
26 ??? 0x122ef006a ???
27 ??? 0x122ed14d2 ???
28 ??? 0x122eebdd6 ???
29 ??? 0x122ee7a1a ???
30 ??? 0x122eebc32 ???
31 ??? 0x122ed14d2 ???
32 ??? 0x122eeb8ec ???
33 ??? 0x122ee7a1a ???
34 ??? 0x122eeb586 ???
35 ??? 0x122eeb21a ???
36 ??? 0x122ed14d2 ???
37 ??? 0x122ee5cb8 ???
38 ??? 0x122ed14d2 ???
39 ??? 0x122ee5cb8 ???
40 ??? 0x122ed14d2 ???
41 ??? 0x122eeb0b1 ???
42 ??? 0x122ee5cb8 ???
43 ??? 0x122ed14d2 ???
44 ??? 0x122ee5cb8 ???
45 ??? 0x122ed14d2 ???
46 ??? 0x122ee5cb8 ???
47 ??? 0x122ed14d2 ???
48 ??? 0x122eeac8c ???
49 ??? 0x122ee909c ???
50 ??? 0x122ee7a1a ???
51 ??? 0x122ee8d36 ???
52 ??? 0x122ee89ca ???
53 ??? 0x122ed14d2 ???
54 ??? 0x122ee5cb8 ???
55 ??? 0x122ed14d2 ???
56 ??? 0x122ee5cb8 ???
57 ??? 0x122ee8882 ???
58 ??? 0x122ee7a1a ???
59 ??? 0x122ee86e5 ???
60 ??? 0x122ee82fa ???
61 ??? 0x122ee5cb8 ???
62 ??? 0x122ee8882 ???
63 ??? 0x122ee7a1a ???
64 ??? 0x122ee86e5 ???
65 ??? 0x122ee82fa ???
66 ??? 0x122ee5cb8 ???
67 ??? 0x122ed14d2 ???
68 ??? 0x122ee5cb8 ???
69 ??? 0x122ed14d2 ???
70 ??? 0x122ee5cb8 ???
71 ??? 0x122ed14d2 ???
72 ??? 0x122ee816c ???
73 ??? 0x122ee5cb8 ???
74 ??? 0x122ed14d2 ???
75 ??? 0x122ee7fe7 ???
76 ??? 0x122ee7a1a ???
77 ??? 0x122ee6c88 ???
78 ??? 0x122ed14d2 ???
79 ??? 0x122ee5cb8 ???
80 ??? 0x122ed14d2 ???
81 ??? 0x122ee68ee ???
82 ??? 0x122ee5cb8 ???
83 ??? 0x122ee6572 ???
84 ??? 0x122ee5cb8 ???
85 ??? 0x122ed14d2 ???
86 ??? 0x122ee5cb8 ???
87 ??? 0x122ee62ce ???
88 ??? 0x122ed14d2 ???
89 ??? 0x122ee5cb8 ???
90 ??? 0x122ed14d2 ???
91 ??? 0x122ee5cb8 ???
92 ??? 0x122ed14d2 ???
93 ??? 0x122ee5cb8 ???
94 ??? 0x122ee5ef5 ???
95 ??? 0x122ee5cb8 ???
96 ??? 0x122ed14d2 ???
97 ??? 0x122ee5cb8 ???
98 ??? 0x122ed14d2 ???
99 ??? 0x122ee5cb8 ???
100 ??? 0x122ed14d2 ???
101 ??? 0x122ee5cb8 ???
102 ??? 0x122ed14d2 ???
103 ??? 0x122ed0b9b ???
104 ??? 0x122ed0965 ???
105 ??? 0x122ecf887 ???
106 ??? 0x122ece01a ???
107 ??? 0x122ecdd16 ???
108 ??? 0x1215035ae ???
109 ??? 0x12150341a ???
110 ??? 0x11cda54e9 ???
111 ??? 0x11cda7410 ???
112 ??? 0x11cda72d0 ???
113 ??? 0x11cda7229 ???
114 ??? 0x119502e46 ???
115 FlutterMacOS 0x10e7d2218 dart::DartEntry::InvokeFunction(dart::Function const&, dart::Array const&, dart::Array const&) + 376
116 FlutterMacOS 0x10e7d28ad dart::DartEntry::InvokeCallable(dart::Thread*, dart::Function const&, dart::Array const&, dart::Array const&) + 301
117 FlutterMacOS 0x10ebe9d18 Dart_InvokeClosure + 1080
118 FlutterMacOS 0x10e6a96b1 tonic::DartInvoke(_Dart_Handle*, std::initializer_list<_Dart_Handle*>) + 33
119 FlutterMacOS 0x10e6eb21a flutter::PlatformConfiguration::DispatchPointerDataPacket(flutter::PointerDataPacket const&) + 218
120 FlutterMacOS 0x10e737e3c flutter::RuntimeController::DispatchPointerDataPacket(flutter::PointerDataPacket const&) + 140
121 FlutterMacOS 0x10e66fd86 flutter::DefaultPointerDataDispatcher::DispatchPacket(std::_fl::unique_ptr<flutter::PointerDataPacket, std::_fl::default_delete<flutter::PointerDataPacket> >, unsigned long long) + 118
122 FlutterMacOS 0x10e66a4b9 flutter::Engine::DispatchPointerDataPacket(std::_fl::unique_ptr<flutter::PointerDataPacket, std::_fl::default_delete<flutter::PointerDataPacket> >, unsigned long long) + 121
123 FlutterMacOS 0x10e69078b std::_fl::__function::__func<fml::internal::CopyableLambda<flutter::Shell::OnPlatformViewDispatchPointerDataPacket(std::_fl::unique_ptr<flutter::PointerDataPacket, std::_fl::default_delete<flutter::PointerDataPacket> >)::$_0>, std::_fl::allocator<fml::internal::CopyableLambda<flutter::Shell::OnPlatformViewDispatchPointerDataPacket(std::_fl::unique_ptr<flutter::PointerDataPacket, std::_fl::default_delete<flutter::PointerDataPacket> >)::$_0> >, void ()>::operator()() + 75
124 FlutterMacOS 0x10debc6ba fml::MessageLoopImpl::FlushTasks(fml::FlushType) + 266
125 FlutterMacOS 0x10dec4eac fml::MessageLoopDarwin::OnTimerFire(__CFRunLoopTimer*, fml::MessageLoopDarwin*) + 44
126 CoreFoundation 0x7ff81e0a0f49 __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ + 20
127 CoreFoundation 0x7ff81e0a0a38 __CFRunLoopDoTimer + 923
128 CoreFoundation 0x7ff81e0a05a8 __CFRunLoopDoTimers + 307
129 CoreFoundation 0x7ff81e086cb6 __CFRunLoopRun + 2010
130 CoreFoundation 0x7ff81e085e1c CFRunLoopRunSpecific + 562
131 FlutterMacOS 0x10dec50fd fml::MessageLoopDarwin::Run() + 141
132 FlutterMacOS 0x10debc4e5 fml::MessageLoopImpl::DoRun() + 37
133 FlutterMacOS 0x10dec3210 void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0> >(void*) + 176
134 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
135 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 6:: io.flutter.raster
0 libsystem_kernel.dylib 0x7ff81df8496a mach_msg_trap + 10
1 libsystem_kernel.dylib 0x7ff81df84cd8 mach_msg + 56
2 CoreFoundation 0x7ff81e08834d __CFRunLoopServiceMachPort + 319
3 CoreFoundation 0x7ff81e0869d8 __CFRunLoopRun + 1276
4 CoreFoundation 0x7ff81e085e1c CFRunLoopRunSpecific + 562
5 FlutterMacOS 0x10dec50fd fml::MessageLoopDarwin::Run() + 141
6 FlutterMacOS 0x10debc4e5 fml::MessageLoopImpl::DoRun() + 37
7 FlutterMacOS 0x10dec3210 void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0> >(void*) + 176
8 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
9 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 7:: io.flutter.io
0 libsystem_kernel.dylib 0x7ff81df8496a mach_msg_trap + 10
1 libsystem_kernel.dylib 0x7ff81df84cd8 mach_msg + 56
2 CoreFoundation 0x7ff81e08834d __CFRunLoopServiceMachPort + 319
3 CoreFoundation 0x7ff81e0869d8 __CFRunLoopRun + 1276
4 CoreFoundation 0x7ff81e085e1c CFRunLoopRunSpecific + 562
5 FlutterMacOS 0x10dec50fd fml::MessageLoopDarwin::Run() + 141
6 FlutterMacOS 0x10debc4e5 fml::MessageLoopImpl::DoRun() + 37
7 FlutterMacOS 0x10dec3210 void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::Thread::Thread(std::_fl::function<void (fml::Thread::ThreadConfig const&)> const&, fml::Thread::ThreadConfig const&)::$_0> >(void*) + 176
8 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
9 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 8:: io.worker.1
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 9:: io.worker.2
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 10:: io.worker.3
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 11:: io.worker.4
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 12:: io.worker.5
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 13:: io.worker.6
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 14:: io.worker.7
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 15:: io.worker.8
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 16:: io.worker.9
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 17:: io.worker.10
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 18:: io.worker.11
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 19:: io.worker.12
0 libsystem_kernel.dylib 0x7ff81df873da __psynch_cvwait + 10
1 libsystem_pthread.dylib 0x7ff81dfc1a6f _pthread_cond_wait + 1249
2 FlutterMacOS 0x10de85d94 std::_fl::condition_variable::wait(std::_fl::unique_lock<std::_fl::mutex>&) + 36
3 FlutterMacOS 0x10deb7593 fml::ConcurrentMessageLoop::WorkerMain() + 179
4 FlutterMacOS 0x10deb824c void* std::_fl::__thread_proxy[abi:v15000]<std::_fl::tuple<std::_fl::unique_ptr<std::_fl::__thread_struct, std::_fl::default_delete<std::_fl::__thread_struct> >, fml::ConcurrentMessageLoop::ConcurrentMessageLoop(unsigned long)::$_0> >(void*) + 188
5 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
6 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
Thread 20:: dart:io EventHandler
0 libsystem_kernel.dylib 0x7ff81df8933e kevent + 10
1 FlutterMacOS 0x10e7004b8 dart::bin::EventHandlerImplementation::EventHandlerEntry(unsigned long) + 312
2 FlutterMacOS 0x10e7258b3 dart::bin::ThreadStart(void*) + 83
3 libsystem_pthread.dylib 0x7ff81dfc14e1 _pthread_start + 125
4 libsystem_pthread.dylib 0x7ff81dfbcf6b thread_start + 15
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.19.3, on macOS 12.6.8 21G725 darwin-x64, locale
zh-Hans-CN)
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from:
https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK
components.
(or visit https://flutter.dev/docs/get-started/install/macos#android-setup
for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[!] Xcode - develop for iOS and macOS (Xcode 14.2)
! CocoaPods 1.12.1 out of date (1.13.0 is recommended).
CocoaPods is used to retrieve the iOS and macOS platform side's plugin
code that responds to your plugin usage on the Dart side.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/platform-plugins
To upgrade see
https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
for instructions.
[✓] Chrome - develop for the web
[!] Android Studio (not installed)
[✓] IntelliJ IDEA Ultimate Edition (version 2020.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2020.1)
[✓] VS Code (version 1.94.2)
[✓] Connected device (2 available)
[✓] Network resources
! Doctor found issues in 3 categories.
```
</details>
| c: crash,engine,a: typography,has reproducible steps,P2,c: fatal crash,team-engine,triaged-engine,found in release: 3.24,found in release: 3.27 | low | Critical |
2,697,735,427 | godot | Geometry2D Union generates invalid Polygon2D/Self-intersection Transitivity | ### Tested versions
Reproducible in 4.3 and back in 3.x
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4050 Laptop GPU (NVIDIA; 32.0.15.6070) - Intel(R) Core(TM) Ultra 7 155H (22 Threads)
### Issue description
So I've been experimenting with Polygon2D in the editor and managed to create some shapes that will disappear.
I'm not quite sure what the issue is since the polygon does not intersect itself, but if we move any of the points then it will reappear.


### Steps to reproduce
Copy these points into a Polygon2D and notice it doesn't render:
```
polygon = PackedVector2Array(450, 300, 550, 300, 550, 450, 600, 450, 600, 300, 550, 300, 550, 250, 700.68, 249, 700, 450, 700.68, 506, 449.68, 500)
```
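One detail worth noting (my observation, not stated in the report): the point list above revisits `(550, 300)` twice, so the ring touches itself at that vertex even though no two edges visibly cross — which is typically enough to break triangulation. A quick stand-alone check (TypeScript, not Godot API):

```typescript
// The Polygon2D points from the reproduction above.
const pts: [number, number][] = [
  [450, 300], [550, 300], [550, 450], [600, 450], [600, 300],
  [550, 300], [550, 250], [700.68, 249], [700, 450], [700.68, 506],
  [449.68, 500],
];

// Returns every vertex that appears more than once in the ring.
function duplicateVertices(poly: [number, number][]): [number, number][] {
  const seen = new Set<string>();
  const dups: [number, number][] = [];
  for (const [x, y] of poly) {
    const key = `${x},${y}`;
    if (seen.has(key)) dups.push([x, y]);
    seen.add(key);
  }
  return dups;
}

console.log(duplicateVertices(pts)); // → [ [ 550, 300 ] ]
```

Moving any point breaks the exact coincidence, which would explain why the shape reappears after the slightest edit.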
### Minimal reproduction project (MRP)
I gave the points to the Polygon2D in reproduce steps. | needs testing,topic:2d | low | Minor |
2,697,738,731 | flutter | Linux_android_emu_34 android views is 3.00% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Linux_android_emu_34 android views"
}
-->
The post-submit test builder `Linux_android_emu_34 android views` had a flaky ratio 3.00% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_android_emu_34%20android%20views/1373
Commit: https://github.com/flutter/flutter/commit/d39c3532578a05d208e16d080a234a5f2c40e4ba
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_android_emu_34%20android%20views/1373
https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_android_emu_34%20android%20views/1371
https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_android_emu_34%20android%20views/1344
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Linux_android_emu_34%20android%20views
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P1,c: flake | medium | Major |
2,697,743,871 | ant-design | RangePicker: if the initial start value falls on a disabledDate, the end value cannot be changed to a valid non-disabled date | ### Reproduction link
[](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-h8gxnd)
### Steps to reproduce
1. Click the end date
2. Select a selectable (non-disabled) date
3. Click OK on the calendar
4. At this point the start date gains focus, prompting you to pick a start date
5. Click anywhere else on the page; the calendar closes
6. The end date resets to its original value
### What is expected?
The end date can be changed successfully
### What is actually happening?
The end date is reset to its initial value
| Environment | Info |
| --- | --- |
| antd | 5.22.2 |
| React | 18 |
| System | Windows 11 |
| Browser | 130.0.2849.80 Edge |
---
In version 4.x, the end date could be changed on its own
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,697,805,288 | material-ui | [List] Sticky subheader overlaps scrollbar on iOS (all major browsers) | ### Steps to reproduce
Steps:
On any iOS 18.0.1+ browser
1. Open this link to live example: [(official List demo page)](https://mui.com/material-ui/react-list/#sticky-subheader)
2. Scroll the sticky sub-header list
3. Notice the list sub-headers overlapping the scrollbar

### Current behavior
List sub-headers are overlapping the list scrollbar (overlay)
### Expected behavior
List scrollbar stays "above" list content
### Context
_No response_
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: ListSubheader iOS scrollbar | component: list,package: material-ui,ready to take | low | Minor |
2,697,806,747 | tauri | [bug] Tauri2 on_window_event is fired when end of dom focus element is reached | ### Describe the bug
Within a web page you can navigate through all DOM elements with the Tab key. When the last DOM element is reached, focus is reassigned to the first one. This does not work in Tauri 2: when the end of the focusable DOM elements is reached, the `tauri::Window` loses its focus instead.
This might be related to (but is definitely not a duplicate of): https://github.com/tauri-apps/tauri/issues/10767
See the second bulletpoint in this comment:
https://github.com/tauri-apps/tauri/issues/10767#issuecomment-2434339148
> - window focus is toggling when it shouldn't
https://github.com/user-attachments/assets/18b918a7-d3ef-4d86-a0b4-3b1e7ae11b06
### Reproduction
Create a new sample Project with:
```
cargo install create-tauri-app --locked
cargo create-tauri-app
```
Replace the main function in lib.rs with:
```
pub fn run() {
tauri::Builder::default()
.plugin(tauri_plugin_shell::init())
.setup(move |_app: &mut tauri::App| {
println!("Starting application...");
Ok(())
})
.on_window_event(|_window, event| match event {
tauri::WindowEvent::Focused(focused) => {
if !focused {
println!("window lost focus");
} else {
println!("window gained focus");
}
}
_ => {}
})
.invoke_handler(tauri::generate_handler![greet])
.run(tauri::generate_context!())
.expect("error while running tauri application");
}
```
Build the Project:
`cargo build`
### Expected behavior
When pressing "Tab" while the focus is on the "Greet" Button, the focus should move to the Tauri-Logo (the first DOM-element).
The Tauri window should not lose its focus.
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.19045 x86_64 (X64)
✔ WebView2: 130.0.2849.80
✔ MSVC:
- Visual Studio Build Tools 2022
- Visual Studio Professional 2022
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 20.15.0
- npm: 10.7.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.7
- tauri-cli 🦀: 2.0.0-rc.16
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../src
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,697,809,624 | vscode | Turn on EditContext by default for non-screen reader users and off for screen reader users | Turn on EditContext by default for non-screen reader users and off for screen reader users | feature-request,editor-edit-context | low | Minor |
2,697,816,173 | flutter | Mac_benchmark flutter_view_macos__start_up is 2.00% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Mac_benchmark flutter_view_macos__start_up"
}
-->
The post-submit test builder `Mac_benchmark flutter_view_macos__start_up` had a flaky ratio 2.00% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_benchmark%20flutter_view_macos__start_up/10531
Commit: https://github.com/flutter/flutter/commit/d39c3532578a05d208e16d080a234a5f2c40e4ba
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_benchmark%20flutter_view_macos__start_up/10531
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_benchmark%20flutter_view_macos__start_up/10459
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Mac_benchmark%20flutter_view_macos__start_up
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| platform-mac,P2,c: flake,team-macos,triaged-macos | low | Major |
2,697,862,493 | langchain | Indexing API not reducing the ingestion time significantly with OpenSearch, even with exactly the same input file | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
INDEX_NAME="test-vector-index-5"
VECTOR_FIELD='vector_field'
CONNECTION_STRING = "postgresql+psycopg://admin:[email protected]:5433/vectordb"
COLLECTION_NAME = "vectordb"
namespace = f"aoss/{COLLECTION_NAME}"
# Setting up record manager on PostgreSQL
record_manager = SQLRecordManager(namespace, db_url=CONNECTION_STRING)
record_manager.create_schema()
# Initializing OpenSearch LangChain client
osvectorsearch = OpenSearchVectorSearch(
embedding_function = embeddings,
index_name = INDEX_NAME,
http_auth = awsauth,
use_ssl = True,
verify_certs = True,
http_compress = True, # enables gzip compression for request bodies
connection_class = RequestsHttpConnection,
opensearch_url="https://" + OPENSEARCH_ENDPOINT
)
cleanup = "incremental"
result = index(
documents,
record_manager,
osvectorsearch,
cleanup=cleanup.value,
source_id_key="source",
batch_size=10
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I've used the same code for LangChain's Indexing API with Weaviate and PGVector where the feature works as expected. For example, for 2185 documents to be ingested on the first go with cleanup='full', it would take about 10 minutes to do so. I'd modify a single character, and reingestion using cleanup=incremental would take a few seconds.
I tried the same with OpenSearch: the first ingestion took around 700 seconds, and re-ingesting exactly the same data still took about 590 seconds. Is there any reason why this might be happening?
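For context, here is a minimal sketch of what the indexing API's incremental mode is supposed to do (just the idea, not LangChain's actual implementation): the record manager keeps a hash per source, and re-ingestion of unchanged documents should be skipped entirely, which is why the near-identical OpenSearch timings above look wrong.

```python
import hashlib

def doc_hash(content: str, source: str) -> str:
    # hash of content + source, the key the record manager would track
    return hashlib.sha256(f"{source}:{content}".encode()).hexdigest()

record_manager = {}  # stand-in for the SQLRecordManager

def index(docs):
    """Return how many documents actually needed (re-)embedding."""
    written = 0
    for content, source in docs:
        h = doc_hash(content, source)
        if record_manager.get(source) == h:
            continue  # unchanged document: skip the expensive embed/write
        record_manager[source] = h
        written += 1
    return written

docs = [("The sky is blue", "a.txt"), ("Grass is green", "b.txt")]
print(index(docs))  # first ingestion embeds both documents
print(index(docs))  # re-ingestion of identical data should embed none
```

With dedup working, the second pass should do no embedding work at all, so a re-ingestion time close to the first one suggests the skip path is not being hit.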
### System Info
LangChain package versions:
```
langchain 0.3.3
langchain-community 0.3.2
langchain-core 0.3.10
langchain-openai 0.2.2
langchain-text-splitters 0.3.0
```
Platform: Windows
Python Version: 3.12.4 | Ɑ: vector store | low | Critical |
2,697,889,377 | vscode | Win11 How to solve the problem of misplaced vscode pasting comments | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: latest
- OS Version: win11
Steps to Reproduce:

| feature-request,editor-autoindent | low | Critical |
2,697,933,009 | godot | `WebSocketPeer.put_packet()` fails with no error printed when the socket is disconnected | ### Tested versions
Reproducible in the 4.3-stable.
### System information
Ubuntu 22.04, 4.3-stable
### Issue description
WebSocketPeer.get_ready_state returns 1 when the socket is actually disconnected.
WebSocketPeer.put_packet puts packets into the void with no indication that something went wrong.
IIRC, this is not how websockets are supposed to work.
### Steps to reproduce
```gdscript
var socket = WebSocketPeer.new()
socket.connect_to_url(secure_url)
... wait for it to connect, then pull the cable
socket.poll()
socket.get_ready_state() # this returns 1
socket.put_packet(packet) # this doesn't report anything either
```
### Minimal reproduction project (MRP)
N/A | needs testing,topic:network | low | Critical |
2,698,025,147 | godot | Instanced scenes gets overwritten to null randomly on open scene tabs | ### Tested versions
4.3.stable Mono
### System information
MacOS - M1 Pro - 4.3.stable - Mono (C#) - Vulkan (Forward+)
### Issue description
Every day, my scenes that are open in the editor get randomly overwritten: all of their exported properties are set to null, and all connected signals are removed. Thankfully I have Git to catch these changes, but this is very concerning, as any open level or prefab becomes completely broken.
Here's some of my git diff so you can see what's happening:


Any workarounds or ideas why this might happen is welcome - I'm desperate.
### Steps to reproduce
Unfortunately, I have no idea how to reproduce it, I'm sorry.
This issue wasn’t happening a few weeks ago. The project got bigger, so it could be either a scale issue, or related to a tool script I added that instantiate scene along a Path3D. Aside from that, nothing has changed.
### Minimal reproduction project (MRP)
Sorry, I can't reproduce. | bug,topic:editor,needs testing | low | Critical |
2,698,048,290 | pytorch | Stuck at dynamo with dynamic shape enabled in PT2.5.1 which was not observed PT2.4.0. | ### 🐛 Describe the bug
In PyTorch 2.5.1, it has been observed that tests involving dynamic shapes get stuck indefinitely. This behavior was not observed in PyTorch 2.4.0, where the same dynamic shape tests executed successfully.
When running the test with static shapes in PyTorch 2.5.1, the execution completes within approximately 1 minute. However, when dynamic shapes are enabled, the test hangs for hours without completing.
### Error logs
Stuck when executing a big graph. Test is created by capturing all operations of huggingface swin topology in forward path.
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] habana-torch-dataloader==1.18.0.128
[pip3] habana-torch-plugin==1.18.0.128
[pip3] intel_extension_for_pytorch==2.1.20+giteebe5cb
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] optree==0.12.1
[pip3] qtorch==0.3.0
[pip3] torch==2.5.1+cpu
[pip3] torch_tb_profiler==0.4.0
[pip3] torchaudio==2.5.1+cpu
[pip3] torchdata==0.7.1+5e6f7b7
[pip3] torchtext==0.18.0a0+9bed85d
[pip3] torchvision==0.20.1+cpu
[pip3] triton==3.0.0
[conda] Could not collect
cc @chauhang @penguinwu @ezyang @bobrenjc93 | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,698,124,964 | ui | [bug]: request Support for CSP/nonce, Such as sidebar | ### Describe the bug
Env:
I used the nextjs + shadcn-ui for my ai peoject。After I set the nextjs csp setting,the shadcn parts work not well.
### Affected component/components
Sidebar, Toast, Sonner, Tabs, Dialog, Sheet, Command
### How to reproduce
On the initial navigation the page renders normally, and other pages render normally too. But when I refresh the page, some styles and functions no longer work.
For example, the sidebar's width variables are applied through an inline `style` attribute; with the CSP in place they end up undefined and never take effect. I think the problem is caused by shadcn itself.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
nextjs + shadcn-ui
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,698,142,313 | ollama | Can you make the normalize optional for embeddings? | ### What is the issue?
https://ollama.com/library/nomic-embed-text:v1.5
```shell
curl http://localhost:11434/api/embeddings -d '{
"model": "nomic-embed-text",
"prompt": "The sky is blue because of Rayleigh scattering"
}'
```
Accessing "http://127.0.0.1:%d/embedding" directly is fine: [ollama_llama_server](https://github.com/ollama/ollama/blob/main/llm/server.go#L894) returns the expected embedding, while `/api/embed` returns a different one. Can you make the `normalize` step optional?
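If it helps pin down the difference: assuming `/api/embed` L2-normalizes the raw embedding from the runner while `/embedding` does not, normalizing the raw vector yourself should reproduce the `/api/embed` output. A minimal sketch of that normalization:

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit L2 norm (what /api/embed appears to apply)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else vec

raw = [3.0, 4.0]  # stand-in for a raw embedding from the runner
print(l2_normalize(raw))  # [0.6, 0.8]
```

If the normalized raw vector matches what `/api/embed` returns, the only divergence is the normalization step itself.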
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug | low | Minor |
2,698,146,582 | pytorch | [Inductor] `score_fusion_memory` will always run the code optimized for small sets | ### 🐛 Describe the bug
It seems the condition below is always True:
https://github.com/pytorch/pytorch/blob/4ae1c4cbb59deeae2aabc61ea1e02cf1d5992812/torch/_inductor/scheduler.py#L3266-L3270
Both `node1_dep_len` and `node2_dep_len` are unsigned integers. The larger of the two, when multiplied by 4, is definitely greater than the smaller one.
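A quick check of the claim (the guard below is a hypothetical reconstruction of the linked condition, not the exact scheduler code): for any positive dependency-set sizes, four times the larger is always greater than the smaller, so such a comparison can never be false.

```python
import itertools

def guard(node1_dep_len: int, node2_dep_len: int) -> bool:
    # hypothetical shape of the condition discussed above
    return max(node1_dep_len, node2_dep_len) * 4 > min(node1_dep_len, node2_dep_len)

# Holds for every pair of positive sizes, i.e. the small-set branch always runs.
assert all(guard(a, b) for a, b in itertools.product(range(1, 64), repeat=2))
print("guard is true for all tested positive pairs")
```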
### Versions
You can see that in latest commit in main branch.
cc @chauhang @penguinwu | triaged,oncall: pt2 | low | Critical |
2,698,262,983 | deno | fmt: keep newlines in class constructors | Version: Deno 2.1.1
## Current behaviour
Deno's `fmt` CLI tool removes blank lines between a `constructor`'s arguments. This makes classes harder to read when class members are defined through that constructor (parameter properties).
## Expected behaviour
Don't remove newlines between constructor's arguments, like Prettier does.
## Demonstration
https://github.com/user-attachments/assets/921244cf-518c-4ebe-b93f-92f3a4f8d1e6
| suggestion,deno fmt | low | Minor |
2,698,295,607 | pytorch | torch.cuda adds support for the changing `CUDA_VISIBLE_DEVICES` | ### 🐛 Describe the bug
Currently, torch.cuda doesn't support changing `CUDA_VISIBLE_DEVICES` on the fly, a demo:
```python
import os
import torch
os.environ["CUDA_VISIBLE_DEVICES"] = ""
print(f"torch.cuda.is_available(): {torch.cuda.is_available()}")
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
print(f"torch.cuda.is_available(): {torch.cuda.is_available()}")
print(f"torch.cuda.device_count(): {torch.cuda.set_device('cuda:0')}")
```
Output:
```log
torch.cuda.is_available(): False
torch.cuda.is_available(): False
Traceback (most recent call last):
File "test.py", line 74, in <module>
print(f"torch.cuda.device_count(): {torch.cuda.set_device('cuda:0')}")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "venv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 478, in set_device
torch._C._cuda_setDevice(device)
File "venv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
```
While this is actually supported by the CUDA python binding when I directly use the cuda-python:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
import cuda.bindings.runtime
cuda.bindings.runtime.cudaGetDeviceCount()
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
cuda.bindings.runtime.cudaGetDeviceCount()
```
```output
(<cudaError_t.cudaErrorNoDevice: 100>, 0)
(<cudaError_t.cudaSuccess: 0>, 4)
```
It would be appreciated if this can be fixed, a practical use case: https://github.com/OpenRLHF/OpenRLHF/pull/524#issuecomment-2501505023
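Not a fix, but a sketch of the usual workaround while this is unsupported: because torch snapshots CUDA state at first initialization, do the CUDA work in a child process whose `CUDA_VISIBLE_DEVICES` is set before torch is imported there (the inline child script and device list below are illustrative).

```python
import os
import subprocess
import sys

# Set the variable for the child only; torch imported in the child
# would see this value at CUDA-initialization time.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="0,1,2,3")
out = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
    env=env, capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # -> 0,1,2,3
```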
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.4 (Plow) (x86_64)
GCC version: (Spack GCC) 12.3.0
Clang version: Could not collect
CMake version: version 3.27.7
Libc version: glibc-2.34
Python version: 3.11.6 (main, Apr 26 2024, 22:51:01) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.42.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8468
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 100%
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 4.5 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166,168,170,172,174,176,178,180,182,184,186,188,190
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111,113,115,117,119,121,123,125,127,129,131,133,135,137,139,141,143,145,147,149,151,153,155,157,159,161,163,165,167,169,171,173,175,177,179,181,183,185,187,189,191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchmetrics==1.6.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] No relevant package
cc @ptrblck @msaroufim @eqy | module: cuda,low priority,triaged,module: third_party,actionable,needs design | low | Critical |
2,698,335,036 | ollama | Support AMD 780m | Please consider to add better AMD support (e.g. 7840u with 780m) | feature request | low | Minor |
2,698,362,690 | next.js | VS Code cannot bind breakpoints in a Turborepo Next.js project | ### Link to the code that reproduces this issue
https://github.com/michaelschufi/turbo-debug-nextjs-repro
### To Reproduce
1. Using VS Code, open either
the file `turbo-debug-nextjs-repro.code-workspace`:
```
code turbo-debug-nextjs-repro.code-workspace
```
or the folder `apps/docs/`:
```
code apps/docs
```
2. Open the file `app/page.tsx` inside the folder `apps/docs/`.
3. Set a breakpoint on the same line as the `console.log`.
4. Start the debugger and open the page in a browser.
### Current vs. Expected behavior
Observe
- the debugger NOT stopping at the breakpoint
- the debugger stopping at the debugger statement
- the debugger opening a second VS Code file tab
- the paths of the two tabs not matching when hovering over the tab labels (see screenshot below)
-> As can be seen: The debugger kind of adds the relative path to the workspace folder to the path of the file it is debugging - resulting in doubling the path segments.
```diff
- ~/repos/turbo-debug-nextjs-repro/apps/docs/app/page.tsx
+ ~/repos/turbo-debug-nextjs-repro/apps/docs/apps/docs/app/page.tsx
```
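Not part of the original report, but one mitigation sketch: pin the debug session's `cwd` to the app folder in `.vscode/launch.json`, so breakpoint paths are resolved relative to `apps/docs` rather than the monorepo root (the `pnpm dev` command and configuration name here are assumptions about the setup).

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Next.js: debug server-side (apps/docs)",
      "type": "node-terminal",
      "request": "launch",
      "command": "pnpm dev",
      "cwd": "${workspaceFolder}/apps/docs"
    }
  ]
}
```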
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
Available memory (MB): 15853
Available CPU cores: 16
Binaries:
Node: 22.11.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.0.0
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: N/A
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I think, this might also be related to the Next.js dev error window's path not matching the editor's CWD - instead it also outputs the path from the monorepo-root instead of the CWD where `next dev` was called.
Note: I don't know if this is a Next.js, Turborepo, Turbopack, or a VS Code issue - probably a bit of all of them 😅 | bug,Turbopack | low | Critical |
2,698,404,829 | kubernetes | failed to get api resources with kubectl 1.30 | ### What happened?
We installed the Istio CRDs from a standard container to perform some upgrade tests, mainly applying the resources below:
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: fortio
spec:
hosts:
- '*'
gateways:
- fortio-gateway
http:
- route:
- destination:
host: fortio
port:
number: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: fortio-gateway
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
```
Applying these depends on the API resources being available. The default kubectl is 1.30, and it does not list any Istio api-resources:
```
bash-4.4$ kubectl api-resources |grep istio
bash-4.4$
```
However, if I switch to a 1.29 kubectl, it lists the Istio api-resources; if I then run the 1.30 kubectl again, it can get the Istio api-resources as well. But if I don't run the 1.29 kubectl and just keep retrying, it takes 10+ minutes for the 1.30 kubectl to get the Istio api-resources.
```
bash-4.4$ /tmp/kubectl.129 api-resources |grep istio
adapters config.istio.io/v1alpha2 true adapter
attributemanifests config.istio.io/v1alpha2 true attributemanifest
handlers config.istio.io/v1alpha2 true handler
httpapispecbindings config.istio.io/v1alpha2 true HTTPAPISpecBinding
httpapispecs config.istio.io/v1alpha2 true HTTPAPISpec
instances config.istio.io/v1alpha2 true instance
quotaspecbindings config.istio.io/v1alpha2 true QuotaSpecBinding
quotaspecs config.istio.io/v1alpha2 true QuotaSpec
rules config.istio.io/v1alpha2 true rule
templates config.istio.io/v1alpha2 true template
wasmplugins extensions.istio.io/v1alpha1 true WasmPlugin
destinationrules dr networking.istio.io/v1beta1 true DestinationRule
envoyfilters networking.istio.io/v1alpha3 true EnvoyFilter
gateways gw networking.istio.io/v1beta1 true Gateway
proxyconfigs networking.istio.io/v1beta1 true ProxyConfig
serviceentries se networking.istio.io/v1beta1 true ServiceEntry
sidecars networking.istio.io/v1beta1 true Sidecar
virtualservices vs networking.istio.io/v1beta1 true VirtualService
workloadentries we networking.istio.io/v1beta1 true WorkloadEntry
workloadgroups wg networking.istio.io/v1beta1 true WorkloadGroup
rbacconfigs rbac.istio.io/v1alpha1 true RbacConfig
servicerolebindings rbac.istio.io/v1alpha1 true ServiceRoleBinding
serviceroles rbac.istio.io/v1alpha1 true ServiceRole
authorizationpolicies security.istio.io/v1beta1 true AuthorizationPolicy
peerauthentications pa security.istio.io/v1beta1 true PeerAuthentication
requestauthentications ra security.istio.io/v1beta1 true RequestAuthentication
telemetries telemetry telemetry.istio.io/v1alpha1 true Telemetry
```
### What did you expect to happen?
There should be no 10+ minute delay when using the 1.30 kubectl to get the API resources.
### How can we reproduce it (as minimally and precisely as possible)?
1. running from docker container
```
docker --version
Docker version 24.0.7, build 24.0.7-0ubuntu2~22.04.1
```
2. install the Istio CRDs
3. use a 1.30 kubectl to get the Istio api-resources
```
kubectl api-resources |grep istio
```
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.30.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.1
```
</details>
### Cloud provider
<details>
Kubeadm brought up cluster
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,kind/support,sig/api-machinery,sig/cli,triage/accepted | low | Critical |
2,698,476,754 | PowerToys | Folder Size | ### Description of the new feature / enhancement
An option to show the sizes of folders within Windows Explorer.
### Scenario when this would be used?
For example finding out where space is disappearing to on a system or running out of space on one drive / google drive / dropbox.
### Supporting information
I know third-party tools exist for this, but maybe one of the free ones could be incorporated, or its author would be open to collaboration? It's great not to have to download and install many different tools when migrating to a new machine.
2,698,486,981 | tensorflow | Some operators give different results on CPU and GPU when dealing with complex numbers that include `inf`. | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.17.1
### Custom code
Yes
### OS platform and distribution
Ubuntu 22.04.3
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The outputs of TensorFlow mathematical APIs (`sin, cos, tan, sinh, cosh, exp, and reduce_mean`) are inconsistent between the CPU and GPU when applied to complex inputs containing `inf`.
### Standalone code to reproduce the issue
```python
import tensorflow as tf
test_inputs = [
tf.constant([complex(float('inf'), 0), complex(0, float('inf')), complex(float('inf'), float('inf'))], dtype=tf.complex128),
]
test_apis = [
tf.math.sin, tf.math.cos, tf.math.tan,
tf.math.sinh, tf.math.cosh, tf.math.exp, tf.math.reduce_mean
]
for api in test_apis:
print(f"Testing {api.__name__}")
for x in test_inputs:
try:
with tf.device('/CPU'):
cpu_out = api(x)
print(f"CPU Output: {cpu_out}")
with tf.device('/GPU:0'):
gpu_out = api(x)
print(f"GPU Output: {gpu_out}")
except Exception as e:
print(f"Error in {api.__name__}: {e}")
```
### Relevant log output
```shell
Testing sin
CPU Output: [nan +0.j 0.+infj nan+infj]
GPU Output: [nan+nanj nan+infj nan+nanj]
Testing cos
CPU Output: [nan +0.j inf -0.j inf+nanj]
GPU Output: [nan+nanj inf+nanj nan+nanj]
Testing tan
CPU Output: [nan+0.j 0.+1.j 0.+1.j]
GPU Output: [nan+nanj 0. +1.j 0. +1.j]
Testing sinh
CPU Output: [inf +0.j 0.+nanj inf+nanj]
GPU Output: [inf+nanj nan+nanj nan+nanj]
Testing cosh
CPU Output: [inf +0.j nan +0.j inf+nanj]
GPU Output: [inf+nanj nan+nanj nan+nanj]
Testing exp
CPU Output: [inf +0.j nan+nanj inf+nanj]
GPU Output: [inf+nanj nan+nanj nan+nanj]
Testing reduce_mean
CPU Output: (inf+infj)
GPU Output: (nan+nanj)
```
| type:bug,comp:ops,2.17 | medium | Critical |
2,698,631,707 | godot | `Input.is_action_pressed()` does not report the correct action when releasing a modifier key | ### Tested versions
- Reproducible in 4.3.stable.official
### System information
Godot v4.3.stable - Ubuntu 24.04.1 LTS 24.04 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 (nvidia; 535.183.01) - Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz (12 Threads)
### Issue description
In the `_process` or `_physics_process` functions, actions are not being properly recognized by `Input.is_action_pressed` when a modifier key is released, even when `exact_match` is true.
Consider two keyboard actions, "walk" and "run", which are both bound to the right arrow, but the latter of which uses the Shift key as a modifier. Pressing shift-right will be recognized as "run," but upon releasing the modifier key, the action continues to be recognized as "run" rather than being interpreted as "walk".
That is, given code like this:
```gdscript
func _process(_delta: float) -> void:
if Input.is_action_pressed("run", true):
_process_label.text = "Run"
elif Input.is_action_pressed("walk", true):
_process_label.text = "Walk"
else:
_process_label.text = "Idle"
```
if I give the "run" input, then release the shift key, it continues to report the "run" action as matching even though it should instead be the "walk" action.
Strangely, this happens only if the modified action is tested first. The following code works as expected, where "walk" is the unmodified action and "run" is the modified one:
```gdscript
func _process(_delta: float) -> void:
if Input.is_action_pressed("walk", true):
_process_label.text = "Walk"
elif Input.is_action_pressed("run", true):
_process_label.text = "Run"
else:
_process_label.text = "Idle"
```
One would expect these two functions to have the same behavior, but they don't.
Note that the `_input` function and its `InputEvent.is_action` work as expected, correctly distinguishing between the two states. That is, when running, if releasing the shift key, the input is then recognized as "walk".
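The semantics the reporter expects from `exact_match` can be modeled in a few lines of plain Python (a schematic model of exact-match action matching, not Godot's implementation): with exact matching, which action matches should depend only on the currently pressed modifiers, never on the order the actions are tested.

```python
# Schematic model: an action binding is a key plus a set of required
# modifiers; an exact match requires the pressed modifiers to equal the
# binding's modifier set exactly.
ACTIONS = {
    "walk": {"key": "right", "mods": frozenset()},
    "run":  {"key": "right", "mods": frozenset({"shift"})},
}

def is_action_pressed(action, pressed_keys, pressed_mods, exact_match=True):
    binding = ACTIONS[action]
    if binding["key"] not in pressed_keys:
        return False
    if exact_match:
        return pressed_mods == binding["mods"]
    return binding["mods"] <= pressed_mods

# Holding shift+right matches "run" only; after releasing shift, only
# "walk" matches, regardless of which action is tested first.
assert is_action_pressed("run", {"right"}, {"shift"})
assert not is_action_pressed("walk", {"right"}, {"shift"})
assert not is_action_pressed("run", {"right"}, set())
assert is_action_pressed("walk", {"right"}, set())
```

In this model the two `_process` orderings from the report are necessarily equivalent, which is the behavior the engine should exhibit with `exact_match = true`.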
### Steps to reproduce
- Create two keyboard input actions that are different only in that one takes a modifier key (e.g. shift)
- In the `_process` function, use `Input.is_action_pressed` to monitor the state of the input.
- See that the unmodified action is recognized, then pressing the modifier makes the modified action recognized, but releasing the modifier does not revert back to recognizing the unmodified action
### Minimal reproduction project (MRP)
When running the example,
- press right to recognize the "walk" action
- then press shift also to recognize the "run" action
- then release shift, keeping right held down, and see that it stays "run" rather than reverting to "walk"
The example also illustrates how this behavior is different from the `_input` function, which correctly distinguishes between "run" and "walk" using the `is_action` function.
[example.zip](https://github.com/user-attachments/files/17935125/example.zip) | bug,topic:input | low | Minor |
2,698,643,403 | flutter | Flutter Web: ElevatedButton has a 1 second hover delay only using Google Chrome on Linux | ### Steps to reproduce
1. Create a new flutter project with `flutter create .` and then add a few `ElevatedButton`s to the `lib/main.dart` file.
2. $ flutter run -d chrome --release
You will see a hover delay of about 1 second when you move your cursor over the buttons.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// TRY THIS: Try running your application with "flutter run". You'll see
// the application has a purple toolbar. Then, without quitting the app,
// try changing the seedColor in the colorScheme below to Colors.green
// and then invoke "hot reload" (save your changes or press the "hot
// reload" button in a Flutter-supported IDE, or press "r" if you used
// the command line to start the app).
//
// Notice that the counter didn't reset back to zero; the application
// state is not lost during the reload. To reset the state, use hot
// restart instead.
//
// This works for code too, not just values: Most code changes can be
// tested with just a hot reload.
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// TRY THIS: Try changing the color here to a specific color (to
// Colors.amber, perhaps?) and trigger a hot reload to see the AppBar
// change color while the other colors stay the same.
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
//
// TRY THIS: Invoke "debug painting" (choose the "Toggle Debug Paint"
// action in the IDE, or press "p" in the console), to see the
// wireframe for each widget.
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text('Pressing either:', textAlign: TextAlign.left),
ElevatedButton(
onPressed: () => const Text('Pressed'),
child: const Text('Button 1'),
),
ElevatedButton(
onPressed: () => const Text('Pressed'),
child: const Text('Button 2'),
),
ElevatedButton(
onPressed: () => const Text('Pressed'),
child: const Text('Button 3'),
),
],
),
),
);
}
}
```
</details>
### Performance profiling on master channel
- [ ] The issue still persists on the master channel
### Video demonstration
<details open>
<summary>Video demonstration</summary>
https://github.com/user-attachments/assets/de396ff0-7d3e-44cf-bbe1-8b5580972199
</details>
### What target platforms are you seeing this bug on?
Web
### OS/Browser name and version | Device information
```
Unknown
### Is the problem only reproducible with Impeller?
N/A
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
$ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.5, on Ubuntu 24.04.1 LTS 6.8.0-45-generic,
locale en_US.UTF-8)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Chrome - develop for the web
[✓] Linux toolchain - develop for Linux desktop
[✓] Android Studio (version 2023.1)
[✓] IntelliJ IDEA Community Edition (version 2017.2)
[✓] VS Code (version 1.74.2)
[✓] Connected device (2 available)
[✓] Network resources
• No issues found!
```
</details>
| c: performance,platform-web,platform-linux,P2,browser: chrome-desktop,team-web,triaged-web | low | Critical |
2,698,653,235 | electron | WebContentView container priority is too high | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
Adding a WebContentView to a BrowserWindow obscures UI elements drawn by the renderer, and positioning cannot solve this problem
### Proposed Solution
For comparison, with `<webview>` you can control the layering yourself; either allow the same for WebContentView, or provide an API for controlling it
### Alternatives Considered
none
### Additional Information
Demo: https://gist.github.com/7779d612dd7cdf8482f91dbfee266767 | enhancement :sparkles: | low | Major |
2,698,666,599 | pytorch | [PT2] Fine-grained custom triton kernels support | ### 🚀 The feature, motivation and pitch
Currently, only a limited set of Triton features is supported for `torch.compile`: the autotuner's `configs` and `keys` fields, plus heuristics applied before autotuning. This significantly limits the possible use cases.
Running heuristics only before autotuning makes it impossible to define computed properties that depend on tuned parameters (e.g. whether the time dimension is divisible by the block size). That removes the possibility of some optimizations and makes the code messy.
Additional autotune fields, such as config pruning, are also disallowed, even though they can speed up the tuning process.
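The ordering problem can be shown with a schematic model in plain Python (this mimics the decorator structure only, it is not Triton's API): a heuristic evaluated before tuning can only see the call arguments, so any property involving the tuned block size has to be pushed into the kernel body.

```python
# Schematic model: the heuristic runs first and cannot mention `block`,
# because the tuner has not chosen it yet.
def heuristics_before(fn):
    def wrapped(n, block=None):
        meta = {"n_is_even": n % 2 == 0}  # pre-tuning: argument-only facts
        # "Tuning" happens afterwards (here just a stand-in choice):
        block = block if block is not None else min(n, 64)
        return fn(n, block, meta)
    return wrapped

@heuristics_before
def kernel(n, block, meta):
    # "Is n divisible by block" must be recomputed inside the kernel body,
    # because the pre-tuning heuristic could not express it.
    return {"divisible": n % block == 0, **meta}

print(kernel(128))
```

Allowing heuristics to run after tuning (or letting autotune expose config pruning) would let such properties be computed once, where they belong.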
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @oulgen @aakhundov @davidberard98 | triaged,oncall: pt2,module: user triton | low | Minor |
2,698,672,829 | vscode | Empty code coverage (vscode development) | I was trying to understand VSCode code structure and testing philosophy, and tried to get code coverage on Linux according to the instructions:
https://github.com/microsoft/vscode/blob/main/test/unit/README.md#os-x-and-linux
I got here by following the links in Wiki:
[Writing Tests](https://github.com/microsoft/vscode/wiki/Writing-Tests) -> [Unit Tests](https://github.com/microsoft/vscode/blob/main/test/unit/README.md#L1) (in the sentence "Tests can be run manually on the command line, see the instructions here.")
However I got empty coverage result. It seems that `test.sh` does not have any logic around handling the `--coverage` flag.
My guess is that the instructions are outdated, and manual coverage collection requires a different process?
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
Does this issue occur when all extensions are disabled?: N/A
- VS Code Version: N/A, git commit at `67a68e1`
- OS Version: Debian 12
Steps to Reproduce:
1. run `./scripts/test.sh --coverage`
2. examine `./build/coverage/index.html`
| bug,engineering | low | Critical |
2,698,701,831 | vscode | Right-clicking injected text caused funny selection | * don't use edit context
* have injected text like ghost text or inlay hint
* right click on them
* the element gets weirdly selected (not the editor selection but like a native dom selection)
https://github.com/user-attachments/assets/85f2396b-97c3-48ba-8e82-b9e6eb866f21
| bug,file-decorations | low | Minor |
2,698,729,521 | pytorch | Reworking is_big_gpu check for inductor GEMMs | ### 🚀 The feature, motivation and pitch
While working with some newer AMD GPUs, we noticed that they can't choose the GEMM backend even though they are perfectly capable of performing well with it. This led us to this piece of code: https://github.com/pytorch/pytorch/blob/b75bb64eb4bf6fea36b8ee7c7a67074bdaa7c69a/torch/_inductor/utils.py#L1106
```
def is_big_gpu(index) -> bool:
min_sms = 68 # 3080
avail_sms = torch.cuda.get_device_properties(index).multi_processor_count
if avail_sms < min_sms:
log.warning(
"Not enough SMs to use max_autotune_gemm mode",
extra={"min_sms": min_sms, "avail_sms": avail_sms},
)
return False
return True
```
This check doesn't make sense for AMD GPUs: they have the right number of CUs, but `multi_processor_count` returns WGPs on RDNA, which is a lower number. We would still like to choose the GEMM backend for them. It's pretty easy to make a ROCm-specific workaround, but I would like to understand the point of this hardcoded check as a whole.
- Do we still need this in its current form?
- Can this be made a little more generic?
- Is it an arbitrary performance cutoff, or are we looking for specific instruction sets?
It also seems counterintuitive that this barrier applies even when forcing usage with `max_autotune=True` and `max_autotune_gemm_backends="TRITON"`.
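As a sketch of what a more vendor-aware check could look like (plain Python, decoupled from torch so it runs standalone; the ROCm threshold and the `is_rocm` flag are illustrative assumptions, not values from the PyTorch codebase):

```python
# Hedged sketch of a vendor-aware "big GPU" check. On RDNA,
# multi_processor_count reports WGPs (roughly half the CU count), so the
# threshold must be expressed in the same units as the reported count.
def is_big_gpu(multi_processor_count, is_rocm=False,
               min_cuda_sms=68,      # roughly a 3080, as in the current check
               min_rocm_units=30):   # hypothetical WGP threshold for RDNA
    threshold = min_rocm_units if is_rocm else min_cuda_sms
    return multi_processor_count >= threshold

assert not is_big_gpu(46)            # small CUDA GPU: rejected
assert is_big_gpu(84)                # big CUDA GPU: accepted
assert is_big_gpu(48, is_rocm=True)  # RDNA part reporting WGPs: accepted
```

The real fix might instead query something vendor-neutral (memory bandwidth, architecture generation), which is part of what the questions above are asking.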
### Alternatives
_No response_
### Additional context
Small relevant workaround fix for ROCm: https://github.com/pytorch/pytorch/pull/141687
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Major |
2,698,814,112 | ui | [bug]: DialogContent Error When Using react-new-improved-window with Radix UI Dialog | ### Describe the bug
Description:
When attempting to open a dialog inside a new window using the [react-new-improved-window](https://www.npmjs.com/package/react-new-improved-window) library (version ^0.2.9), I encounter the following error:
hook.js:608 `DialogContent` requires a `DialogTitle` for the component to be accessible for screen reader users.
If you want to hide the `DialogTitle`, you can wrap it with our VisuallyHidden component.
For more information, see https://radix-ui.com/primitives/docs/components/dialog Error Component Stack
at TitleWarning (@radix-ui_react-dialog.js?v=583f576a:310:23)
at @radix-ui_react-dialog.js?v=583f576a:230:13
at @radix-ui_react-dialog.js?v=583f576a:153:58
at Presence (chunk-NJI4WNQ5.js?v=583f576a:24:11)
at @radix-ui_react-dialog.js?v=583f576a:144:64
at chunk-L3SKXKGI.js?v=583f576a:52:11
at chunk-L3SKXKGI.js?v=583f576a:33:11
at chunk-VNAZQ33N.js?v=583f576a:41:13
at chunk-UCWQWVJF.js?v=583f576a:461:22
at Presence (chunk-NJI4WNQ5.js?v=583f576a:24:11)
at Provider (chunk-Y7IJGSUZ.js?v=583f576a:38:15)
at DialogPortal (@radix-ui_react-dialog.js?v=583f576a:107:11)
at _c3 (dialog.tsx:39:7)
at Provider (chunk-Y7IJGSUZ.js?v=583f576a:38:15)
at Dialog (@radix-ui_react-dialog.js?v=583f576a:48:5)
Actual Behavior:
The error occurs, suggesting that DialogContent requires a DialogTitle for accessibility compliance, even though the <Dialog> component is correctly implemented within the main React application.
### Affected component/components
dialog
### How to reproduce
Steps to Reproduce:
Create a new React application.
Install the necessary dependencies:
[react-new-improved-window](https://www.npmjs.com/package/react-new-improved-window) ^0.2.9
[@radix-ui/react-dialog](https://www.npmjs.com/package/@radix-ui/react-dialog)
Use react-new-improved-window to open a new browser window.
Render a Radix UI <Dialog> component inside this new window.
Observe the console for the error.
Expected Behavior:
The dialog should render correctly in the new window without any accessibility-related warnings or errors.
### Codesandbox/StackBlitz link
https://github.com/babpulss/react-new-window.git
### Logs
_No response_
### System Info
```bash
Additional Information:
The issue seems related to rendering in a new window context created by react-new-improved-window.
Adding a DialogTitle or wrapping it with VisuallyHidden as suggested in the error message does not resolve the issue.
Environment:
React version: 18
react-new-improved-window version: ^0.2.9
radix-ui/react-dialog": ^1.1.2
Browser: chrome@latest
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,698,825,769 | ant-design | Tour component step glitching out/flickering when on certain positions/screen sizes | ### Reproduction link
[https://ant.design/components/tour](https://ant.design/components/tour)

### Steps to reproduce
https://ant.design/components/tour
https://codesandbox.io/p/sandbox/8cdhx6?file=%2Findex.html
The step "2" or "right" flickers in certain positions or screen sizes
### What is expected?
For the step to not flicker and stay in place
### What is actually happening?
The step is flickering and not staying in place
| Environment | Info |
| --- | --- |
| antd | 5.22.2 |
| React | https://ant.design/components/tour version used here |
| System | Windows 11 |
| Browser | Microsoft Edge 129.0.2792.52 |
---
Also occurs with other placements (bottom, etc.).
Does not occur in 5.16.2.
Does occur in 5.20.5.
I do not know in which specific version the bug was introduced.
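Pinning down the regressing release between 5.16.2 and 5.20.5 is a standard first-bad-version binary search. A generic sketch (the version list and the `is_bad` predicate here are illustrative; the real check would be re-running the repro against each installed antd version):

```python
# First-bad-version binary search: `versions` is ordered oldest to newest,
# with the first entry known good and the last known bad.
def first_bad(versions, is_bad):
    lo, hi = 0, len(versions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(versions[mid]):
            hi = mid        # regression is at mid or earlier
        else:
            lo = mid + 1    # regression is after mid
    return versions[lo]

versions = ["5.16.2", "5.17.0", "5.18.0", "5.19.0", "5.20.0", "5.20.5"]
# Suppose the regression landed in 5.19.0:
assert first_bad(versions, lambda v: v >= "5.19.0") == "5.19.0"
```

With roughly 40 releases in that range, this takes about six install-and-check rounds instead of forty.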
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,Inactive | low | Critical |
2,698,850,685 | deno | About WebSocket upgrade unavailable when run code original for Node.js | Recently I wrote a program using Node.js v22 and TypeScript, Express, [PeerJS](https://peerjs.com/) (PeerJS is a WebRTC server and client lib, which use **WebSocket** for signaling).
I used ts-node run this program for debug and files change watching/HMR, but ts-node is very slow, every launch will cost 8-10 seconds.
So I tried Deno v2.1.1 today, (command: deno run --allow-all --watch src/main.ts), Deno is very fast, every launch will cost only 1-2 seconds🚀. And without any code change, this program can be run directly use Deno! It is amazing🌟!
Everything is fine, HTTP request/reply by Express works, but WebSocket. Once a WebRTC signaling start between PeerJS server and client, the error below occurs on the server side(when accept WebSocket connection, the logic is encapsulted in PeerJS server component).
```
Error: upgrade unavailable
at InnerRequest._wantsUpgrade (ext:deno_http/00_serve.ts:105:26)
at upgradeHttpRaw (ext:deno_http/00_serve.ts:51:18)
at handler (node:http:1314:36)
at ext:deno_http/00_serve.ts:382:26
at ext:deno_http/00_serve.ts:593:29
at eventLoopTick (ext:core/01_core.js:175:7)
Upgrade response was not returned from callback
```
It looks like the WebSocket-upgrade code needs to do something special when the program is run by Deno. Besides modifying the PeerJS code, is there any other way to resolve this problem, such as a runtime parameter or a config option? Thanks a lot.
| bug,node compat,ext/http,node:http | low | Critical |
2,698,850,734 | material-ui | New `noSsr` prop of ThemeProvider throws warning | ### Steps to reproduce
Steps:
1. Open this link to live example: https://stackblitz.com/edit/react-yeet9i?file=Demo.tsx
2. Look into console
### Current behavior
Console warning:
```
Warning: Failed prop type: The following props are not supported: `noSsr`. Please remove them.
at ThemeProvider (https://react-yeet9i.stackblitz.io/turbo_modules/@mui/[email protected]/ThemeProvider/ThemeProvider.js:55:5)
at ThemeProviderNoVars (https://react-yeet9i.stackblitz.io/turbo_modules/@mui/[email protected]/node/styles/ThemeProviderNoVars.js:15:10)
at ThemeProvider (https://react-yeet9i.stackblitz.io/turbo_modules/@mui/[email protected]/node/styles/ThemeProvider.js:16:3)
at BasicButtons
at StyledEngineProvider (https://react-yeet9i.stackblitz.io/turbo_modules/@mui/[email protected]/node/StyledEngineProvider/StyledEngineProvider.js:56:5)
```
### Expected behavior
No console warning.
### Context
Release of v6.1.9 adds new `noSsr` prop for ThemeProvider: https://github.com/mui/material-ui/releases/tag/v6.1.9
https://github.com/mui/material-ui/pull/44451
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
Google Chrome Version 131.0.6778.86 (Offizieller Build) (arm64)
```
System:
OS: macOS 15.1.1
Binaries:
Node: 22.11.0 - /usr/local/bin/node
npm: 10.9.0 - /usr/local/bin/npm
pnpm: Not Found
Browsers:
Chrome: 131.0.6778.86
Edge: Not Found
Safari: 18.1.1
npmPackages:
@emotion/react: 11.13.5 => 11.13.5
@emotion/styled: ^11.13.5 => 11.13.5
@mui/core-downloads-tracker: 6.1.9
@mui/icons-material: ^6.1.9 => 6.1.9
@mui/material: ^6.1.9 => 6.1.9
@mui/private-theming: 6.1.9
@mui/styled-engine: 6.1.9
@mui/system: ^6.1.9 => 6.1.9
@mui/types: 7.2.19
@mui/utils: 6.1.9
@mui/x-date-pickers: ^7.22.3 => 7.22.3
@mui/x-internals: 7.21.0
@types/react: ^18.3.12 => 18.3.12
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
typescript: ^5.7.2 => 5.7.2
```
</details>
**Search keywords**: ThemeProvider noSsr | package: system,dx | low | Critical |
2,698,872,316 | rust | Cannot return value from loop when temporary mutable access occurs before. |
I tried this code:
```rust
struct A {
buf: [u8; 32],
}
struct Ref<'a> {
slice: &'a [u8]
}
pub fn mkref(buf: &[u8]) -> Option<Ref> {
Some(Ref {
slice: &buf[1..8]
})
}
impl A {
// Borrowck error
pub fn read<'a>(&'a mut self) -> Option<Ref<'a>> {
loop {
self.buf.rotate_left(4);
match mkref(&self.buf) {
Some(rf) => return Some(rf),
None => continue,
}
}
None
}
// Borrowck error
pub fn read_break<'a>(&'a mut self) -> Option<Ref<'a>> {
let r = loop {
self.buf.rotate_left(4);
match mkref(&self.buf) {
Some(rf) => break Some(rf),
None => continue,
}
};
r
}
// OK
pub fn read_inline<'a>(&'a mut self) -> Option<Ref<'a>> {
loop {
self.buf.rotate_left(4);
return Some(Ref { slice: &self.buf[..] });
}
None
}
// OK
pub fn read_noloop<'a>(&'a mut self) -> Option<Ref<'a>> {
self.buf.rotate_left(4);
match mkref(&self.buf) {
Some(rf) => return Some(rf),
None => return None,
}
}
}
```
I expected to see this happen: No errors
Instead, this happened: Cannot return value from loop when temporary mutable access occurs before.
```
error[E0502]: cannot borrow `self.buf` as mutable because it is also borrowed as immutable
--> src/lib.rs:23:13
|
21 | pub fn read<'a>(&'a mut self) -> Option<Ref<'a>> {
| -- lifetime `'a` defined here
22 | loop {
23 | self.buf.rotate_left(4);
| ^^^^^^^^ mutable borrow occurs here
24 |
25 | match mkref(&self.buf) {
| --------- immutable borrow occurs here
26 | Some(rf) => return Some(rf),
| -------- returning this value requires that `self.buf` is borrowed for `'a`
error[E0502]: cannot borrow `self.buf` as mutable because it is also borrowed as immutable
--> src/lib.rs:38:13
|
36 | pub fn read_break<'a>(&'a mut self) -> Option<Ref<'a>> {
| -- lifetime `'a` defined here
37 | let r = loop {
38 | self.buf.rotate_left(4);
| ^^^^^^^^ mutable borrow occurs here
39 |
40 | match mkref(&self.buf) {
| --------- immutable borrow occurs here
...
46 | r
| - returning this value requires that `self.buf` is borrowed for `'a`
```
### Meta
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (dff3e7ccd 2024-11-26)
binary: rustc
commit-hash: dff3e7ccd4a18958c938136c4ccdc853fcc86194
commit-date: 2024-11-26
host: x86_64-pc-windows-msvc
release: 1.85.0-nightly
LLVM version: 19.1.4
```
| A-borrow-checker,C-bug,T-types,fixed-by-polonius | low | Critical |
2,698,900,180 | PowerToys | New Outlook doesn't support "Paste as plain text directly" with Ctrl+Win+Alt+V | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Advanced Paste
### Steps to reproduce
1. Open New Outlook
2. Set View > Layout > Reading Pane > Show on the right
3. Ctrl+N to start a new email in the reading pane on the right
4. Copy some text from another application e.g. from the browser.
5. Press Ctrl+Win+Alt+V to "Paste as plain text directly"
### ✔️ Expected Behavior
Plain text to be pasted directly.
### ❌ Actual Behavior
Instead of the text pasting, all that happens is menubar shortcuts are highlighted (same as if you press Alt on its own). No text at all is pasted. Using the right-click menu and selecting "Paste as plain text" does work normally, but takes much longer than the keyboard shortcut.
### Other Software
I've had the same issue both a work computer with Win10 22H2 and a home computer with Win11 24H2. My work computer version information is:
Microsoft Outlook Version 1.2024.1115.300 (Production).
Client Version is 20241115003.27.
WebView2 Version is 131.0.2903.70. | Issue-Bug,Needs-Triage | low | Minor |
2,698,928,422 | vscode | When few QuickPickItems are shown there is not enough contrast in dark mode |
Type: <b>Bug</b>
When a QuickPick menu is shown with a small number of items (such as a true/false choice or only two options) in a dark theme, the second item feels muted: the background behind it (the editor or whatever else is there) doesn't provide enough contrast against the item, whereas in light mode the edges have more contrast.
This was reported in user studies of our extension, where participants missed the second option.


VS Code version: Code - Insiders 1.96.0-insider (709e28fc21bfb5ed982c04e7e5bd53279cf8869e, 2024-11-27T08:45:28.012Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-11370H @ 3.30GHz (8 x 3302)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.84GB (6.15GB free)|
|Process Argv|--crash-reporter-id 5f5d2ab1-465c-4ddc-809d-58d91db5dfb0|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (96)</summary>
Extension|Author (truncated)|Version
---|---|---
better-comments|aar|3.0.2
tsl-problem-matcher|amo|0.6.2
open-in-github-button|ant|0.1.1
vscode-httpyac|anw|6.16.4
azurite|Azu|3.33.0
npm-intellisense|chr|1.4.5
path-intellisense|chr|2.9.0
esbuild-problem-matchers|con|0.0.3
vscode-markdownlint|Dav|0.57.0
vscode-eslint|dba|3.0.10
npm-browser|den|1.1.4
gitlens|eam|2024.11.2704
vsc-material-theme-icons|equ|3.8.10
azure-storage-explorer|for|0.1.2
visual-nuget|ful|0.3.4
html-preview-vscode|geo|0.2.5
vscode-user-secret-management|gia|1.0.0
codespaces|Git|1.17.3
copilot|Git|1.246.1230
copilot-chat|Git|0.23.2024112701
remotehub|Git|0.64.0
vscode-github-actions|git|0.27.0
vscode-pull-request-github|Git|0.100.3
rest-client|hum|0.25.1
svg|joc|1.5.4
rainbow-csv|mec|3.13.0
render-crlf|med|1.8.5
fluent-icons|mig|0.0.19
dotenv|mik|1.0.1
lipsum-generator|MrA|1.2.3
language-gettext|mro|0.5.0
azure-pipelines|ms-|1.247.2
azure-dev|ms-|0.8.4
vscode-apimanagement|ms-|1.0.8
vscode-azure-github-copilot|ms-|0.3.42
vscode-azureappservice|ms-|0.25.4
vscode-azurecontainerapps|ms-|0.7.1
vscode-azurefunctions|ms-|1.16.1
vscode-azureresourcegroups|ms-|0.9.9
vscode-azurestaticwebapps|ms-|0.12.2
vscode-azurestorage|ms-|0.16.1
vscode-azurevirtualmachines|ms-|0.6.6
vscode-bicep|ms-|0.31.92
vscode-cosmosdb|ms-|0.24.0
vscode-docker|ms-|1.29.3
csdevkit|ms-|1.14.12
csharp|ms-|2.58.20
dotnet-interactive-vscode|ms-|1.0.5575041
vscode-dotnet-pack|ms-|1.0.13
vscode-dotnet-runtime|ms-|2.2.3
vscode-dotnet-sdk|ms-|0.8.0
vscode-aks-tools|ms-|1.5.2
vscode-kubernetes-tools|ms-|1.3.18
playwright|ms-|1.1.12
debugpy|ms-|2024.12.0
python|ms-|2024.20.0
jupyter|ms-|2024.10.0
remote-containers|ms-|0.391.0
remote-ssh|ms-|0.116.2024112515
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
azure-account|ms-|0.12.0
azure-repos|ms-|0.40.0
azurecli|ms-|0.6.0
cpptools|ms-|1.23.1
live-server|ms-|0.4.15
makefile-tools|ms-|0.12.10
powershell|ms-|2024.4.0
remote-explorer|ms-|0.5.2024111900
remote-repositories|ms-|0.42.0
remote-server|ms-|1.6.2024112109
vscode-speech|ms-|0.12.1
windows-ai-studio|ms-|0.6.1
azurerm-vscode-tools|msa|0.15.13
uuid-generator|net|0.0.5
vsix-viewer|onl|1.0.5
vscode-jest|Ort|6.4.0
vscode-versionlens|pfl|1.14.5
material-icon-theme|PKi|5.14.1
postman-for-vscode|Pos|1.5.0
quicktype|qui|23.0.170
sqlite-viewer|qwt|0.9.5
vscode-thunder-client|ran|2.32.2
vscode-yaml|red|1.15.0
vscode-marketplace-preview|rob|1.5.1
svg-preview|Sim|2.8.3
aspire-gen|Tim|0.3.0
dotnet-containerizer|Tim|0.1.2
mympclient|tim|0.1.6
resx-editor|tim|0.2.21
es6-string-html|Tob|2.16.0
typespec-vscode|typ|0.62.0
application-insights|Vis|0.4.2
vscode-icons|vsc|12.9.0
volar|Vue|2.1.10
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
vsaa593:30376534
py29gd2263:31024238
vscaac:30438845
c4g48928:30535728
2i9eh265:30646982
962ge761:30841072
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
da93g388:31013173
dvdeprecation:31040973
dwnewjupyter:31046869
nativerepl1:31134653
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
e80f6927:31120813
iacca1:31150324
notype1:31143044
dwcopilot:31158714
h409b430:31177054
390bf810:31183120
5b1c1929:31184661
```
</details>
<!-- generated by issue reporter --> | bug,quick-pick | low | Critical |
2,698,937,631 | ollama | Please sync with llama.cpp for the update of bert_base like models. | ### What is the issue?
Currently, [ollama_llama_server](https://github.com/ollama/ollama/blob/main/llm/server.go#L894) can't return results properly with Google bert_base models, while llama.cpp already supports them; I've verified the expected outputs with `transformers`.
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug | low | Minor |
2,699,005,060 | deno | deno compile - Read bytes from embedded assets as static references | https://github.com/denoland/deno/pull/27033 made this work for strings, but we need to do it for bytes. The blocker is we need a copy on write buffer in ~~deno_core~~ v8. | perf,compile | low | Minor |
2,699,014,110 | ui | [bug]: @shadcn Blocks page is not loading (https://ui.shadcn.com/blocks); it has been down for like a week now | ### Describe the bug
https://ui.shadcn.com/blocks this is page of your documentation is not loading
### Affected component/components
blocks
### How to reproduce
1.
### Codesandbox/StackBlitz link
https://ui.shadcn.com/blocks
### Logs
_No response_
### System Info
```bash
HP, Windows 10 64-bit, Google Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,699,078,992 | kubernetes | v1.32.0 Release Notes: "Known Issues" | ### What would you like to be added?
This issue is a bucket placeholder for collaborating on the "Known Issues" additions for the 1.32 Release Notes. If you know of issues or API changes that are going out in 1.32, please comment here so that we can coordinate incorporating information about these changes in the Release Notes.
/assign @kubernetes/release-team
/sig release
/milestone v1.32
### Why is this needed?
NA | kind/feature,sig/release,needs-triage | low | Minor |
2,699,140,028 | godot | `String.num_scientific()` does not use scientific notation for small numbers and loses significant digits for large ones | ### Tested versions
Tested on 4.4.dev5; earlier versions probably have the same issue.
### System information
Godot v4.4.dev5 - Windows 10.0.26100 - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 3060 Laptop GPU (NVIDIA; 32.0.15.5612) - AMD Ryzen 7 5800H with Radeon Graphics (16 threads)
### Issue description
Tested num_scientific function with some floats:
```GDScript
var a = -12.345
print(String.num_scientific(a))
var b = -1.2345e5
print(String.num_scientific(b))
var c = -1.2345e6
print(String.num_scientific(c))
var d = -123456789
print(String.num_scientific(d))
```
===============
Prints:
-12.345
-123450
-1.2345e+06
-1.23457e+08
Thus: (1) the function does not use scientific notation for numbers with absolute value less than 1e+06, and
(2) the function loses significant digits once the number grows past a certain length (here it is rounded to six significant digits).
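For comparison, here is a quick reference of the same values through C-style `%e` formatting (Python is used here only as a neutral yardstick; that `num_scientific()` should match this exactly is my assumption): scientific notation is kept even for small magnitudes, and no significant digits are dropped.

```python
# Reference formatting for the values from the GDScript snippet above.
for v in (-12.345, -1.2345e5, -1.2345e6, -123456789):
    print("%.8e" % v)

# prints:
# -1.23450000e+01
# -1.23450000e+05
# -1.23450000e+06
# -1.23456789e+08
```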
### Steps to reproduce
Create a simple project & test it out as given in the screenshot attached.
### Minimal reproduction project (MRP)
N/A | bug,topic:core | low | Minor |
2,699,142,927 | godot | In tileset editor, attempting to paint terrain over alternative tile instead creates a new alternative tile | ### Tested versions
- Reproducible for v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - AMD Radeon RX 6600M (Advanced Micro Devices, Inc.; 30.0.13024.5001) - AMD Ryzen 7 5800H with Radeon Graphics (16 Threads)
### Issue description
Attempting to paint terrain over an alternative tile instead results in the terrain editor creating a new alternative tile. This shouldn't happen, as alternative tiles are created via the "Select" tab of the tileset editor.
### Steps to reproduce
1. Load attached project
2. Open `player.tscn`
3. Open `TileMapLayer` node
4. Switch to tileset editing, `Paint` mode
5. Select property `Terrains` -> `Terrain Set 0` -> `Dots Biome`
6. Attempt to paint this point:

7. Be greeted with this sight:

### Minimal reproduction project (MRP)
[TerrainPainterBug.zip](https://github.com/user-attachments/files/17937477/TerrainPainterBug.zip) | bug,topic:editor,topic:2d | low | Critical |
2,699,145,207 | tauri | [bug] Problems with adding shared libraries | ### Describe the bug
I have a C++ external binary that depends on TensorFlow's shared library (a DLL on Windows, a .so on Linux). I need to guarantee that the shared libs always sit at the same level as that binary; otherwise it either hangs (appearing to run indefinitely) or fails with: `libtensorflow.so.1: cannot open shared object file: No such file or directory`.
Keep in mind: this only happens in release builds installed via msi or .deb packages; when I run the debug version with `npm run tauri dev`, it works fine without any problem.
Here is the config:
```
// Resources contains the shared libs
"resources": {
  "resources/*": ""
},
// The C++ binary itself
"externalBin": ["bin/externalBin"],
```
### Reproduction
_No response_
### Expected behavior
I want the shared libs to be placed at the same level as the external binary.
### Full `tauri info` output
```text
[✔] Environment
- OS: Fedora 40.0.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.46.1
✔ rsvg2: 2.57.1
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 20.17.0
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- tauri-cli 🦀: 2.1.0
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-process 🦀: 2.0.1
- @tauri-apps/plugin-process : 2.0.0
- tauri-plugin-fs 🦀: 2.0.3
- @tauri-apps/plugin-fs : 2.0.2
- tauri-plugin-notification 🦀: 2.0.1
- @tauri-apps/plugin-notification : 2.0.0
- tauri-plugin-updater 🦀: 2.0.2
- @tauri-apps/plugin-updater : 2.0.0
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os : 2.0.0
- tauri-plugin-dialog 🦀: 2.0.3
- @tauri-apps/plugin-dialog : 2.0.1
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Rollup
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,699,192,495 | electron | stop using deprecated WebContentsObserver::WebContentsDestroyed() | This has was deprecated upstream on Nov 8 by https://chromium-review.googlesource.com/c/chromium/src/+/6001971 for https://issues.chromium.org/issues/377733394.
We use this method extensively, so migrating away from it will probably take some work:
```sh
$ rg WebContentsDestroyed -t h
browser/file_system_access/file_system_access_web_contents_helper.h
24: void WebContentsDestroyed() override;
browser/api/electron_api_web_contents_view.h
52: void WebContentsDestroyed() override;
browser/electron_api_ipc_handler_impl.h
63: void WebContentsDestroyed() override;
browser/ui/inspectable_web_contents.h
199: void WebContentsDestroyed() override;
browser/web_contents_zoom_controller.h
105: void WebContentsDestroyed() override;
browser/api/electron_api_browser_window.h
49: void WebContentsDestroyed() override;
browser/api/electron_api_web_contents.h
656: void WebContentsDestroyed() override;
browser/file_select_helper.h
92: void WebContentsDestroyed() override;
156: bool AbortIfWebContentsDestroyed();
browser/electron_web_contents_utility_handler_impl.h
55: void WebContentsDestroyed() override;
``` | upgrade-follow-up | low | Minor |
2,699,202,269 | transformers | Deprecation Warning for `max_size` in `DetrImageProcessor.preprocess` | ### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.31
- Python version: 3.11.0
- Huggingface_hub version: 0.24.5
- Safetensors version: 0.4.4
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
@amyeroberts, @qubvel
and I think @NielsRogge worked on it too ?
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import logging
import numpy as np
from transformers.models.detr.image_processing_detr import DetrImageProcessor
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)
images = [np.ones((512, 512, 3))]
annotations = [{'image_id': [], 'annotations': []}]
size = {'max_height': 600, 'max_width': 600}
image_processor = DetrImageProcessor()
images = image_processor.preprocess(images, do_resize=True, do_rescale=False, size=size, annotations=annotations, format='coco_detection')
```
### Expected behavior
Hello!
I noticed that the `preprocess` method in the `DetrImageProcessor` class always passes `max_size` to the `resize` method,
https://github.com/huggingface/transformers/blob/4120cb257f03b834fb332e0b0ee6570245e85656/src/transformers/models/detr/image_processing_detr.py#L1445-L1447
and that triggers a deprecation warning in `resize` method,
```bash
The `max_size` parameter is deprecated and will be removed in v4.26. Please specify in `size['longest_edge'] instead`.
```
https://github.com/huggingface/transformers/blob/4120cb257f03b834fb332e0b0ee6570245e85656/src/transformers/models/detr/image_processing_detr.py#L992-L997
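As a toy model (my own sketch using the `warnings` module; the real code path uses `logger.warning_once`, and the presence-based check is inferred from the linked lines), the shim warns whenever the `max_size` keyword is passed at all, which is why the always-`None` pass-through from `preprocess` still fires it:

```python
import warnings

def resize(image, size, **kwargs):
    """Toy stand-in for the shim in `resize`: it keys off the *presence*
    of `max_size`, so even an explicit max_size=None triggers the warning."""
    if "max_size" in kwargs:
        warnings.warn(
            "The `max_size` parameter is deprecated and will be removed in v4.26. "
            "Please specify in `size['longest_edge']` instead.",
            FutureWarning,
        )
        kwargs.pop("max_size")
    return image

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    resize("img", {"max_height": 600, "max_width": 600}, max_size=None)  # current call
    resize("img", {"max_height": 600, "max_width": 600})                 # proposed call
print(len(caught))  # prints 1: only the explicit pass-through warns
```

Dropping the `max_size=...` argument from the `preprocess` call to `resize` (along with the unused parameter itself) would therefore silence the warning without changing behavior.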
I propose removing the unused `max_size` argument from the preprocess method since it is always `None`,
https://github.com/huggingface/transformers/blob/4120cb257f03b834fb332e0b0ee6570245e85656/src/transformers/models/detr/image_processing_detr.py#L1340
Would it be okay if I work on this and submit a pull request? I can try to see if the problem also occurs in other models. | WIP,Vision,Processing | low | Minor |
2,699,216,934 | go | proposal: net/http: support slash (/) in path variable | ### Proposal Details
It would be really nice if path variables could support / inside the variable. For example, `GET /api/{resource_name}/components/info` only matches `/api/resource_name/components/info`, not `/api/resource/name/components/info`. You can URL-encode the slash in the resource name, like `resource%2fname`, and that does work, but it's not what I'm looking for.
I understand we may not want to handle / in path variables by default, but maybe an option could enable it, like `{resource_name/2}` for exactly two slashes and `{resource_name/*}` for any number.
You also can't write `GET /api/{resource_name...}/components/info`, which would likewise solve the issue I'm proposing.
<details><summary>Example go code</summary>
```go
package main
import (
"log"
"net/http"
)
func main() {
mux := http.NewServeMux()
mux.Handle("GET /api/{resource_name}/components/info", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte(r.PathValue("resource_name")))
}))
s := http.Server{Handler: mux, Addr: ":8000"}
log.Fatalln(s.ListenAndServe())
}
```
```console
$ curl localhost:8000/api/resource_name/components/info
resource_name
$ curl localhost:8000/api/resource/name/components/info
404 page not found
$ curl localhost:8000/api/resource%2fname/components/info
resource/name
```
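Until something like this lands, one workaround under the current matcher (my own sketch, relying on Go 1.22's rule that a `{name...}` wildcard may only appear as the final segment) is to put the wildcard last and strip the fixed suffix by hand:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// extractResourceName peels the fixed "/components/info" suffix off the
// wildcard match, so slashes inside the resource name survive.
func extractResourceName(rest string) (string, bool) {
	return strings.CutSuffix(rest, "/components/info")
}

func main() {
	mux := http.NewServeMux()
	// The wildcard must be the last segment, so it swallows the suffix too.
	mux.HandleFunc("GET /api/{rest...}", func(w http.ResponseWriter, r *http.Request) {
		name, ok := extractResourceName(r.PathValue("rest"))
		if !ok {
			http.NotFound(w, r)
			return
		}
		fmt.Fprint(w, name)
	})

	srv := httptest.NewServer(mux)
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/api/resource/name/components/info")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // resource/name
}
```

The cost is that the route now also matches paths that don't end in `/components/info`, so the handler has to 404 those itself.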
</details> | Proposal | low | Major |
2,699,239,305 | deno | Error aborting a child process' signal right when it exits | ```js
const controller = new AbortController();
Deno.addSignalListener("SIGCHLD", () => controller.abort());
await new Deno.Command("true", { signal: controller.signal }).output();
```
On a Mac running macOS 15.1, the `abort` call sometimes throws a `TypeError`.
```shellsession
$ deno run --allow-run=true a.js
$ deno run --allow-run=true a.js
error: Uncaught (in promise) TypeError: Child process has already terminated.
Deno.addSignalListener("SIGCHLD", () => controller.abort());
^
at ChildProcess.kill (ext:runtime/40_process.js:357:5)
at onAbort (ext:runtime/40_process.js:299:32)
at AbortSignal.[[[runAbortSteps]]] (ext:deno_web/03_abort_signal.js:182:9)
at AbortSignal.[[[signalAbort]]] (ext:deno_web/03_abort_signal.js:166:24)
at AbortController.abort (ext:deno_web/03_abort_signal.js:304:30)
at …/a.js:2:52
at loop (ext:runtime/40_signals.js:77:7)
at eventLoopTick (ext:core/01_core.js:175:7)
$ deno --version
deno 2.1.1+1e51b65 (canary, release, aarch64-apple-darwin)
v8 13.0.245.12-rusty
typescript 5.6.2
```
I wasn't able to reproduce this issue in a single-core Linux VM.
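Until this is fixed, a user-side mitigation (my own sketch of the guard pattern, not Deno internals) is to treat the `TypeError` thrown when aborting an already-exited child as benign:

```javascript
// Guarded abort: swallow only the "already terminated" race, propagate
// anything else unchanged.
function abortIgnoringExit(controller) {
  try {
    controller.abort();
    return true;
  } catch (err) {
    if (err instanceof TypeError) return false; // the race from the report
    throw err;
  }
}

// In the report's listener this would be used as (Deno-specific, so shown
// as a comment):
//   Deno.addSignalListener("SIGCHLD", () => abortIgnoringExit(controller));

// Demo with a stand-in whose abort() throws like Deno's does in this race:
const alreadyExited = {
  abort() { throw new TypeError("Child process has already terminated."); },
};
console.log(abortIgnoringExit(new AbortController())); // true
console.log(abortIgnoringExit(alreadyExited));         // false
```

Whether SIGCHLD fires before or after the child's promise settles stays racy; the guard just makes the losing side harmless.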
| bug,good first issue,runtime | low | Critical |
2,699,283,464 | langchain | Dependency conflict between langchain-community (^0.3.5) and SQLAlchemy (^2.0.36) | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Steps to reproduce using Poetry as dependency management:
```bash
poetry add sqlalchemy@^2.0.36
```
Attempt to add `langchain-community`:
```
poetry add langchain-community
```
### Error Message and Stack Trace (if applicable)
Because no versions of langchain-community match >0.3.8,<0.4.0
and langchain-community (0.3.8) depends on SQLAlchemy (>=1.4,<2.0.36), langchain-community (>=0.3.8,<0.4.0) requires SQLAlchemy (>=1.4,<2.0.36).
So, because search-api-worker depends on both sqlalchemy (^2.0.36) and langchain-community (^0.3.8), version solving failed.
### Description
* I'm trying to install `langchain-community` with `SQLAlchemy` version `^2.0.36`, but encountered a dependency conflict caused by the `SQLAlchemy` version constraint.
* Downgrading to `langchain-community==0.3.4` works.
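Until the upper bound is relaxed, the conflict can be worked around by pinning either side in `pyproject.toml` (a sketch; the `0.3.4` choice follows the downgrade noted above, and the Python constraint is a placeholder):

```toml
[tool.poetry.dependencies]
python = "^3.11"                 # placeholder constraint
# Option 1: keep SQLAlchemy inside langchain-community 0.3.8's bound
sqlalchemy = ">=2.0,<2.0.36"
# Option 2 (instead of option 1): hold langchain-community back
# langchain-community = "0.3.4"
# sqlalchemy = "^2.0.36"
```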
It would be great if the `langchain-community` dependency requirements allowed current SQLAlchemy 2.x releases (including 2.0.36 and later).
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.13.0 (tags/v3.13.0:60403a5, Oct 7 2024, 09:38:07) [MSC v.1941 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.8
> langchain_community: 0.3.4
> langsmith: 0.1.146
> langchain_ollama: 0.2.0
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.7
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> ollama: 0.3.3
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> typing-extensions: 4.12.2 | investigate | low | Critical |
2,699,285,337 | rust | search index doesn't properly regenerate when using `--document-private-items` after a normal `cargo doc` run | Hi 👋🏻,
Running the following produces a documentation where private items are unsearchable:
`cargo clean && cargo doc && cargo doc --document-private-items`
Whereas running `cargo doc` with the `document-private-items` flag directly produces a search index that includes private items. (`cargo clean && cargo doc --document-private-items`)
I cannot reproduce on a fresh minimal crate, but it is consistent when trying on https://github.com/tower-lsp-community/tower-lsp-server/.
While trying to reproduce the issue, I was curious and diffed the produced `target/doc` folders. Only `search-index.js` differs: the whole section about private items is missing from the run that did `cargo doc` first and then `--document-private-items`.
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
→ also tested with `rustc 1.85.0-nightly (dff3e7ccd 2024-11-26)`
cc @GuillaumeGomez | T-rustdoc,C-bug,A-rustdoc-search | low | Minor |
2,699,345,882 | godot | [TRACKER] GDScript `class_name` | ```[tasklist]
### Issues
- [ ] godotengine/godot#30048
- [ ] godotengine/godot#48311
- [ ] godotengine/godot#56628
- [ ] godotengine/godot#81222
- [ ] godotengine/godot#82479
- [ ] godotengine/godot#83395
- [ ] godotengine/godot#83542
- [ ] godotengine/godot#83561
- [ ] godotengine/godot#90240
- [ ] godotengine/godot#94378
- [ ] godotengine/godot#98985
```
### Some issues brought up by `class_name`
- Addon authors avoid them because of possible naming conflicts.
- In some cases, scripts with `preload()` via transitive dependencies can load a lot of the project resources.
- Global classes clutters the node/resource creation dialog.
- godotengine/godot#91020
- godotengine/godot#95806
- There are no namespaces.
Many thanks to @dalexeev for the list. | topic:gdscript,tracker | low | Minor |
2,699,360,413 | deno | deno run fail to execute npm binary file in 2.0 | **Describe the bug**
Hi there, Andrew from supabase, we've been having report of users failing to use `supabase` CLI with Deno 2 and running into some error executing the compiled binary.
**To Reproduce**
Steps to reproduce the behavior:
1. Have Deno 2 installed
2. Create a new directory, and run `deno init` in it
3. The following content been added to `deno.json`
```json
{
// install deps in a node_modules directory
"nodeModulesDir": "auto",
"imports": {
"supabase": "npm:supabase@^1.223.10"
}
}
```
4. Install deps: `deno install --allow-scripts`
5. Try run the CLI with `deno run -A npm:supabase --help`
You'll see the following error in the terminal:
```
❯ deno run -A npm:supabase --help
error: Uncaught SyntaxError: Invalid or unexpected token
at <anonymous> (file:///Users/folder/node_modules/.deno/[email protected]/node_modules/supabase/bin/supabase:1:1)
```
**Debug:**
Running `/Users/folder/node_modules/.deno/[email protected]/node_modules/supabase/bin/supabase` directly executes the binary properly.
**Expected behavior**
Deno 2 with `deno run` command should also be able to run the `supabase` CLI
**Additional context**
Opening the issue based on discussion with @bartlomieju on discord: https://discord.com/channels/684898665143206084/1308818534809210930/1309145748264325120
Related: https://github.com/supabase/cli/issues/2900
Version: Deno 2.0.6 | needs investigation,node compat | low | Critical |
2,699,449,293 | go | runtime:cpu4: TestRuntimeLockMetricsAndProfile/runtime.lock/sample-1 failures | ```
#!watchflakes
default <- pkg == "runtime:cpu4" && test == "TestRuntimeLockMetricsAndProfile/runtime.lock/sample-1"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730104109724589921)):
=== RUN TestRuntimeLockMetricsAndProfile/runtime.lock/sample-1
metrics_test.go:1065: lock contention growth in runtime/pprof's view (1.241778s)
metrics_test.go:1066: lock contention growth in runtime/metrics' view (1.241730s)
metrics_test.go:1104: stack [runtime.unlock runtime_test.TestRuntimeLockMetricsAndProfile.func5.1 runtime_test.(*contentionWorker).run] has samples totaling n=199 value=1207128831
metrics_test.go:1192: mutex profile reported contention count different from the known true count (199 != 200)
--- FAIL: TestRuntimeLockMetricsAndProfile/runtime.lock/sample-1 (1.25s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,NeedsInvestigation,arch-loong64,compiler/runtime | low | Critical |
2,699,451,921 | next.js | Scripts from root layout don’t appear first in document head, including `beforeInteractive` | ### Link to the code that reproduces this issue
https://github.com/controversial/next-script-order-repro
### To Reproduce
1. `next build`
2. `next start`
3. Load localhost:3000
### Current vs. Expected behavior
Right now, scripts added to the root layout `<head>` appear in the rendered document *after* “chunk“ scripts from webpack output. Here’s how it looks in the attached reproduction—the server HTML contains five “next.js scripts” before the inline `inject-polyfill` script that was rendered at the beginning of the root `layout.tsx`’s head:
<img width="500" alt="Screenshot 2024-11-27 at 12 57 47 PM" src="https://github.com/user-attachments/assets/8c276139-a2bd-4d75-b09e-88cc4b4d17ce">
This behavior is misleading (or incorrect?) based on the docs—which say:
> `beforeInteractive`: Load the script **before any Next.js code** and before any page hydration occurs.
---
This ordering of scripts prevents the use case I showed in the reproduction: *conditionally injecting a polyfill that might be needed for client component rendering*.
```ts
if (/* features missing */) {
const newScript = document.createElement('script');
newScript.src = /* polyfill to load missing features */;
document.head.prepend(newScript);
}
```
If such an inline script could appear *above* the webpack chunks, it could inject a polyfill whose load **blocks page hydration**, which is my desired behavior. Under the current system (where Next.js scripts are always initiated first) it's a race condition between them and the polyfill: the Next.js scripts have already started loading before we have a chance to perform feature detection and inject a blocking polyfill script above them.
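Concretely, such an inline script could look like the sketch below (the feature test and polyfill path are my assumptions, purely for illustration; the DOM part is shown as comments since it only runs in a browser):

```javascript
// Pure feature check, kept separate so it is easy to test anywhere.
function needsPolyfill(g) {
  // Hypothetical feature gate; substitute whatever the client bundle needs.
  return typeof g.Promise?.withResolvers !== "function";
}

// In the browser, the inline <head> script would gate the injection:
//
//   if (needsPolyfill(globalThis)) {
//     const s = document.createElement("script");
//     s.src = "/polyfills.js"; // hypothetical path
//     document.head.prepend(s);
//   }

console.log(needsPolyfill({ Promise: {} })); // true
```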
---
Within the reproduction, currently, the console logs:
1. `injecting blocking polyfill script`
2. `client component mounted`
3. `polyfill script loaded, *after* component mounted`
(Even though it was injected as a blocking script at the beginning of `<head>`, it didn't get a chance to block the loading of Next.js chunks, because it wasn't injected until *after* the Next.js chunk scripts had already started loading.)
I expect it to log:
1. `injecting blocking polyfill script`
2. `polyfill script loaded`
3. `client component mounted`
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Sun Nov 3 20:52:07 PST 2024; root:xnu-11215.60.405~54/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 22.8.0
npm: 10.8.2
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.0.4-canary.29
eslint-config-next: N/A
react: 19.0.0-rc-b01722d5-20241114
react-dom: 19.0.0-rc-b01722d5-20241114
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Navigation, Script (next/script), Webpack
### Which stage(s) are affected? (Select all that apply)
next build (local), next start (local), Vercel (Deployed)
### Additional context
I want to use an inline script for injecting polyfills because it avoids shipping the polyfill code to users that don’t need it.
I don’t want the polyfill code *itself* bundled as part of the root layout, because that increases the bundle size unnecessarily for *all users*. I’m okay with trading slightly slower load times on *outdated* browsers for better performance on recent browser versions.
Cloudflare’s polyfill service ships only the necessary features based on user-agent, but injecting the script after performing feature detection allows to avoid *even waiting for the server roundtrip* for users who would be receiving empty polyfills.
This problem affects *both* the “vanilla” `<script>` tag and the `<Script>` component from `next/script` | bug,Webpack,Navigation,Script (next/script) | low | Major |