id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,803,024,321 | rust | rustdoc's automatic link suggestion is syntactically invalid when the link is in a `#[doc]` attribute |
```rust
#[doc = "https://rust-lang.org"]
pub fn doc_attribute() {}
/// https://rust-lang.org
pub fn doc_comment() {}
```
Running `rustdoc` on that code produces the following output:
```
warning: this URL is not a hyperlink
--> issue.rs:1:9
|
1 | #[doc = "https://rust-lang.org"]
| ^^^^^^^^^^^^^^^^^^^^^^^
|
= note: bare URLs are not automatically turned into clickable links
= note: `#[warn(rustdoc::bare_urls)]` on by default
help: use an automatic link instead
|
1 | #[doc = <"https://rust-lang.org">]
| + +
warning: this URL is not a hyperlink
--> issue.rs:4:5
|
4 | /// https://rust-lang.org
| ^^^^^^^^^^^^^^^^^^^^^
|
= note: bare URLs are not automatically turned into clickable links
help: use an automatic link instead
|
4 | /// <https://rust-lang.org>
| + +
warning: 2 warnings emitted
```
The replacement suggestion is correct for the link in the doc comment, but syntactically incorrect for the one in the `#[doc]` attribute - it should suggest `#[doc = "<https://rust-lang.org>"]` instead.
The bug happens on both stable (`rustdoc 1.84.0 (9fc6b4312 2025-01-07)`) and nightly (`rustdoc 1.86.0-nightly (f3d1d47fd 2025-01-20)`).
| T-rustdoc,A-lints,A-suggestion-diagnostics,D-invalid-suggestion | low | Critical |
2,803,038,965 | go | net/http: consider supporting HTTP 2/3 never-indexed literals | HTTP/2 and HTTP/3 header encoding is vulnerable to a category of attacks which permit an attacker sharing a connection with a target to determine the value of certain headers sent by the target. See [RFC 7541 section 7.1](https://www.rfc-editor.org/rfc/rfc7541.html#section-7.1) and [RFC 9204 section 7.1](https://www.rfc-editor.org/rfc/rfc9204.html#section-7.1).
An implementation may defend against these attacks by not compressing sensitive headers.
When choosing not to compress a sensitive header, an implementation may indicate that the header has been intentionally left uncompressed, sending it as a "never-indexed literal". The recipient of a header sent as a never-indexed literal is supposed to avoid compressing that header if it forwards it.
The net/http and golang.org/x/net/http2 HTTP/2 implementation does not preserve the never-indexed state of received headers, and is therefore in violation of RFC 7541's mandate that:
> An intermediary MUST NOT re-encode a value that uses the never-indexed literal representation with another representation that would index it. If HPACK is used for re-encoding, the never-indexed literal representation MUST be used.
One reason for this (probably the primary reason) is that the `net/http.Header` type doesn't have any obvious place to store the never-indexed status of a header.
However, I'm not certain how much support there is in the world for never-indexed literals. So far as I can tell, Google QUICHE (https://github.com/google/quiche) has no support for preserving the never-indexed status of header fields. Like net/http, it represents headers as string key/value pairs with no additional per-field data.
Perhaps other implementations do make use of never-indexed literals. I haven't done any research here.
If we *do* want to support never-indexed literals, however, I believe we have a way to do so without any backwards-incompatible API changes. I propose that, in the event that we do want to support never-indexed literals:
Given an `http.Header` value `h`, the field `field` is never-indexed if `h[field][len(h[field])] == "\x00never-indexed"`. That is, if the `[]string` in `h[field]` contains an element one past its end (in the unused capacity section) with the value "\x00never-indexed".
The field decoder would arrange to set this value as appropriate, and the encoder would avoid indexing never-indexed values. In naive usage, I believe this would just work--most users can ignore the presence of the marker, but an `http.Header` copied from an inbound request to an outbound one will preserve it. We might need to arrange for `httputil.ReverseProxy` to explicitly copy the value.
In the event that a header somehow ends up incorrectly marked as never-indexed (probably by reusing a slice with a never-indexed marker in it), the negative impact is limited to transmitting that header slightly less efficiently.
The use of a NUL byte in the marker avoids us ever mistaking a real field value for the marker.
I'm not particularly convinced this is a good idea, given that I don't see much evidence of use of never-indexed literals in the wild. But I think it's technically feasible if we want to do it. | NeedsInvestigation | low | Minor |
2,803,040,863 | flutter | [extension_google_sign_in_as_googleapis_auth]: signInSilently does not return with access token | Hi, I am running into an issue related to this package.
The problem is at lines 65-67 of the code:
`final auth.AuthClient? client = await _googleSignIn.authenticatedClient();`
`assert(client != null, 'Authenticated client missing!');`
The code does not work properly if the user was signed in with `_googleSignIn.signInSilently();`: the `Authenticated client missing!` assertion is thrown.
If `await _googleSignIn.signIn();` is run instead, everything works.
I have looked into the response data: when `_googleSignIn.signIn()` runs, there is an access token, which is essential for calling the People API.
However, when `_googleSignIn.signInSilently()` runs, only the `credential` field is returned, and therefore the client becomes null.
Any thoughts on this issue? | d: examples,package,team-ecosystem,P2,p: extension_google_sign_in_as_googleapis_auth,triaged-ecosystem | low | Minor |
2,803,050,923 | pytorch | [inductor][triton] refactor ASTSource.make_ir integration | ### 🚀 The feature, motivation and pitch
User-defined Triton kernel support in Inductor relies on being able to get the TTIR for a given kernel, so that Inductor can do mutability analysis on the TTIR (in `triton_kernel_wrap.py`).
As the Triton implementation changes, the Inductor-side implementation is getting more and more unwieldy, because we copy more and more of Triton's JITFunction handling into Inductor.
We should see whether there is a convenient API that Triton could expose for this, and then use that API from within Inductor to simplify the handling.
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov @bertmaher @int3 @nmacchioni @embg @peterbell10 @oulgen | triaged,oncall: pt2,module: inductor,upstream triton,module: user triton | low | Minor |
2,803,056,116 | flutter | tree-status-bot is on a deprecated GCP Cloud Run function | `tree-status-bot` is still on a Cloud Run function, which has been deprecated and replaced by artifact registry, and is scheduled to shut down March 18.
See `flutter-dashboard` go/function-tree-status-bot (sorry for the internal link).
| team-infra,P1 | medium | Minor |
2,803,089,008 | pytorch | [dynamo] Save/restore system random state more carefully | Internal example: [T207752792](https://www.internalfb.com/intern/tasks/?t=207752792)
There are some OSS unittests that are failing internally (e.g. `test/dynamo/test_unspec.py::UnspecTests::test_random_object`), likely because some internal logging code is burning some random numbers, leading to different final random states between compiled and eager runs. In particular, if we skip `record_chromium_event_internal` and `log_chromium_event_internal` in `fb/_utils_internal.py`, then the test no longer fails internally.
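The failure mode itself is easy to see with the standard library alone; a minimal sketch (my illustration, independent of Dynamo and of any internal logging code):
```python
import random

# An unrelated random call made between seeding and the user's draws shifts
# the generator state, so the user-visible values change.
random.seed(1)
baseline = (random.randint(1, 9), random.randint(2, 18))

random.seed(1)
random.random()  # stands in for logging code burning one draw during compilation
shifted = (random.randint(1, 9), random.randint(2, 18))

print(baseline, shifted)  # the two tuples generally differ
```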
Test case:
```python
def test_random_in_dynamo(self):
    # test that system random calls still work even
    # if Dynamo calls random methods.
    def fn(x):
        # r1 = random.random()
        r1 = random.randint(1, 9)
        y = x + random.uniform(10, 20)
        r2 = random.randint(2, 18)
        return y + r1, r2

    orig_fn = torch._dynamo.eval_frame._maybe_set_eval_frame

    def bad(*args, **kwargs):
        # burn random call within dynamo
        random.random()
        return orig_fn(*args, **kwargs)

    x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
    random.seed(1)
    res1 = fn(x)

    opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
    random.seed(1)
    with unittest.mock.patch("torch._dynamo.eval_frame._maybe_set_eval_frame", bad):
        res2 = opt_fn(x)
    self.assertTrue(same(res1, res2))
```
Dynamo should save/restore system `random` state more carefully in order to prevent non-user random calls made during tracing from affecting the final random state.
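A minimal sketch of that save/restore idea (my illustration, not Dynamo's actual implementation; `trace_and_compile` is a stand-in for whatever tracing or logging work might consume random numbers):
```python
import random

def trace_and_compile():
    # stand-in for Dynamo tracing; imagine internal code burning a draw here
    random.random()

state = random.getstate()  # snapshot the user's RNG state before tracing
try:
    trace_and_compile()
finally:
    random.setstate(state)  # restore it so internal draws don't leak into user code
```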
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Minor |
2,803,094,413 | godot | Collision shapes update (2d and 3d) race condition causing infinite loop | ### Tested versions
- Reproducible in: 4.3.stable, 4.4.beta1, v4.4.dev [61accf060]
### System information
Godot v4.3.stable - TUXEDO OS 24.04.1 LTS - Wayland - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Laptop GPU - 13th Gen Intel(R) Core(TM) i9-13900HX (32 Threads)
### Issue description
When running physics in a separate thread and updating a `CollisionShape2D` or a `CollisionShape3D` (eg. changing the transform) in `_physics_process`, the physics server enters an infinite loop and spams the console with:
```
ERROR: Condition "p_elem->_root != this" is true.
at: remove (./core/templates/self_list.h:80)
```
If the "run physics in separate thread" option is disabled, this error does not occur.
I have mostly tested 3D physics as this is where I first encountered this issue, so most of my findings are about 3D physics, though it seems to be the same thing for the 2D error.
First of all here is where the error occurs in godot-v4.4.dev [61accf060] :
```
#0 SelfList<GodotCollisionObject3D>::List::remove (this=0x555564c05718, p_elem=0x555561bc8640) at ./core/templates/self_list.h:80
#1 0x0000555558fa5650 in GodotPhysicsServer3D::_update_shapes (this=0x555564c05350) at modules/godot_physics_3d/godot_physics_server_3d.cpp:1730
#2 0x0000555558fa0849 in GodotPhysicsServer3D::body_test_motion (this=0x555564c05350, p_body=..., p_parameters=..., r_result=0x7fffffff89e0) at modules/godot_physics_3d/godot_physics_server_3d.cpp:918
#3 0x000055555cce8818 in PhysicsServer3DWrapMT::body_test_motion (this=0x55556382ea20, p_body=..., p_parameters=..., r_result=0x7fffffff89e0) at servers/physics_server_3d_wrap_mt.h:264
#4 0x000055555b859c3d in PhysicsBody3D::move_and_collide (this=0x555565e64ec0, p_parameters=..., r_result=..., p_test_only=false, p_cancel_sliding=true) at scene/3d/physics/physics_body_3d.cpp:109
#5 0x000055555b80c80b in CharacterBody3D::_move_and_slide_grounded (this=0x555565e64ec0, p_delta=0.016666666666666666, p_was_on_floor=false) at scene/3d/physics/character_body_3d.cpp:164
#6 0x000055555b80c2d1 in CharacterBody3D::move_and_slide (this=0x555565e64ec0) at scene/3d/physics/character_body_3d.cpp:112
#7 0x00005555587a28be in call_with_validated_variant_args_ret_helper<__UnexistingClass, bool>(__UnexistingClass*, bool (__UnexistingClass::*)(), Variant const**, Variant*, IndexSequence<>) (p_instance=0x555565e64ec0, p_method=(bool (__UnexistingClass::*)(__UnexistingClass * const)) 0x55555b80b9d4 <CharacterBody3D::move_and_slide()>, p_args=0x7fffffffa1c8, r_ret=0x7fffffffa1b0) at ./core/variant/binder_common.h:375
#8 0x000055555879dc60 in call_with_validated_object_instance_args_ret<__UnexistingClass, bool> (base=0x555565e64ec0, p_method=(bool (__UnexistingClass::*)(__UnexistingClass * const)) 0x55555b80b9d4 <CharacterBody3D::move_and_slide()>, p_args=0x7fffffffa1c8, r_ret=0x7fffffffa1b0) at ./core/variant/binder_common.h:662
#9 0x0000555558794947 in MethodBindTR<bool>::validated_call (this=0x555565561b00, p_object=0x555565e64ec0, p_args=0x7fffffffa1c8, r_ret=0x7fffffffa1b0) at ./core/object/method_bind.h:536
#10 0x0000555558ad1340 in GDScriptFunction::call (this=0x555565e96fe0, p_instance=0x555565cda400, p_args=0x7fffffffc858, p_argcount=1, r_err=..., p_state=0x0) at modules/gdscript/gdscript_vm.cpp:2250
#11 0x000055555894ec29 in GDScriptInstance::callp (this=0x555565cda400, p_method=..., p_args=0x7fffffffc858, p_argcount=1, r_error=...) at modules/gdscript/gdscript.cpp:2073
#12 0x000055555af899fa in Node::_gdvirtual__physics_process_call (this=0x555565e64ec0, arg1=0.016666666666666666) at scene/main/node.h:381
#13 0x000055555af68af1 in Node::_notification (this=0x555565e64ec0, p_notification=16) at scene/main/node.cpp:59
[...]
```
`GodotPhysicsServer3D::_update_shapes` tries to remove the first element of `GodotPhysicsServer3D::pending_shape_update_list`, but it can't because `pending_shape_update_list.first()` is already out of the list. Since this is executed until the list is empty, the thread is caught in an infinite loop.
Checking the data stored in this first element, I found that `_root`, `_next` and `_prev` are all `nullptr` (this was true every time I checked), meaning that it was "somewhat cleanly" removed from the list. From what I've gathered, this situation should not be possible unless `remove` and `add` operations are done at the same time from separate threads.
`pending_shape_update_list` is referenced in `GodotPhysicsServer3D::_update_shapes` and in 5 other function in `GodotCollisionObject3D`:
- `GodotCollisionObject3D::add_shape`
- `GodotCollisionObject3D::set_shape`
- `GodotCollisionObject3D::set_shape_transform`
- `GodotCollisionObject3D::set_shape_disabled`
- `GodotCollisionObject3D::remove_shape`
Those 5 functions try to `add` an element to the list to queue it's update.
From my understanding of the threading architecture, all of these functions referencing `pending_shape_update_list` are (indirectly) scheduled to be called in some way by `PhysicsServer3DWrapMT`. If they are scheduled to run in parallel when `run_on_separate_thread` is enabled, that would cause this race condition. But I'm not sure if that is the exact reason behind this issue.
Here are some things I find weird / don't yet understand:
- When the collision shapes are updated during the process frame (and not the physics frame), this bug does not seem to be triggered. If this issue were caused by `PhysicsServer3DWrapMT`'s scheduling, shouldn't it still trigger when pushing updates inside `_process`?
- I've not had this bug happen when only one physics object is present in the scene, even after 40+min of it running. With two physics objects, it only takes a minute on average.
- I've not seen `ERR_FAIL_COND(p_elem->_root)` fail inside `SelfList<>::List::add`. If the issue were only due to `add` and `remove` being run at the same time, I would expect this condition to be met at least some of the time. This could be because it is rare enough for me not to have seen it though.
### Steps to reproduce
- Enable the advanced `run_on_separate_thread` option
- Create some physics objects
- Change the physics objects' collision shapes (eg. add/remove collision shapes, change the transform) during the physics frame
- Wait for godot to freeze and start screaming in pain
### Minimal reproduction project (MRP)
MRP: [mrp.zip](https://github.com/user-attachments/files/18497651/mrp.zip) | bug,topic:physics | low | Critical |
2,803,095,312 | rust | Failed to capture backtrace when compiling with `--remap-path-prefix=${PWD}=` |
I tried this code:
```rust
fn main() {
panic!("oh no!");
}
```
I expected to see this happen: `backtrace_rs::main` yields correct remapped source info.
```shell
$ RUST_BACKTRACE=1 cargo run
thread 'main' panicked at src/main.rs:2:5:
oh no!
stack backtrace:
0: rust_begin_unwind
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:662:5
1: core::panicking::panic_fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:74:14
2: backtrace_rs::main
at src/main.rs:2:5
3: core::ops::function::FnOnce::call_once
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
Instead, this happened: source info for 2-3 are missing.
```shell
$ RUSTFLAGS="--remap-path-prefix=$(pwd)=" RUST_BACKTRACE=1 cargo run
thread 'main' panicked at src/main.rs:2:5:
oh no!
stack backtrace:
0: rust_begin_unwind
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/std/src/panicking.rs:662:5
1: core::panicking::panic_fmt
at /rustc/f6e511eec7342f59a25f7c0534f1dbea00d01b14/library/core/src/panicking.rs:74:14
2: backtrace_rs::main
3: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: aarch64-apple-darwin
release: 1.82.0
LLVM version: 19.1.1
```
also checked:
`rustc +nightly-aarch64-apple-darwin --version --verbose`:
```
rustc 1.86.0-nightly (f3d1d47fd 2025-01-20)
binary: rustc
commit-hash: f3d1d47fd84dfcf7f513be1dbad356e74c8f3b2b
commit-date: 2025-01-20
host: aarch64-apple-darwin
release: 1.86.0-nightly
LLVM version: 19.1.7
```
This is also reported to https://github.com/rust-lang/backtrace-rs/issues/691. | T-compiler,C-bug,A-backtrace,A-path-remapping | low | Critical |
2,803,123,619 | ui | MORE MAINTAINERS | ### Feature description
I am bringing this issue up again: there need to be more maintainers. I love shadcn just as much as the next guy, but there are so many bugs, and there are over 835 open pull requests. I request that more maintainers be added; I am also willing to help maintain the project myself.
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues and PRs | area: request | low | Critical |
2,803,124,088 | pytorch | DISABLED test_reorder_peak_memory (__main__.TestOperatorReorderForPeakMemory) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reorder_peak_memory&suite=TestOperatorReorderForPeakMemory&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35949205522).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reorder_peak_memory`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_memory.py", line 71, in test_reorder_peak_memory
.run(code)
RuntimeError: Expected to find "buf0 = " but did not find it
Searched string:
stream0 = get_raw_stream(0)
triton_red_fused_sum_2.run(buf4, buf6, 1, 2048, grid=grid(1), stream=stream0)
buf1 = buf4; del buf4 # reuse
# Topologically Sorted Source Nodes: [t2], Original ATen: [aten.mm]
extern_kernels.mm(primals_2, primals_3, out=buf1)
del primals_3
buf5 = empty_strided_cuda((2048, 10), (10, 1), torch.float32)
# Topologically Sorted Source Nodes: [t4], Original ATen: [aten.mm]
extern_kernels.mm(buf1, primals_5, out=buf5)
buf7 = empty_strided_cuda((3, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [sum_2], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_3.run(buf5, buf7, 3, 6827, grid=grid(3), stream=stream0)
del buf5
buf9 = buf6; del buf6 # reuse
# Topologically Sorted Source Nodes: [sum_2, add], Original ATen: [aten.sum, aten.add]
stream0 = get_raw_stream(0)
triton_per_fused_add_sum_4.run(buf9, buf7, 1, 3, grid=grid(1), stream=stream0)
del buf7
return (buf9, primals_2, reinterpret_tensor(buf1, (1, 2048), (1, 1), 0), reinterpret_tensor(primals_5, (10, 1), (1, 10), 0), reinterpret_tensor(buf0, (10, 2048), (1, 10), 0), reinterpret_tensor(primals_4, (1, 10), (1, 1), 0), )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
primals_1 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
primals_2 = rand_strided((2048, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_3 = rand_strided((1, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_4 = rand_strided((10, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_5 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
fn = lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: buf0 =
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_memory.py TestOperatorReorderForPeakMemory.test_reorder_peak_memory
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_memory.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,124,089 | pytorch | DISABLED test_warn_on_invalid_torch_function_standalone_class (__main__.TestTorchFunctionWarning) | Platforms: asan, linux, mac, macos, rocm, win, windows, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_warn_on_invalid_torch_function_standalone_class&suite=TestTorchFunctionWarning&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35951159764).
Over the past 3 hours, it has been determined flaky in 111 workflow(s) with 222 failures and 111 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_warn_on_invalid_torch_function_standalone_class`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_overrides.py`
cc @clee2000 @wdvr @hameerabbasi @rgommers @ezyang | triaged,module: flaky-tests,skipped,module: __torch_function__ | low | Critical |
2,803,124,090 | pytorch | DISABLED test_cache_hot_load_device_cuda_bfloat16_dynamic_False (__main__.AOTAutogradCacheTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_hot_load_device_cuda_bfloat16_dynamic_False&suite=AOTAutogradCacheTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35949205522).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_hot_load_device_cuda_bfloat16_dynamic_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_aot_autograd_cache.py", line 119, in test_cache_hot_load
self.assertEqual(len(cache_info.autotune_artifacts), autotune_expect)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 2 but got 4.
Absolute difference: 2
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_aot_autograd_cache.py AOTAutogradCacheTests.test_cache_hot_load_device_cuda_bfloat16_dynamic_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_aot_autograd_cache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: dynamo | low | Critical |
2,803,124,136 | pytorch | DISABLED test_mm_plus_mm (__main__.TestPatternMatcher) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mm_plus_mm&suite=TestPatternMatcher&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35949080113).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 6 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mm_plus_mm`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 113, in test_mm_plus_mm
self.common(fn, args, 1, 3)
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 85, in common
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 1 but got 2.
Absolute difference: 1
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_pattern_matcher.py TestPatternMatcher.test_mm_plus_mm
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_pattern_matcher.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,124,669 | pytorch | DISABLED test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_True (__main__.TestFxGraphCache) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35950279286).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 146, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 14 but got 35.
Absolute difference: 21
Relative difference: 1.5
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,139,096 | pytorch | [libTorch] Model initialization on multi-device is slow. It seems to run sequentially in multi-thread | > Originally posted at https://discuss.pytorch.org/t/x/215093
I am using libTorch for inference on multiple GPU devices. I use one-thread-per-device to initialize and then to run inference. Inference (i.e. `forward()` ) works fast as expected, however the initialization step seems to run sequentially. Once the initialization is complete, the rest of the code runs concurrently as expected. This is problematic for bigger models, where each thread takes several minutes. How to initialize models on multiple devices using libtorch?
Here is a minimal, reproducible example:
```cpp
#include <torch/torch.h>
#include <spdlog/spdlog.h>
using namespace torch;
namespace nn = torch::nn;
const torch::Device DEVICE = torch::Device(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);
// a dummy model for demonstration
struct NetImpl : nn::Module {
nn::Sequential layers;
NetImpl(std::vector<int64_t> sizes, torch::Device device = DEVICE)
: layers{ register_module("layers", torch::nn::Sequential()) }
{
for (size_t i = 0; i < sizes.size() - 1; i++) {
layers->push_back(nn::Linear(sizes[i], sizes[i + 1]));
layers->push_back(nn::Functional(torch::relu));
}
this->to(device);
}
auto forward(Tensor x) -> Tensor {
x = layers->forward(x);
return x;
}
};
TORCH_MODULE(Net);
struct Timer {
std::string name;
std::chrono::time_point<std::chrono::high_resolution_clock> start;
Timer(std::string name="")
: name {name}, start {std::chrono::high_resolution_clock::now()}
{
spdlog::info("Timer {} started", name);
}
double elapsed() {
auto now = std::chrono::high_resolution_clock::now();
return std::chrono::duration_cast<std::chrono::seconds>(now - start).count();
}
~Timer() {
spdlog::info("Timer {} ended: {:.3f}s", name, elapsed());
}
};
int main() {
spdlog::info("torch version {}", TORCH_VERSION);
// deep network; FFN with a lot of layers to make it deep
std::vector<int64_t> dims = {
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
1024, 4096, 8192, 16384, 8192, 4096, 1024, 512, 256, 512,
};
if (!torch::cuda::is_available()) {
throw std::runtime_error("CUDA is not available");
}
std::vector<torch::Device> devices;
for (auto i = 0; i < torch::cuda::device_count(); i++) {
devices.push_back(torch::Device(torch::kCUDA, i));
}
{ // scope for timer
int n_threads = devices.size();
Timer timer(fmt::format("[{}-threaded initializer]", n_threads));
std::vector<std::jthread> threads;
for (int i = 0; i < n_threads; i++) {
auto t = std::jthread([i, &dims, &devices] {
auto device = devices[i];
Timer timer(fmt::format("{}", device.str()));
auto model = Net(dims, device);
});
threads.push_back(std::move(t));
}
}
return 0;
}
```
With a single GPU, i.e. `CUDA_VISIBLE_DEVICES=0`
```
[250108 04:12:39|t1753841][info] Timer [1-threaded initializer] started
[250108 04:12:39|t1753854][info] Timer cuda:0 started
[250108 04:12:53|t1753854][info] Timer cuda:0 ended: 14.000s
[250108 04:12:53|t1753841][info] Timer [1-threaded initializer] ended: 14.000s
```
Now, with `CUDA_VISIBLE_DEVICES=0,1,` the time is almost doubled
```
[250108 04:13:02|t1754149][info] Timer [2-threaded initializer] started
[250108 04:13:02|t1754163][info] Timer cuda:0 started
[250108 04:13:02|t1754164][info] Timer cuda:1 started
[250108 04:13:26|t1754164][info] Timer cuda:1 ended: 24.000s
[250108 04:13:27|t1754163][info] Timer cuda:0 ended: 24.000s
[250108 04:13:27|t1754149][info] Timer [2-threaded initializer] ended: 24.000s
```
And with `CUDA_VISIBLE_DEVICES=0,1,2,3`, the pattern continues:
```
[250108 04:14:04|t1754791][info] Timer [4-threaded initializer] started
[250108 04:14:04|t1754795][info] Timer cuda:0 started
[250108 04:14:04|t1754796][info] Timer cuda:1 started
[250108 04:14:04|t1754797][info] Timer cuda:2 started
[250108 04:14:04|t1754798][info] Timer cuda:3 started
[250108 04:14:52|t1754796][info] Timer cuda:1 ended: 47.000s
[250108 04:14:52|t1754795][info] Timer cuda:0 ended: 48.000s
[250108 04:14:58|t1754797][info] Timer cuda:2 ended: 54.000s
[250108 04:14:58|t1754798][info] Timer cuda:3 ended: 54.000s
[250108 04:14:58|t1754791][info] Timer [4-threaded initializer] ended: 54.000s
```
Finally, with all 8 devices:
```
[250108 04:15:50|t1755936][info] Timer [8-threaded initializer] started
[250108 04:15:50|t1755959][info] Timer cuda:0 started
[250108 04:15:50|t1755960][info] Timer cuda:1 started
[250108 04:15:50|t1755961][info] Timer cuda:2 started
[250108 04:15:50|t1755962][info] Timer cuda:3 started
[250108 04:15:50|t1755963][info] Timer cuda:4 started
[250108 04:15:50|t1755964][info] Timer cuda:5 started
[250108 04:15:50|t1755965][info] Timer cuda:6 started
[250108 04:15:50|t1755966][info] Timer cuda:7 started
[250108 04:17:23|t1755960][info] Timer cuda:1 ended: 92.000s
[250108 04:17:23|t1755965][info] Timer cuda:6 ended: 93.000s
[250108 04:17:24|t1755964][info] Timer cuda:5 ended: 93.000s
[250108 04:17:24|t1755959][info] Timer cuda:0 ended: 94.000s
[250108 04:17:24|t1755963][info] Timer cuda:4 ended: 94.000s
[250108 04:17:25|t1755966][info] Timer cuda:7 ended: 94.000s
[250108 04:17:25|t1755961][info] Timer cuda:2 ended: 95.000s
[250108 04:17:28|t1755962][info] Timer cuda:3 ended: 97.000s
[250108 04:17:28|t1755936][info] Timer [8-threaded initializer] ended: 97.000s
```
I can't see where in `NetImpl` or `nn::LinearImpl` the locking is enforcing sequential execution.
It looks like some internal API (ATen/C10) is at play and I am clueless how to resolve it. How to improve the parallelization in this case?
cc @jbschlosser | module: cpp,triaged | low | Critical |
2,803,158,064 | ant-design | Option to Always Show removeIcon in Upload Component | ### What problem does this feature solve?
`removeIcon` in the `Upload` component is visible only on hover. However, on mobile devices where hover is not available, it becomes difficult for users to remove uploaded files.
It would be great if there was an option to make the removeIcon always visible, regardless of the hover state. This would improve the usability of the component on mobile devices.
### What does the proposed API look like?
Introduce a property, such as `alwaysShowRemoveIcon`, which can be set to true to make the removeIcon persistently visible.
| unconfirmed | low | Minor |
2,803,177,332 | ollama | how to get English output | ### What is the issue?
M:\AI\ollama>ollama run deepseek-r1:7b
>>> list philosophers
<think>
</think>
Here is a list of some of the most influential and notable philosophers throughout history, organized by era and region:
### Ancient Philosophy (c. 600–321 BCE)
- **Thales of Miletus** (c. 624–548 BCE): 被认为是第一个哲学家,提出"万物源于水"的学说。
- **Anaximander of Miletus** (c. 570–495 BCE): 提出"无限"概念,并认为万物起源于自然。
- **Anaximenes of Mileti** (c. 510–441 BCE): 认为万物来源于某种原始物质,如"air"(空气)。
### OS
Microsoft Windows [Version 10.0.22635.4800]
### GPU
NVIDIA GeForce GTX 1660 SUPER
### CPU
Processor AMD Ryzen 7 3700X 8-Core Processor, 3600 Mhz, 8 Core(s), 16 Logical Processor(s)
### Ollama version
ollama version is 0.5.7 | bug | low | Minor |
2,803,200,007 | vscode | DAP completions mishandled |
Type: <b>Bug</b>
1. install a debugger that supports the DAP completions request (or write your own!)
2. assuming it's targeting gdb, in the debug console type 'help b' and wait for completions to appear
3. select 'help breakpoints' and press tab
4. note how 'help b' becomes 'help help breakpoints'.
I'm writing (yet another) gdb adapter, and was struggling to get the completions to work as advertised. I've also seen this behaviour on vGDB.
After investigation I know where the bug is, and I could create a pull request, but it might be quicker to just show you:
src/vs/workbench/contrib/debug/browser/repl.ts(273)
```
//const word = model.getWordAtPosition(position);
//const overwriteBefore = word ? word.word.length : 0;
const overwriteBefore = position.column;
```
The issue is the DAP completion request and response are not relative to word boundaries, so the processing of them in the DebugConsole shouldn't be processing them as if they are.
VS Code version: Code 1.96.4 (Universal) (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Darwin arm64 24.1.0
Modes:
Remote OS version: Linux x64 5.10.60-qnap
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M4 (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|3, 3, 4|
|Memory (System)|16.00GB (0.09GB free)|
|Process Argv|--crash-reporter-id 6e06ef7f-0758-449d-bad8-b85f9dd6a2fc|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: ubuntu-2404np|
|OS|Linux x64 5.10.60-qnap|
|CPUs|Intel(R) Celeron(R) CPU J1900 @ 1.99GHz (4 x 2417)|
|Memory (System)|7.66GB (5.53GB free)|
|VM|22%|
</details><details><summary>Extensions (38)</summary>
Extension|Author (truncated)|Version
---|---|---
process-matcher|abd|0.0.3
android-dev-ext|ade|1.4.0
rtf|ale|2.8.0
amazon-q-vscode|ama|1.43.0
fast-tasks|bat|0.0.11
vscode-eslint|dba|3.0.10
vscode-commands|fab|2.0.2
copilot|Git|1.257.0
copilot-chat|Git|0.23.2
gitweb|iso|1.0.1
modules|iso|1.0.1
mugdb|iso|0.0.0
better-cpp-syntax|jef|1.27.1
svn-scm|joh|2.17.0
git-graph|mhu|1.30.0
vscode-dotnet-runtime|ms-|2.2.5
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
hexeditor|ms-|1.11.1
remote-explorer|ms-|0.4.3
vscode-serial-monitor|ms-|0.13.1
platformio-ide|pla|3.3.4
rust-analyzer|rus|0.3.2273
vscode-3dviewer|sle|0.2.2
memento-inputs|spa|1.0.0
cmake|twx|0.0.17
cursor-align|yo1|2.1.1
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
```
</details>
| bug,debug | low | Critical |
2,803,215,087 | next.js | NextResponse.redirect() behaves incorrectly when redirecting from subdomain route to apex domain route | ### Link to the code that reproduces this issue
https://github.com/bentron2000/next-response-bug-report
### To Reproduce
Set `/etc/hosts`:
```
127.0.0.1 example.test
127.0.0.1 foo.example.test
```
Install Packages:
```
npm i
```
Start Server:
```
npm run dev
```
Then Navigate to the following link and observe the result.
'http://foo.example.test:3000/shouldredirect'
### Current vs. Expected behavior
When you navigate to the 'http://foo.example.test:3000/shouldredirect'
Middleware should select for this route and redirect to the apex domain '/login' page
**Expected Result:**
Redirect to : 'http://example.test:3000/login' -> 'Hooray - we redirected properly'
**Actual Result:**
Redirect to : 'http://foo.example.test:3000/login' -> 'Should not be redirected here'
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 18:51:28 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T8112
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.10.0
npm: 10.8.1
Yarn: 1.22.22
pnpm: 9.15.0
Relevant Packages:
next: 15.2.0-canary.19 // Latest available version is detected (15.2.0-canary.19).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
# Minimal demo of possible middleware `NextResponse.redirect()` Bug
Demonstrates that NextResponse.redirect is not obeying explicit urls when redirecting from a subdomain `foo.example.test` to the apex domain `example.test`
In this example, middleware has been set to catch a specific subdomain route and to redirect it to an explicitly set route on the apex domain
'http://foo.example.test:3000/shouldredirect' -> 'http://example.test:3000/login'
This does not happen - it redirects to the same path but on the subdomain.
| Middleware | low | Critical |
2,803,216,713 | pytorch | internal compiler error: in extract_insn when compiling pytorch with xpu with gcc 12 | ### 🐛 Describe the bug
As the title says, compiling PyTorch with XPU support fails with the error below; compiling for CPU succeeds.
```
...
/opt/intel/oneapi/compiler/2025.0/bin/compiler/../../include/sycl/detail/builtins/builtins.hpp:235:1: warning: multi-line comment [-Wcomment]
235 | // clang++ -[DU]__SYCL_DEVICE_ONLY__ -x c++ math_functions.inc \
| ^
In file included from /usr/include/c++/12/functional:59,
from /root/pytorch/c10/util/string_view.h:6,
from /root/pytorch/c10/util/StringUtil.h:6,
from /root/pytorch/c10/util/Exception.h:8,
from /root/pytorch/aten/src/ATen/BlasBackend.h:3,
from /root/pytorch/aten/src/ATen/Context.h:3:
/usr/include/c++/12/bits/std_function.h: In static member function 'static _Res std::_Function_handler<_Res(_ArgTypes ...), _Functor>::_M_invoke(const std::_Any_data&, _ArgTypes&& ...) [with _Res = void; _Functor = sycl::_V1::handler::ResetHostKernel<at::native::xpu::VectorizedElementwiseKernel<8, at::native::xpu::SignbitFunctor<c10::BFloat16>, at::detail::Array<char*, 2>, TrivialOffsetCalculator<1, unsigned int> >, sycl::_V1::nd_item<1>, 1>(const at::native::xpu::VectorizedElementwiseKernel<8, at::native::xpu::SignbitFunctor<c10::BFloat16>, at::detail::Array<char*, 2>, TrivialOffsetCalculator<1, unsigned int> >&)::NormalizedKernelType; _ArgTypes = {const sycl::_V1::nd_item<1>&}]':
/usr/include/c++/12/bits/std_function.h:292:7: error: unrecognizable insn:
292 | }
| ^
(insn 21 20 22 4 (set (reg:V2SI 87 [ vect__71.47795 ])
(lshiftrt:V2SI (subreg:V2SI (subreg:V2SF (reg:V2SI 118 [ vect__69.47793 ]) 0) 0)
(const_int 31 [0x1f]))) "/usr/include/c++/12/cmath":662:29 -1
(nil))
during RTL pass: vregs
/usr/include/c++/12/bits/std_function.h:292:7: internal compiler error: in extract_insn, at recog.cc:2791
0x1b3ed3a internal_error(char const*, ...)
???:0
0x6a22ba fancy_abort(char const*, int, char const*)
???:0
0x67affc _fatal_insn(char const*, rtx_def const*, char const*, int, char const*)
???:0
0x67b01e _fatal_insn_not_found(rtx_def const*, char const*, int, char const*)
???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <file:///usr/share/doc/gcc-12/README.Bugs> for instructions.
CMake Error at torch_xpu_ops_sycl_unary_binary_kernels_generated_UnarySignKernels.cpp.o.Release.cmake:145 (message):
Error generating file
/root/pytorch/build/caffe2/aten_xpu/src/CMakeFiles/torch_xpu_ops_sycl_unary_binary_kernels.dir/ATen/native/xpu/sycl/./torch_xpu_ops_sycl_unary_binary_kernels_generated_UnarySignKernels.cpp.o
...
```
### Versions
```
(xpu) root@2649cb81ee38:~# python collect_env.py
Collecting environment information...
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Intel GPU driver version:
* intel_opencl: 24.45.31740.15-1057~22.04
* level_zero: 1.18.5.0-1055~22.04
```
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,803,226,058 | go | io: Copy to a pipe prevents process exit (Go 1.24rc2 on Linux regression) | ### Go version
go version go1.24rc2 linux/amd64
### Output of `go env` in your module/workspace:
```shell
AR='ar'
CC='gcc'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_ENABLED='1'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
CXX='g++'
GCCGO='gccgo'
GO111MODULE=''
GOAMD64='v1'
GOARCH='amd64'
GOAUTH='netrc'
GOBIN=''
GOCACHE='/home/kir/.cache/go-build'
GOCACHEPROG=''
GODEBUG=''
GOENV='/home/kir/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFIPS140='off'
GOFLAGS=''
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build957638587=/tmp/go-build -gno-record-gcc-switches'
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMOD='/dev/null'
GOMODCACHE='/home/kir/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/kir/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/kir/sdk/go1.24rc2'
GOSUMDB='sum.golang.org'
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/kir/.config/go/telemetry'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/kir/sdk/go1.24rc2/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.24rc2'
GOWORK=''
PKG_CONFIG='pkg-config'
```
### What did you do?
There is a regression in Go 1.24rc2 (git-bisect points to [1]) caused by a (not yet confirmed) Linux kernel bug with sendmail(2)/splice(2), which I just reported [2].
In short, when sendfile(2) or splice(2) is used to copy data to a pipe, and another process is having the other end of this file, this prevents that other process from exit. You can get a short C repro from [2], and here's a Go repro:
```go
package main
import (
"io"
"log"
"os"
"os/exec"
)
func main() {
// Create a pipe for stdin
r, w, err := os.Pipe()
if err != nil {
log.Fatal(err)
}
// Create a short-lived process (like ps) with stdin connected to pipe
cmd := exec.Command("ps")
cmd.Stdin = r
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Start(); err != nil {
log.Fatal(err)
}
log.Print("Run child pid=", cmd.Process.Pid)
// Close read end in parent.
r.Close()
// Copy from stdin to pipe - this should trigger sendfile in Go 1.24.
go func() {
_, err = io.Copy(w, os.Stdin)
if err != nil {
log.Fatal("io.Copy: ", err)
}
}()
if err := cmd.Wait(); err != nil {
log.Fatal("wait: ", err)
}
}
```
[1]: https://go-review.googlesource.com/c/go/+/603295
[2]: https://lore.kernel.org/linux-fsdevel/[email protected]/T/#u
### What did you see happen?
When the above repro is run with go1.24rc2, the process is not exiting:
```console
[kir@kir-tp1 sendfile-vs-pipe]$ go1.24rc2 run repro.go
2025/01/21 18:32:54 Run child pid=2180908
PID TTY TIME CMD
63304 pts/1 00:00:03 bash
2180738 pts/1 00:00:00 go1.24rc2
2180747 pts/1 00:00:00 go
2180902 pts/1 00:00:00 repro
2180908 pts/1 00:00:00 ps
```
(and the process hangs here).
NOTE if you can't repro this, you probably pressed Enter in the terminal. Do not do anything with this terminal!.
In a different terminal, you can check the status of the child process (using the pid from the above output):
```console
[kir@kir-tp1 linux]$ ps -f -p 2180908
UID PID PPID C STIME TTY TIME CMD
kir 2180908 2180902 0 18:32 pts/1 00:00:00 [ps]
```
As you can see, `ps` thinks that the process is a kernel thread. This happens because the process is half-exited and some /proc/$PID/ entries (`root`, `cwd`, `exe) are no longer valid, like it is with kernel threads.
Now, to see where the process is stuck:
```console
[kir@kir-tp1 linux]$ sudo cat /proc/2180908/stack
[<0>] pipe_release+0x1f/0x100
[<0>] __fput+0xde/0x2a0
[<0>] task_work_run+0x59/0x90
[<0>] do_exit+0x309/0xab0
[<0>] do_group_exit+0x30/0x80
[<0>] __x64_sys_exit_group+0x18/0x20
[<0>] x64_sys_call+0x14b4/0x14c0
[<0>] do_syscall_64+0x82/0x160
[<0>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
```
### What did you expect to see?
No stuck process.
With an older Go version, everything works as it should:
```console
[kir@kir-tp1 sendfile-vs-pipe]$ go1.23.4 run repro.go
2025/01/21 18:38:16 Run child pid=2181265
PID TTY TIME CMD
63304 pts/1 00:00:03 bash
2181065 pts/1 00:00:00 go1.23.4
2181071 pts/1 00:00:00 go
2181259 pts/1 00:00:00 repro
2181265 pts/1 00:00:00 ps
[kir@kir-tp1 sendfile-vs-pipe]$
``` | NeedsInvestigation,release-blocker,compiler/runtime,BugReport | medium | Critical |
2,803,255,292 | transformers | model.gradient_checkpointing_enable() makes loss.requires_grad be False | ### System Info
Python 3.9.19
transformers 4.42.0
torch 2.2.2+cu118
peft 0.12.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
When I tried using `model.gradient_checkpointing_enable()` to reduce memory consumption during training, I encountered an error: "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn." After troubleshooting, I found that the issue seems to be caused by `loss.requires_grad` being set to `False`, which prevents backpropagation. The following is a minimal reproduction that directly shows `loss.requires_grad == False`:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "4"
import copy  # needed for copy.deepcopy below

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import get_peft_model, LoraConfig, TaskType


def main():
    train_data = {"input": "input test", "output": "output test"}
    model_name = "/workspace/model/CodeLlama-13b-Instruct-hf"
    output_dir = "./test_debug"
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = model.config.eos_token_id

    input_ids = tokenizer.encode(train_data["input"])
    output_ids = tokenizer.encode(train_data["output"])
    model_inputs_output = input_ids + output_ids + [tokenizer.eos_token_id]
    model_inputs_output = torch.tensor(model_inputs_output, dtype=torch.int64)
    labels = copy.deepcopy(model_inputs_output)
    labels[: len(input_ids)] = -1
    example_mask = model_inputs_output.ge(0)
    label_mask = labels.ge(0)
    model_inputs_output[~example_mask] = 0
    labels[~label_mask] = -100
    train_dataset = {
        "input_ids": model_inputs_output.unsqueeze(0).to("cuda"),
        "attention_mask": example_mask.unsqueeze(0).to("cuda"),
        "labels": labels.unsqueeze(0).to("cuda")
    }

    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "gate_proj", "v_proj", "o_proj", "up_proj", "k_proj", "down_proj"],  # same as llama-factory
        lora_dropout=0.05,
        task_type=TaskType.CAUSAL_LM
    )
    model = get_peft_model(model, lora_config)
    model.gradient_checkpointing_enable()
    model.train()
    model.print_trainable_parameters()
    model.to("cuda")

    output = model(**train_dataset)
    loss = output["loss"]
    print(f"loss: {loss.requires_grad}")


if __name__ == "__main__":
    main()
```
Output is
```
loss: False
```
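The mechanism can be reproduced without transformers at all; a minimal sketch (my illustration, assuming the relevant ingredients are reentrant checkpointing, frozen parameters, and a checkpoint input that does not require grad):
```python
import torch
from torch.utils.checkpoint import checkpoint

base = torch.nn.Linear(4, 4)
for p in base.parameters():
    p.requires_grad_(False)      # frozen base weights
adapter = torch.nn.Linear(4, 4)  # trainable, playing the role of a LoRA adapter

def block(x):
    return base(x) + adapter(x)

x = torch.randn(2, 4)            # like embeddings from a frozen embedding layer: requires_grad=False
y = checkpoint(block, x, use_reentrant=True)
print(y.requires_grad)           # False: reentrant checkpointing only tracks its tensor inputs,
                                 # none of which require grad, so the trainable adapter is cut off
```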
This is confusing because `model.gradient_checkpointing_enable()` is designed to reduce memory consumption, but if `loss.requires_grad` is set to `False`, it disrupts the normal training process. Meanwhile, when I use similar code from LLama-factory to achieve the effect of model.gradient_checkpointing_enable(), I find that `loss.requires_grad` is `True`. Below is the code:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "4"
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import get_peft_model, LoraConfig, TaskType
import copy
from types import MethodType
from functools import partial
import inspect
from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple
from transformers import PreTrainedModel
def _gradient_checkpointing_enable(
self: "PreTrainedModel", gradient_checkpointing_kwargs: Optional[Dict[str, Any]] = None
) -> None:
r"""
Activates gradient checkpointing for the current model.
Modification of the original method to enable gradient checkpointing for block-wise optimizer.
"""
from torch.utils.checkpoint import checkpoint
if not self.supports_gradient_checkpointing:
raise ValueError("{} does not support gradient checkpointing.".format(self.__class__.__name__))
if gradient_checkpointing_kwargs is None:
gradient_checkpointing_kwargs = {"use_reentrant": True}
gradient_checkpointing_func = partial(checkpoint, **gradient_checkpointing_kwargs)
def custom_gradient_checkpointing_func(func, *args, **kwargs):
module: "torch.nn.Module" = func.__self__
if any(param.requires_grad for param in module.parameters()):
for arg in args:
if torch.is_tensor(arg) and torch.is_floating_point(arg):
arg.requires_grad_(True)
return gradient_checkpointing_func(func, *args, **kwargs)
if "value" in inspect.signature(self._set_gradient_checkpointing).parameters: # old GC format
self.apply(partial(self._set_gradient_checkpointing, value=True))
self.enable_input_require_grads()
print("You are using the old GC format, some features (e.g. BAdam) will be invalid.")
else: # have already enabled input require gradients
self._set_gradient_checkpointing(enable=True, gradient_checkpointing_func=custom_gradient_checkpointing_func)
def main():
train_data = {"input": "input test", "output": "output test"}
model_name = "/workspace/model/CodeLlama-13b-Instruct-hf"
output_dir = "./test_debug"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16,device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
# set the pad token of the model's configuration
model.config.pad_token_id = model.config.eos_token_id
# return
if not getattr(model, "supports_gradient_checkpointing", False):
print("Current model does not support gradient checkpointing.")
else:
# use_reentrant=False might increase VRAM usage (have not been empirically verified yet)
# According to: https://github.com/huggingface/transformers/issues/28339
model.gradient_checkpointing_enable = MethodType(_gradient_checkpointing_enable, model)
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": True})
setattr(model.config, "use_cache", False) # turn off when gradient checkpointing is enabled
print("Gradient checkpointing enabled.")
input_ids = tokenizer.encode(train_data["input"])
output_ids = tokenizer.encode(train_data["output"])
model_inputs_output = input_ids + output_ids + [tokenizer.eos_token_id]
model_inputs_output = torch.tensor(model_inputs_output, dtype=torch.int64)
labels = copy.deepcopy(model_inputs_output)
labels[: len(input_ids)] = -1 # mask the prompt tokens so only the response contributes to the loss
example_mask = model_inputs_output.ge(0)
label_mask = labels.ge(0)
model_inputs_output[~example_mask] = 0
labels[~label_mask] = -100
train_dataset = {
"input_ids": model_inputs_output.unsqueeze(0).to("cuda"),
"attention_mask": example_mask.unsqueeze(0).to("cuda"),
"labels": labels.unsqueeze(0).to("cuda")
}
lora_config = LoraConfig(
r=8,
lora_alpha=16,
target_modules=["q_proj", "gate_proj", "v_proj", "o_proj", "up_proj", "k_proj", "down_proj"], # ไธllama-factoryไธ่ด
lora_dropout=0.05,
task_type= TaskType.CAUSAL_LM
)
model = get_peft_model(model, lora_config)
# model.gradient_checkpointing_enable()
model.train()
model.print_trainable_parameters()
model.to("cuda")
output = model(**train_dataset)
loss = output["loss"]
print(f"loss: {loss.requires_grad}")
if __name__ == "__main__":
main()
```
output is
```
loss: True
```
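For completeness, a minimal sketch of the workarounds that are commonly suggested for this PEFT + gradient-checkpointing combination (the model path is a placeholder, and whether `gradient_checkpointing_enable()` should handle this on its own is exactly what this issue is asking):
```python
import torch
from transformers import AutoModelForCausalLM
from peft import get_peft_model, LoraConfig, TaskType

model = AutoModelForCausalLM.from_pretrained("path/to/model", torch_dtype=torch.float16)

# Option 1: make the embedding outputs require grad, so the checkpointed blocks
# see at least one input with requires_grad=True under the reentrant checkpoint.
model.enable_input_require_grads()
model.gradient_checkpointing_enable()

# Option 2 (alternative): use non-reentrant checkpointing instead.
# model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})

model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type=TaskType.CAUSAL_LM))
model.train()
```
Option 1 roughly mirrors what the LLaMA-Factory-style hook above does by forcing the block inputs to require grad; Option 2 sidesteps the reentrant checkpoint's requirement that at least one input requires grad.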
### Expected behavior
I am not entirely sure if this is a bug in the implementation of `model.gradient_checkpointing_enable()`. If it is not, please feel free to close the issue directly and let me know. Thank you for taking the time to look into this issue :) | bug | low | Critical |
2,803,260,422 | transformers | ImportError: cannot import name 'NoneType' from 'types' on main in Python 3.9 | ### System Info
Linux
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
docker run python:3.9 bash -c 'pip install git+https://github.com/huggingface/transformers && python -c "import transformers"'
```
Output:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/transformers/__init__.py", line 26, in <module>
from . import dependency_versions_check
File "/usr/local/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 16, in <module>
from .utils.versions import require_version, require_version_core
File "/usr/local/lib/python3.9/site-packages/transformers/utils/__init__.py", line 27, in <module>
from .chat_template_utils import DocstringParsingException, TypeHintParsingException, get_json_schema
File "/usr/local/lib/python3.9/site-packages/transformers/utils/chat_template_utils.py", line 22, in <module>
from types import NoneType
ImportError: cannot import name 'NoneType' from 'types' (/usr/local/lib/python3.9/types.py)
```
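For context, `types.NoneType` only exists on Python 3.10 and later; a minimal sketch of a 3.9-compatible spelling (not necessarily how the maintainers will want to fix it):
```python
import sys

if sys.version_info >= (3, 10):
    from types import NoneType
else:
    # types.NoneType was only (re)introduced in Python 3.10; on 3.9 the same
    # class object can be obtained directly from the None singleton.
    NoneType = type(None)

print(NoneType)  # <class 'NoneType'>
```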
### Expected behavior
No import error | bug | low | Critical |
2,803,267,929 | ollama | Log tracking | I have an idea: could a traceId be added when printing the log? This would make it easier to trace the details of subsequent requests. | feature request | low | Minor |
2,803,282,650 | PowerToys | [Quick Accent] Change the font of the interface | ### Description of the new feature / enhancement
Add a selection box to the settings menu that changes the font used to display the Quick Accent interface.
(My English is not very good, please forgive me)
### Scenario when this would be used?
1. Some fonts cannot display all the characters, such as the Simplified Chinese font "Microsoft Yahei". Unfortunately, some operating systems cannot change the default font (for example, the China-customized edition of Windows).
2. Some people have the need to change the font, in order to beautify the computer.
3. Some fonts can distinguish similar-looking characters; I think they are more suitable.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,803,290,989 | ollama | don't show the thinking process | When I use DeepSeek-R1, the thinking process shown does not make sense to me; I only want to see the final result.
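In the meantime, a minimal client-side workaround sketch (assuming the model wraps its reasoning in `<think>...</think>` tags and Ollama is serving on the default local port; the screenshot below shows the current behavior):
```python
import re
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "stream": False,
    },
)
content = resp.json()["message"]["content"]

# Drop the <think>...</think> block and keep only the final answer.
answer = re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()
print(answer)
```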
 | feature request | low | Major |
2,803,294,113 | pytorch | DISABLED test_partitioning_unremat_bw (__main__.MinCutPartitioningTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_partitioning_unremat_bw&suite=MinCutPartitioningTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35952027696).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 5 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_partitioning_unremat_bw`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_perf.py", line 718, in test_partitioning_unremat_bw
self.assertExpectedInline(count_numel_train(f, *inp), """1300""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3066, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '1300' != '1720'
- 1300
+ 1720
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_perf.py MinCutPartitioningTests.test_partitioning_unremat_bw
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,297,781 | pytorch | DISABLED test_cat (__main__.NumBytesMetricTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cat&suite=NumBytesMetricTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35951074517).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cat`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_perf.py", line 207, in test_cat
self.assertExpectedInline(count_numel(f, *inp), """400""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3066, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '400' != '1264'
- 400
+ 1264
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_perf.py NumBytesMetricTests.test_cat
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,297,782 | pytorch | DISABLED test_partitioning_with_view (__main__.MinCutPartitioningTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_partitioning_with_view&suite=MinCutPartitioningTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35951349745).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 5 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_partitioning_with_view`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_perf.py", line 776, in test_partitioning_with_view
self.assertExpectedInline(count_numel_train(f, *inp), """900""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3066, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '900' != '1520'
- 900
+ 1520
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_perf.py MinCutPartitioningTests.test_partitioning_with_view
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,297,814 | pytorch | DISABLED test_graph_break_inside_ctx_with_side_effects (__main__.ContextlibContextManagerTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_graph_break_inside_ctx_with_side_effects&suite=ContextlibContextManagerTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35960839362).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_graph_break_inside_ctx_with_side_effects`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_ctx_manager.py", line 2051, in test_graph_break_inside_ctx_with_side_effects
self.assertEqual(len(eager.graphs), 0)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 0 but got 1.
Absolute difference: 1
Relative difference: inf
To execute this test, run the following from the base repo dir:
python test/dynamo/test_ctx_manager.py ContextlibContextManagerTests.test_graph_break_inside_ctx_with_side_effects
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_ctx_manager.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | triaged,module: flaky-tests,skipped,oncall: pt2,module: dynamo | low | Critical |
2,803,310,171 | vscode | Explorer not showing files |
Type: <b>Bug</b>
The Explorer is not showing the project files.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Linux x64 6.8.0-51-generic snap
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 x 3192)|
|GPU Status|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: enabled_on<br>opengl: disabled_off<br>rasterization: disabled_software<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: disabled_software<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: unavailable_software<br>webgl2: unavailable_software<br>webgpu: disabled_off<br>webnn: unavailable_software|
|Load (avg)|2, 2, 2|
|Memory (System)|11.40GB (6.16GB free)|
|Process Argv|--no-sandbox . --crash-reporter-id f85a8924-9ea9-4f52-bbad-e71cf3fdc9d8|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|ubuntu-xorg|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu-xorg|
|XDG_SESSION_TYPE|x11|
</details><details><summary>Extensions (35)</summary>
Extension|Author (truncated)|Version
---|---|---
dscodegpt|Dan|3.7.16
dart-code|Dar|3.102.0
flutter|Dar|3.102.0
vscode-eslint|dba|3.0.10
es7-react-js-snippets|dsz|4.4.3
react-native-react-redux|EQu|2.0.6
prettier-vscode|esb|11.0.0
classroom|git|0.0.4
codespaces|Git|1.17.3
copilot|Git|1.257.0
copilot-chat|Git|0.23.2
vscode-github-actions|git|0.27.0
vscode-pull-request-github|Git|0.102.0
vsc-python-indent|Kev|1.19.0
rainbow-csv|mec|3.14.0
vscode-html-format|moh|0.1.6
vscode-docker|ms-|1.29.4
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.394.0
live-server|ms-|0.4.15
makefile-tools|ms-|0.11.13
vscode-speech|ms-|0.12.1
vsliveshare|ms-|1.0.5948
vscode-thunder-client|ran|2.33.2
LiveServer|rit|5.7.9
vs-code-prettier-eslint|rve|6.0.0
vscode-wakatime|Wak|25.0.0
markdown-all-in-one|yzh|3.6.2
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,803,311,311 | rust | Detect missing `else` in block with `return` | ```
error: expected identifier, found keyword `return`
--> /home/gh-estebank/rust/compiler/rustc_lint/src/default_could_be_derived.rs:249:13
|
248 | let hir::VariantData::Struct { fields, recovered: Recovered::No } = data {
| ---- while parsing this struct
249 | return;
| ^^^^^^ expected identifier, found keyword
|
help: escape `return` to use it as an identifier
|
249 | r#return;
| ++
```
The parser should detect that an `else` is missing here and suggest inserting it: `= data else {`.
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"11happy"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-diagnostics,A-parser,P-low,T-compiler,F-let_else | low | Critical |
2,803,321,089 | ollama | feat: OpenAI reasoning_content compatibility | The current thinking model outputs XML tags to distinguish thinking from answering; we need a new feature so that the `reasoning_content` field of the OpenAI SDK works as expected. | feature request | low | Minor |
2,803,331,600 | godot | [Bug] Opening a project with rendering method Mobile crashes the entire macOS on macOS Sequoia | ### Tested versions
- Godot 4.2.2
### System information
Godot 4.2.2, MacBook pro 2020, MacOS Sequoia 15.1.1, Intel Iris plus graphics
### Issue description
1. Occurred with a project created not on Mac
- I created a project on Windows first, with rendering method Mobile
- Then I pulled the project from GitHub and opened it on the MacBook
- Godot got stuck for a while and I couldn't even click or use the mouse
- Then the macOS system crashed and automatically restarted
2. Also occurred with a project created on Mac
- Created a project on Mac, with rendering method Mobile
- The editor got stuck for a while and then crashed the whole OS
### Steps to reproduce
1. Occurred with a project created not on Mac
- I created a project on Windows first, with rendering method Mobile
- Then I pulled the project from GitHub and opened it on the MacBook
- Godot got stuck for a while and I couldn't even click or use the mouse
- Then the macOS system crashed and automatically restarted
2. Also occurred with a project created on Mac
- Created a project on Mac, with rendering method Mobile
- The editor got stuck for a while and then crashed the whole OS
### Minimal reproduction project (MRP)
[demo.zip](https://github.com/user-attachments/files/18499730/demo.zip)
### Workaround
If you face this issue like me, you can still work around it by changing the rendering method to Compatibility.
Set it manually by editing project.godot:
```
renderer/rendering_method="gl_compatibility"
renderer/rendering_method.mobile="gl_compatibility"
``` | bug,platform:macos,needs testing,crash | low | Critical |
2,803,368,900 | ollama | ollama pull hangs at ~90% completion | ### What is the issue?
<img width="1266" alt="Image" src="https://github.com/user-attachments/assets/821bda2a-f119-4c46-b72d-c00305072cc4" />
Seems to work fine after a couple retries... Very strange
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7 | bug | low | Minor |
2,803,374,516 | ui | [bug]: | ### Describe the bug
Cannot get `toaster.tsx` when using the "Manual" method on the shadcn/ui site.
https://ui.shadcn.com/docs/components/toast
It appears `toast.tsx` and `toaster.tsx` are the same file, and the actual `toaster.tsx` source is not present. I'm forced to use the manual method because, after a significant time investment, I can't figure out how to get the shadcn CLI to work inside an Electron app, so I have no way to download it using the CLI.

### Affected component/components
toaster.tsx
### How to reproduce
1. Visit https://ui.shadcn.com/docs/components/toast
2. Click `Manual`
3. Try to get the source for `toaster.tsx`
It gives you the source for `toast.tsx` instead.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
n/a
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,803,382,485 | next.js | Sitemap rendering broken on NextJS 15 with edge runtime | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/cranky-https-7mqs67
### To Reproduce
Error:
```
15:44:40.872 | โฒ โ Using edge runtime on a page currently disables static generation for that page
15:44:41.910 | โฒ [Error: Failed to collect configuration for /sitemap/sitemap/[__metadata_id__]] {
15:44:41.910 | โฒ [cause]: [Error: Edge runtime is not supported with `generateStaticParams`.]
15:44:41.911 | โฒ }
15:44:41.914 | โฒ > Build error occurred
15:44:41.919 | โฒ [Error: Failed to collect page data for /sitemap/sitemap/[__metadata_id__]] {
15:44:41.920 | โฒ type: 'Error'
15:44:41.920 | โฒ }
15:44:42.040 | โฒ Error: Command "npm run vercel-build" exited with 1
```
The sitemap MUST be edge-based - I don't want to statically-generate all my existing content.
### Current vs. Expected behavior
I expect to be able to run all my sitemaps as dynamic edge functions, as I could in Next.js 14. At the moment I can't even get past the build stage - this is the only thing remaining to bring me from Next.js 14 to 15.
### Provide environment information
Cloudflare Next on Pages
### Which area(s) are affected? (Select all that apply)
Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
Other (Deployed), next build (local)
### Additional context
This build error also occurs locally. For now I've tried to use static generation, but Cloudflare Pages also fails on that step because the database URL isn't provided from the environment to the build step. Weird. Still, this is a regression IMHO. | Output (export/standalone) | low | Critical |
2,803,395,105 | pytorch | [CUDA] Illegal Memory Access with `AdaptiveAvgPool2d` | ### ๐ Describe the bug
```python
import torch
m1 = torch.randn(40, 40, 40).cuda()
model = torch.nn.AdaptiveAvgPool2d(output_size=[1, 67108607]).cuda()
model(m1)
```
```bash
compute-sanitizer python3 poc.py
```
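A quick size check on the requested output (plain arithmetic, not a confirmed root cause): the output tensor has more elements than fit in a signed 32-bit index, so 32-bit index arithmetic in the kernel could plausibly overflow.
```python
channels = 40                # input is (40, 40, 40)
out_h, out_w = 1, 67108607   # requested output_size
numel = channels * out_h * out_w
print(numel)                 # 2684344280
print(numel > 2**31 - 1)     # True -> exceeds INT32_MAX
print(numel * 4 / 2**30)     # ~10.0 GiB of float32 output, matching the ~10 GB allocation in the sanitizer report below
```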
Sanitizer Backtrace:
```
========= Invalid __global__ write of size 4 bytes
========= at void at::native::<unnamed>::adaptive_average_pool<float>(const T1 *, T1 *, int, int, int, int, long, long, long)+0x1dc0
========= by thread (0,0,0) in block (35,0,0)
========= Address 0x738041ff7374 is out of bounds
========= and is 7,784,664,204 bytes before the nearest allocation at 0x738212000000 of size 10,737,418,240 bytes
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2dfbef]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x15803]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:cudaLaunchKernel [0x75230]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:at::native::(anonymous namespace)::adaptive_avg_pool2d_out_cuda_template(at::Tensor&, at::Tensor const&, c10::ArrayRef<long>) [0x15fc38d]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::native::adaptive_avg_pool2d_cuda(at::Tensor const&, c10::ArrayRef<long>) [0x15fd909]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::(anonymous namespace)::(anonymous namespace)::wrapper_CUDA___adaptive_avg_pool2d(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x3569d28]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>), &at::(anonymous namespace)::(anonymous namespace)::wrapper_CUDA___adaptive_avg_pool2d>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<c10::SymInt> > >, at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x3569df2]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::_ops::_adaptive_avg_pool2d::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x28900be]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::VariableType::(anonymous namespace)::_adaptive_avg_pool2d(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x4aed88d]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>), &torch::autograd::VariableType::(anonymous namespace)::_adaptive_avg_pool2d>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt> > >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x4aeddd5]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::_adaptive_avg_pool2d::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x28c531e]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::native::adaptive_avg_pool2d_symint(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x18b58a9]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>), &at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeImplicitAutograd__adaptive_avg_pool2d>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<c10::SymInt> > >, at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x2d3c7e2]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::adaptive_avg_pool2d::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x27a4b7e]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::THPVariable_adaptive_avg_pool2d(_object*, _object*, _object*) [0x776589]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_python.so
========= Host Frame:cfunction_call in /usr/local/src/conda/python-3.12.7/Objects/methodobject.c:537 [0x149d53]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyObject_MakeTpCall in /usr/local/src/conda/python-3.12.7/Objects/call.c:240 [0x11af9a]
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-1007-oem-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) SILVER 4510
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 37%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 48 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.3 py312hc5e2394_0
[conda] numpy-base 2.1.3 py312h0da6c21_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy | module: nn,module: cuda,triaged,module: edge cases,topic: fuzzer | low | Critical |
2,803,421,635 | pytorch | [CUDA] Illegal Memory Access with `ReplicationPad2D` | ### ๐ Describe the bug
This was found by a fuzzer.
```python
import torch
m1 = torch.randn(1, 4484, 2).cuda()
model = torch.nn.ReplicationPad2d((0, 0, 0, 1826029949)).cuda()
model(m1)
```
```bash
compute-sanitizer python3 poc.py
```
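For scale (plain arithmetic, not a confirmed root cause): with this padding the output tensor again exceeds a signed 32-bit element count.
```python
pad_bottom = 1826029949      # from the repro above
c, h, w = 1, 4484, 2         # input shape
out_numel = c * (h + pad_bottom) * w
print(out_numel)             # 3652068866
print(out_numel > 2**31 - 1) # True -> exceeds INT32_MAX
print(out_numel * 4 / 2**30) # ~13.6 GiB of float32 output
```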
compute-sanitizer log
```
========= COMPUTE-SANITIZER
========= Invalid __global__ write of size 4 bytes
========= at void at::native::<unnamed>::replication_pad_forward_kernel2d<float>(at::GenericPackedTensorAccessor<const T1, (unsigned long)4, at::DefaultPtrTraits, long>, at::GenericPackedTensorAccessor<T1, (unsigned long)4, at::DefaultPtrTraits, long>, int, int, int, int)+0x7f0
========= by thread (224,0,0) in block (8388906,0,0)
========= Address 0x79e0d604ab80 is out of bounds
========= and is 8,589,628,544 bytes before the nearest allocation at 0x79e2d6000000 of size 14,608,760,832 bytes
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame: [0x2dfbef]
========= in /lib/x86_64-linux-gnu/libcuda.so.1
========= Host Frame: [0x15803]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:cudaLaunchKernel [0x75230]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/../../../../libcudart.so.12
========= Host Frame:at::native::structured_replication_pad2d_out_cuda::impl(at::Tensor const&, c10::ArrayRef<long>, at::Tensor const&) [0x279746f]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::(anonymous namespace)::wrapper_CUDA_replication_pad2d(at::Tensor const&, c10::ArrayRef<long>) [0x36007dc]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::ArrayRef<long>), &at::(anonymous namespace)::wrapper_CUDA_replication_pad2d>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<long> > >, at::Tensor (at::Tensor const&, c10::ArrayRef<long>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>) [0x3600882]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cuda.so
========= Host Frame:at::_ops::replication_pad2d::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x240eb8c]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::VariableType::(anonymous namespace)::replication_pad2d(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x48445f8]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>), &torch::autograd::VariableType::(anonymous namespace)::replication_pad2d>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt> > >, at::Tensor (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x4844c25]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::replication_pad2d::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>) [0x246806e]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::native::_pad_enum_symint(at::Tensor const&, c10::ArrayRef<c10::SymInt>, long, std::optional<double>) [0x1ba579c]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::native::pad_symint(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>) [0x1ba5df7]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>), &at::(anonymous namespace)::(anonymous namespace)::wrapper_CompositeImplicitAutograd__pad>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double> > >, at::Tensor (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>) [0x2d3c898]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:at::_ops::pad::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::basic_string_view<char>, std::optional<double>) [0x24909b5]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so
========= Host Frame:torch::autograd::THPVariable_pad(_object*, _object*, _object*) [0x7732e3]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/lib/python3.12/site-packages/torch/lib/libtorch_python.so
========= Host Frame:cfunction_call in /usr/local/src/conda/python-3.12.7/Objects/methodobject.c:537 [0x149d53]
========= in /home/jwnhy/miniconda3/envs/gpu-torch/bin/python3
========= Host Frame:_PyObject_MakeTpCall in /usr/local/src/conda/python-3.12.7/Objects/call.c:240 [0x11af9a]
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.11.0-1007-oem-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 560.35.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: INTEL(R) XEON(R) SILVER 4510
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 37%
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd sgx_lc fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 48 MiB (24 instances)
L3 cache: 60 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.4.127 0 nvidia
[conda] cuda-cupti 12.4.127 0 nvidia
[conda] cuda-libraries 12.4.1 0 nvidia
[conda] cuda-nvrtc 12.4.127 0 nvidia
[conda] cuda-nvtx 12.4.127 0 nvidia
[conda] cuda-opencl 12.6.77 0 nvidia
[conda] cuda-runtime 12.4.1 0 nvidia
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.4.5.8 0 nvidia
[conda] libcufft 11.2.1.3 0 nvidia
[conda] libcurand 10.3.7.77 0 nvidia
[conda] libcusolver 11.6.1.9 0 nvidia
[conda] libcusparse 12.3.1.170 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.4.127 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.3 py312hc5e2394_0
[conda] numpy-base 2.1.3 py312h0da6c21_0
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] pytorch 2.5.1 py3.12_cuda12.4_cudnn9.1.0_0 pytorch
[conda] pytorch-cuda 12.4 hc786d27_7 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.5.1 py312_cu124 pytorch
[conda] torchtriton 3.1.0 py312 pytorch
[conda] torchvision 0.20.1 py312_cu124 pytorch
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck @msaroufim @eqy | module: nn,module: cuda,triaged,module: edge cases,topic: fuzzer | low | Critical |
2,803,432,399 | rust | ICE: `item_name: no name for DefPath` | <!--
[31mICE[0m: Rustc ./a.rs '' 'error: internal compiler error: compiler/rustc_middle/src/ty/mod.rs:1584:13: item_name: no name for DefPath { data: [DisambiguatedDefPathData { data: Impl, disambiguator: 0 }], krate: crate0 }', 'error: internal compiler error: compiler/rustc_middle/src/ty/mod.rs:1584:13: item_name: no name for DefPath { data: [DisambiguatedDefPathData { data: Impl, disambiguator: 0 }], krate: crate0 }'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
use std::rc::Rc;
struct Foo<T: ?Sized>(T);
impl Foo<[u8]> {
fn len(self: &&MyNonNull<A>) -> usize {}
}
fn main() {
let rc = Rc::new() as Rc<Foo<[u8]>>;
assert_eq!(3, rc.len());
}
````
original:
````rust
//@ run-pass
use std::rc::Rc;
struct Foo<T: ?Sized>(T);
impl Foo<[u8]> {
fn len(self: &&MyNonNull<A>) -> usize {
self.0.len()
}
}
fn main() {
let rc = Rc::new(Foo([1u8,2,3])) as Rc<Foo<[u8]>>;
assert_eq!(3, rc.len());
}
````
Version information
````
rustc 1.86.0-nightly (c234b839d 2025-01-22)
binary: rustc
commit-hash: c234b839d1681a7aa3abb1bda6f6f350714eacfe
commit-date: 2025-01-22
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.7
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/c234b839d1681a7aa3abb1bda6f6f350714eacfe/compiler/rustc_middle/src/ty/mod.rs#L1578-L1590
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0412]: cannot find type `MyNonNull` in this scope
--> /tmp/icemaker_global_tempdir.eHo4LBDDvdmk/rustc_testrunner_tmpdir_reporting.6wJApM6bryHu/mvce.rs:6:20
|
6 | fn len(self: &&MyNonNull<A>) -> usize {}
| ^^^^^^^^^ not found in this scope
error[E0412]: cannot find type `A` in this scope
--> /tmp/icemaker_global_tempdir.eHo4LBDDvdmk/rustc_testrunner_tmpdir_reporting.6wJApM6bryHu/mvce.rs:6:30
|
6 | fn len(self: &&MyNonNull<A>) -> usize {}
| ^ not found in this scope
|
help: you might be missing a type parameter
|
5 | impl<A> Foo<[u8]> {
| +++
error[E0308]: mismatched types
--> /tmp/icemaker_global_tempdir.eHo4LBDDvdmk/rustc_testrunner_tmpdir_reporting.6wJApM6bryHu/mvce.rs:6:37
|
6 | fn len(self: &&MyNonNull<A>) -> usize {}
| --- ^^^^^ expected `usize`, found `()`
| |
| implicitly returns `()` as its body has no tail or `return` expression
error[E0061]: this function takes 1 argument but 0 arguments were supplied
--> /tmp/icemaker_global_tempdir.eHo4LBDDvdmk/rustc_testrunner_tmpdir_reporting.6wJApM6bryHu/mvce.rs:10:14
|
10 | let rc = Rc::new() as Rc<Foo<[u8]>>;
| ^^^^^^^-- argument #1 is missing
|
note: associated function defined here
--> /home/matthias/.rustup/toolchains/master/lib/rustlib/src/rust/library/alloc/src/rc.rs:410:12
|
410 | pub fn new(value: T) -> Rc<T> {
| ^^^
help: provide the argument
|
10 | let rc = Rc::new(/* value */) as Rc<Foo<[u8]>>;
| ~~~~~~~~~~~~~
error: internal compiler error: compiler/rustc_middle/src/ty/mod.rs:1584:13: item_name: no name for DefPath { data: [DisambiguatedDefPathData { data: Impl, disambiguator: 0 }], krate: crate0 }
thread 'rustc' panicked at compiler/rustc_middle/src/ty/mod.rs:1584:13:
Box<dyn Any>
stack backtrace:
0: 0x7634adaf7cba - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h71e03dee79b82d06
1: 0x7634ae212de6 - core::fmt::write::h6960a366d70bc3fd
2: 0x7634af158e11 - std::io::Write::write_fmt::hc000aee1e5248cb2
3: 0x7634adaf7b12 - std::sys::backtrace::BacktraceLock::print::h8f4aa0f83ab25bbe
4: 0x7634adaf9f92 - std::panicking::default_hook::{{closure}}::h6b011d6c9d596681
5: 0x7634adaf9e1a - std::panicking::default_hook::hf20faf194d2c76c9
6: 0x7634acc562cb - std[d07d606e05172b8f]::panicking::update_hook::<alloc[ce64880b90703bb2]::boxed::Box<rustc_driver_impl[fadb44195348e5f3]::install_ice_hook::{closure#1}>>::{closure#0}
7: 0x7634adafaad3 - std::panicking::rust_panic_with_hook::hb107d567106f7f52
8: 0x7634acc910f1 - std[d07d606e05172b8f]::panicking::begin_panic::<rustc_errors[3f2036b9e1b7a052]::ExplicitBug>::{closure#0}
9: 0x7634acc85ff6 - std[d07d606e05172b8f]::sys::backtrace::__rust_end_short_backtrace::<std[d07d606e05172b8f]::panicking::begin_panic<rustc_errors[3f2036b9e1b7a052]::ExplicitBug>::{closure#0}, !>
10: 0x7634acc85fdd - std[d07d606e05172b8f]::panicking::begin_panic::<rustc_errors[3f2036b9e1b7a052]::ExplicitBug>
11: 0x7634acc9b021 - <rustc_errors[3f2036b9e1b7a052]::diagnostic::BugAbort as rustc_errors[3f2036b9e1b7a052]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7634ad27d343 - rustc_middle[4ceab1e61766c314]::util::bug::opt_span_bug_fmt::<rustc_span[3b1278fa1989d084]::span_encoding::Span>::{closure#0}
13: 0x7634ad262b3a - rustc_middle[4ceab1e61766c314]::ty::context::tls::with_opt::<rustc_middle[4ceab1e61766c314]::util::bug::opt_span_bug_fmt<rustc_span[3b1278fa1989d084]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x7634ad2629cb - rustc_middle[4ceab1e61766c314]::ty::context::tls::with_context_opt::<rustc_middle[4ceab1e61766c314]::ty::context::tls::with_opt<rustc_middle[4ceab1e61766c314]::util::bug::opt_span_bug_fmt<rustc_span[3b1278fa1989d084]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x7634ab3b3ad0 - rustc_middle[4ceab1e61766c314]::util::bug::bug_fmt
16: 0x7634af422043 - <rustc_middle[4ceab1e61766c314]::ty::context::TyCtxt>::item_name
17: 0x7634acfbccd4 - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::detect_and_explain_multiple_crate_versions
18: 0x7634acfb8e05 - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::suggest_traits_to_import
19: 0x7634acfa2b23 - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::report_no_match_method_error
20: 0x7634acfd58e7 - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::report_method_error
21: 0x7634aef2bbaa - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
22: 0x7634aef1d59b - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
23: 0x7634aef21f89 - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
24: 0x7634aef10119 - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::check_expr_match::{closure#0}
25: 0x7634aef1e19d - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
26: 0x7634aef153cf - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::check_expr_block
27: 0x7634aef1ccb3 - <rustc_hir_typeck[cc23a872e1f45a74]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
28: 0x7634ae8a3e80 - rustc_hir_typeck[cc23a872e1f45a74]::check::check_fn
29: 0x7634ae8adb7d - rustc_hir_typeck[cc23a872e1f45a74]::typeck_with_inspect::{closure#0}
30: 0x7634ae8abb8c - rustc_query_impl[23d350cebea3d85f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[23d350cebea3d85f]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[4ceab1e61766c314]::query::erase::Erased<[u8; 8usize]>>
31: 0x7634ae5dcf0e - rustc_query_system[29f962824b226ddd]::query::plumbing::try_execute_query::<rustc_query_impl[23d350cebea3d85f]::DynamicConfig<rustc_data_structures[9d5d27112098fe07]::vec_cache::VecCache<rustc_span[3b1278fa1989d084]::def_id::LocalDefId, rustc_middle[4ceab1e61766c314]::query::erase::Erased<[u8; 8usize]>, rustc_query_system[29f962824b226ddd]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[23d350cebea3d85f]::plumbing::QueryCtxt, false>
32: 0x7634ae5db411 - rustc_query_impl[23d350cebea3d85f]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
33: 0x7634ae5db0cb - <rustc_middle[4ceab1e61766c314]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[f4ba3cad1ffd0a10]::check_crate::{closure#4}>::{closure#0}
34: 0x7634ae5d917f - rustc_hir_analysis[f4ba3cad1ffd0a10]::check_crate
35: 0x7634ae5d5662 - rustc_interface[8c9fe9b679497e1b]::passes::run_required_analyses
36: 0x7634af15461e - rustc_interface[8c9fe9b679497e1b]::passes::analysis
37: 0x7634af1545ef - rustc_query_impl[23d350cebea3d85f]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[23d350cebea3d85f]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[4ceab1e61766c314]::query::erase::Erased<[u8; 0usize]>>
38: 0x7634af1d2595 - rustc_query_system[29f962824b226ddd]::query::plumbing::try_execute_query::<rustc_query_impl[23d350cebea3d85f]::DynamicConfig<rustc_query_system[29f962824b226ddd]::query::caches::SingleCache<rustc_middle[4ceab1e61766c314]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[23d350cebea3d85f]::plumbing::QueryCtxt, false>
39: 0x7634af1d22ce - rustc_query_impl[23d350cebea3d85f]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
40: 0x7634af2250a9 - rustc_interface[8c9fe9b679497e1b]::passes::create_and_enter_global_ctxt::<core[afcf10abfa1dac94]::option::Option<rustc_interface[8c9fe9b679497e1b]::queries::Linker>, rustc_driver_impl[fadb44195348e5f3]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
41: 0x7634af218310 - rustc_interface[8c9fe9b679497e1b]::interface::run_compiler::<(), rustc_driver_impl[fadb44195348e5f3]::run_compiler::{closure#0}>::{closure#1}
42: 0x7634af05dc36 - std[d07d606e05172b8f]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8c9fe9b679497e1b]::util::run_in_thread_with_globals<rustc_interface[8c9fe9b679497e1b]::util::run_in_thread_pool_with_globals<rustc_interface[8c9fe9b679497e1b]::interface::run_compiler<(), rustc_driver_impl[fadb44195348e5f3]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
43: 0x7634af05d919 - <<std[d07d606e05172b8f]::thread::Builder>::spawn_unchecked_<rustc_interface[8c9fe9b679497e1b]::util::run_in_thread_with_globals<rustc_interface[8c9fe9b679497e1b]::util::run_in_thread_pool_with_globals<rustc_interface[8c9fe9b679497e1b]::interface::run_compiler<(), rustc_driver_impl[fadb44195348e5f3]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[afcf10abfa1dac94]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
44: 0x7634af05d0ab - std::sys::pal::unix::thread::Thread::new::thread_start::h3c2634d9a7e37e9a
45: 0x7634a94a339d - <unknown>
46: 0x7634a952849c - <unknown>
47: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.86.0-nightly (c234b839d 2025-01-22) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [typeck] type-checking `main`
#1 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 5 previous errors
Some errors have detailed explanations: E0061, E0308, E0412.
For more information about an error, try `rustc --explain E0061`.
```
</p>
</details>
| A-diagnostics,I-ICE,T-compiler,C-bug,A-suggestion-diagnostics,S-has-mcve,S-has-bisection | low | Critical |
2,803,507,677 | rust | Result type inference breaks in 2024 edition, while working correctly in 2021 edition | I tried this code:
```rust
fn foo() -> Result<(), u8> {
Ok(())
}
fn test() -> Result<(), u8> {
let f: fn() -> _ = foo as _;
f()?;
Ok(())
}
```
I expected to see this happen:
Type inference on `f` should work correctly.
Instead, this happened:
When switching to 2024 edition something odd is happening and inference results in confusing compilation error:
```
error[E0271]: type mismatch resolving `<Result<(), u8> as Try>::Output == !`
--> src/lib.rs:7:5
|
7 | f()?;
| ^^^^ expected `!`, found `()`
|
= note: expected type `!`
found unit type `()`
```
The same code works fine on the 2021 edition, and I don't understand why the 2024 edition breaks it.
Replacing `-> _` with explicit `-> Result<(), u8>` helps, but it is too verbose and hurts readability.
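For reference, the explicit-type workaround looks like this (same snippet as above, just with the pointer type spelled out):
```rust
fn foo() -> Result<(), u8> {
    Ok(())
}

fn test() -> Result<(), u8> {
    // Spelling out the function pointer type instead of `-> _` avoids the
    // inference problem on the 2024 edition, at the cost of verbosity.
    let f: fn() -> Result<(), u8> = foo as _;
    f()?;
    Ok(())
}
```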
https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=7f1158511f3560d231db70cb839340c4
### Meta
`rustc --version --verbose`:
```
1.86.0-nightly (2025-01-21 ed43cbcb882e7c06870a)
``` | A-inference,C-bug,A-coercions,F-never_type,T-types | low | Critical |
2,803,533,138 | pytorch | DISABLED test_extern (__main__.NumBytesMetricTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_extern&suite=NumBytesMetricTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35959227973).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_extern`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_perf.py", line 152, in test_extern
self.assertExpectedInline(count_numel(f, *inp), """200""")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3066, in assertExpectedInline
return super().assertExpectedInline(actual if isinstance(actual, str) else str(actual), expect, skip + 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 413, in assertExpectedInline
assert_expected_inline(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 378, in assert_expected_inline
assert_eq(expect, actual, msg=help_text)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/expecttest/__init__.py", line 450, in assertMultiLineEqualMaybeCppStack
self.assertMultiLineEqual(expect, actual, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1226, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: '200' != '820'
- 200
? -
+ 820
? +
: To accept the new output, re-run test with envvar EXPECTTEST_ACCEPT=1 (we recommend staging/committing your changes before doing this)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_perf.py NumBytesMetricTests.test_extern
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_perf.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,556,624 | ollama | Ollama Truncates Beginning of User Messages and System Prompt When Exceeding Context Window | ### What is the issue?
**Description:**
I am encountering an issue with Ollama where, upon sending a message that exceeds the context window, the model truncates the beginning of the message, including the system prompt. This is particularly frustrating as it leads to the loss of crucial context that is essential for the model's behavior.
**Steps to Reproduce:**
1. Compose a message exceeding the context window limit.
2. Send the message to Ollama.
3. Observe that the beginning of the message, including the system prompt, is truncated.
**Expected Behavior:**
When a message exceeds the context window, Ollama should prioritize retaining the beginning of the message, especially the system prompt, to maintain context and ensure the model behaves as intended.
**Request for Assistance:**
Could you please advise if there are parameters that can be adjusted to modify this behavior? Allowing the retention of the message's beginning would significantly enhance the user experience. Thank you for your consideration and assistance.
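If it helps frame the question: the only related knob I am aware of is the context window size itself, e.g. raising `num_ctx` per request so the prompt fits. A sketch of an `/api/chat` request body follows (the model name and messages are placeholders):
```json
{
  "model": "llama3.1",
  "messages": [
    { "role": "system", "content": "system prompt that must not be dropped" },
    { "role": "user", "content": "very long user message..." }
  ],
  "options": { "num_ctx": 8192 }
}
```
That only postpones the problem, though; a way to keep the beginning of the conversation when truncation does happen would still be needed.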
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
v0.5.7 | bug | low | Major |
2,803,569,568 | godot | Exporting embedded pck potential fix? | ### Tested versions
4.3 Stable, but I have seen tons of people reporting the same issue all the way back to Godot 3.
### System information
Windows-10 Nitro 5 Laptop
### Issue description
Recently I was trying to export my game to Windows 10 with an embedded PCK, but got frustrated: the game would export perfectly fine, but the custom icon would not apply. After searching for a while I discovered my issue was with the Windows folder/icon cache in some way. If I exported to a brand new subfolder each export, it would work perfectly, but it would be "corrupted" again by exporting to that folder a second time with the same name. After exporting into a new subfolder I noticed the executable worked perfectly in every other directory, except if you dragged it back into the corrupted folder... I think this isolates the problem and shows Windows is guilty here. My proposal, if possible, is to have the exporter tell Windows to refresh the cache each time a project is exported with an embedded PCK. Or at least update the documentation to advise people to create a new subfolder each time. https://www.youtube.com/watch?v=J7T0ZVfz5ZU (video whose comments show this would help a lot of people) 
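For anyone else hitting this in the meantime, the commonly suggested manual icon-cache reset is roughly the following (an assumption on my part; I have only verified the new-subfolder trick above):
```bat
REM Untested sketch: restart Explorer and delete the user icon cache so Windows rebuilds it
taskkill /f /im explorer.exe
del /a "%localappdata%\IconCache.db"
start explorer.exe
```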
### Steps to reproduce
Try to export a game with embedded pck with no icon (default icon)
Export again with the same name into the same folder (the one with the corrupted cache entry), this time with a new icon, and notice how it still shows the default icon.
Create a new subfolder and export there: it shows the new icon, but dragging it back into the corrupted folder brings back the old one.
### Minimal reproduction project (MRP)
https://github.com/JonathanHaws/Wrath is the repo this is happening with, a relatively small game jam game I am working on. | feature proposal,platform:windows,topic:export | low | Minor |
2,803,581,100 | vscode | SCM Working Sets: Creating a new branch restores old working set if identically named branch existed in the past | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.96.3
- OS Version: Mac & Linux
Steps to Reproduce:
In the settings, set `scm.workingSets.enabled` to `true` and `scm.workingSets.default` to `"current"` (settings snippet after the steps). Then
```
git switch main # close all editors
git checkout -b abc # now open a file
git switch main # VScode closes editors if SCM>Working Sets is enabled
git branch -D abc
# <time passes>
git checkout -b abc # Unexpected behavior: Restores editor from step 2!
```
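For completeness, the same two settings as a `settings.json` fragment (values exactly as used in the steps above):
```jsonc
{
  "scm.workingSets.enabled": true,
  "scm.workingSets.default": "current"
}
```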
Expected behavior:
I would have expected to restore state only when *switching* branches, not when creating new branches. It seems VScode erroneously detects `git checkout -b` as a branch *switch* instead of a new branch creation if a branch of the same name existed in the past?
| feature-request,scm | low | Critical |
2,803,595,183 | ui | [bug]: npm error Unsupported URL Type "workspace:": workspace:* using npx shadcn@canary init | ### Describe the bug
When creating a new project and running `npx shadcn@canary init`, selecting the Next.js (Monorepo) option results in an `EUNSUPPORTEDPROTOCOL` error.
In my opinion, the reason is likely that shadcn defaults to npm when it cannot detect a package manager and then runs `npm install`. However, the monorepo template files are based on pnpm and use the `workspace:*` protocol, which npm does not support, hence the error.
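A workaround sketch I would try (this assumes the CLI infers the package manager from how it is invoked, which I have not verified):
```bash
# Run the init through pnpm instead of npx so the install step uses pnpm,
# which understands the workspace:* protocol used by the monorepo template.
pnpm dlx shadcn@canary init
```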
### Affected component/components
CLI init(monorepo)
### How to reproduce
1. Create an empty project
2. Execute `npx shadcn@canary init`
3. When asked "Would you like to start a new project?", select Next.js (Monorepo)
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
shadcn-mono-test git:(main) npx shadcn@canary init
โ The path /Users/jh/shadcn-mono-test does not contain a package.json file.
Would you like to start a new project? โบ Next.js (Monorepo)
โ What is your project named? โฆ my-app
โ Something went wrong creating a new Next.js monorepo.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Command failed with exit code 1: npm install
npm error code EUNSUPPORTEDPROTOCOL
npm error Unsupported URL Type "workspace:": workspace:*
npm error A complete log of this run can be found in: /Users/jh/.npm/_logs/2025-01-22T06_59_21_296Z-debug-0.log
```
### System Info
```bash
mac
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,803,609,111 | pytorch | [autograd] inconsistent jvp results | ### ๐ Describe the bug
I have two implementations of an isometry loss function that uses Jacobian-vector products (JVP), but they're producing different gradients:
```python
import torch
vae = VAEModel()
vae.to("cuda")
func = lambda z: vae.decode(z, return_dict=False)[0]
input = torch.randn(1, 8, 8, 8, device="cuda")
u = torch.randn_like(input, device=input.device)
def iso_loss1():
Ju = torch.autograd.functional.jvp(func, input, u, create_graph=True)[1]
TrR = torch.sum(Ju.float() ** 2, dim=tuple(range(1, Ju.dim()))).mean()
isometry_loss = TrR
return isometry_loss
def iso_loss2():
Ju = torch.func.jvp(func, (input,), (u,))[1]
TrR = torch.sum(Ju.float() ** 2, dim=tuple(range(1, Ju.dim()))).mean()
isometry_loss = TrR
return isometry_loss
def compare_grads():
vae.zero_grad()
loss1 = iso_loss1()
loss1.backward()
grads1 = {name: param.grad.clone() for name, param in vae.decoder.named_parameters() if param.grad is not None}
vae.zero_grad()
loss2 = iso_loss2()
loss2.backward()
grads2 = {name: param.grad.clone() for name, param in vae.decoder.named_parameters() if param.grad is not None}
print(f"Loss1: {loss1.item()}")
print(f"Loss2: {loss2.item()}")
max_diff = 0
for name in grads1:
print(f"{grads1[name]=} {grads2[name]=}")
diff = (grads1[name] - grads2[name]).abs().max().item()
print(f"Max gradient difference for {name}: {diff}")
max_diff = max(max_diff, diff)
break
print(f"\nMaximum gradient difference across all parameters: {max_diff}")
compare_grads()
```
The original implementation (iso_loss1) uses `torch.autograd.functional.jvp`, which is computationally expensive as it involves two vector-Jacobian product (VJP) calculations under the hood. To improve performance, I attempted to switch to `torch.func.jvp`, which uses a more efficient forward-mode implementation.
However, I've noticed two concerning issues:
1. The gradients produced by these two loss implementations differ.
2. Unlike `torch.autograd.functional.jvp`, `torch.func.jvp` doesn't provide a `create_graph=True` parameter.
This raises the question: Is `torch.func.jvp` not intended for use in network optimization scenarios? I'd appreciate any insights into this behavior and guidance on the proper approach to use.
### Versions
N/A
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan | module: autograd,triaged | low | Critical |
2,803,644,173 | rust | Compiler crashes with `SIGSEGV` on aarch64-unknown-linux-gnu | <!--
Thank you for finding an Internal Compiler Error! ๐ง If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```sh
git clone https://github.com/sgrif/pq-sys
cd pq-sys
git checkout 4e4f24c8f35abec47927ba20d144b7a8172f1f98
cargo check --no-default-features --features "bundled"
```
This happened once on a GitHub CI runner. I am nevertheless filing this report, as the output asked for it. It might be a hardware issue, as it went away with a rebuild.
CI LOG: https://github.com/sgrif/pq-sys/actions/runs/12903477183/job/35978791282?pr=73#step:14:26
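For completeness, the re-run I would try next, based on the hint in the compiler output below (not yet attempted, since the crash has not reproduced):
```sh
RUST_MIN_STACK=16777216 cargo check --no-default-features --features "bundled"
```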
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: aarch64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
### Error output
```
error: rustc interrupted by SIGSEGV, printing backtrace
/home/runner/.rustup/toolchains/stable-aarch64-unknown-linux-gnu/bin/../lib/librustc_driver-bedc4a794a543ce8.so(+0xbf78ec)[0xff0bdc5f78ec]
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xff0be59597e0]
note: we would appreciate a report at https://github.com/rust-lang/rust
help: you can increase rustc's stack size by setting RUST_MIN_STACK=16777216
error: could not compile `vcpkg` (lib)
``` | I-crash,I-ICE,T-compiler,C-bug,O-AArch64,S-needs-repro,C-external-bug | medium | Critical |
2,803,649,809 | godot | Example programs of the AssertLib crash on the Meta Quest 3 | ### Tested versions
Reproducible on current version on Meta Quest 3.
Godot 4.4dev3
Other versions (for testing purposes) are not available from the store.
### System information
Godot 4.4dev3 meta
### Issue description
We need some advice from you. We are planning an MR application for production and service.
After research, the Godot engine and the Meta Quest were selected for development.
BUT ... the example programs for Meta Quest usually crash. In rare cases, they work.
We don't see a pattern in this sporadic behavior. Restarting the meta sometimes helps, sometimes only a reinstall of Godot helps. Sometimes, deleting all of Godot's cache files is enough.
With luck, you can sometimes take a look at the editor. Various classes/objects are not found.
("Parse Error", "OpenXRFbSceneManager not found", "OpenXRFbSpatialEntity not found" etc.)
Due to time constraints, we did not check the entire issue board for similar errors. Please excuse this.
Is there any feedback or plans for this bug yet?
Unfortunately, this is a very significant issue, so our executives are already talking about a different engine with different hardware. However, we love the concepts of Godot, and especially the editor on the Meta Quest.
### Steps to reproduce
Load a sample program (e.g. "meta scene xr sample") from the asset library.
Do this within the Meta Quest.
Start the program; it may then crash.
### Minimal reproduction project (MRP)
meta scene xr sample | needs testing,topic:xr | low | Critical |
2,803,657,153 | deno | Deno panic on KV | From the log output of my app:
```
| ============================================================
| Deno has panicked. This is a bug in Deno. Please report this
| at https://github.com/denoland/deno/issues/new.
| If you can reliably reproduce this panic, include the
| reproduction steps and re-run with the RUST_BACKTRACE=1 env
| var set and include the backtrace in your report.
|
| Platform: linux x86_64
| Version: 2.1.4
| Args: ["deno", "run", "--allow-env", "--allow-net", "--unstable-kv", "server.ts"]
|
| thread 'main' panicked at /home/runner/.cargo/registry/src/index.crates.io-6f17d22bba15001f/denokv_sqlite-0.8.4/lib.rs:250:10:
| called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }
| note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
This happened overnight 4 times:
* Tue Jan 21 2025, 19:31:27 UTC
* Tue Jan 21 2025, 19:33:07 UTC
* Tue Jan 21 2025, 19:34:27 UTC
* Tue Jan 21 2025, 19:35:27 UTC
Looking back through other logs, it's also happened before, just not this many times in a short period of time.
Because there's no stack trace that shows where in my code it's crashing, I have no reference point from which to create a repro.
There was a little bit of load and memory pressure on the machine at the time, but it certainly wasn't near any limits. | ext/kv,panic | low | Critical |
2,803,674,921 | pytorch | FP8: E5M2: The FP8 E5M2 result is not `inf` when casting a FP32 value larger than max normal value of FP8 E5M2 (57344) | ### ๐ Describe the bug
See the case,
```
>>> import torch
>>> a = torch.tensor(60000, dtype=torch.float)
>>> b = a.to(torch.float8_e5m2)
>>> b
tensor(57344., dtype=torch.float8_e5m2)
```
In theory, the max normal value of fp8 e5m2 is 57344. Any values above 57344 will be represented with fp8 e5m2 `inf`.
https://github.com/pytorch/pytorch/blob/3cbc8c54fd37eb590e2a9206aecf3ab568b3e63c/c10/util/Float8_e5m2.h#L91
The code shows the fp8 value will be `inf` or `nan` only if the input fp32 value is larger than 65536, which is the first value not representable in fp8 e5m2. In other words, values between 57344 and 65536 will go to the else branch.
BTW, even though the boundary is 65536 in the PyTorch implementation, I found:
```
>>> a = torch.tensor(61440, dtype=torch.float)
>>> b = a.to(torch.float8_e5m2)
>>> b
tensor(inf, dtype=torch.float8_e5m2)
```
61440 in fp32 is converted to `inf` in fp8 e5m2.
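One possible explanation, which I have not verified against the implementation: 61440 is exactly halfway between the max normal value 57344 and the next step of the format, 65536, which encodes `inf`. Under round-to-nearest-even the tie breaks toward the candidate with the zero (even) mantissa, i.e. `inf`, which would make the 61440 result consistent. A value like 60000 sits below that midpoint, so under that rounding rule it would map down to 57344 rather than `inf`, which matches what I observe even though it is not what I expected.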
### Versions
Latest main branch.
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @yanbing-j @vkuzo @albanD @kadeng @penguinwu | module: docs,triaged,module: NaNs and Infs,module: float8 | low | Critical |
2,803,722,634 | material-ui | Ban sx atrribute in all pigment components and skip compilation | ### Summary
Migrating from v4 -> v5 was a rollercoaster of moving from `makeStyles` to `styled`, which introduced the `sx` property and made life easier.
Migrating from v5 -> v6, it is still unclear how to work around some cases of dynamic, context-dependent styling, and the change has created unwanted experiences for developers (mostly mid-level and junior developers) around how to work with the `sx` property. Most cases can be solved with a component + styled + classes combination instead of `sx`, which is better and makes it clearer how things are done.
### Examples
_No response_
### Motivation
It would be nice to disable this feature if possible and choose a proper styling method for consistency throughout the application. I do not think that `sx` is the future of styling, and it will probably shift again in the v7 or v8 Material UI release.
Note: disable it only for Pigment components, not Emotion, to keep migration easy.
**Search keywords**: sx, not the future | package: system,status: waiting for maintainer | low | Minor |
2,803,769,873 | transformers | tokenizer_class: `LlamaTokenizerFast` becomes `LlamaTokenizer` after load + immediate save | ### System Info
I do not understand why, but saving a loaded tokenizer changes the tokenizer class type. I am unsure whether this is a usage error on my part or expected output from HF.
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("DeepSeek/DeepSeek-R1-Distill-Qwen-7B")
tokenizer.save_pretrained('./tokenizer_copy')
```
`tokenizer_config.json`
* before `save` `"tokenizer_class": "LlamaTokenizerFast" `
* after `save` `"tokenizer_class": "LlamaTokenizer"`
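For completeness, a round-trip check sketch (the comments reflect my expectation from the config diff above, not separately verified output):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("DeepSeek/DeepSeek-R1-Distill-Qwen-7B")
print(type(tok).__name__)        # a fast tokenizer class, matching "LlamaTokenizerFast" in the config
tok.save_pretrained("./tokenizer_copy")

reloaded = AutoTokenizer.from_pretrained("./tokenizer_copy")
print(type(reloaded).__name__)   # does the copy, now tagged "LlamaTokenizer", still load as a fast tokenizer?
```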
### Expected behavior
Tokenizer Class stays the same | bug | low | Critical |
2,803,771,243 | rust | Internal Status Access Violation with constant redefinition | <!--
Thank you for finding an Internal Compiler Error! ๐ง If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
```Rust
const FOO: usize = 32;
fn bar() {
const FOO: usize = FOO;
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (ed43cbcb8 2025-01-21)
binary: rustc
commit-hash: ed43cbcb882e7c06870abdd9305dc1f17eb9bab9
commit-date: 2025-01-21
host: x86_64-pc-windows-msvc
release: 1.86.0-nightly
LLVM version: 19.1.7
```
### Error output
```
error: could not compile `testtest` (lib)
Caused by:
process didn't exit successfully: `C:\Users\<username>\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\bin\clippy-driver.exe C:\Users\<username>\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\bin\rustc.exe --crate-name testtest --edition=2024 src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata -C embed-bitcode=no -C debuginfo=2 -Ctarget-cpu=native -Ztune-cpu=native -Ctarget-feature=+aes -Clink-arg=-fuse-ld=lld -Zthreads=4 --cfg "feature=\"default\"" --check-cfg cfg(docsrs,test) --check-cfg "cfg(feature, values(\"default\", \"rand\"))" -C metadata=10a1d148f727b426 -C extra-filename=-652f9f02f745d5a0 --out-dir C:\Users\<username>\Rust_stuff\misc\testtest\target\debug\deps -C incremental=C:\Users\<username>\Rust_stuff\misc\testtest\target\debug\incremental -L dependency=C:\Users\<username>\Rust_stuff\misc\testtest\target\debug\deps` (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION)
```
| I-ICE,T-compiler,C-bug,WG-compiler-parallel,S-has-mcve,S-has-bisection | medium | Critical |
2,803,786,232 | ollama | ollama only using cpu even with gpu found | ### What is the issue?
hello,
This has been reported at least twice in the past; I am reporting it a third time because something doesn't seem right.
relevant issues:
https://github.com/ollama/ollama/issues/8485
https://github.com/ollama/ollama/issues/8467
Same error, same fix: just reinstalling within the same session (no reboot, nothing else) magically fixes it.
Installing from the official Arch repos also results in the GPU not being used. Any way to fix that?
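For reference, a quick way to confirm where a loaded model ended up (assuming `ollama ps` reports the processor split, as it does in recent versions):
```bash
ollama ps   # the PROCESSOR column shows e.g. "100% GPU" or "100% CPU" per loaded model
```
Server log from the affected run: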
> 2025/01/22 09:42:17 routes.go:1187: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/home/nylle/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
> time=2025-01-22T09:42:17.713+01:00 level=INFO source=images.go:432 msg="total blobs: 30"
> time=2025-01-22T09:42:17.713+01:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
> [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
>
> [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
> - using env: export GIN_MODE=release
> - using code: gin.SetMode(gin.ReleaseMode)
>
> [GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
> [GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
> [GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
> [GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
> [GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
> [GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
> [GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
> [GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
> [GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
> [GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
> [GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
> [GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
> [GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
> [GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
> [GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
> [GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
> [GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
> [GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
> [GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
> [GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
> [GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
> [GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
> [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
> [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
> time=2025-01-22T09:42:17.714+01:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
> time=2025-01-22T09:42:17.714+01:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[cpu]
> time=2025-01-22T09:42:17.715+01:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
> time=2025-01-22T09:42:17.991+01:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-f16d0b56-1989-0c40-d33b-480f8247ae00 library=cuda variant=v12 compute=8.9 driver=12.7 name="NVIDIA GeForce RTX 4070" total="11.6 GiB" available="10.9 GiB"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 2.178715ms | 127.0.0.1 | GET "/api/tags"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 2.279349ms | 127.0.0.1 | GET "/api/tags"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 569.677ยตs | 127.0.0.1 | GET "/api/tags"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 641.407ยตs | 127.0.0.1 | GET "/api/tags"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 952.749ยตs | 127.0.0.1 | GET "/api/tags"
> [GIN] 2025/01/22 - 09:42:29 | 404 | 2.06483ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 404 | 3.234779ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 404 | 708.286ยตs | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 404 | 4.649411ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 404 | 507.699ยตs | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 15.227486ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 14.713616ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 16.811971ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 14.913503ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 16.80457ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 11.736522ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 14.276927ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 15.946584ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 16.378602ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 16.905593ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 19.674409ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 9.267617ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 18.489808ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 9.548855ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 14.015642ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 18.927918ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 10.819718ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 20.026506ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 20.160486ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 20.971805ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 22.430443ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 20.434382ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 23.104035ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 19.600649ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 12.181043ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 9.689575ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 11.553348ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 23.011692ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 24.279914ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 24.367486ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 21.774185ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 23.247884ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 17.658695ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 23.333246ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 19.259493ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 14.342186ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 14.479045ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 26.379969ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 12.077609ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 24.800855ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 17.678988ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 27.778669ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 28.465122ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 21.794806ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 33.221807ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 26.157838ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 6.808224ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 7.428048ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 7.468563ms | 127.0.0.1 | POST "/api/show"
> [GIN] 2025/01/22 - 09:42:29 | 200 | 7.715287ms | 127.0.0.1 | POST "/api/show"
> time=2025-01-22T09:42:34.091+01:00 level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/home/nylle/.ollama/models/blobs/sha256-e04bbddd58d9290a89af21ef484ce1113ff34ef35e822e95e52ff1045bac17f5 gpu=GPU-f16d0b56-1989-0c40-d33b-480f8247ae00 parallel=1 available=11592138752 required="9.1 GiB"
> time=2025-01-22T09:42:34.212+01:00 level=INFO source=server.go:104 msg="system memory" total="31.3 GiB" free="26.3 GiB" free_swap="32.0 GiB"
> time=2025-01-22T09:42:34.213+01:00 level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=33 layers.offload=33 layers.split="" memory.available="[10.8 GiB]" memory.gpu_overhead="0 B" memory.required.full="9.1 GiB" memory.required.partial="9.1 GiB" memory.required.kv="4.0 GiB" memory.required.allocations="[9.1 GiB]" memory.weights.total="7.7 GiB" memory.weights.repeating="7.6 GiB" memory.weights.nonrepeating="103.4 MiB" memory.graph.full="553.8 MiB" memory.graph.partial="673.3 MiB"
> time=2025-01-22T09:42:34.213+01:00 level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/bin/ollama runner --model /home/nylle/.ollama/models/blobs/sha256-e04bbddd58d9290a89af21ef484ce1113ff34ef35e822e95e52ff1045bac17f5 --ctx-size 8096 --batch-size 512 --n-gpu-layers 33 --threads 16 --parallel 1 --port 42953"
> time=2025-01-22T09:42:34.213+01:00 level=INFO source=sched.go:449 msg="loaded runners" count=1
> time=2025-01-22T09:42:34.213+01:00 level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
> time=2025-01-22T09:42:34.214+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
> time=2025-01-22T09:42:34.223+01:00 level=INFO source=runner.go:936 msg="starting go runner"
> time=2025-01-22T09:42:34.223+01:00 level=INFO source=runner.go:937 msg=system info="CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | CPU : LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=16
> time=2025-01-22T09:42:34.223+01:00 level=INFO source=runner.go:995 msg="Server listening on 127.0.0.1:42953"
> llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /home/nylle/.ollama/models/blobs/sha256-e04bbddd58d9290a89af21ef484ce1113ff34ef35e822e95e52ff1045bac17f5 (version GGUF V3 (latest))
> llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
> llama_model_loader: - kv 0: general.architecture str = llama
> llama_model_loader: - kv 1: general.name str = deepseek-ai
> llama_model_loader: - kv 2: llama.context_length u32 = 16384
> llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
> llama_model_loader: - kv 4: llama.block_count u32 = 32
> llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
> llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
> llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
> llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
> llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
> llama_model_loader: - kv 10: llama.rope.freq_base f32 = 100000.000000
> llama_model_loader: - kv 11: llama.rope.scaling.type str = linear
> llama_model_loader: - kv 12: llama.rope.scaling.factor f32 = 4.000000
> llama_model_loader: - kv 13: general.file_type u32 = 3
> llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
> llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32256] = ["!", "\"", "#", "$", "%", "&", "'", ...
> llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32256] = [0.000000, 0.000000, 0.000000, 0.0000...
> llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
> llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,31757] = ["ฤ ฤ ", "ฤ t", "ฤ a", "i n", "h e...
> llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 32013
> llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 32014
> llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 32014
> llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
> llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
> llama_model_loader: - kv 24: general.quantization_version u32 = 2
> llama_model_loader: - type f32: 65 tensors
> llama_model_loader: - type q4_1: 225 tensors
> llama_model_loader: - type q6_K: 1 tensors
> llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
> llm_load_vocab: control-looking token: 32015 '<๏ฝfimโhole๏ฝ>' was not control-type; this is probably a bug in the model. its type will be overridden
> llm_load_vocab: control-looking token: 32017 '<๏ฝfimโend๏ฝ>' was not control-type; this is probably a bug in the model. its type will be overridden
> llm_load_vocab: control-looking token: 32016 '<๏ฝfimโbegin๏ฝ>' was not control-type; this is probably a bug in the model. its type will be overridden
> llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
> llm_load_vocab: special tokens cache size = 256
> llm_load_vocab: token to piece cache size = 0.1792 MB
> llm_load_print_meta: format = GGUF V3 (latest)
> llm_load_print_meta: arch = llama
> llm_load_print_meta: vocab type = BPE
> llm_load_print_meta: n_vocab = 32256
> llm_load_print_meta: n_merges = 31757
> llm_load_print_meta: vocab_only = 0
> llm_load_print_meta: n_ctx_train = 16384
> llm_load_print_meta: n_embd = 4096
> llm_load_print_meta: n_layer = 32
> llm_load_print_meta: n_head = 32
> llm_load_print_meta: n_head_kv = 32
> llm_load_print_meta: n_rot = 128
> llm_load_print_meta: n_swa = 0
> llm_load_print_meta: n_embd_head_k = 128
> llm_load_print_meta: n_embd_head_v = 128
> llm_load_print_meta: n_gqa = 1
> llm_load_print_meta: n_embd_k_gqa = 4096
> llm_load_print_meta: n_embd_v_gqa = 4096
> llm_load_print_meta: f_norm_eps = 0.0e+00
> llm_load_print_meta: f_norm_rms_eps = 1.0e-06
> llm_load_print_meta: f_clamp_kqv = 0.0e+00
> llm_load_print_meta: f_max_alibi_bias = 0.0e+00
> llm_load_print_meta: f_logit_scale = 0.0e+00
> llm_load_print_meta: n_ff = 11008
> llm_load_print_meta: n_expert = 0
> llm_load_print_meta: n_expert_used = 0
> llm_load_print_meta: causal attn = 1
> llm_load_print_meta: pooling type = 0
> llm_load_print_meta: rope type = 0
> llm_load_print_meta: rope scaling = linear
> llm_load_print_meta: freq_base_train = 100000.0
> llm_load_print_meta: freq_scale_train = 0.25
> llm_load_print_meta: n_ctx_orig_yarn = 16384
> llm_load_print_meta: rope_finetuned = unknown
> llm_load_print_meta: ssm_d_conv = 0
> llm_load_print_meta: ssm_d_inner = 0
> llm_load_print_meta: ssm_d_state = 0
> llm_load_print_meta: ssm_dt_rank = 0
> llm_load_print_meta: ssm_dt_b_c_rms = 0
> llm_load_print_meta: model type = 7B
> llm_load_print_meta: model ftype = Q4_1
> llm_load_print_meta: model params = 6.74 B
> llm_load_print_meta: model size = 3.95 GiB (5.03 BPW)
> llm_load_print_meta: general.name = deepseek-ai
> llm_load_print_meta: BOS token = 32013 '<๏ฝbeginโofโsentence๏ฝ>'
> llm_load_print_meta: EOS token = 32014 '<๏ฝendโofโsentence๏ฝ>'
> llm_load_print_meta: EOT token = 32014 '<๏ฝendโofโsentence๏ฝ>'
> llm_load_print_meta: PAD token = 32014 '<๏ฝendโofโsentence๏ฝ>'
> llm_load_print_meta: LF token = 126 'ร'
> llm_load_print_meta: FIM PRE token = 32016 '<๏ฝfimโbegin๏ฝ>'
> llm_load_print_meta: FIM SUF token = 32015 '<๏ฝfimโhole๏ฝ>'
> llm_load_print_meta: FIM MID token = 32017 '<๏ฝfimโend๏ฝ>'
> llm_load_print_meta: EOG token = 32014 '<๏ฝendโofโsentence๏ฝ>'
> llm_load_print_meta: max token length = 128
> time=2025-01-22T09:42:34.465+01:00 level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
> llm_load_tensors: CPU_Mapped model buffer size = 4043.12 MiB
> llama_new_context_with_model: n_seq_max = 1
> llama_new_context_with_model: n_ctx = 8096
> llama_new_context_with_model: n_ctx_per_seq = 8096
> llama_new_context_with_model: n_batch = 512
> llama_new_context_with_model: n_ubatch = 512
> llama_new_context_with_model: flash_attn = 0
> llama_new_context_with_model: freq_base = 100000.0
> llama_new_context_with_model: freq_scale = 0.25
> llama_new_context_with_model: n_ctx_per_seq (8096) < n_ctx_train (16384) -- the full capacity of the model will not be utilized
> llama_kv_cache_init: kv_size = 8096, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
> llama_kv_cache_init: CPU KV buffer size = 4048.00 MiB
> llama_new_context_with_model: KV self size = 4048.00 MiB, K (f16): 2024.00 MiB, V (f16): 2024.00 MiB
> llama_new_context_with_model: CPU output buffer size = 0.14 MiB
> llama_new_context_with_model: CPU compute buffer size = 553.82 MiB
> llama_new_context_with_model: graph nodes = 1030
> llama_new_context_with_model: graph splits = 1
> time=2025-01-22T09:42:38.977+01:00 level=INFO source=server.go:594 msg="llama runner started in 4.76 seconds"
> llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /home/nylle/.ollama/models/blobs/sha256-e04bbddd58d9290a89af21ef484ce1113ff34ef35e822e95e52ff1045bac17f5 (version GGUF V3 (latest))
> llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
> llama_model_loader: - kv 0: general.architecture str = llama
> llama_model_loader: - kv 1: general.name str = deepseek-ai
> llama_model_loader: - kv 2: llama.context_length u32 = 16384
> llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
> llama_model_loader: - kv 4: llama.block_count u32 = 32
> llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
> llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
> llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
> llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
> llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
> llama_model_loader: - kv 10: llama.rope.freq_base f32 = 100000.000000
> llama_model_loader: - kv 11: llama.rope.scaling.type str = linear
> llama_model_loader: - kv 12: llama.rope.scaling.factor f32 = 4.000000
> llama_model_loader: - kv 13: general.file_type u32 = 3
> llama_model_loader: - kv 14: tokenizer.ggml.model str = gpt2
> llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32256] = ["!", "\"", "#", "$", "%", "&", "'", ...
> llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32256] = [0.000000, 0.000000, 0.000000, 0.0000...
> llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
> llama_model_loader: - kv 18: tokenizer.ggml.merges arr[str,31757] = ["ฤ ฤ ", "ฤ t", "ฤ a", "i n", "h e...
> llama_model_loader: - kv 19: tokenizer.ggml.bos_token_id u32 = 32013
> llama_model_loader: - kv 20: tokenizer.ggml.eos_token_id u32 = 32014
> llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 32014
> llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
> llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
> llama_model_loader: - kv 24: general.quantization_version u32 = 2
> llama_model_loader: - type f32: 65 tensors
> llama_model_loader: - type q4_1: 225 tensors
> llama_model_loader: - type q6_K: 1 tensors
> llm_load_vocab: missing or unrecognized pre-tokenizer type, using: 'default'
> llm_load_vocab: control-looking token: 32015 '<๏ฝfimโhole๏ฝ>' was not control-type; this is probably a bug in the model. its type will be overridden
> llm_load_vocab: control-looking token: 32017 '<๏ฝfimโend๏ฝ>' was not control-type; this is probably a bug in the model. its type will be overridden
> llm_load_vocab: control-looking token: 32016 '<๏ฝfimโbegin๏ฝ>' was not control-type; this is probably a bug in the model. its type will be overridden
> llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
> llm_load_vocab: special tokens cache size = 256
> llm_load_vocab: token to piece cache size = 0.1792 MB
> llm_load_print_meta: format = GGUF V3 (latest)
> llm_load_print_meta: arch = llama
> llm_load_print_meta: vocab type = BPE
> llm_load_print_meta: n_vocab = 32256
> llm_load_print_meta: n_merges = 31757
> llm_load_print_meta: vocab_only = 1
> llm_load_print_meta: model type = ?B
> llm_load_print_meta: model ftype = all F32
> llm_load_print_meta: model params = 6.74 B
> llm_load_print_meta: model size = 3.95 GiB (5.03 BPW)
> llm_load_print_meta: general.name = deepseek-ai
> llm_load_print_meta: BOS token = 32013 '<๏ฝbeginโofโsentence๏ฝ>'
> llm_load_print_meta: EOS token = 32014 '<๏ฝendโofโsentence๏ฝ>'
> llm_load_print_meta: EOT token = 32014 '<๏ฝendโofโsentence๏ฝ>'
> llm_load_print_meta: PAD token = 32014 '<๏ฝendโofโsentence๏ฝ>'
> llm_load_print_meta: LF token = 126 'ร'
> llm_load_print_meta: FIM PRE token = 32016 '<๏ฝfimโbegin๏ฝ>'
> llm_load_print_meta: FIM SUF token = 32015 '<๏ฝfimโhole๏ฝ>'
> llm_load_print_meta: FIM MID token = 32017 '<๏ฝfimโend๏ฝ>'
> llm_load_print_meta: EOG token = 32014 '<๏ฝendโofโsentence๏ฝ>'
> llm_load_print_meta: max token length = 128
> llama_model_load: vocab only - skipping tensors
> [GIN] 2025/01/22 - 09:43:28 | 200 | 54.425610491s | 127.0.0.1 | POST "/api/chat"
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7 | bug | low | Critical |
2,803,792,728 | flutter | Windows module_host_with_custom_build_test is 2.11% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Windows module_host_with_custom_build_test"
}
-->
The post-submit test builder `Windows module_host_with_custom_build_test` had a flaky ratio 2.11% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Windows%20module_host_with_custom_build_test/22149
Commit: https://github.com/flutter/flutter/commit/8ef7b932879cea66b9d98cfdd87dc3787354683c
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Windows%20module_host_with_custom_build_test/22149
https://ci.chromium.org/ui/p/flutter/builders/prod/Windows%20module_host_with_custom_build_test/22134
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Windows%20module_host_with_custom_build_test
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P0,c: flake,team-tool | high | Minor |
2,803,832,844 | godot | The behavior of CollisionObject2D._mouse_enter() is very strange | ### Tested versions
4.3-stable
### System information
MacOS 15
### Issue description
1. The _mouse_entered() callback of Area2D is not triggered when a Control node covers the Area2D, except when you click the left mouse button, as shown in the following figure:

2. I can't prevent _mouse_entered() of Area2D with Viewport.set_input_as_handled(), even in _input(). So how is the _mouse_entered() method implemented?
3. My game requires a lot of mouse clicks to operate, but the shapes available to Control nodes are not rich enough, and the mouse events of the Area2D node behave strangely. Maybe I missed something; can you tell me the best practices? (A minimal sketch of my setup follows below.)
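A minimal sketch of the setup (GDScript; the node path and the `mouse_filter` workaround are assumptions on my side, not something I have confirmed as the intended approach):
```gdscript
extends Area2D

func _ready() -> void:
    input_pickable = true  # needed for mouse_entered / mouse_exited on a CollisionObject2D
    mouse_entered.connect(func(): print("Area2D mouse_entered"))
    # Presumably the covering Control has to stop consuming mouse motion, e.g.:
    # $"../SomeControl".mouse_filter = Control.MOUSE_FILTER_IGNORE
```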
### Steps to reproduce
NA
### Minimal reproduction project (MRP)
[signalbugtest.zip](https://github.com/user-attachments/files/18502773/signalbugtest.zip) | bug,confirmed,topic:physics,topic:input | low | Critical |
2,803,848,954 | storybook | GET http://localhost:6006/sb-preview/runtime.js net::ERR_ABORTED 404 (Not Found) | ### Describe the bug
```bash
pnpm dlx [email protected]
```
Select Vue 3 and Vite, then run `pnpm storybook`.
The command line reported an error:
>
info => Starting manager..
info => Starting preview..
The CJS build of Vite's Node API is deprecated. See https://vite.dev/guide/troubleshooting.html#vite-cjs-node-api-deprecated for more details.
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ โ
โ Storybook 8.5.0 for D:\projects\nominox\gamebff-frontend\node_modules\.pnpm\@storybook+vue โ
โ [email protected][email protected][email protected][email protected]_@[email protected]_less_ih2kvum5g4 โ
โ cpenog3bonzyb23e\node_modules\@storybook\vue3-vite started โ
โ 681 ms for manager and 5.29 s for preview โ
โ โ
โ Local: http://localhost:6006/ โ
โ On your network: http://192.168.169.77:6006/ โ
โ โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏX [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "WithTooltip"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/Color-F6OSRLHC.mjs:3:9:
3 โ import { WithTooltip, TooltipNote, Form } from 'storybook/internal/components';
โต ~~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "TooltipNote"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/Color-F6OSRLHC.mjs:3:22:
3 โ import { WithTooltip, TooltipNote, Form } from 'storybook/internal/components';
โต ~~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "Form"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/Color-F6OSRLHC.mjs:3:35:
3 โ import { WithTooltip, TooltipNote, Form } from 'storybook/internal/components';
โต ~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "withReset"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:9:
4 โ import { withReset, SyntaxHighlighter, FlexBar, Form, IconButton, codeCommon, compon...
โต ~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "SyntaxHighlighter"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:20:
4 โ import { withReset, SyntaxHighlighter, FlexBar, Form, IconButton, codeCommon, compon...
โต ~~~~~~~~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "FlexBar"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:39:
4 โ import { withReset, SyntaxHighlighter, FlexBar, Form, IconButton, codeCommon, compon...
โต ~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "Form"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:48:
4 โ ...withReset, SyntaxHighlighter, FlexBar, Form, IconButton, codeCommon, components, ...
โต ~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "IconButton"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:54:
4 โ ..., SyntaxHighlighter, FlexBar, Form, IconButton, codeCommon, components, Zoom, Act...
โต ~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "codeCommon"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:66:
4 โ ...lighter, FlexBar, Form, IconButton, codeCommon, components, Zoom, ActionBar, Butt...
โต ~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "components"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:78:
4 โ ...xBar, Form, IconButton, codeCommon, components, Zoom, ActionBar, Button, Link, Re...
โต ~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "Zoom"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:90:
4 │ ...m, IconButton, codeCommon, components, Zoom, ActionBar, Button, Link, ResetWrappe...
╵ ~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "ActionBar"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:96:
4 │ ...tton, codeCommon, components, Zoom, ActionBar, Button, Link, ResetWrapper, Code, ...
╵ ~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "Button"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:107:
4 │ ...eCommon, components, Zoom, ActionBar, Button, Link, ResetWrapper, Code, nameSpace...
╵ ~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "Link"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:115:
4 │ ..., components, Zoom, ActionBar, Button, Link, ResetWrapper, Code, nameSpaceClassNa...
╵ ~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "ResetWrapper"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:121:
4 │ ...ts, Zoom, ActionBar, Button, Link, ResetWrapper, Code, nameSpaceClassNames, H3, H...
╵ ~~~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "Code"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:135:
4 │ ...ActionBar, Button, Link, ResetWrapper, Code, nameSpaceClassNames, H3, H2, Loader,...
╵ ~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "nameSpaceClassNames"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:141:
4 │ ...ton, Link, ResetWrapper, Code, nameSpaceClassNames, H3, H2, Loader, EmptyTabConte...
╵ ~~~~~~~~~~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "H3"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:162:
4 │ ...esetWrapper, Code, nameSpaceClassNames, H3, H2, Loader, EmptyTabContent, TabsStat...
╵ ~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "H2"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:166:
4 │ ...Wrapper, Code, nameSpaceClassNames, H3, H2, Loader, EmptyTabContent, TabsState, E...
╵ ~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "Loader"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:170:
4 │ ...r, Code, nameSpaceClassNames, H3, H2, Loader, EmptyTabContent, TabsState, ErrorFo...
╵ ~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "EmptyTabContent"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:178:
4 │ ...SpaceClassNames, H3, H2, Loader, EmptyTabContent, TabsState, ErrorFormatter, getS...
╵ ~~~~~~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "TabsState"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:195:
4 │ ...s, H3, H2, Loader, EmptyTabContent, TabsState, ErrorFormatter, getStoryHref, With...
╵ ~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "ErrorFormatter"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:206:
4 │ ...ader, EmptyTabContent, TabsState, ErrorFormatter, getStoryHref, WithTooltipPure }...
╵ ~~~~~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "getStoryHref"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:222:
4 │ ...ontent, TabsState, ErrorFormatter, getStoryHref, WithTooltipPure } from 'storyboo...
╵ ~~~~~~~~~~~~
X [ERROR] No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "WithTooltipPure"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:236:
4 │ ...e, ErrorFormatter, getStoryHref, WithTooltipPure } from 'storybook/internal/compo...
╵ ~~~~~~~~~~~~~~~
ERROR Unhandled promise rejection: Build failed with 25 errors: 17:16:52
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/Color-F6OSRLHC.mjs:3:9: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "WithTooltip"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/Color-F6OSRLHC.mjs:3:22: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "TooltipNote"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/Color-F6OSRLHC.mjs:3:35: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "Form"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:9: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "withReset"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:20: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "SyntaxHighlighter"
...
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/Color-F6OSRLHC.mjs:3:9: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "WithTooltip"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/Color-F6OSRLHC.mjs:3:22: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "TooltipNote"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/Color-F6OSRLHC.mjs:3:35: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "Form"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:9: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "withReset"
../../node_modules/.pnpm/@[email protected][email protected][email protected][email protected][email protected][email protected]_/node_modules/@storybook/blocks/dist/index.mjs:4:20: ERROR: No matching export in "../../node_modules/.pnpm/[email protected][email protected]/node_modules/storybook/core/components/index.js" for import "SyntaxHighlighter"
...
at failureErrorWithLog (D:\projects\nominox\gamebff-frontend\node_modules\.pnpm\[email protected]\node_modules\esbuild\lib\main.js:1472:15)
at D:\projects\nominox\gamebff-frontend\node_modules\.pnpm\[email protected]\node_modules\esbuild\lib\main.js:945:25
at D:\projects\nominox\gamebff-frontend\node_modules\.pnpm\[email protected]\node_modules\esbuild\lib\main.js:1353:9
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
and on the page for http://localhost:6006/, error:
> GET http://localhost:6006/sb-preview/runtime.js net::ERR_ABORTED 404 (Not Found)

There are problems with almost all of your versions
### Reproduction link
no
### Reproduction steps
_No response_
### System
```bash
System:
OS: Windows 10 10.0.19045
CPU: (24) x64 13th Gen Intel(R) Core(TM) i7-13700
Binaries:
Node: 20.14.0 - ~\AppData\Local\fnm_multishells\23504_1737369058038\node.EXE
npm: 10.7.0 - ~\AppData\Local\fnm_multishells\23504_1737369058038\npm.CMD
pnpm: 9.12.2 - ~\AppData\Local\fnm_multishells\23504_1737369058038\pnpm.CMD <----- active
Browsers:
Edge: Chromium (128.0.2739.63)
npmPackages:
@storybook/addon-essentials: 8.5.0 => 8.5.0
@storybook/addon-interactions: 8.5.0 => 8.5.0
@storybook/addon-onboarding: 8.5.0 => 8.5.0
@storybook/blocks: 8.5.0 => 8.5.0
@storybook/test: 8.5.0 => 8.5.0
@storybook/vue3: 8.5.0 => 8.5.0
@storybook/vue3-vite: 8.5.0 => 8.5.0
storybook: 8.5.0 => 8.5.0
```
### Additional context
_No response_ | bug,builder-vite,sev:S2 | medium | Critical |
2,803,864,458 | PowerToys | Mouse Button Customization Features Similar to XMBC | ### Description of the new feature / enhancement
I am a frequent user of both PowerToys and X-Mouse Button Control (XMBC). While PowerToys provides excellent utilities like FancyZones and Mouse Utilities, it lacks the ability to assign custom actions to mouse buttons.
I believe adding mouse button customization features (e.g., application-specific profiles, media control, or macro assignment) would significantly enhance the functionality of PowerToys. This feature is particularly useful for users who rely on productivity tools or need fine-grained control over their devices.
Such a feature would complement the existing "Mouse Utilities" module and attract a broader user base. Please consider integrating this functionality in future updates.
### Scenario when this would be used?
.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,803,873,817 | react-native | network request failed for some users | ### Description
Some users reported "Network request failed" errors when using RTK Query. This behavior occurs specifically in a React Native environment.
### Steps to reproduce
1. Use RTK Query to initiate a network request (e.g., `useQuery`)
2. Observe that the query returns a fetch error instead of executing successfully.
### React Native Version
0.75.4
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 12.7.6
CPU: (8) x64 Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
Memory: 40.65 MB / 16.00 GB
Shell:
version: 5.8.1
path: /bin/zsh
Binaries:
Node:
version: 20.15.0
path: ~/.nvm/versions/node/v20.15.0/bin/node
Yarn:
version: 4.5.0
path: ~/.nvm/versions/node/v20.15.0/bin/yarn
npm:
version: 10.7.0
path: ~/.nvm/versions/node/v20.15.0/bin/npm
Watchman:
version: 2024.07.15.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 21.4
- iOS 16.0
- macOS 12.3
- tvOS 16.0
- watchOS 9.0
Android SDK:
API Levels:
- "25"
- "28"
- "30"
- "31"
- "33"
- "34"
- "35"
Build Tools:
- 24.0.0
- 24.0.1
- 24.0.2
- 24.0.3
- 30.0.2
- 30.0.3
- 33.0.0
- 33.0.1
- 34.0.0
- 35.0.0
System Images:
- android-26 | Google Play Intel x86 Atom
- android-34 | Intel x86_64 Atom
- android-34 | Google APIs Intel x86_64 Atom
Android NDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2411.12169540
Xcode:
version: 14.0/14A309
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.4
wanted: 0.75.4
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
## Observed Behavior
- The query fails with a fetch error randomly.
```
### Reproducer
General issue that does not need a dedicated repro.
### Screenshots and Videos
_No response_ | ๐Networking,Needs: Repro,Needs: Attention | low | Critical |
2,803,886,610 | pytorch | DISABLED test_linear_and_cel_max_autotune (__main__.InplacePaddingTest) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_linear_and_cel_max_autotune&suite=InplacePaddingTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35974314707).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_linear_and_cel_max_autotune`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_inplace_padding.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,887,452 | pytorch | DISABLED test_max_autotune_remote_caching_dynamic_False (__main__.TestMaxAutotuneRemoteCache) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_max_autotune_remote_caching_dynamic_False&suite=TestMaxAutotuneRemoteCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35967125228).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 9 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_max_autotune_remote_caching_dynamic_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_max_autotune.py", line 1072, in test_max_autotune_remote_caching
self.assertEqual(global_stats.autotune_remote, Stats(2, 3, 2))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=4, num_get_hit=2, num_get_miss=4) != Stats(num_put=2, num_get_hit=3, num_get_miss=2)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_max_autotune.py TestMaxAutotuneRemoteCache.test_max_autotune_remote_caching_dynamic_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_max_autotune.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,887,453 | pytorch | DISABLED test_comprehensive_svd_lowrank_cuda_float32 (__main__.TestInductorOpInfoCUDA) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_svd_lowrank_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35964561116).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_svd_lowrank_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1156, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1444, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2262, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1620, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1542, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 950, in inner
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 942, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1189, in test_comprehensive
raise e
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor_opinfo.py", line 1149, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 624, in check_model_gpu
check_model(
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 532, in check_model
assert strides_equal
AssertionError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_cuda.py", line 247, in wrapped
return f(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1236, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1620, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1168, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(3, 2), device="cuda:0", dtype=torch.float32], args=TensorList[Tensor[size=(3, 2), device="cuda:0", dtype=torch.float32]], kwargs={'q': '2', 'M': 'None'}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_svd_lowrank_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,887,595 | pytorch | DISABLED test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_False (__main__.TestFxGraphCache) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35961539277).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 5 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 146, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 7 but got 14.
Absolute difference: 7
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_grad_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,887,718 | pytorch | DISABLED test_cache_hot_load_device_cuda_bfloat16_dynamic_False (__main__.TestFxGraphCache) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_hot_load_device_cuda_bfloat16_dynamic_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35972563562).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_hot_load_device_cuda_bfloat16_dynamic_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 363, in test_cache_hot_load
self.assertEqual(counters["inductor"]["fxgraph_cache_miss"], 2)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4028, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 2 but got 3.
Absolute difference: 1
Relative difference: 0.5
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_hot_load_device_cuda_bfloat16_dynamic_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,803,889,361 | rust | Tracking issue for release notes of #118159: Implementation of `fmt::FormattingOptions` |
This issue tracks the release notes text for #118159.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Implementation of `fmt::FormattingOptions`](https://github.com/rust-lang/rust/pull/118159)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @EliasHolzmann, @m-ou-se -- origin issue/PR authors and assignees for starting to draft text
| relnotes,T-libs,needs-triage,relnotes-tracking-issue | low | Minor |
2,803,933,734 | vscode | Provide the originating conversation in the MappedEdits | > We never used the conversation in our mapper code. It turned out that the code block with just the markdown before the block as well as the input file works well.
>
>
>
> Would it be good enough to add the `ChatResult` from the response that contained the block?
>
> Maybe lets open a new feature request so we can discuss this.
_Originally posted by @aeschli in [4f2c166](https://github.com/microsoft/vscode/commit/4f2c166752c938b98b14244acb4003126536278f#r151621031)_ | chat | low | Minor |
2,803,963,674 | vscode | Printenv in shellIntegration-bash.sh cause hang in terminal git bash |
Type: <b>Performance Issue</b>
When I open a terminal with Git Bash on Windows, the window hangs for 20-30 seconds before it finishes opening. After it opens, I see the last output of printenv before the prompt.
The lines that cause this are in the file shellIntegration-bash.sh, in the function __vsc_update_env. If I delete this function and the two calls to __vsc_update_env(), I don't have the problem anymore.
__vsc_update_env() {
builtin printf '\e]633;EnvSingleStart;%s;\a' $__vsc_nonce
for var in $(compgen -v); do
if printenv "$var" >/dev/null 2>&1; then
value=$(builtin printf '%s' "${!var}")
builtin printf '\e]633;EnvSingleEntry;%s;%s;%s\a' "$var" "$(__vsc_escape_value "$value")" $__vsc_nonce
fi
done
builtin printf '\e]633;EnvSingleEnd;%s;\a' $__vsc_nonce
}
VS Code version: Code - Insiders 1.97.0-insider (d226a2a497b928d78aa654f74c8af5317d3becfb, 2025-01-22T05:05:14.565Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (32 x 2095)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.92GB (22.24GB free)|
|Process Argv|--crash-reporter-id cd6f428f-b0f8-40ce-8258-3cb7753136da|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
0 133 17420 code-insiders main
0 98 3756 gpu-process
0 30 9088 crashpad-handler
0 106 11568 ptyHost
0 7 13544 conpty-agent
0 6 24928 "C:\Program Files\Git\bin\bash.exe" --init-file "c:\SoftwareLight\VS Code Insiders\resources\app/out/vs/workbench/contrib/terminal/common/scripts/shellIntegration-bash.sh"
0 12 3704 "C:\Program Files\Git\bin\..\usr\bin\bash.exe" --init-file "c:\SoftwareLight\VS Code Insiders\resources\app/out/vs/workbench/contrib/terminal/common/scripts/shellIntegration-bash.sh"
0 124 16500 shared-process
0 150 18544 extensionHost [1]
0 100 11356 "C:\SoftwareLight\VS Code Insiders\Code - Insiders.exe" "c:\SoftwareLight\VS Code Insiders\resources\app\extensions\json-language-features\server\dist\node\jsonServerMain" --node-ipc --clientProcessId=18544
0 46 19632 utility-network-service
0 99 20728 fileWatcher [1]
0 225 22436 window [1] (settings.json - gui-nmf (Workspace) - Visual Studio Code - Insiders)
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (settings.json - gui-nmf (Workspace) - Visual Studio Code - Insiders)
| Folder (gui-proxy.nmf): 450 files
| File types: js(129) map(57) css(50) html(50) scss(34) json(17) png(7)
| woff(7) woff2(5) svg(5)
| Conf files: dockerfile(2) package.json(1)
| Folder (gui-ansible.nmf): 76 files
| File types: yml(26) j2(13) crt(6) cer(2) key(2) md(2) cfg(1) sh(1)
| Conf files:
| Folder (gui.nmf): 556 files
| File types: scss(177) vue(118) js(82) css(40) json(6) png(5) svg(5)
| woff(4) woff2(3) md(2)
| Conf files: dockerfile(1) package.json(1);
```
</details>
<details><summary>Extensions (9)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-html-css|ecm|2.0.12
beautify|Hoo|1.5.0
rest-client|hum|0.25.1
kubernetes-yaml-formatter|ken|2.4.0
vscode-docker|ms-|1.29.4
vscode-language-pack-it|MS-|1.97.2025012209
vetur|oct|0.37.3
uridecode|sry|0.3.6
JavaScriptSnippets|xab|1.8.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vsc_aacf:30263846
vscod805cf:30301675
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
962ge761:30841072
pythonnoceb:30776497
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
dvdeprecation:31040973
dwnewjupyter:31046869
2f103344:31071589
nativerepl1:31134653
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
e80f6927:31120813
iacca1:31150324
notype1:31143044
dwcopilot:31158714
h409b430:31177054
5b1c1929:31184661
6074i472:31201624
dwoutputs:31217127
6e63a538:31218797
9064b325:31222308
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | important | low | Critical |
2,803,966,533 | storybook | [Bug]: Storybook not accessible from local hostname after Vite 6 upgrade | ### Describe the bug
After upgrading from Vite 5 to Vite 6, Storybook is no longer accessible via local hostname (e.g., timon-work.local) configured in macOS System Settings > Sharing. Despite setting the `--host` flag to `0.0.0.0` in the dev script, accessing Storybook through the local hostname results in 403 Forbidden errors:

Setting `allowedHosts` in `vite.config.ts` doesn't resolve the issue. However, configuring it in `.storybook/main.ts` within the `viteFinal` function does work:
```ts
const config: StorybookConfig = {
// ... snip
viteFinal: (config) => {
config.server ??= {};
console.log(config.server);
// Allow Storybook to be accessed from any host in development mode
// config.server.allowedHosts = process.env.NODE_ENV === 'development' ? true : config.server.allowedHosts;
return config;
},
};
```
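For clarity, here is a minimal sketch of the working `viteFinal` override described above with the `allowedHosts` line active. The `StorybookConfig` import and the `stories`/`framework` values are placeholders (assumptions), not the reproduction's actual setup:

```ts
// .storybook/main.ts — sketch of the viteFinal workaround; adjust imports to your framework package
import type { StorybookConfig } from '@storybook/react-vite';

const config: StorybookConfig = {
  stories: ['../src/**/*.stories.@(ts|tsx)'], // placeholder glob
  framework: { name: '@storybook/react-vite', options: {} }, // placeholder framework
  viteFinal: (config) => {
    config.server ??= {};
    // Vite 6 rejects requests from unknown hosts by default; `true` disables the
    // check entirely, so it is safer to gate this on development builds only.
    config.server.allowedHosts =
      process.env.NODE_ENV === 'development' ? true : config.server.allowedHosts;
    return config;
  },
};

export default config;
```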
I've managed to reproduce this with a clean Storybook setup (`npx storybook@latest init`).
### Reproduction link
https://github.com/TimonVS/storybook-hostname-reproduction
### Reproduction steps
1. Create a Storybook project (`npx storybook@latest init`)
2. Setup a local hostname on macOS (System Settings > Sharing)
3. Add `--host 0.0.0.0` flag to `storybook` script
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.1
CPU: (12) arm64 Apple M3 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 22.6.0 - ~/.local/share/mise/installs/node/22/bin/node
npm: 10.8.2 - ~/.local/share/mise/installs/node/22/bin/npm
pnpm: 9.15.0 - ~/Library/pnpm/pnpm <----- active
Browsers:
Chrome: 131.0.6778.265
Safari: 18.1
```
### Additional context
_No response_ | bug,good first issue,help wanted,compatibility with other tools,has workaround,core,builder-vite | low | Critical |
2,803,971,727 | langchain | Cannot install langchain-pinecone in windows. Need python for x86, but found x86_64 | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```cmd
pip install langchain-pinecone
```
```python
from langchain_pinecone import PineconeVectorStore
vector_store = PineconeVectorStore(index=index, embedding=embeddings)
```
*python version*
Python 3.13.1
### Error Message and Stack Trace (if applicable)
D:\codebase context\langchain-vector>pip install langchain-pinecone
Processing d:\codebase context\langchain-vector\langchain_pinecone-0.2.2-py3-none-any.whl
Collecting aiohttp<3.11,>=3.10 (from langchain-pinecone==0.2.2)
Downloading aiohttp-3.10.11-cp313-cp313-win_amd64.whl.metadata (8.0 kB)
Requirement already satisfied: langchain-core<0.4.0,>=0.3.29 in d:\abhiram\projects\llama automate\others\codebase context\langchain-vector\vecenv\lib\site-packages (from langchain-pinecone==0.2.2) (0.3.31)
Collecting langchain-tests<0.4.0,>=0.3.7 (from langchain-pinecone==0.2.2)
Downloading langchain_tests-0.3.8-py3-none-any.whl.metadata (3.6 kB)
Collecting numpy<2.0.0,>=1.26.0 (from langchain-pinecone==0.2.2)
Using cached numpy-1.26.4.tar.gz (15.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
+ D:\codebase context\langchain-vector\vecenv\Scripts\python.exe C:\Users\\AppData\Local\Temp\pip-install-qa_xstzd\numpy_e9d24d9f7710499d8ed38160396ec2ea\vendored-meson\meson\meson.py setup C:\Users\\AppData\Local\Temp\pip-install-qa_xstzd\numpy_e9d24d9f7710499d8ed38160396ec2ea C:\Users\\AppData\Local\Temp\pip-install-qa_xstzd\numpy_e9d24d9f7710499d8ed38160396ec2ea\.mesonpy-z2vnu182 -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=C:\Users\Abhiram\AppData\Local\Temp\pip-install-qa_xstzd\numpy_e9d24d9f7710499d8ed38160396ec2ea\.mesonpy-z2vnu182\meson-python-native-file.ini
The Meson build system
Version: 1.2.99
Source dir: C:\Users\\AppData\Local\Temp\pip-install-qa_xstzd\numpy_e9d24d9f7710499d8ed38160396ec2ea
Build dir: C:\Users\Abhiram\AppData\Local\Temp\pip-install-qa_xstzd\numpy_e9d24d9f7710499d8ed38160396ec2ea\.mesonpy-z2vnu182
Build type: native build
Project name: NumPy
Project version: 1.26.4
C compiler for the host machine: ccache gcc (gcc 13.2.0 "gcc (MinGW-W64 i686-msvcrt-posix-dwarf, built by Brecht Sanders, r8) 13.2.0")
C linker for the host machine: gcc ld.bfd 2.42
C++ compiler for the host machine: ccache c++ (gcc 13.2.0 "c++ (MinGW-W64 i686-msvcrt-posix-dwarf, built by Brecht Sanders, r8) 13.2.0")
C++ linker for the host machine: c++ ld.bfd 2.42
Cython compiler for the host machine: cython (cython 3.0.11)
Host machine cpu family: x86
Host machine cpu: x86
Program python found: YES (D:\abhiram\projects\Llama Automate\others\codebase context\langchain-vector\vecenv\Scripts\python.exe)
Need python for x86, but found x86_64
Run-time dependency python found: NO (tried sysconfig)
..\meson.build:41:12: ERROR: Python dependency not found
A full log can be found at C:\Users\Abhiram\AppData\Local\Temp\pip-install-qa_xstzd\numpy_e9d24d9f7710499d8ed38160396ec2ea\.mesonpy-z2vnu182\meson-logs\meson-log.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
### Description
I am trying to install langchain-pinecone, but the build is failing with the error above.
### System Info
Windows 11
Python 3.13.1 | investigate | low | Critical |
2,803,984,247 | next.js | Turbopack fails to load valid .mjs module with UTF-8 error | ### Link to the code that reproduces this issue
https://github.com/utsuboco/r3f-perf/blob/752adc19edbcabc43fc917519c5366718fa0b9d0/src/components/TextsHighHZ.tsx#L10
### To Reproduce
### Description
Turbopack is failing to load a valid .mjs module that exports a base64 string. The module is from [r3f-perf](https://github.com/utsuboco/r3f-perf) package and contains a valid JavaScript file that exports a base64-encoded font string.
### Error Message
```sh
Module not found: Can't resolve '../roboto.woff.mjs'
8 | import * as THREE from "three";
9 | import { useEvent } from "@utsubo/events";
> 10 | import localFont from "../roboto.woff.mjs";
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11 | import { getPerf } from "../store.mjs";
failed to convert rope into string
Caused by:
invalid utf-8 sequence of 1 bytes from index 11
```
### Current vs. Expected behavior
### Expected Behavior
The file should load successfully as it's a valid ES module that simply exports a string:
```js
// Content of roboto.woff.mjs
const localFont = "data:font/woff;base64,..."
export { localFont as default };
```
### Actual Behavior
Turbopack appears to be attempting UTF-8 validation before module parsing, causing it to fail on what should be a valid JavaScript file. The same file loads correctly when using webpack.
Their package is Typescript and builds for CJS and ESM. I tried aliasing and it failed.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Home
Available memory (MB): 40638
Available CPU cores: 16
Binaries:
Node: 20.9.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.23 // An outdated version detected (latest is 15.1.5), upgrade is highly recommended!
eslint-config-next: 14.1.0
react: 18.2.0
react-dom: 18.2.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Webpack, Turbopack, Developer Experience
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
- This worked in Next.js 14.1 with Turbopack
- Still works with webpack in Next.js 14.2
- The file is a valid ES module exporting a base64 string
- Attempted to use resolveAlias in turbo config but it's ignored
- Package affected: r3f-perf
Link to issue in other library : https://github.com/utsuboco/r3f-perf/issues/59
### Impact
This prevents using Turbopack with packages that use this pattern for asset loading, requiring fallback to webpack development server. | please add a complete reproduction,Turbopack | low | Critical |
2,804,004,343 | flutter | Linux web_canvaskit_tests_5 is 2.11% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Linux web_canvaskit_tests_5"
}
-->
The post-submit test builder `Linux web_canvaskit_tests_5` had a flaky ratio 2.11% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for a same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Linux%20web_canvaskit_tests_5/17575
Commit: https://github.com/flutter/flutter/commit/da080e6976acd5c9f281f227f7df634ccfa84fae
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Linux%20web_canvaskit_tests_5/17575
https://ci.chromium.org/ui/p/flutter/builders/prod/Linux%20web_canvaskit_tests_5/17514
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Linux%20web_canvaskit_tests_5
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and enable the test back after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P0,c: flake,team-web | high | Minor |
2,804,022,361 | next.js | Tree shaking in middleware doesn't work | ### Link to the code that reproduces this issue
https://github.com/amannn/nextjs-bug-repro-middleware/commit/6c442b025f45b0f777fda6f32fc355e58bef1fb9
### To Reproduce
1. Run `pnpm i && pnpm build`
2. See console output and analyzer output
### Current vs. Expected behavior
**Console output:**
```
ƒ Middleware 1.07 MB
```
I'd expect `getMessages` to not be bundled into the middleware as it's not used there.
See also the analyzer output:

### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:00 PDT 2024; root:xnu-10063.141.2~1/RELEASE_X86_64
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.22
pnpm: 9.15.4
Relevant Packages:
next: 15.1.5 // Latest available version is detected (15.1.5).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
_No response_ | Middleware,linear: next | low | Critical |
2,804,029,353 | next.js | Lazy loaded component CSS doesn't contain assetPrefix | ### Link to the code that reproduces this issue
https://github.com/GeorgeHulpoi/lazy-loaded-component-asset-prefix
### To Reproduce
1. Verify the behavior in development mode.
2. Build the project with `NEXT_PUBLIC_ASSET_PREFIX` set (it is used for `assetPrefix` in `next.config.ts`; see the sketch after this list).
3. Move the `_next/static` to the cdn / server / whatever (I used an S3 + CloudFront for this reproduction).
4. Start the server in production (everything should work fine).
5. Press on `Load lazy`.
6. You will see that the JS has loaded with `assetPrefix`:

Nonetheless, the CSS doesn't have the `assetPrefix`

### Current vs. Expected behavior
Within development environment, clicking on `Load lazy` will show the following:

In production with the `assetPrefix` set, the CSS fails to load:

### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 14189
Available CPU cores: 16
Binaries:
Node: 20.9.0
npm: 10.1.0
Yarn: 1.22.21
pnpm: 9.15.0
Relevant Packages:
next: 15.1.5 // Latest available version is detected (15.1.5).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Module Resolution, Lazy Loading
### Which stage(s) are affected? (Select all that apply)
next start (local)
### Additional context
_No response_ | Lazy Loading,linear: next,Module Resolution,CSS | low | Minor |
2,804,030,962 | vscode | Wrong text expected |
Type: <b>Bug</b>
I'm using VS Code Insiders with EVKey (a Vietnamese keyboard/IME). When I type "taij", it should become "tại", but the result was "taiại".
Update: VS Code resolved the problem.
VS Code version: Code - Insiders 1.97.0-insider (d226a2a497b928d78aa654f74c8af5317d3becfb, 2025-01-22T05:05:14.565Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) CPU E3-1225 v5 @ 3.30GHz (4 x 3312)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: unavailable_off<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: unavailable_off<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|19.93GB (12.00GB free)|
|Process Argv|--crash-reporter-id 2467a45c-82aa-484d-9f03-9e53c22b4e29|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (12)</summary>
Extension|Author (truncated)|Version
---|---|---
codesnap|adp|1.3.4
vsc-material-theme|Equ|34.7.9
vsc-material-theme-icons|equ|3.8.12
copilot|Git|1.257.1326
copilot-chat|Git|0.24.2025012201
discord-vscode|icr|5.8.0
ia-vscode|Lon|0.1.5
vsliveshare|ms-|1.0.5948
material-icon-theme|PKi|5.17.0
vscode-yaml|red|1.15.0
vscode-spotify|shy|3.2.1
catppuccin-theme|Sir|0.3.1
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
pythonvspyt551:31179976
vscod805:30301674
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
2i9eh265:30646982
962ge761:30841072
pythonnoceb:30776497
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
dvdeprecation:31040973
dwnewjupyter:31046869
nativerepl1:31134653
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
e80f6927:31120813
iacca1:31150324
notype1:31143044
dwcopilot:31158714
h409b430:31177054
c3hdf307:31184662
6074i472:31201624
dwoutputs:31217127
8did9651:31218798
hdaa2157:31222309
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | triage-needed | low | Critical |
2,804,095,738 | flutter | [webview_flutter_android] WebView fails to resize in SystemUiMode.edgeToEdge after display size change via system settings | ### Steps to reproduce
1. Lock device orientation in portrait mode
2. Enable SystemUiMode.edgeToEdge
3. Change display size from Settings -> Display -> Display size and text
4. Go back to app
### Expected results
Webview resizes correctly.
### Actual results
Webview does not resize correctly.
Reproducible with Pixel 5 (Android 14), Pixel 3XL (Android 12).
Flutter 3.27.2
webview_flutter: 4.10.0
webview_flutter_android: 4.3.1
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:webview_flutter/webview_flutter.dart';
import 'package:webview_flutter_android/webview_flutter_android.dart';
void setSystemMode(SystemUiMode mode) {
SystemChrome.setEnabledSystemUIMode(mode, overlays: SystemUiOverlay.values);
SystemChrome.setSystemUIOverlayStyle(SystemUiOverlayStyle.light.copyWith(
statusBarColor: Colors.transparent,
systemNavigationBarColor: Colors.transparent,
systemNavigationBarDividerColor: Colors.transparent,
));
}
void main() {
WidgetsFlutterBinding.ensureInitialized();
SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp]);
setSystemMode(SystemUiMode.edgeToEdge);
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
useMaterial3: true,
),
home: const WebViewExample(),
);
}
}
class WebViewExample extends StatefulWidget {
const WebViewExample({super.key});
@override
State<WebViewExample> createState() => _WebViewExampleState();
}
class _WebViewExampleState extends State<WebViewExample> {
late final WebViewController _webViewController;
SystemUiMode _uiMode = SystemUiMode.edgeToEdge;
@override
void initState() {
super.initState();
_webViewController = WebViewController()
..setJavaScriptMode(JavaScriptMode.unrestricted)
..loadHtmlString("""
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1,user-scalable=no,viewport-fit=cover" />
</head>
<body style="margin: 0; padding: 0;">
<div style="height: 100vh; width: 100vw; background-color: red; box-sizing: border-box; border: 10px solid blue;"></div>
</body>
</html>
""");
AndroidWebViewController.enableDebugging(true);
}
@override
Widget build(BuildContext context) {
final mq = MediaQuery.sizeOf(context);
return Scaffold(
body: SizedBox(
width: mq.width,
height: mq.height,
child: WebViewWidget(controller: _webViewController),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
_uiMode = _uiMode == SystemUiMode.edgeToEdge
? SystemUiMode.manual
: SystemUiMode.edgeToEdge;
setSystemMode(_uiMode);
},
tooltip: 'Change ui mode',
child: const Icon(Icons.add),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img src="https://github.com/user-attachments/assets/9f17d831-5020-4a7f-822f-dfada5d82a81" width ="300">
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.27.2, on Microsoft Windows [Version 10.0.19045.5371], locale et-EE)
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.1)
[✓] Chrome - develop for the web
[✓] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.2.3)
[✓] Android Studio (version 2024.2)
[✓] IntelliJ IDEA Community Edition (version 2022.1)
[✓] VS Code, 64-bit edition (version 1.96.3)
[✓] Connected device (3 available)
[✓] Network resources
```
</details>
| platform-android,p: webview,package,has reproducible steps,team-android,found in release: 3.27,found in release: 3.28 | low | Critical |
2,804,099,092 | electron | [Windows 11] `new BrowserWindow(...)` bug: it shows a white screen, and after waiting for a while, the software automatically closes | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.3.1
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11 23H2 or 24H2
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
Source: https://github.com/wangliang181230/dev-sidecar/tree/v2.0.0-RC4.3
Release: https://github.com/wangliang181230/dev-sidecar/releases/tag/v2.0.0-RC4.3
Open `Dev-Sidecar-2.0.0-RC4.3` without error.
### Actual Behavior
Open `Dev-Sidecar-2.0.0-RC4.3` but it shows a white screen, and after waiting for a period of time, the software automatically closed.
And no logs were recorded.

### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/windows,bug :beetle:,blocked/need-repro,33-x-y | low | Critical |
2,804,105,274 | material-ui | [TextField] Autocomplete does not allow the select to open when text field uses input slotProps (any of them) | ### Steps to reproduce
Steps:
1. Open this link to live example: https://stackblitz.com/edit/react-eomwegjs?file=Demo.tsx
2. Click on Autocomplete, select will not open.
### Current behavior
Clicking on the Autocomplete does not open the list when input slotProps are provided to the TextField component.
### Expected behavior
When clicking on the TextField, the list should open (in this case, the movies example).
### Context
I need an Autocomplete with a custom TextField containing end adornments.
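Not part of the original report: a minimal sketch of the workaround I would expect to need here, assuming the usual `renderInput` contract. The key point is that `params.InputProps` provided by Autocomplete is merged into `slotProps.input` rather than replaced; the component name and the adornment content are illustrative only.
```jsx
import Autocomplete from '@mui/material/Autocomplete';
import TextField from '@mui/material/TextField';
import InputAdornment from '@mui/material/InputAdornment';

export default function MoviesAutocomplete({ options }) {
  return (
    <Autocomplete
      options={options}
      renderInput={(params) => (
        <TextField
          {...params}
          label="Movie"
          slotProps={{
            input: {
              // Keep Autocomplete's ref, handlers and default adornments...
              ...params.InputProps,
              // ...and append the custom end adornment after the popup/clear icons.
              endAdornment: (
                <>
                  {params.InputProps.endAdornment}
                  <InputAdornment position="end">#</InputAdornment>
                </>
              ),
            },
          }}
        />
      )}
    />
  );
}
```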
### Your environment
Forked on stackblitz from MUI autocomplete example.
**Search keywords**: TextField, slotProps, AutoComplete | support: question,component: text field,status: waiting for author | low | Major |
2,804,109,123 | PowerToys | World Clock | ### Description of the new feature / enhancement
You could add a plugin or build world clock functionality into PowerToys Run to easily check the time in different countries.
### Scenario when this would be used?
To check world times easily.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,804,147,705 | pytorch | The possible error in the pytorch documentation of RNN. | ### 📚 The doc issue
### 1. Where is the documentation?
URL: https://pytorch.org/docs/stable/generated/torch.nn.RNN.html#rnn
### 2. What is the possible error?
The documentation provides a piece of code labeled "Efficient implementation equivalent to the following with bidirectional=False", which is shown below:
```python
# Efficient implementation equivalent to the following with bidirectional=False
def forward(x, h_0=None):
if batch_first:
x = x.transpose(0, 1)
seq_len, batch_size, _ = x.size()
if h_0 is None:
h_0 = torch.zeros(num_layers, batch_size, hidden_size)
h_t_minus_1 = h_0
h_t = h_0
output = []
for t in range(seq_len):
for layer in range(num_layers):
h_t[layer] = torch.tanh(
x[t] @ weight_ih[layer].T
+ bias_ih[layer]
+ h_t_minus_1[layer] @ weight_hh[layer].T
+ bias_hh[layer]
)
output.append(h_t[-1])
h_t_minus_1 = h_t
output = torch.stack(output)
if batch_first:
output = output.transpose(0, 1)
return output, h_t
```
However, this piece of code **does not describe** the RNN implementation correctly, because it uses `x[t]` as the input data to compute `h_t[layer]` **in every RNN layer** at time `t`.
To compute `h_t[layer]` correctly, the input data for each RNN layer at time `t` should be `x[t]` when `layer == 0` and `h_t[layer - 1]` when `layer > 0`, respectively.
Thus, a correct interpretation of the RNN implementation could be:
### 3. The code of possible correct interpretation of the RNN implementation
```python
def forward(x, h_0=None):
if batch_first:
x = x.transpose(0, 1)
seq_len, batch_size, _ = x.size()
if h_0 is None:
h_0 = torch.zeros(num_layers, batch_size, hidden_size)
h_t_minus_1 = h_0
h_t = h_0
output = []
for t in range(seq_len):
input_t = x[t]
for layer in range(num_layers):
h_t[layer] = torch.tanh(
input_t @ weight_ih[layer].T
+ bias_ih[layer]
+ h_t_minus_1[layer] @ weight_hh[layer].T
+ bias_hh[layer]
)
input_t = h_t[layer]
output.append(h_t[-1])
h_t_minus_1 = h_t
output = torch.stack(output)
if batch_first:
output = output.transpose(0, 1)
return output, h_t
```
### Suggest a potential alternative/fix
_No response_
cc @mikaylagawarecki | module: rnn,triaged | low | Critical |
2,804,160,391 | node | http: `HEAD` request consumes response body | ### Version
v22.10.0
### Platform
```text
Linux tumba 6.12.6-gentoo-yuran #1 SMP Sat Dec 21 16:28:04 +08 2024 x86_64 Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz GenuineIntel GNU/Linux
```
### Subsystem
http
### What steps will reproduce the bug?
```js
// srv.mjs
import { createServer } from 'node:http';
import { Readable } from 'node:stream';
const stream = Readable.from(['Hello ', 'World', '\n']);
stream.on('end', () => console.log('Body consumed'));
const server = createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
stream.pipe(res);
});
server.listen(9999, '127.0.0.9');
```
```bash
$ node srv.mjs
```
```bash
$ curl -I http://127.0.0.9:9999/
$ curl -v http://127.0.0.9:9999/
```
### How often does it reproduce? Is there a required condition?
Always.
### What is the expected behavior? Why is that the expected behavior?
No console output on server side after:
```console
$ curl -I http://127.0.0.9:9999
HTTP/1.1 200 OK
Content-Type: text/plain
Date: Wed, 22 Jan 2025 12:34:56 GMT
Connection: keep-alive
Keep-Alive: timeout=5
```
`Body consumed` output on server side after:
```console
$ curl http://127.0.0.9:9999
Hello World
```
### What do you see instead?
`Body consumed` on server terminal after sending `HEAD` request.
Nothing on client terminal after sending subsequent `GET` request.
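Not part of the original report: a minimal application-side guard that avoids piping (and therefore consuming) the stream for `HEAD` requests, assuming the handler shape from the repro above. It sidesteps the symptom but does not address the underlying behavior.
```js
// Hedged sketch: send headers only for HEAD, and build the body stream per
// request so a later GET still has data to send.
import { createServer } from 'node:http';
import { Readable } from 'node:stream';

const server = createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  if (req.method === 'HEAD') {
    res.end(); // never touch the body stream, so nothing gets consumed
    return;
  }
  Readable.from(['Hello ', 'World', '\n']).pipe(res);
});

server.listen(9999, '127.0.0.9');
```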
### Additional information
With a patch from https://github.com/nodejs/node/pull/56681, the body is correctly recognized as a stream, resulting in a `Transfer-Encoding: chunked` header in the first response. However, the body is still consumed, and subsequent requests have a zero-length body. | http | low | Critical |
2,804,172,109 | vscode | Codespace Issues | I'm having problems with my school codespace. It says I have to run a command, but I don't know how or where. It says I'm in recovery mode, but nothing has come from it.



Version: 1.96.3
Commit: 91fbdddc47bc9c09064bf7acf133d22631cbf083
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 OPR/113.0.0.0
Embedder: codespaces
<!-- generated by web issue reporter --> | triage-needed | low | Minor |
2,804,200,155 | kubernetes | [Flaking Test] [sig-api-machinery] Kubernetes e2e suite.[It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource. | ### Which jobs are flaking?
master-blocking
- gce-cos-master-default
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Prow: https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gci-gce/1881681179850051584
Triage: https://storage.googleapis.com/k8s-triage/index.html?date=2025-01-21&test=ResourceQuota%20should%20create%20a%20ResourceQuota%20and%20capture%20the%20life%20of%20a%20custom%20resource.
### Since when has it been flaking?
[1/7/2025, 2:23:00 AM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-prow-canary/1876499753093566464)
[1/9/2025, 8:04:02 AM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gci-gce-ip-alias/1877310352669020160)
[1/9/2025, 3:40:02 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-prow-canary/1877425109312999424)
[1/13/2025, 8:52:19 AM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/e2e-kops-aws-cni-calico-ipv6-flatcar/1878771997446508544)
[1/21/2025, 9:32:08 AM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gci-gce/1881681179850051584)
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-default
### Reason for failure (if possible)
```
{ failed [FAILED] client rate limiter Wait returned an error: context deadline exceeded
In [It] at: k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:673 @ 01/21/25 12:55:52.138
}
```
### Anything else we need to know?
N/A
### Relevant SIG(s)
/sig api-machinery
cc @kubernetes/release-team-release-signal | sig/api-machinery,kind/flake,needs-triage | low | Critical |
2,804,220,630 | transformers | forward() got an unexpected keyword argument 'num_items_in_batch' | ### System Info
New versions can't train encoder-decoder models.
Related issue and pull request: https://github.com/huggingface/transformers/issues/34575
System-Info:
- `transformers` version: 4.48.1
- Platform: Linux-6.8.0-36-generic-x86_64-with-glibc2.39
- Python version: 3.12.8
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: Tesla V100-PCIE-32GB
```
Traceback (most recent call last):
File "/home/hilsenbek/workspace/thesis/syntax_transformer/training/train_cross_attention.py", line 110, in <module>
trainer.train()
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/transformers/trainer.py", line 2171, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/transformers/trainer.py", line 2531, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/transformers/trainer.py", line 3675, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/transformers/trainer.py", line 3731, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/accelerate/utils/operations.py", line 823, in forward
return model_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/accelerate/utils/operations.py", line 811, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 43, in decorate_autocast
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 603, in forward
encoder_outputs = self.encoder(
^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hilsenbek/.conda/envs/harness/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: RobertaModel.forward() got an unexpected keyword argument 'num_items_in_batch'
```
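Not part of the original report: the traceback shows the extra `num_items_in_batch` keyword being forwarded through `EncoderDecoderModel.forward` into the encoder. As a temporary, hedged workaround sketch (assuming dropping the keyword is acceptable for your loss computation; the wrapper name is made up), the model can be wrapped so the unexpected keyword never reaches the encoder:
```python
import torch

class DropNumItemsWrapper(torch.nn.Module):
    """Thin wrapper that swallows the `num_items_in_batch` kwarg passed by Trainer."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, *args, num_items_in_batch=None, **kwargs):
        # Ignore the extra keyword and delegate everything else unchanged.
        return self.model(*args, **kwargs)

# Hypothetical usage with the encoder-decoder model from the blog post:
# trainer = Seq2SeqTrainer(model=DropNumItemsWrapper(model), args=training_args, ...)
```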
### Who can help?
@ArthurZucker
@gheinrich
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
follow the blog https://huggingface.co/blog/encoder-decoder
### Expected behavior
Training should work as it did in older transformers versions. | bug | medium | Critical |
2,804,279,688 | godot | [4.4 beta 1] TLS Handshake Error with Godot HTTPRequest | ### Tested versions
4.4 beta 1
### System information
Windows 11 - Godot 4.4 beta 1 - Vulkan (Mobile)
### Issue description
Iโm encountering a TLS handshake error when trying to make a request to https://graph.oculus.com/ using Godot's HTTPRequest.
* Requests to https://www.google.com/ work without any issues.
* Python's requests library can access the URL without any problems.
* The same https://graph.oculus.com/ URL works correctly in Postman.
* I attempted to create a certificate bundle as well, but it was unsuccessful.
* This issue appears to be specific to Godot.
CPP ERROR:
E 0:00:01:0639 StreamPeerMbedTLS::_do_handshake: TLS handshake error: -28800
<C++ Source> modules\mbedtls\stream_peer_mbedtls.cpp:88 @ StreamPeerMbedTLS::_do_handshake()
### Steps to reproduce
* Create simple project
* Add a HTTPRequest node
* Add script:
```
extends Node3D
@onready var http: HTTPRequest = $HTTPRequest
func _ready() -> void:
http.request("https://graph.oculus.com/")
print(await http.request_completed)
```
* Run
### Minimal reproduction project (MRP)
[teste-vr.zip](https://github.com/user-attachments/files/18505403/teste-vr.zip) | bug,needs testing,topic:network | low | Critical |
2,804,289,207 | storybook | [Bug]: Controls not showing on first load in 8.5.0 (web components) | ### Describe the bug
When upgrading from 8.4 to 8.5.0 in our Lit/web components project we noticed that the controls panel was empty for every story.
When you opened the story a second time, the controls would show up as expected, as they did in 8.4.
I can reproduce the same problem in the example project you get when running `npx storybook@latest init` in an empty folder, and also in the default stackblitz.
### Reproduction link
https://stackblitz.com/github/storybookjs/sandboxes/tree/next/lit-vite/default-ts/after-storybook?embed=1&file=README.md
### Reproduction steps
1. If you show the mobile experience, resize window/frame to get the desktop experience
2. Click on the story Button > Secondary
3. If not visible, expand the addons panel
**Expected result:**
Controls show up
**Result:**
Controls do not show up
**Workaround:**
Click on another story and then click on Button > Secondary again. The controls are now visible.
### System
```bash
Storybook Environment Info:
System:
OS: Windows 10 10.0.19045
CPU: (8) x64 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
Binaries:
Node: 20.18.0 - C:\Program Files\nodejs\node.EXE
npm: 10.8.2 - C:\Program Files\nodejs\npm.CMD <----- active
Browsers: {}
npmPackages:
@storybook/addon-essentials: ^8.5.0 => 8.5.0
@storybook/blocks: ^8.5.0 => 8.5.0
@storybook/test: ^8.5.0 => 8.5.0
@storybook/web-components: ^8.5.0 => 8.5.0
@storybook/web-components-vite: ^8.5.0 => 8.5.0
storybook: ^8.5.0 => 8.5.0
```
### Additional context
_No response_ | bug,web-components,addon: controls,upgrade:8.5 | low | Critical |
2,804,304,194 | material-ui | [docs] Pigment and Emotion at the same time in application | ### Related page
https://mui.com/material-ui/migration/migrating-to-pigment-css/
### Kind of issue
Unclear explanations
### Issue description
Is there a guide that explains how to migrate from the current Emotion setup to Pigment CSS while both live together in the same code base, i.e. a gradual upgrade from Emotion to Pigment without breaking the whole application? I have seen an [example](https://github.com/mui/pigment-css/blob/master/examples/pigment-css-webpack-ts/webpack.config.cjs) of how to migrate while using webpack, but it failed, because there is no ThemeProvider equivalent to share the same theme with Pigment.
### Context
_No response_
**Search keywords**: migration, guide, pigment, emotion | docs,status: waiting for maintainer,support: docs-feedback,package: pigment-css | low | Critical |
2,804,315,742 | ollama | Llama 3.1 sha256 mismatch | ### What is the issue?
<img width="909" alt="Image" src="https://github.com/user-attachments/assets/a8f79e64-2f9b-4a6f-b5cc-a1534c8479b5" />
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.5.7 | bug | low | Minor |
2,804,365,566 | PowerToys | When using Text Extractor it should be able to keep the same format as the text you are extracting. | ### Description of the new feature / enhancement
If you're extracting text that is on multiple lines, it should be pasted as such.
Links should be links.
Bold should be bold. etc.
### Scenario when this would be used?
Right now it takes a large amount of effort to reformat the text after it has been copied.
### Supporting information
Right now this is what it looks like.
 | Needs-Triage | low | Minor |
2,804,367,860 | pytorch | Set `size` when `is_coalesced` is set in `torch.sparse_coo_tensor()` | ### 📚 The doc issue
The doc of [torch.sparse_coo_tensor()](https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html#torch-sparse-coo-tensor) shows its `Parameters`/`Keyword Arguments` as below:
> size (list, tuple, or torch.Size, optional) โ Size of the sparse tensor. If not provided the size will be inferred as the minimum size big enough to hold all non-zero elements.
> is_coalesced (bool, optional) โ When`True`, the caller is responsible for providing tensor indices that correspond to a coalesced tensor. If the `check_invariants` flag is False, no error will be raised if the prerequisites are not met and this will lead to silently incorrect results. To force coalescion please use `coalesce()` on the resulting Tensor. Default: None: except for trivial cases (e.g. nnz < 2) the resulting Tensor has is_coalesced set to `False`.
But when `is_coalesced` is passed explicitly, whether as None, True, or False, `size` must also be set properly, and the documentation neither notes nor warns about this.
### Repro
```python
import torch
is_coalesced = True # choice: None, True, False
i = torch.tensor([[0, 1, 0], [1, 2, 3]])
v = torch.tensor([3.0, 4.0, 5.0])
s = (2, 3)
result = torch.sparse_coo_tensor(i, v, is_coalesced=is_coalesced) # always fail
# result = torch.sparse_coo_tensor(i, v, s, is_coalesced=is_coalesced) # always success
```
### Outputs
```txt
TypeError: sparse_coo_tensor() received an invalid combination of arguments - got (Tensor, Tensor, is_coalesced=bool), but expected one of:
* (object indices, object values, *, torch.dtype dtype = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False, bool check_invariants = None)
* (object indices, object values, tuple of ints size, *, torch.dtype dtype = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False, bool check_invariants = None, bool is_coalesced = None)
* (tuple of ints size, *, torch.dtype dtype = None, torch.device device = None, bool requires_grad = False, bool check_invariants = None)
```
### Suggest a potential alternative/fix
So, a `Note`/`Warning` should be added to the doc of [torch.sparse_coo_tensor()](https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html#torch-sparse-coo-tensor) as shown below:
> Note/Warning:
When `is_coalesced` is set, whether it is None/True/False/..., `size` must be set properly.
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @svekars @brycebortree @sekyondaMeta @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: sparse,triaged | low | Critical |
2,804,399,937 | material-ui | Getting this error -> Warning: Failed prop type: Invalid prop `disabled` of type `string` | ### Steps to reproduce
Render a button and provide disabled prop to it.

### Current behavior
Gives a warning

### Expected behavior
Should not give warning
### Context
_No response_
### Your environment
react: 18.3.1
mui: 6.4.0
Chrome
**Search keywords**: button | component: button,status: waiting for author | low | Critical |
2,804,451,669 | pytorch | [XPU] torch.nn.functional.pad produces incorrect results with torch.compile on Intel GPU | ### 🐛 Describe the bug
torch.nn.functional.pad produces incorrect results with torch.compile on an Intel GPU (XPU).
```python
import torch
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, *args):
pad = torch.nn.functional.pad(args[0], (0, 1, 1, 0), mode = 'constant', value = 0.5)
return pad
m = Model()
inp = torch.randn((1, 1), dtype=torch.float32)
print(inp)
# tensor([[-0.5137]])
m.to('cpu')
cpu_out = m(inp.to('cpu'))
print(cpu_out)
# tensor([[ 0.5000, 0.5000],
# [-0.5137, 0.5000]])
m.to('xpu')
xpu_out = m(inp.to('xpu'))
print(xpu_out)
# tensor([[ 0.5000, 0.5000],
# [-0.5137, 0.5000]], device='xpu:0')
opt = torch.compile(m, fullgraph=True, backend='inductor', mode=None)
opt.to('cpu')
cpu_out = opt(inp.to('cpu'))
print(cpu_out)
# tensor([[ 0.5000, 0.5000],
# [-0.5137, 0.5000]])
opt.to('xpu')
xpu_out = opt(inp.to('xpu'))
print(xpu_out) # Different!
# tensor([[-0.5137, -0.5137],
# [-0.5137, -0.5137]], device='xpu:0')
```
### **Error Logs**
```bash
tensor([[-0.5137]])
tensor([[ 0.5000, 0.5000],
[-0.5137, 0.5000]])
tensor([[ 0.5000, 0.5000],
[-0.5137, 0.5000]], device='xpu:0')
tensor([[ 0.5000, 0.5000],
[-0.5137, 0.5000]])
tensor([[-0.5137, -0.5137],
[-0.5137, -0.5137]], device='xpu:0')
```
### Versions
PyTorch version: 2.5.1+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 18
On-line CPU(s) list: 0-17
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 5 125H
CPU family: 6
Model: 170
Thread(s) per core: 2
Core(s) per socket: 9
Socket(s): 1
Stepping: 4
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtop
ology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_sin
gle ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 432 KiB (9 instances)
L1i cache: 576 KiB (9 instances)
L2 cache: 18 MiB (9 instances)
L3 cache: 18 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241211
[pip3] pytorch-triton-xpu==3.1.0
[pip3] torch==2.5.1+xpu
[pip3] torchaudio==2.5.1+xpu
[pip3] torchvision==0.20.1+xpu
[conda] numpy 2.1.3 pypi_0 pypi
[conda] pytorch-triton-xpu 3.1.0 pypi_0 pypi
[conda] torch 2.5.1+xpu pypi_0 pypi
[conda] torchaudio 2.5.1+xpu pypi_0 pypi
[conda] torchvision 0.20.1+xpu pypi_0 pypi
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @gujinghui @fengyuan14 @guangyey @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | triaged,oncall: pt2,module: inductor,module: xpu | low | Critical |
2,804,468,967 | go | weak: docs don't guarantee Pointer compares not equal for different values | weak.Pointer docs say this about when they compare equal
```
// Two Pointer values always compare equal if the pointers from which they were
// created compare equal. This property is retained even after the
// object referenced by the pointer used to create a weak reference is
// reclaimed.
// If multiple weak pointers are made to different offsets within the same object
// (for example, pointers to different fields of the same struct), those pointers
// will not compare equal.
// If a weak pointer is created from an object that becomes unreachable, but is
// then resurrected due to a finalizer, that weak pointer will not compare equal
// with weak pointers created after the resurrection.
```
However, to use a weak.Pointer safely as a cache key, two things need to be true:
- always compare equal if the pointers from which they were created compare equal (yes)
- never compare equal if the pointers from which they were created don't compare equal (unclear?)
In particular, I am worried about using `weak.Make(x)` as a cache key, then x is garbage collected, a new value y is allocated at the same address, and now `weak.Make(y)` compares equal to my `weak.Pointer` map key.
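For concreteness, a minimal sketch of the cache-key pattern in question (not from the original report, and assuming the Go 1.24 `weak` API of `weak.Make` and `Pointer.Value`):
```go
package main

import (
	"fmt"
	"runtime"
	"weak"
)

type entry struct{ payload string }

func main() {
	// Cache keyed by weak pointers, as in the scenario above.
	cache := map[weak.Pointer[entry]]string{}

	x := &entry{payload: "x"}
	cache[weak.Make(x)] = "metadata for x"

	// Suppose x later becomes unreachable and is collected...
	x = nil
	runtime.GC()

	// ...and a new value y happens to be allocated at the same address.
	y := &entry{payload: "y"}
	k := weak.Make(y)

	// The open question: is cache[k] guaranteed to miss here, i.e. is
	// weak.Make(y) guaranteed to never compare equal to the old key?
	meta, ok := cache[k]
	fmt.Println(meta, ok)
}
```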
I *think* this can't be the case, because at that point `Value` would end up returning `y` instead of `x`, but that's me reading into the implementation details, rather than the docs telling me what I am doing is safe. | Documentation,NeedsFix,BugReport | low | Major |
2,804,483,960 | angular | Untracked() should break cycle in computations | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
I just updated to version **19.1.2** and suddenly I get an error at compile time: `Error: Detected cycle in computations.`
It made me discover that I have a `rxResource` whose request calls a `linkedSignal` that depends on the resource's value itself.
Here's part of my code:
- I want `loadedRanges` to update with the new range, but only after the resource has finished loading.
- On the other hand, the resource needs access to the `loadedRanges` to know if the range has already been loaded or not
But if I use `untracked` in the resource's request, shouldn't it break the cycle? Especially if it depends on a `linkedSignal` which should always have a value.
```
from = signal(new Date());
to = signal(new Date());
loadedRanges = linkedSignal<any, any[]>({
source: () => ({
value: this.resource.value()
}),
computation: (source, previous) => {
const previousRanges = previous?.value || [];
if (source.value) {
const newRange = {
from: untracked(() => this.from()),
to: untracked(() => this.to()),
};
return this.mergeRanges([...previousRanges, newRange]);
} else {
return previousRanges;
}
},
});
resource = rxResource<any[], any>({
request: () => {
const requiredRange = {
from: this.from(),
to: this.to(),
};
const ranges = untracked(() => this.loadedRanges()); // <-- here I use the untracked
const isLoaded = this.isRangeFullyCovered(requiredRange, ranges);
return isLoaded ? undefined : true;
},
loader: () => {
// call to the API
return of([]);
}
});
```
My code was working without problems before updating (on v19.0.1).
Am I wrong in stating that `untracked` should break the cycle and that the error shouldn't be triggered?
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-8jm3wgx2?file=src%2Fmain.ts
### Please provide the exception or error you saw
```true
Error: Detected cycle in computations.
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.1.3
Node: 18.19.1
Package Manager: npm 10.2.4
OS: darwin x64
Angular: 19.1.2
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router
... service-worker
Package Version
--------------------------------------------------------------
@angular-devkit/architect 0.1901.3
@angular-devkit/build-angular 19.1.3
@angular-devkit/core 19.1.3 (cli-only)
@angular-devkit/schematics 19.1.3
@angular/cdk 19.1.0
@angular/cli 19.1.3
@angular/fire 19.0.0
@angular/material 19.1.0
@angular/material-date-fns-adapter 19.1.0
@schematics/angular 19.1.3
ng-packagr 19.0.0
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
_No response_ | area: core,needs: clarification,core: reactivity | low | Critical |
2,804,510,443 | tauri | [bug] Event Listeners Stack on Repeated WebviewWindow Creation | ### Describe the bug
Event listeners stack each time I run the creation function. This leads to the same event being triggered multiple times.
### Reproduction
https://github.com/shonya3/tauri-events-stacking
1. npm i, npm run tauri dev, keep terminal open to see output.
2.

Click the button (the Reproduction window should appear).
3. Close Reproduction window.
4. Repeat 2. and 3. multiple times and check the terminal.

### Expected behavior
Event listeners should not stack in this case when you create a new window.
```rs
fn create_reproduction_window(handle: &AppHandle) -> WebviewWindow {
let window = WebviewWindowBuilder::new(
handle,
"reproduction",
tauri::WebviewUrl::App("/reproduction".into()),
)
.build()
.unwrap();
window.listen("reproduction-ping", move |_| {
println!("Reproduction pong");
});
window
}
```
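Not part of the original report: a hedged sketch of one way to avoid the accumulation, assuming `listen` returns an `EventId` accepted by `unlisten` (via the `Listener` trait) and that `WebviewWindow` exposes `on_window_event`; treat the exact API surface as an assumption rather than a confirmed fix.
```rs
use tauri::{AppHandle, Listener, WebviewWindow, WebviewWindowBuilder, WindowEvent};

fn create_reproduction_window(handle: &AppHandle) -> WebviewWindow {
    let window = WebviewWindowBuilder::new(
        handle,
        "reproduction",
        tauri::WebviewUrl::App("/reproduction".into()),
    )
    .build()
    .unwrap();

    // Keep the id so the handler can be removed again.
    let event_id = window.listen("reproduction-ping", move |_| {
        println!("Reproduction pong");
    });

    // Assumption: dropping the handler when this window instance is destroyed
    // keeps the next creation from accumulating another listener.
    let win = window.clone();
    window.on_window_event(move |event| {
        if let WindowEvent::Destroyed = event {
            win.unlisten(event_id);
        }
    });

    window
}
```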
### Full `tauri info` output
```text
[โ] Environment
- OS: Windows 10.0.26100 x86_64 (X64)
โ WebView2: 131.0.2903.146
โ MSVC: Visual Studio Community 2022
โ rustc: 1.84.0 (9fc6b4312 2025-01-07)
โ cargo: 1.84.0 (66221abde 2024-11-19)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- node: 23.6.0
- pnpm: 9.14.4
- npm: 10.9.2
- bun: 1.1.45
[-] Packages
- tauri ๐ฆ: 2.2.3
- tauri-build ๐ฆ: 2.0.5
- wry ๐ฆ: 0.48.1
- tao ๐ฆ: 0.31.1
- tauri-cli ๐ฆ: 2.2.5
- @tauri-apps/api ๎: 2.2.0
- @tauri-apps/cli ๎: 2.2.5
[-] Plugins
- tauri-plugin-opener ๐ฆ: 2.2.5
- @tauri-apps/plugin-opener ๎: 2.2.5
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- bundler: Vite
```
### Stack trace
```text
```
### Additional context
https://discord.com/channels/616186924390023171/1331321595314176030 | type: bug,status: needs triage | low | Critical |
2,804,529,853 | godot | Intermittent Nonexistent function exception when function exists on child class but not parent using GDScript | ### Tested versions
- Reproducible in 4.4 Beta 1, 4.4 dev 7
### System information
PopOS 22.04 LTS, AMDยฎ Ryzen 7 6800h, AMDยฎ Radeon rx 6700m
### Issue description
Intermittent Nonexistent function exception when function exists on child class but not parent using GDScript
I have to restart the whole editor for it to recognize that the function exists on the child class but not the parent class
For additional context, I have the following:
InputComponent - This is the parent class
PlayerInputComponent - This child class inherits from the InputComponent
NpcInputComponent - This child class also inherits from the InputComponent
Both child classes have a method with the same name.
The method with the same name is accessed in other states in the game to calculate movement direction.

Defined parent class:
```
class_name InputComponent
extends Component
```
Defined child classes with the `move_direction` function:
```
class_name PlayerInputComponent
extends InputComponent
```
```
class_name NpcInputStateComponent
extends InputComponent
```
Defined and used in a state; `_input_component` is set on `_enter`. Either `PlayerInputComponent` or `NpcInputStateComponent` could be assigned, both of which contain the `move_direction` function:
```
var _input_component: InputComponent
```
```
func update_input_move_direction() -> void:
input_move_direction = _input_component.move_direction()
```
### Steps to reproduce
I'm not sure how to trigger it reliably, because it does not happen all the time.
For additional context, I am not modifying this method, or where it is called. It is unrelated to any code that I am working on, but it triggers intermittently.
- Create a class without a method
- Create a child class that inherits from the first class with the same method
- Trigger the child method during the game at some point
- See that the game doesn't recognize that the function exists
It seems like the error occurs if you modify the parent class? It still happens intermittently after many runs and file modifications, though.
### Minimal reproduction project (MRP)
[gdscript_function_error_inherit.zip](https://github.com/user-attachments/files/18520982/gdscript_function_error_inherit.zip)
Make sure to assign the `ChildScript` node to the `StateExample` export in the `main` scene:

 | bug,topic:gdscript,needs testing | low | Critical |
2,804,546,480 | pytorch | Memory Leak in MPS Backend During LSTM Iterations (Out of Memory Error) | ### 🐛 Describe the bug
## Bug Description
When running a simple LSTM model on the MPS backend with a repetitive loop, memory usage steadily increases, eventually leading to an Out of Memory error. This issue occurs despite clearing the MPS memory cache using torch.mps.empty_cache() after every iteration. The error happens after running approximately 15,666 iterations with a batch size of 16 and hidden size of 256.
## Reproduction Steps
Run the following code to reproduce the issue:
```py
import torch
import torch.nn as nn
import platform
class LSTMModel(nn.Module):
def __init__(self, input_size, hidden_size, num_layers=1, batch_first=True):
super(LSTMModel, self).__init__()
self.lstm = nn.LSTM(input_size, hidden_size, num_layers=num_layers, batch_first=batch_first)
def forward(self, x, hidden):
output, hidden = self.lstm(x, hidden)
return output, hidden
def check_memory_leak():
input_size = 256
hidden_size = 256
batch_size = 16
sequence_length = 10
num_iterations = 100000 # Set a high number to check for memory leaks
# Use MPS if available
device = "mps" if torch.backends.mps.is_available() else "cpu"
# Model initialization
model = LSTMModel(input_size, hidden_size).to(device)
# Input data and hidden state initialization
x = torch.randn(batch_size, sequence_length, input_size).to(device)
hidden = (
torch.zeros(1, batch_size, hidden_size).to(device),
torch.zeros(1, batch_size, hidden_size).to(device),
)
print("Starting memory check...")
for i in range(num_iterations):
with torch.no_grad():
output, hidden = model(x, hidden)
# Clear MPS memory cache
torch.mps.empty_cache()
print(f"Iteration {i + 1}/{num_iterations}: Completed")
if __name__ == "__main__":
print("PyTorch Version:", torch.__version__)
print("Python Version:", platform.python_version())
print("Platform:", platform.system(), platform.release())
print("MPS Available:", torch.backends.mps.is_available())
print("MPS Built:", torch.backends.mps.is_built())
check_memory_leak()
```
## Expected Behavior
Memory usage should remain stable or properly recycle after clearing the cache with torch.mps.empty_cache().
## Observed Behavior
The program crashes with an Out of Memory error after ~15,666 iterations. The error message is as follows:
```
RuntimeError: MPS backend out of memory (MPS allocated: 24.00 MB, other allocations: 27.18 GB, max allowed: 27.20 GB). Tried to allocate 16.00 KB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
```
## Environment Information
MacBook Air 15 M3(24GB)
PyTorch Version: 2.5.1
Python Version: 3.12.2
Platform: Darwin 24.3.0
MPS Available: True
MPS Built: True
## Additional Context
This issue may be related to the MPS backend's memory management while handling LSTM computations. Using `torch.mps.empty_cache()` does not appear to effectively release memory in this scenario. The problem persists even when `torch.no_grad()` is used.
## Request
Could you please investigate the memory leak issue in the MPS backend for LSTM models? Let me know if further debugging or testing is needed.
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.3 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.3
Libc version: N/A
Python version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:54:21) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3
Versions of relevant libraries:
[pip3] efficientnet_pytorch==0.7.1
[pip3] numpy==1.26.4
[pip3] segmentation_models_pytorch==0.4.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] numpy 2.2.1 pypi_0 pypi
[conda] numpy-base 1.26.4 py312he047099_0
[conda] segmentation-models-pytorch 0.4.0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.4.1 py312_cpu pytorch
[conda] torchvision 0.19.1 py312_cpu pytorch
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @mikaylagawarecki @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: rnn,module: memory usage,triaged,module: mps | low | Critical |
2,804,566,477 | kubernetes | Documentation of pod selection for node-pressure eviction is confusing regarding QoS | In [the Node-pressure doc explaining the ranking of pods for eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction), there is a note saying that QoS are not used for this ranking (which appears to be correct, AFAICT from the code).
However, this section still mentions QoS extensively, and it reads a bit ambiguously. In particular, this section:
>As a result, kubelet ranks and evicts pods in the following order:
>
> 1. BestEffort or Burstable pods where the usage exceeds requests. These pods are evicted based on their Priority and then by how much their usage level exceeds the request.
> 2. Guaranteed pods and Burstable pods where the usage is less than requests are evicted last, based on their Priority.
>
> Note:
> The kubelet does not use the pod's [QoS class](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/) to determine the eviction order. You can use the QoS class to estimate the most likely pod eviction order when reclaiming resources like memory. QoS classification does not apply to EphemeralStorage requests, so the above scenario will not apply if the node is, for example, under DiskPressure.
The first half directly mentions QoS in relation to the eviction order, while the note specifically says it is not used. While I suppose the note is intended as disambiguation, that is not explicitly stated.
Furthermore, the [documentation on QoS](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/) says that the kubelet uses the QoS class to determine which pods to evict, which seems to contradict the above.
/kind documentation
/sig docs | kind/documentation,sig/node,sig/docs,needs-triage | low | Major |
2,804,576,535 | godot | Can't pan on the visual shader editor while the mouse cursor is on a node preview | ### Tested versions
- Reproducible in: 4.0.stable, 4.2.stable, 4.3.stable, 4.4.beta1
### System information
Godot v4.4.dev (24d74510e) - Fedora Linux 41 (KDE Plasma) on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (nvidia; 565.77) - 13th Gen Intel(R) Core(TM) i9-13900K (32 threads)
### Issue description
It's not possible to pan on the visual shader editor while the mouse cursor is on a node preview:
https://github.com/user-attachments/assets/83b286a6-3f42-4c70-a4b8-f5c007973fa8
It should logically be possible as the preview can't be interacted with in any way.
### Steps to reproduce
- Create a visual shader with a node like ColorFunc and enable its preview by clicking on the eye icon.
- Try to pan with the middle mouse button or <kbd>Space</kbd> + left mouse button outside the preview.
- Try to do the same, but with the mouse movement starting inside the preview.
### Minimal reproduction project (MRP)
[test_visual_shader_pan.zip](https://github.com/user-attachments/files/18507065/test_visual_shader_pan.zip) | bug,topic:editor,usability,topic:shaders | low | Minor |
2,804,582,237 | angular | Missing localize config documentation | ### Describe the problem that you experienced
We use "de" as sourceLocale, but would like to host all locales under a 5-letter code. Apart from the fact that I don't understand in which cases the locales are 2-letter and why, for example, "de-DE" is not also available as a 5-letter variant, there currently seems to be no documentation on the sourceLocale node of angular.json.
I have seen in another issue (https://github.com/angular/angular-cli/issues/17144) that sourceLocale also accepts an object with baseHref and code in addition to a string. If you set this, the baseHref in the index.html is adjusted, but the build is still written to the "de" directory.
I know I can write a postbuild script to change the directory naming from "de" to "de-DE", but I'm wondering why you support setting a baseHref but not renaming the directory. Maybe there is another undocumented property that allows renaming, but apart from that issue I couldn't find any information about this.
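For reference (not from the original report), this is roughly the configuration shape being discussed, with `sourceLocale` as an object carrying `code` and `baseHref`; the locale file path is illustrative, and whether any property here can also rename the output directory is exactly the undocumented part:
```json
{
  "projects": {
    "app": {
      "i18n": {
        "sourceLocale": {
          "code": "de-DE",
          "baseHref": "/de-DE/"
        },
        "locales": {
          "en-US": "src/locale/messages.en-US.xlf"
        }
      }
    }
  }
}
```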
### Enter the URL of the topic with the problem
https://angular.dev/guide/i18n
### Describe what you were looking for in the documentation
I'm missing typed documentation of the i18n node of angular.json. There is no hint about which properties can be set under i18n beyond the provided samples. At the very least, the information that sourceLocale can also be an object is missing.
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
```true
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
```true
```
### Provide any additional information here in as much as detail as you can
```true
``` | area: docs | low | Critical |