id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,703,165,881 | ui | [bug]: CLI doesn't interpret tsconfig baseUrl set to ${configDir} | ### Describe the bug
Since TS 5.5, the use of `${configDir}` in tsconfig is valid (more [info](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-5.html#the-configdir-template-variable-for-configuration-files)). It allows for highly reusable configs, which is very convenient.
My tsconfig looks like this:
```json
{
  "extends": "@codecompose/typescript-config/nextjs.json",
  "include": [
    "${configDir}",
    "${configDir}/**/*.json",
    "${configDir}/next-env.d.ts",
    "${configDir}/.next/types/**/*.ts"
  ]
}
```
The `include` list is really only there because otherwise Next.js currently trips over things, but for my backend services I only use the `extends`.
If I run the CLI this is what I get:
```sh
❯ npx shadcn@latest add tooltip
✔ Checking registry.
✔ Installing dependencies.
✔ Created 1 file:
- @codecompose/typescript-config/${configDir}/src/components/ui/tooltip.tsx
```
It looks like the CLI *needs* to have baseUrl in the tsconfig. This works:
```json
{
  "extends": "@codecompose/typescript-config/nextjs.json",
  "compilerOptions": {
    "baseUrl": "." // Required for ShadCN CLI
  },
  "include": [
    "${configDir}",
    "${configDir}/**/*.json",
    "${configDir}/next-env.d.ts",
    "${configDir}/.next/types/**/*.ts"
  ]
}
```
In the extended config, `baseUrl` is set to `${configDir}`. See https://github.com/0x80/typescript-config/blob/main/base.json
I think this should be allowed to work, but I'm not surprised that not all tooling handles it yet.
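For illustration, expanding the template variable before the CLI resolves `baseUrl` could be as simple as a string substitution against the directory of the root tsconfig. This is a hypothetical sketch, not shadcn's actual code; `resolveConfigDir` is an invented helper name:

```typescript
// Hypothetical helper (not part of the shadcn CLI): expand TS 5.5's
// ${configDir} template variable against the root tsconfig's directory.
function resolveConfigDir(value: string, configDir: string): string {
  // split/join avoids regex-escaping the "${...}" characters
  return value.split("${configDir}").join(configDir);
}

// Note: "${configDir}" in a double-quoted string is literal text,
// not template interpolation (that only happens in backtick strings).
const resolved = resolveConfigDir("${configDir}/src/components", "/home/user/my-app");
console.log(resolved); // → /home/user/my-app/src/components
```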
### Affected component/components
CLI
### How to reproduce
1. Use a similar configuration as described above
2. Run a similar CLI command as described above
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
NA
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,703,221,515 | ui | [bug]: <Link href="#abc"/> inside a dropdown or a drawer will scroll partially to target and scroll back. | ### Describe the bug
A `<Link href="#abc"/>` inside a dropdown or a drawer scrolls partway to the target and then jumps back to the top of the page.
I have a `<Link/>` anchored to a section's `#id`. When this `Link` component is inside a `<Drawer/>` or `<Dropdown/>`, clicking it scrolls partway to the target `id` and then scrolls back to the top of the page.
### Affected component/components
Dropdown, Drawer
### How to reproduce
Place a `Link` inside a dropdown or drawer with `href` set to an ID on the same page.
Click the `Link`.
Watch it scroll toward the target and back.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Consistent issue in Safari and Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,703,226,650 | rust | Tracking issue for release notes of #132706: Stabilize async closures (RFC 3668) |
This issue tracks the release notes text for #132706.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Language
- [Stabilize async closures (RFC 3668)](https://github.com/rust-lang/rust/pull/132706)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
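For readers drafting the text: the stabilized feature lets closures be written as `async |args| { .. }` and bounded by the new `AsyncFn`/`AsyncFnMut`/`AsyncFnOnce` traits. A minimal sketch, assuming a 1.85+ toolchain; the `block_on` here is a hand-rolled illustration for futures with no pending awaits, not a std API:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, Waker};

// Minimal executor for futures that never return Pending for long:
// polls in a loop with a no-op waker until the future is ready.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let mut cx = Context::from_waker(Waker::noop());
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// Generic over the stabilized `AsyncFn` trait, rather than the older
// `F: Fn(i32) -> Fut, Fut: Future` two-parameter workaround.
async fn call_twice<F: AsyncFn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(f(x).await).await
}

fn main() {
    let total = 10;
    // An async closure: calling it returns a future, and it can
    // capture `total` from the enclosing scope.
    let add = async |x: i32| x + total;
    assert_eq!(block_on(call_twice(add, 1)), 21);
    println!("ok");
}
```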
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @compiler-errors -- origin issue/PR authors and assignees for starting to draft text
| T-lang,relnotes,A-async-await,F-async_closure,relnotes-tracking-issue | low | Critical |
2,703,243,388 | godot | HTTPRequest.set_https_proxy and HTTPClient.set_https_proxy not working (with oxylabs residential proxy, others?) | ### Tested versions
Reproducible in 4.3.stable, macOS 14.5 and Linux Mint, tested with oxylabs.io residential proxies using both URL authentication (e.g. `myusername:mypassword.proxyhost.com`) and IP whitelisting. You will need your own proxy to test with.
### System information
v4.3 stable, tested on current MacOS, and on Linux Mint
### Issue description
There appear to be two separate problems with the [set_https_proxy](https://docs.godotengine.org/en/stable/classes/class_httprequest.html#class-httprequest-method-set-https-proxy) methods:
1. If you specify a host that contains a username, as is common practice with proxies (e.g. `myusername:mypassword.proxyhost.com`), the request fails with `ERR_UNCONFIGURED`.
2. If you specify a host that doesn't require authentication (e.g., as I've done by whitelisting my IP), the request fails with `ERR_UNAVAILABLE`.
Occurs with both HTTPRequest and HTTPClient.
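For reference, embedded proxy credentials conventionally follow the `user:password@host:port` URL form. A short Python sketch (illustrative only, not Godot code) of how such a host string splits into its parts before connecting:

```python
from urllib.parse import urlsplit

def split_proxy(proxy: str):
    """Split 'user:password@host:port' (or bare 'host:port') into parts."""
    # Prefix "//" so urlsplit treats the whole string as a network location.
    parts = urlsplit("//" + proxy)
    return parts.username, parts.password, parts.hostname, parts.port

print(split_proxy("myuser:mypass@proxyhost.com:10000"))
# → ('myuser', 'mypass', 'proxyhost.com', 10000)
```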
### Steps to reproduce
Attempt to use a proxy for an HTTPS connection.
Reproduction code (you will need your own proxy):
```gdscript
extends Node2D

@export var proxy_host = 'us-pr.oxylabs.io'
@export var proxy_port = 10000
@export var test_url = 'https://ip.oxylabs.io/location'

func _ready() -> void:
    await test_http_request() # prints "HTTPRequest Unavailable 0"
    test_http_client() # prints "HTTPClient 4" (STATUS_CANT_CONNECT)

func test_http_request():
    var req = HTTPRequest.new()
    add_child(req)
    req.set_https_proxy(proxy_host, proxy_port)
    req.request(test_url)
    var result = await req.request_completed
    print('HTTPRequest ', error_string(result[0]), ' ', result[1])

func test_http_client():
    var client = HTTPClient.new()
    client.set_https_proxy(proxy_host, proxy_port)
    var err = client.connect_to_host(test_url)
    if err == OK:
        while client.get_status() == HTTPClient.STATUS_CONNECTING or client.get_status() == HTTPClient.STATUS_RESOLVING:
            client.poll()
            await get_tree().process_frame
    if client.get_status() != HTTPClient.STATUS_CONNECTED:
        print('HTTPClient ', client.get_status())
```
To confirm that the issue isn't with the proxy, here's an equivalent curl command you can run from a terminal (replace the details with your own proxy URL):
```
curl -x us-pr.oxylabs.io:10000 https://ip.oxylabs.io/location
```
And that returns the expected results:
```
{"ip":"2601:c4:4500:5750:f56a:90af:e9c9:4506","providers":{"dbip":{"country":"US","asn":"AS7922","org_name":"Comcast Cable Communications, LLC","city":"Atlanta","zip_code":"","time_zone":"","meta":"\u003ca href='https://db-ip.com'\u003eIP Geolocation by DB-IP\u003c/a\u003e"},"ip2location":{"country":"US","asn":"","org_name":"","city":"Augusta","zip_code":"30903","time_zone":"-05:00","meta":"This site or product includes IP2Location LITE data available from \u003ca href=\"https://lite.ip2location.com\"\u003ehttps://lite.ip2location.com\u003c/a\u003e."},"ipinfo":{"country":"US","asn":"AS7922","org_name":"Comcast Cable Communications, LLC","city":"","zip_code":"","time_zone":"","meta":"\u003cp\u003eIP address data powered by \u003ca href=\"https://ipinfo.io\" \u003eIPinfo\u003c/a\u003e\u003c/p\u003e"},"maxmind":{"country":"US","asn":"AS7922","org_name":"COMCAST-7922","city":"Fairburn","zip_code":"","time_zone":"-05:00","meta":"This product includes GeoLite2 Data created by MaxMind, available from https://www.maxmind.com."}}}
```
### Minimal reproduction project (MRP)
[proxy_bug.zip](https://github.com/user-attachments/files/17952271/proxy_bug.zip)
| enhancement,discussion,documentation,topic:network | low | Critical |
2,703,244,406 | rust | ICE: `range end index 2 out of range for slice of length 1` | <!--
[31mICE[0m: Rustc ./a.rs '' 'thread 'rustc' panicked at compiler/rustc_middle/src/ty/generics.rs:311:14: 'range end index 2 out of range for slice of length 1'', 'thread 'rustc' panicked at compiler/rustc_middle/src/ty/generics.rs:311:14: 'range end index 2 out of range for slice of length 1''
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
pub trait Foo2 {
fn boxed<'a: 'a>() -> impl Sized + FnOnce<()>;
}
impl Foo2 for () {}
````
original:
````rust
pub trait Foo2 {
fn boxed<'a: 'a>(&'a mut self) -> impl Sized + FnOnce<()>;
}
impl Foo2 for () {
fn bar<'a: Add<S<M>>>(&'a mut self) -> impl Sized + 'a {}
}
````
Version information
````
rustc 1.85.0-nightly (7e565cce6 2024-11-28)
binary: rustc
commit-hash: 7e565cce6a03340edb4b9f56228cf5e480e24806
commit-date: 2024-11-28
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.4
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/7e565cce6a03340edb4b9f56228cf5e480e24806/compiler/rustc_middle/src/ty/generics.rs#L305-L317
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0601]: `main` function not found in crate `mvce`
--> /tmp/icemaker_global_tempdir.9uBYr2RCUK08/rustc_testrunner_tmpdir_reporting.WADMNevz7sTe/mvce.rs:5:20
|
5 | impl Foo2 for () {}
| ^ consider adding a `main` function to `/tmp/icemaker_global_tempdir.9uBYr2RCUK08/rustc_testrunner_tmpdir_reporting.WADMNevz7sTe/mvce.rs`
error[E0658]: the precise format of `Fn`-family traits' type parameters is subject to change
--> /tmp/icemaker_global_tempdir.9uBYr2RCUK08/rustc_testrunner_tmpdir_reporting.WADMNevz7sTe/mvce.rs:2:40
|
2 | fn boxed<'a: 'a>() -> impl Sized + FnOnce<()>;
| ^^^^^^^^^^ help: use parenthetical notation instead: `FnOnce() -> ()`
|
= note: see issue #29625 <https://github.com/rust-lang/rust/issues/29625> for more information
= help: add `#![feature(unboxed_closures)]` to the crate attributes to enable
= note: this compiler was built on 2024-11-28; consider upgrading it if it is out of date
thread 'rustc' panicked at compiler/rustc_middle/src/ty/generics.rs:311:14:
range end index 2 out of range for slice of length 1
stack backtrace:
0: 0x73c32e66098a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h7f35c99327b4f726
1: 0x73c32ee1c43c - core::fmt::write::h5e97d25008730389
2: 0x73c33028e1d1 - std::io::Write::write_fmt::h1ae9ed15ab76812c
3: 0x73c32e6607e2 - std::sys::backtrace::BacktraceLock::print::ha255d6c77b9d7dcc
4: 0x73c32e662cba - std::panicking::default_hook::{{closure}}::hf875838f93ccbb14
5: 0x73c32e662b20 - std::panicking::default_hook::hd7f37029cbcc8aec
6: 0x73c32d72a895 - std[2251e1dccffdccbc]::panicking::update_hook::<alloc[2789366f424f3027]::boxed::Box<rustc_driver_impl[5761ece06b1326ab]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x73c32e663398 - std::panicking::rust_panic_with_hook::hb2d4571765667af8
8: 0x73c32e66316a - std::panicking::begin_panic_handler::{{closure}}::h88452831bc745907
9: 0x73c32e660e39 - std::sys::backtrace::__rust_end_short_backtrace::h25d51a3907f0597d
10: 0x73c32e662e2c - rust_begin_unwind
11: 0x73c32b0f29b0 - core::panicking::panic_fmt::hadb465d335389640
12: 0x73c32d128f57 - core::slice::index::slice_end_index_len_fail::do_panic::runtime::h62f89e537db414d8
13: 0x73c32c85cd9a - core::slice::index::slice_end_index_len_fail::h3f0970a4378a8a5f
14: 0x73c330b0277e - <rustc_middle[dedf341fffa9562c]::ty::generics::Generics>::own_args_no_defaults.cold
15: 0x73c32dd8f1c3 - <rustc_middle[dedf341fffa9562c]::ty::print::pretty::FmtPrinter as rustc_middle[dedf341fffa9562c]::ty::print::pretty::PrettyPrinter>::pretty_print_opaque_impl_type
16: 0x73c32fea9f62 - <rustc_middle[dedf341fffa9562c]::ty::print::pretty::FmtPrinter as rustc_middle[dedf341fffa9562c]::ty::print::pretty::PrettyPrinter>::pretty_print_type
17: 0x73c32fea7d6e - <rustc_middle[dedf341fffa9562c]::ty::Ty as core[1880415d78dd9be6]::fmt::Display>::fmt
18: 0x73c32ee1c43c - core::fmt::write::h5e97d25008730389
19: 0x73c32ee1c2a0 - alloc::fmt::format::format_inner::h8741e8344250b15d
20: 0x73c3300d6604 - rustc_hir_analysis[843c6fea6ca9eb60]::check::check::check_impl_items_against_trait
21: 0x73c33001f9d1 - rustc_hir_analysis[843c6fea6ca9eb60]::check::check::check_item_type
22: 0x73c32bd8a764 - rustc_hir_analysis[843c6fea6ca9eb60]::check::wfcheck::check_well_formed
23: 0x73c32f3324a7 - rustc_query_impl[2e40c9ff1be86589]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[2e40c9ff1be86589]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[dedf341fffa9562c]::query::erase::Erased<[u8; 1usize]>>
24: 0x73c32f332780 - rustc_query_system[eb8ffdfd9c7325e5]::query::plumbing::try_execute_query::<rustc_query_impl[2e40c9ff1be86589]::DynamicConfig<rustc_data_structures[f36924e63485e28e]::vec_cache::VecCache<rustc_span[47c6aef7bb9494a9]::def_id::LocalDefId, rustc_middle[dedf341fffa9562c]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[eb8ffdfd9c7325e5]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[2e40c9ff1be86589]::plumbing::QueryCtxt, false>
25: 0x73c32f332486 - rustc_query_impl[2e40c9ff1be86589]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
26: 0x73c32f33322c - rustc_hir_analysis[843c6fea6ca9eb60]::check::wfcheck::check_mod_type_wf
27: 0x73c32f33304b - rustc_query_impl[2e40c9ff1be86589]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[2e40c9ff1be86589]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[dedf341fffa9562c]::query::erase::Erased<[u8; 1usize]>>
28: 0x73c32f96ebbd - rustc_query_system[eb8ffdfd9c7325e5]::query::plumbing::try_execute_query::<rustc_query_impl[2e40c9ff1be86589]::DynamicConfig<rustc_query_system[eb8ffdfd9c7325e5]::query::caches::DefaultCache<rustc_span[47c6aef7bb9494a9]::def_id::LocalModDefId, rustc_middle[dedf341fffa9562c]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[2e40c9ff1be86589]::plumbing::QueryCtxt, false>
29: 0x73c32f96e958 - rustc_query_impl[2e40c9ff1be86589]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
30: 0x73c32f0ce81e - rustc_hir_analysis[843c6fea6ca9eb60]::check_crate
31: 0x73c32f7bf68c - rustc_interface[790a5929a84e5ed3]::passes::run_required_analyses
32: 0x73c32f7b399e - rustc_interface[790a5929a84e5ed3]::passes::analysis
33: 0x73c32f7b396f - rustc_query_impl[2e40c9ff1be86589]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[2e40c9ff1be86589]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[dedf341fffa9562c]::query::erase::Erased<[u8; 1usize]>>
34: 0x73c32fe2966e - rustc_query_system[eb8ffdfd9c7325e5]::query::plumbing::try_execute_query::<rustc_query_impl[2e40c9ff1be86589]::DynamicConfig<rustc_query_system[eb8ffdfd9c7325e5]::query::caches::SingleCache<rustc_middle[dedf341fffa9562c]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[2e40c9ff1be86589]::plumbing::QueryCtxt, false>
35: 0x73c32fe2934e - rustc_query_impl[2e40c9ff1be86589]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
36: 0x73c32fdf1155 - rustc_interface[790a5929a84e5ed3]::interface::run_compiler::<core[1880415d78dd9be6]::result::Result<(), rustc_span[47c6aef7bb9494a9]::ErrorGuaranteed>, rustc_driver_impl[5761ece06b1326ab]::run_compiler::{closure#0}>::{closure#1}
37: 0x73c32fcecd54 - std[2251e1dccffdccbc]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[790a5929a84e5ed3]::util::run_in_thread_with_globals<rustc_interface[790a5929a84e5ed3]::util::run_in_thread_pool_with_globals<rustc_interface[790a5929a84e5ed3]::interface::run_compiler<core[1880415d78dd9be6]::result::Result<(), rustc_span[47c6aef7bb9494a9]::ErrorGuaranteed>, rustc_driver_impl[5761ece06b1326ab]::run_compiler::{closure#0}>::{closure#1}, core[1880415d78dd9be6]::result::Result<(), rustc_span[47c6aef7bb9494a9]::ErrorGuaranteed>>::{closure#0}, core[1880415d78dd9be6]::result::Result<(), rustc_span[47c6aef7bb9494a9]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[1880415d78dd9be6]::result::Result<(), rustc_span[47c6aef7bb9494a9]::ErrorGuaranteed>>
38: 0x73c32fceca7d - <<std[2251e1dccffdccbc]::thread::Builder>::spawn_unchecked_<rustc_interface[790a5929a84e5ed3]::util::run_in_thread_with_globals<rustc_interface[790a5929a84e5ed3]::util::run_in_thread_pool_with_globals<rustc_interface[790a5929a84e5ed3]::interface::run_compiler<core[1880415d78dd9be6]::result::Result<(), rustc_span[47c6aef7bb9494a9]::ErrorGuaranteed>, rustc_driver_impl[5761ece06b1326ab]::run_compiler::{closure#0}>::{closure#1}, core[1880415d78dd9be6]::result::Result<(), rustc_span[47c6aef7bb9494a9]::ErrorGuaranteed>>::{closure#0}, core[1880415d78dd9be6]::result::Result<(), rustc_span[47c6aef7bb9494a9]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[1880415d78dd9be6]::result::Result<(), rustc_span[47c6aef7bb9494a9]::ErrorGuaranteed>>::{closure#1} as core[1880415d78dd9be6]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
39: 0x73c32fcec239 - std::sys::pal::unix::thread::Thread::new::thread_start::h4f639267331689fa
40: 0x73c3315b039d - <unknown>
41: 0x73c33163549c - <unknown>
42: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.85.0-nightly (7e565cce6 2024-11-28) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [check_well_formed] checking that `<impl at /tmp/icemaker_global_tempdir.9uBYr2RCUK08/rustc_testrunner_tmpdir_reporting.WADMNevz7sTe/mvce.rs:5:1: 5:17>` is well-formed
#1 [check_mod_type_wf] checking that types are well-formed in top-level module
end of query stack
error: aborting due to 2 previous errors
Some errors have detailed explanations: E0601, E0658.
For more information about an error, try `rustc --explain E0601`.
```
</p>
</details>
<!--
query stack:
#0 [check_well_formed] checking that `<impl at /tmp/icemaker_global_tempdir.9uBYr2RCUK08/rustc_testrunner_tmpdir_reporting.WADMNevz7sTe/mvce.rs:5:1: 5:17>` is well-formed
#1 [check_mod_type_wf] checking that types are well-formed in top-level module
-->
| A-pretty,I-ICE,T-compiler,C-bug,F-unboxed_closures,S-has-mcve,S-bug-has-test,S-has-bisection | low | Critical |
2,703,246,590 | pytorch | [torch.compile][Megatron] Error with Megatron with Pytorch v2.5.0 using `AOTAutograd` and `torch.compile` | ### 🐛 Describe the bug
I am using `torch.compile` with `AOTAutograd` to compile a Megatron model. It worked fine with PyTorch v2.4.1, but after upgrading to PyTorch v2.5.0 I encountered the following error; running with `torch.compile` alone still works.
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/workspace/experiment/examples/megatron/megatron_test.py", line 117, in <module>
[rank0]: main()
[rank0]: File "/workspace/experiment/examples/megatron/megatron_test.py", line 69, in main
[rank0]: compiler.compile(
[rank0]: File "/workspace/experiment/experiment/compiler.py", line 289, in compile
[rank0]: compiled_func = compiler.compile(module, loss_func, *args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/experiment/experiment/compiler.py", line 225, in compile
[rank0]: _ = compiled_func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/experiment/experiment/compiler.py", line 162, in step_func
[rank0]: output_tensor = module(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/megatron-lm/megatron/legacy/model/module.py", line 189, in forward
[rank0]: outputs = self.module(*inputs, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/megatron-lm/megatron/core/models/gpt/gpt_model.py", line 238, in forward
[rank0]: hidden_states = self.decoder(
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1333, in __call__
[rank0]: return self._torchdynamo_orig_callable(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 1124, in __call__
[rank0]: result = self._inner_convert(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 528, in __call__
[rank0]: return _compile(
[rank0]: ^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 948, in _compile
[rank0]: guarded_code = compile_inner(code, one_graph, hooks, transform)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 679, in compile_inner
[rank0]: return _compile_inner(code, one_graph, hooks, transform)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_utils_internal.py", line 87, in wrapper_function
[rank0]: return function(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 712, in _compile_inner
[rank0]: out_code = transform_code_object(code, transform)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/bytecode_transformation.py", line 1337, in transform_code_object
[rank0]: transformations(instructions, code_options)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 221, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/convert_frame.py", line 641, in transform
[rank0]: tracer.run()
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 2766, in run
[rank0]: super().run()
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 973, in run
[rank0]: while self.step():
[rank0]: ^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 885, in step
[rank0]: self.dispatch_table[inst.opcode](self, inst)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 2957, in RETURN_VALUE
[rank0]: self._return(inst)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/symbolic_convert.py", line 2942, in _return
[rank0]: self.output.compile_subgraph(
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 1142, in compile_subgraph
[rank0]: self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 1369, in compile_and_call_fx_graph
[rank0]: compiled_fn = self.call_user_compiler(gm)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 1416, in call_user_compiler
[rank0]: return self._call_user_compiler(gm)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 1465, in _call_user_compiler
[rank0]: raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/output_graph.py", line 1446, in _call_user_compiler
[rank0]: compiled_fn = compiler_fn(gm, self.example_inputs())
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
[rank0]: compiled_gm = compiler_fn(gm, example_inputs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/__init__.py", line 2280, in __call__
[rank0]: return self.compiler_fn(model_, inputs_, **self.kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/experiment/experiment/compiler.py", line 212, in backend_func
[rank0]: return aot_autograd(
[rank0]: ^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/backends/common.py", line 72, in __call__
[rank0]: cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 1077, in aot_module_simplified
[rank0]: compiled_fn = dispatch_and_compile()
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 1062, in dispatch_and_compile
[rank0]: compiled_fn, _ = create_aot_dispatcher_function(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 523, in create_aot_dispatcher_function
[rank0]: return _create_aot_dispatcher_function(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/aot_autograd.py", line 761, in _create_aot_dispatcher_function
[rank0]: compiled_fn, fw_metadata = compiler_fn(
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 628, in aot_dispatch_autograd
[rank0]: placeholder_list = fx_placeholder_vals(bw_module)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/fx/experimental/symbolic_shapes.py", line 1213, in fx_placeholder_vals
[rank0]: return [n.meta["val"] for n in gm.graph.nodes if n.op == "placeholder"]
[rank0]: ~~~~~~^^^^^^^
[rank0]: torch._dynamo.exc.BackendCompilerFailed: backend='backend_func' raised:
[rank0]: KeyError: 'val'
```
It seems the error is caused by the following function; compilation succeeds if I remove the line `predicted_logits[target_mask] = 0.0`.
```python
@staticmethod
def calculate_predicted_logits(
    vocab_parallel_logits: torch.Tensor,
    target: torch.Tensor,
    logits_max: torch.Tensor,
    vocab_start_index: int,
    vocab_end_index: int,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
    """Calculates predicted logits."""
    # In-place subtraction reduces memory pressure.
    vocab_parallel_logits -= logits_max.unsqueeze(dim=-1)
    # Create a mask of valid vocab ids (1 means it needs to be masked).
    target_mask = (target < vocab_start_index) | (target >= vocab_end_index)
    masked_target = target.clone() - vocab_start_index
    masked_target[target_mask] = 0
    # Get predicted-logits = logits[target].
    # For simplicity, we convert logits to a 2-D tensor with size
    # [*, partition-vocab-size] and target to a 1-D tensor of size [*].
    partition_vocab_size = vocab_parallel_logits.size()[-1]
    logits_2d = vocab_parallel_logits.view(-1, partition_vocab_size)
    masked_target_1d = masked_target.view(-1)
    arange_1d = torch.arange(start=0, end=logits_2d.size()[0], device=logits_2d.device)
    predicted_logits_1d = logits_2d[arange_1d, masked_target_1d]
    predicted_logits_1d = predicted_logits_1d.clone().contiguous()
    predicted_logits = predicted_logits_1d.view_as(target)
    predicted_logits[target_mask] = 0.0
    exp_logits = vocab_parallel_logits
    # torch.exp(vocab_parallel_logits, out=exp_logits)
    exp_logits = torch.exp_(vocab_parallel_logits)  # This is more dynamo-friendly
    sum_exp_logits = exp_logits.sum(dim=-1)
    return target_mask, masked_target_1d, predicted_logits, sum_exp_logits, exp_logits
```
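A possible workaround (untested against Megatron itself) would be to replace the in-place masked assignment with the functional form `torch.where(target_mask, 0.0, predicted_logits)`, which avoids mutating a tensor through advanced indexing. The equivalence of the two forms, sketched with NumPy so it runs standalone:

```python
import numpy as np

predicted_logits = np.array([1.5, -2.0, 0.7, 3.1], dtype=np.float32)
target_mask = np.array([False, True, False, True])

# In-place masked assignment -- the pattern the traced graph chokes on:
inplace = predicted_logits.copy()
inplace[target_mask] = 0.0

# Functional rewrite -- torch.where(target_mask, 0.0, predicted_logits) in PyTorch:
functional = np.where(target_mask, 0.0, predicted_logits)

assert np.array_equal(inplace, functional)
```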
The following are the generated graphs for the function, where I found that `_tensor_constant1` has no `meta['val']`.
```
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] TRACED GRAPH
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] ===== Forward graph 10 =====
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] /mnt/data/mksit/anaconda3/envs/experiment/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] def forward(self, primals_1: "bf16[64, 8, 128][1024, 128, 1]cuda:0", primals_2: "bf16[100, 128][128, 1]cuda:0", primals_3: "i64[8, 64][64, 1]cuda:0"):
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/tensor_parallel/mappings.py:526 in copy_to_tensor_model_parallel_region, code: return _CopyToModelParallelRegion.apply(input_)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] view: "bf16[64, 8, 128][1024, 128, 1]cuda:0" = torch.ops.aten.view.default(primals_1, [64, 8, 128]); primals_1 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/tensor_parallel/layers.py:670 in linear_with_grad_accumulation_and_async_allreduce, code: return LinearWithGradAccumulationAndAsyncCommunication.apply(*args)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] t: "bf16[128, 100][1, 128]cuda:0" = torch.ops.aten.t.default(primals_2)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] view_1: "bf16[512, 128][128, 1]cuda:0" = torch.ops.aten.view.default(view, [512, 128])
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] mm: "bf16[512, 100][100, 1]cuda:0" = torch.ops.aten.mm.default(view_1, t); view_1 = t = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] view_2: "bf16[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten.view.default(mm, [64, 8, 100]); mm = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/models/common/language_module/language_module.py:37 in compute_language_model_loss, code: labels = labels.transpose(0, 1).contiguous()
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] transpose: "i64[64, 8][1, 64]cuda:0" = torch.ops.aten.transpose.int(primals_3, 0, 1); primals_3 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] clone: "i64[64, 8][8, 1]cuda:0" = torch.ops.aten.clone.default(transpose, memory_format = torch.contiguous_format); transpose = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/tensor_parallel/cross_entropy.py:236 in vocab_parallel_cross_entropy, code: return _VocabParallelCrossEntropy.apply(vocab_parallel_logits, target, label_smoothing)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] _to_copy: "f32[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten._to_copy.default(view_2, dtype = torch.float32); view_2 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] max_1 = torch.ops.aten.max.dim(_to_copy, -1)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] getitem: "f32[64, 8][8, 1]cuda:0" = max_1[0]; max_1 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] all_reduce: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.all_reduce.default(getitem, 'max', '8')
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] wait_tensor: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.wait_tensor.default(all_reduce); all_reduce = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] copy: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.copy.default(getitem, wait_tensor); wait_tensor = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] unsqueeze_1: "f32[64, 8, 1][8, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(copy, -1); copy = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] sub: "f32[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten.sub.Tensor(_to_copy, unsqueeze_1); _to_copy = unsqueeze_1 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] lt: "b8[64, 8][8, 1]cuda:0" = torch.ops.aten.lt.Scalar(clone, 0)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] ge: "b8[64, 8][8, 1]cuda:0" = torch.ops.aten.ge.Scalar(clone, 100)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] bitwise_or: "b8[64, 8][8, 1]cuda:0" = torch.ops.aten.bitwise_or.Tensor(lt, ge); lt = ge = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] clone_1: "i64[64, 8][8, 1]cuda:0" = torch.ops.aten.clone.default(clone); clone = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] sub_1: "i64[64, 8][8, 1]cuda:0" = torch.ops.aten.sub.Tensor(clone_1, 0); clone_1 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] _tensor_constant0 = self._tensor_constant0
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] lift_fresh_copy: "i64[][]cuda:0" = torch.ops.aten.lift_fresh_copy.default(_tensor_constant0); _tensor_constant0 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] where: "i64[64, 8][8, 1]cuda:0" = torch.ops.aten.where.self(bitwise_or, lift_fresh_copy, sub_1); lift_fresh_copy = sub_1 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] view_4: "i64[512][1]cuda:0" = torch.ops.aten.view.default(where, [-1]); where = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] arange: "i64[512][1]cuda:0" = torch.ops.aten.arange.start(0, 512, device = device(type='cuda', index=0), pin_memory = False)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] view_5: "f32[512, 100][100, 1]cuda:0" = torch.ops.aten.view.default(sub, [-1, 100])
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] index: "f32[512][1]cuda:0" = torch.ops.aten.index.Tensor(view_5, [arange, view_4]); view_5 = arange = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] clone_2: "f32[512][1]cuda:0" = torch.ops.aten.clone.default(index); index = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] view_6: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.view.default(clone_2, [64, 8]); clone_2 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] _tensor_constant1 = self._tensor_constant1
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] lift_fresh_copy_1: "f32[][]cuda:0" = torch.ops.aten.lift_fresh_copy.default(_tensor_constant1)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] where_1: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.where.self(bitwise_or, lift_fresh_copy_1, view_6); lift_fresh_copy_1 = view_6 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] exp: "f32[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten.exp.default(sub); sub = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] sum_1: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.sum.dim_IntList(exp, [-1])
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] all_reduce_1: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.all_reduce.default(where_1, 'sum', '8')
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] wait_tensor_1: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.wait_tensor.default(all_reduce_1); all_reduce_1 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] copy_1: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.copy.default(where_1, wait_tensor_1); where_1 = wait_tensor_1 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] all_reduce_2: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.all_reduce.default(sum_1, 'sum', '8')
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] wait_tensor_2: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.wait_tensor.default(all_reduce_2); all_reduce_2 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] copy_2: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.copy.default(sum_1, wait_tensor_2); sum_1 = wait_tensor_2 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] log: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.log.default(copy_2)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] sub_2: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.sub.Tensor(log, copy_1); log = copy_1 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/models/common/language_module/language_module.py:44 in compute_language_model_loss, code: loss = loss.transpose(0, 1).contiguous()
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] transpose_1: "f32[8, 64][1, 8]cuda:0" = torch.ops.aten.transpose.int(sub_2, 0, 1); sub_2 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] clone_3: "f32[8, 64][64, 1]cuda:0" = torch.ops.aten.clone.default(transpose_1, memory_format = torch.contiguous_format); transpose_1 = None
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs] return (clone_3, primals_2, view, getitem, bitwise_or, view_4, _tensor_constant1, exp, copy_2)
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.850000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:523] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] TRACED GRAPH
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] ===== Backward graph 10 =====
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] /mnt/data/mksit/anaconda3/envs/experiment/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] def forward(self, primals_2: "bf16[100, 128][128, 1]cuda:0", view: "bf16[64, 8, 128][1024, 128, 1]cuda:0", getitem: "f32[64, 8][8, 1]cuda:0", bitwise_or: "b8[64, 8][8, 1]cuda:0", view_4: "i64[512][1]cuda:0", _tensor_constant1, exp: "f32[64, 8, 100][800, 100, 1]cuda:0", copy_2: "f32[64, 8][8, 1]cuda:0", tangents_1: "f32[8, 64][64, 1]cuda:0"):
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/tensor_parallel/layers.py:670 in linear_with_grad_accumulation_and_async_allreduce, code: return LinearWithGradAccumulationAndAsyncCommunication.apply(*args)
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] t: "bf16[128, 100][1, 128]cuda:0" = torch.ops.aten.t.default(primals_2)
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_1: "bf16[512, 128][128, 1]cuda:0" = torch.ops.aten.view.default(view, [512, 128])
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] mm: "bf16[512, 100][100, 1]cuda:0" = torch.ops.aten.mm.default(view_1, t); view_1 = t = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_2: "bf16[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten.view.default(mm, [64, 8, 100]); mm = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/tensor_parallel/cross_entropy.py:236 in vocab_parallel_cross_entropy, code: return _VocabParallelCrossEntropy.apply(vocab_parallel_logits, target, label_smoothing)
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] _to_copy: "f32[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten._to_copy.default(view_2, dtype = torch.float32); view_2 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] all_reduce: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.all_reduce.default(getitem, 'max', '8')
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] wait_tensor: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.wait_tensor.default(all_reduce); all_reduce = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] copy: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.copy.default(getitem, wait_tensor); getitem = wait_tensor = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] unsqueeze_1: "f32[64, 8, 1][8, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(copy, -1); copy = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] sub: "f32[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten.sub.Tensor(_to_copy, unsqueeze_1); _to_copy = unsqueeze_1 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] arange: "i64[512][1]cuda:0" = torch.ops.aten.arange.start(0, 512, device = device(type='cuda', index=0), pin_memory = False)
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_5: "f32[512, 100][100, 1]cuda:0" = torch.ops.aten.view.default(sub, [-1, 100]); sub = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] index: "f32[512][1]cuda:0" = torch.ops.aten.index.Tensor(view_5, [arange, view_4]); view_5 = arange = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] clone_2: "f32[512][1]cuda:0" = torch.ops.aten.clone.default(index); index = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_6: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.view.default(clone_2, [64, 8]); clone_2 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] lift_fresh_copy_1: "f32[][]cuda:0" = torch.ops.aten.lift_fresh_copy.default(_tensor_constant1); _tensor_constant1 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] where_1: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.where.self(bitwise_or, lift_fresh_copy_1, view_6); lift_fresh_copy_1 = view_6 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] sum_1: "f32[64, 8][8, 1]cuda:0" = torch.ops.aten.sum.dim_IntList(exp, [-1])
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] all_reduce_1: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.all_reduce.default(where_1, 'sum', '8'); where_1 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] wait_tensor_1: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.wait_tensor.default(all_reduce_1); all_reduce_1 = wait_tensor_1 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] all_reduce_2: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.all_reduce.default(sum_1, 'sum', '8'); sum_1 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] wait_tensor_2: "f32[64, 8][8, 1]cuda:0" = torch.ops._c10d_functional.wait_tensor.default(all_reduce_2); all_reduce_2 = wait_tensor_2 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] unsqueeze_3: "f32[64, 8, 1][8, 1, 1]cuda:0" = torch.ops.aten.unsqueeze.default(copy_2, -1); copy_2 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] div: "f32[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten.div.Tensor(exp, unsqueeze_3); exp = unsqueeze_3 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/models/common/language_module/language_module.py:44 in compute_language_model_loss, code: loss = loss.transpose(0, 1).contiguous()
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] transpose_2: "f32[64, 8][1, 64]cuda:0" = torch.ops.aten.transpose.int(tangents_1, 0, 1); tangents_1 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/tensor_parallel/cross_entropy.py:236 in vocab_parallel_cross_entropy, code: return _VocabParallelCrossEntropy.apply(vocab_parallel_logits, target, label_smoothing)
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_7: "f32[512, 100][100, 1]cuda:0" = torch.ops.aten.view.default(div, [-1, 100]); div = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] arange_1: "i64[512][1]cuda:0" = torch.ops.aten.arange.start(0, 512, device = device(type='cuda', index=0), pin_memory = False)
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_8: "b8[512][1]cuda:0" = torch.ops.aten.view.default(bitwise_or, [-1]); bitwise_or = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] _to_copy_1: "f32[512][1]cuda:0" = torch.ops.aten._to_copy.default(view_8, dtype = torch.float32); view_8 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] rsub: "f32[512][1]cuda:0" = torch.ops.aten.rsub.Scalar(_to_copy_1, 1.0); _to_copy_1 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] index_1: "f32[512][1]cuda:0" = torch.ops.aten.index.Tensor(view_7, [arange_1, view_4])
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] sub_3: "f32[512][1]cuda:0" = torch.ops.aten.sub.Tensor(index_1, rsub); index_1 = rsub = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] index_put: "f32[512, 100][100, 1]cuda:0" = torch.ops.aten.index_put.default(view_7, [arange_1, view_4], sub_3); view_7 = arange_1 = view_4 = sub_3 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_9: "f32[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten.view.default(index_put, [64, 8, 100]); index_put = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] unsqueeze_4: "f32[64, 8, 1][1, 64, 1]cuda:0" = torch.ops.aten.unsqueeze.default(transpose_2, -1); transpose_2 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] mul: "f32[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten.mul.Tensor(view_9, unsqueeze_4); view_9 = unsqueeze_4 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] _to_copy_2: "bf16[64, 8, 100][800, 100, 1]cuda:0" = torch.ops.aten._to_copy.default(mul, dtype = torch.bfloat16); mul = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] # File: /home/mankit/workspace/experiment/third_party/Megatron-LM/megatron/core/tensor_parallel/layers.py:670 in linear_with_grad_accumulation_and_async_allreduce, code: return LinearWithGradAccumulationAndAsyncCommunication.apply(*args)
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_11: "bf16[512, 100][100, 1]cuda:0" = torch.ops.aten.view.default(_to_copy_2, [512, 100])
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] mm_1: "bf16[512, 128][128, 1]cuda:0" = torch.ops.aten.mm.default(view_11, primals_2); view_11 = primals_2 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_12: "bf16[64, 8, 128][1024, 128, 1]cuda:0" = torch.ops.aten.view.default(mm_1, [64, 8, 128]); mm_1 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_13: "bf16[512, 100][100, 1]cuda:0" = torch.ops.aten.view.default(_to_copy_2, [512, 100]); _to_copy_2 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] view_14: "bf16[512, 128][128, 1]cuda:0" = torch.ops.aten.view.default(view, [512, 128]); view = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] t_1: "bf16[100, 512][1, 100]cuda:0" = torch.ops.aten.t.default(view_13); view_13 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] mm_2: "bf16[100, 128][128, 1]cuda:0" = torch.ops.aten.mm.default(t_1, view_14); t_1 = view_14 = None
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs] return (view_12, mm_2, None)
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs]
[rank0]:I1128 20:51:42.852000 1695611 site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py:534] [25/0] [__aot_graphs]
```
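For reference, AOT autograd graph dumps like the ones above are typically captured by enabling the `aot_graphs` logging artifact. This is a CLI sketch, not part of the original report; `train.py` is a placeholder for whatever entry point produced the logs:

```sh
# Emit the [__aot_graphs] forward/backward graph logs shown above
TORCH_LOGS="aot_graphs" python train.py
```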
### Versions
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000
GPU 2: NVIDIA RTX A5000
GPU 3: NVIDIA RTX A5000
GPU 4: NVIDIA RTX A5000
GPU 5: NVIDIA RTX A5000
GPU 6: NVIDIA RTX A5000
GPU 7: NVIDIA RTX A5000
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7453 28-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 28
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3488.5249
CPU min MHz: 1500.0000
BogoMIPS: 5499.46
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sev sev_es debug_swap
Virtualisation: AMD-V
L1d cache: 1.8 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 28 MiB (56 instances)
L3 cache: 128 MiB (8 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27
NUMA node1 CPU(s): 28-55
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.6.0
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.0+cu124
[pip3] torchaudio==2.5.0+cu124
[pip3] torchvision==0.20.0+cu124
[pip3] torchviz==0.0.2
[pip3] triton==3.1.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.0+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0+cu124 pypi_0 pypi
[conda] torchvision 0.20.0+cu124 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,703,252,372 | godot | Godot becomes non-responsive sometimes when using color picker. | ### Tested versions
Godot 4.3.
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 Laptop GPU (NVIDIA; 32.0.15.6614) - 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz (16 Threads)
### Issue description
Sometimes when I use the color picker while setting the uniforms for a shader, once the color picker window comes up, Godot becomes 'non-responsive'. There are no errors logged using --verbose, and no .log file is generated. It doesn't always happen when I use the color picker, only sometimes. This only happens on a larger project.
### Steps to reproduce
I am not sure. It seems random to me. I want to find out how to reproduce it. Any tips?
### Minimal reproduction project (MRP)
I don't have one but can provide one as soon as I can figure out how to reproduce the bug. | bug,needs testing,topic:gui | low | Critical |
2,703,254,120 | godot | DisplayServer.get_display_safe_area() wrongly accounts for taskbar on Windows, even on fullscreen mode | ### Tested versions
- Reproducible in: 4.3.stable, 4.4.dev5
### System information
Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated GeForce GTX 765M - Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz (8 threads)
### Issue description
On a Windows 10 laptop with a 1920x1080 pixel screen with NO notches/cutouts, both DisplayServer.get_display_safe_area() and DisplayServer.screen_get_usable_rect() report a usable height of only 1040 pixels.
This seems to suggest that the Windows taskbar is being accounted for (it is exactly 40 pixels), which is neither useful nor a true account of the screen's safe area, especially for a fullscreen application. Running the application in windowed mode, fullscreen mode or exclusive fullscreen mode makes no difference to the reported value.

### Steps to reproduce
Import the MRP and run the example scene.
Verify that toggling between exclusive fullscreen mode and windowed mode (with the "T" key) does not change the reported safe_area / usable_rect values.
### Minimal reproduction project (MRP)
[wrong_safe_area.zip](https://github.com/user-attachments/files/17952301/wrong_safe_area.zip)
| bug,platform:windows,topic:porting | low | Minor |
2,703,353,716 | pytorch | [`FlexAttention`] Request for the Support of Dropout | ### 🚀 The feature, motivation and pitch
Dropout is a fairly common concept in anything related to deep learning, including the attention mechanism, so I shouldn't have to explain why this feature is important.
It should be handled natively imo, especially since it's fairly simple (I think).
### Alternatives
I thought about including it in the score mod, for example, but this would alter common behaviour and deviate from other implementations; i.e., the issue is that dropout is applied after the softmax (and the score mod is applied before the softmax).
Just a quick thought, we probably could create a dummy value tensor or use the lse to recreate the softmax outputs. But that's a lot of overhead.
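To make the pre- vs post-softmax distinction concrete, here is a small NumPy sketch (hypothetical scores and mask, not the FlexAttention API) showing that dropping an entry before the softmax, as a score mod would, is not equivalent to zeroing the corresponding probability after it:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([1.0, 2.0, 3.0])   # hypothetical attention scores
keep = np.array([1.0, 0.0, 1.0])     # dropout mask: keep, drop, keep

# Score-mod style: drop *before* the softmax (masked score -> -inf).
pre = softmax(np.where(keep > 0, scores, -np.inf))

# Standard attention dropout: drop *after* the softmax
# (real dropout would additionally rescale the survivors by 1/(1-p)).
post = softmax(scores) * keep

print(pre)   # renormalized: sums to 1.0
print(post)  # not renormalized: sums to < 1.0
```

Pre-softmax masking renormalizes the surviving probabilities, while standard attention dropout does not (it rescales by 1/(1-p) instead), which is exactly why a score-mod-based workaround deviates from other implementations.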
### Additional context
_No response_
cc @zou3519 @bdhirsh @penguinwu @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng @chauhang @ydwu4 | triaged,oncall: pt2,module: pt2-dispatcher,module: flex attention | low | Minor |
2,703,365,459 | neovim | Lua stdlib deprecation strategy; "frozen" phase | # (This is a stub; I will leave a comment when this is fully-formed and ready for comment)
# Problem
We have a "can't win" situation:
1. Users and plugin authors get exhausted by deprecation frequency.
2. Core devs get exhausted by the API "maturation" cycle (including "bikeshedding" and other discussions to avoid hard-to-reverse mistakes),
3. And *conversely*, core devs also get exhausted by
1. the deprecation phases, and/or
2. maintenance of old but disfavored APIs.
# Expected behavior
We can actually have it both ways, roughly outlined here: https://sink.io/jmk/neovimconf-2024/#future-deprecation-strategy
## Define "experiment" phase
We have "experimental" stuff but it's not clear what that means.
## Introduce a new deprecation phase: "frozen"
Frozen interfaces continue to be backwards-compatible at an "ABI" level (i.e. programmatic consumers won't break), but their docs and implementation are shuffled to a remote location, safely protecting innocent villagers from their ionizing radiation.
1. move its implementation to `_deprecated.lua`
2. its programmatic interface ("ABI" compatibility) continues to work
3. delete its docs; it also should not auto-complete by `:lua vim.xx.<tab>`
4. mention it in `:help deprecated`
5. it is "frozen" and unsupported, though programmatic consumers can continue to use it without worrying about hard-breakage
- bug reports involving a frozen API get rejected.
- we will keep the old tests in `test/.../deprecated_spec.lua` and that's it. no other support.
This allows us to minimize the API (not ABI) much more aggressively, without breaking anyone.
It still leaves the door open for "full removal" as a later step, but completely avoids debates about whether it is "worth it" to make these kinds of interface-simplifications. And that is exactly the case for `start_client`. We can clear away all of its redundant docs and explanations, without breaking downstream.
## Open questions
- Do we accept PRs for fixing bugs for these, even if they are very small?
- What should be done with vim.deprecate? Should warnings still be printed to command line? Should functions be listed in deprecated checkhealth?
## Closed questions
- tbd
## Outcomes
1. Doing a "breaking change" PR and shepherding it through, advocating for it, etc. is exhausting. It's much easier to do that as one batch where a bunch of already-frozen APIs get fully removed in a single PR.
## Todo
1. document `:help stdlib-contract` , `:help stdlib-frozen`
- Q: what if a bug is reported on a frozen API? (A: rejected) | enhancement,project-management | low | Critical |
2,703,447,436 | yt-dlp | Provide an option to cleanup temporary files before exiting, or moving on to the next video | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
There's currently a problem with temporary files lingering around, partly because the default `--continue` option relies on this behavior; therefore there's no cleanup, and even `--no-continue` just results in ignoring the files, with no option available to clean the temporary directory.
Splitting off from #2532 which already describes a good case where the temporary files left around are not expected to be used ever again, I have another case where the files even get in the way of normal operation.
My setup opportunistically uses tmpfs to avoid unnecessary disk I/O, giving the temporary (-P "temp:") directory a quite limited size with no room to spare. When downloading is interrupted, the temporary data is left lingering around, resulting in subsequent downloads no longer fitting into the temporary directory, making the rest of the playlist also fail depending on how much space was left.
This problem can be commonly observed with Twitch as some VODs have missing pieces, resulting in an archiving setup with `--abort-on-unavailable-fragments` failing on the affected video.
Example command leaving behind temporary files:
`yt-dlp --abort-on-unavailable-fragments -vU https://www.twitch.tv/videos/1870517085` (attached output too)
I avoided the "decoration" with `--no-continue`, `--no-keep-video` (it's the default anyway), and other options, but despite the "spirit" of having an explicit temporary directory and indicating no desire to keep files for later, there's nothing stopping the littering.
I've already had containerization taking care of leftover files on exit, and as an extra workaround `--exec before_dl:"clean.sh"` seems to help where `clean.sh` cleans the temporary directory.
Beware: early attempts like `--exec before_dl:"rm -rf /tmp/yt-dlp"` didn't work, because the "temp" directory needs to exist afterwards, and `--exec` also appends an extra argument at the end, which seems impossible to avoid when templating is not desired. A helper script ended up being much simpler.
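For reference, the helper only needs to empty the temp directory while keeping the directory itself. A rough Python equivalent of such a `clean.sh` (hypothetical paths; not part of yt-dlp):

```python
import shutil
import tempfile
from pathlib import Path

def clean_temp_dir(path):
    """Empty `path` but keep the directory itself, since the
    "temp:" directory must keep existing for yt-dlp."""
    temp = Path(path)
    temp.mkdir(parents=True, exist_ok=True)
    for entry in temp.iterdir():
        if entry.is_dir() and not entry.is_symlink():
            shutil.rmtree(entry)
        else:
            entry.unlink()

# Demo on a throwaway directory (in practice, pass your "temp:" path):
demo = Path(tempfile.mkdtemp())
(demo / "Fragment1.part").write_text("partial data")
(demo / "frags").mkdir()
clean_temp_dir(demo)
print(demo.exists(), list(demo.iterdir()))  # True []
```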
This also presents the opportunity of exposing the hidden `_no_ytdl_file` option, which mostly benefits HDDs. If the user can signal no desire for later continuation, there's also no need to keep dumping the current state to disk.
For large videos not fitting into tmpfs, I'm using a patch getting rid of the ytdl file, and pushing fragment files into tmpfs, resulting in significantly more efficient disk I/O as a result of having a single write stream during the downloading part.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--abort-on-unavailable-fragments', '-vU', 'https://www.twitch.tv/videos/1870517085']
[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (zip)
[debug] Lazy loading extractors is disabled
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-47-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: sqlite3-3.45.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[twitch:vod] Extracting URL: https://www.twitch.tv/videos/1870517085
[twitch:vod] 1870517085: Downloading stream metadata GraphQL
[twitch:vod] 1870517085: Downloading video access token GraphQL
[twitch:vod] 1870517085: Downloading m3u8 information
[twitch:vod] 1870517085: Downloading storyboard metadata JSON
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] v1870517085: Downloading 1 format(s): 1080p60
[debug] Invoking hlsnative downloader on "https://dgeft87wbj63p.cloudfront.net/ae1b0881cb83a24fdfa0_sodapoppin_67689483633_2831051625/chunked/highlight-1870517085.m3u8"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 2574
[download] Destination: Highlight 07⧸05⧸23: im in the UK bruv | !contest | @StarforgeSystems | @MadMushroomGG | sodaPride [v1870517085].mp4
[download] 87.9% of ~ 22.23GiB at 7.80MiB/s ETA 06:03 (frag 2264/2574)[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (1/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (2/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (3/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (4/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (5/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (6/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (7/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (8/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (9/10)...
[download] Got error: HTTP Error 403: Forbidden. Retrying fragment 2265 (10/10)...
[download] Got error: HTTP Error 403: Forbidden. Giving up after 10 retries
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/root/bin/yt-dlp/__main__.py", line 17, in <module>
yt_dlp.main()
File "/root/bin/yt-dlp/yt_dlp/__init__.py", line 1093, in main
_exit(*variadic(_real_main(argv)))
File "/root/bin/yt-dlp/yt_dlp/__init__.py", line 1083, in _real_main
return ydl.download(all_urls)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3604, in download
self.__download_wrapper(self.extract_info)(
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3577, in wrapper
res = func(*args, **kwargs)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1613, in extract_info
return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1780, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1839, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3010, in process_video_result
self.process_info(new_info)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 177, in wrapper
return func(self, *args, **kwargs)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3478, in process_info
success, real_download = self.dl(temp_filename, info_dict)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3198, in dl
return fd.download(name, new_info, subtitle)
File "/root/bin/yt-dlp/yt_dlp/downloader/common.py", line 464, in download
ret = self.real_download(filename, info_dict)
File "/root/bin/yt-dlp/yt_dlp/downloader/hls.py", line 381, in real_download
return self.download_and_append_fragments(ctx, fragments, info_dict)
File "/root/bin/yt-dlp/yt_dlp/downloader/fragment.py", line 513, in download_and_append_fragments
download_fragment(fragment, ctx)
File "/root/bin/yt-dlp/yt_dlp/downloader/fragment.py", line 459, in download_fragment
for retry in RetryManager(self.params.get('fragment_retries'), error_callback):
File "/root/bin/yt-dlp/yt_dlp/utils/_utils.py", line 5251, in __iter__
self.error_callback(self.error, self.attempt, self.retries)
File "/root/bin/yt-dlp/yt_dlp/downloader/fragment.py", line 456, in error_callback
self.report_retry(err, count, retries, frag_index, fatal)
File "/root/bin/yt-dlp/yt_dlp/downloader/common.py", line 410, in report_retry
RetryManager.report_retry(
File "/root/bin/yt-dlp/yt_dlp/utils/_utils.py", line 5258, in report_retry
return error(f'{e}. Giving up after {count - 1} retries') if count > 1 else error(str(e))
File "/root/bin/yt-dlp/yt_dlp/downloader/common.py", line 413, in <lambda>
error=IDENTITY if not fatal else lambda e: self.report_error(f'\r[download] Got error: {e}'),
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1090, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1018, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
ERROR: fragment 2265 not found, unable to continue
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/root/bin/yt-dlp/__main__.py", line 17, in <module>
yt_dlp.main()
File "/root/bin/yt-dlp/yt_dlp/__init__.py", line 1093, in main
_exit(*variadic(_real_main(argv)))
File "/root/bin/yt-dlp/yt_dlp/__init__.py", line 1083, in _real_main
return ydl.download(all_urls)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3604, in download
self.__download_wrapper(self.extract_info)(
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3577, in wrapper
res = func(*args, **kwargs)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1613, in extract_info
return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1780, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1839, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3010, in process_video_result
self.process_info(new_info)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 177, in wrapper
return func(self, *args, **kwargs)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3478, in process_info
success, real_download = self.dl(temp_filename, info_dict)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3198, in dl
return fd.download(name, new_info, subtitle)
File "/root/bin/yt-dlp/yt_dlp/downloader/common.py", line 464, in download
ret = self.real_download(filename, info_dict)
File "/root/bin/yt-dlp/yt_dlp/downloader/hls.py", line 381, in real_download
return self.download_and_append_fragments(ctx, fragments, info_dict)
File "/root/bin/yt-dlp/yt_dlp/downloader/fragment.py", line 514, in download_and_append_fragments
result = append_fragment(
File "/root/bin/yt-dlp/yt_dlp/downloader/fragment.py", line 479, in append_fragment
self.report_error(f'fragment {frag_index} not found, unable to continue')
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1090, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "/root/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1018, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
```
| enhancement,triage,core:downloader | low | Critical |
2,703,518,028 | rust | Document the runtime environment assumptions of `std` | `std` makes a number of assumptions about the runtime environment that go beyond what is guaranteed by the ABI. They should be documented somewhere. I'm filing this issue to both collect these assumptions and have a discussion about what the best place is to put them.
Assumptions `std` makes:
* On pretty much any platform: `std` is not used when the C platform functions are not available. E.g. using `clone` on Linux and then calling into `std` is definitely not sound. (This one is very obvious)
* `std` is not called from an asynchronous signal handler or after `fork`. `std` makes no attempt at being async-signal-safe ([except](https://doc.rust-lang.org/nightly/std/os/unix/process/trait.CommandExt.html#notes-and-safety) for `panic!` after `always_abort` has been called).
* On UNIX: threads must not be exited or cancelled (e.g. using `pthread_exit` or `pthread_cancel`).
This is because cancellation does not result in proper unwinding meaning destructors aren't called, but the thread's stack is reused – this is incompatible with `pin!` and probably a bunch of library code.
* On UNIX: the stdio file descriptors (`STDIN_FILENO`, `STDOUT_FILENO`, `STDERR_FILENO`) are assumed to be usable as standard input/output/error.
* On UNIX: `sigaltstack` is not used by code that does not also install its own handlers for `SIGSEGV` and `SIGBUS` (only applicable when `sigaltstack` is used in threads not spawned by `std`).
`std` uses TLS variables to store the address of the stack guard page. While TLS accesses are not documented to be async-signal-safe, they mostly are – except for the first access to a variable. `std` makes sure to always access the relevant variables before setting up a `sigaltstack`, thereby ensuring that the TLS accesses are safe when the signal handler is successfully called. User code has no access to these variables and therefore cannot initialize them, so it must ensure that `std`'s signal handler is not called when they could be uninitialized.
* On older macOS systems: the first access to any TLS variable with a destructor is not done from within `_tlv_atexit`.
There was a platform bug that resulted in crashes when `_tlv_atexit` is recursively called. It has since been fixed, but our minimum supported version is not high enough to include the fix.
* ... | A-runtime,A-docs,C-tracking-issue,T-libs | low | Critical |
2,703,526,118 | godot | Generated trimesh collision shapes have debug fill enabled resulting in z-fighting | ### Tested versions
v4.4.dev.custom_build [0eadbdb5d]
### System information
Godot v4.4.dev (0eadbdb5d) - macOS 15.1.0 - Multi-window, 1 monitor - Metal (Forward+) - integrated Apple M1 Max (Apple7) - Apple M1 Max (10 threads)
### Issue description
Collision shapes now have a debug fill, and I think it's nice that it's enabled by default. However, I believe it's better to disable it for trimesh collision shapes generated from existing meshes, because they match the source geometry exactly, resulting in z-fighting.

### Steps to reproduce
You can either import a mesh with a `-col` hint or simply create a new MeshInstance3D and click Mesh -> Create Collision Shape on it:
<img width="500" src="https://github.com/user-attachments/assets/75ec33e9-1863-43f6-a8de-305be9aecaab" />
### Minimal reproduction project (MRP)
N/A | bug,topic:physics,topic:import,topic:3d | low | Critical |
2,703,532,950 | langchain | Regexp Separator not working OOTB with (Recursive)CharacterSplitter | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
import re
text_splitter = CharacterTextSplitter(
# Updated separator to match both uppercase and title case chapter headings
separator="\bCHAPTER\b", # doesn't work
# works separator=r"\bCHAPTER\b",
chunk_size=500, chunk_overlap = 0,
is_separator_regex=True,
)
char_chunks = text_splitter.split_text(full_book)
print([c[0:10] for c in char_chunks])
len(char_chunks), len(char_chunks[0])
```
['Acknowledg', '1\nIntroduc', '2\nOrganizi']
### Error Message and Stack Trace (if applicable)
Non-Working case with: `separator="\bCHAPTER\b",`
```
['Acknowledg']
(1, 2996)
```
Working case with r-string: `separator=r"\bCHAPTER\b",`
```
['Acknowledg', '1\nIntroduc', '2\nOrganizi']
(3, 696)
```
### Description
We just spent two hours trying to figure out how to use the recursive/character text splitter with regexp separators.
It turned out none of the docs or the code had the right information: there is no mention of r-strings anywhere in the docs, and the example doesn't use any either. The docs also say the separators are "interpreted as regex", which is not quite true.
https://python.langchain.com/docs/how_to/recursive_text_splitter/
> `is_separator_regex`: Whether the separator list (defaulting to `["\n\n", "\n", " ", ""]`) should be *interpreted* as regex.
We thought strings were automatically turned into regexps, but that doesn't seem to be the case; the splitter only escapes (`re.escape`) non-regexp strings when `is_separator_regex` is False
see https://github.com/langchain-ai/langchain/blob/master/libs/text-splitters/langchain_text_splitters/character.py#L24-L93
so the solution was :exploding_head: to use r-strings like `r"^CHAPTER \d+$"`; otherwise you get only a single chunk, because your regexp is never found as a separator.
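The root cause is plain Python string escaping rather than the splitter itself: in a regular string literal `\b` is parsed as the backspace character, so the compiled pattern can never match, while in an r-string it survives as the two-character regex word-boundary token. A quick demonstration:

```python
import re

plain = "\bCHAPTER\b"   # \b is parsed by Python as backspace (chr(8))
raw = r"\bCHAPTER\b"    # \b survives as the regex word-boundary token

text = "See CHAPTER 1 for details."

print(len(plain), len(raw))          # 9 11 -- the backspaces collapsed
print(re.search(plain, text))        # None -- no literal backspaces in text
print(re.search(raw, text).group())  # CHAPTER
```

This is ordinary `re` behavior, which is why the docs could help a lot by using r-strings in the separator examples.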
I'm not sure how any of the language presets that use regexps actually work,
e.g. Markdown
https://github.com/langchain-ai/langchain/blob/master/libs/text-splitters/langchain_text_splitters/character.py#L440-L443
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Wed Jul 31 20:49:39 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_ARM64_T6000
> Python Version: 3.11.10 (main, Sep 7 2024, 01:03:31) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.35
> langchain: 0.2.14
> langchain_community: 0.2.12
> langsmith: 0.1.104
> langchain-genai-website: Installed. No version info available.
> langchain_anthropic: 0.1.13
> langchain_aws: 0.1.6
> langchain_cli: 0.0.22
> langchain_experimental: 0.0.64
> langchain_fireworks: 0.1.3
> langchain_google_genai: 1.0.4
> langchain_google_vertexai: 1.0.4
> langchain_groq: 0.1.5
> langchain_openai: 0.1.22
> langchain_text_splitters: 0.2.2
> langserve: 0.1.1
...
> tomlkit: 0.12.0
> typer[all]: Installed. No version info available.
> typing-extensions: 4.12.2
> uvicorn: 0.30.6
``` | 🤖:bug | low | Critical |
2,703,533,321 | go | build: build failure on gotip-netbsd-arm | ```
#!watchflakes
default <- builder == "gotip-netbsd-arm" && repo == "go" && mode == "build"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730003863371080785)):
[I2024-11-28T22:16:19.152630Z 10154 0 sink.go:277] SinkServer: warm-up started
[I2024-11-28T22:16:19.152959Z 10154 0 sink.go:350] SinkServer: starting HTTP server...
[I2024-11-28T22:16:19.165678Z 10154 0 sink.go:282] SinkServer: warm-up ended
[I2024-11-28T22:16:19.168201Z 10154 0 cmd_stream.go:492] rdb-stream: starting the test command - ["/home/swarming/.swarming/w/ir/cache/tools/bin/result_adapter" "go" "-v=false" "-dump-json" "/home/swarming/.swarming/w/ir/x/w/dist.testjson" "--" "/home/swarming/.swarming/w/ir/x/w/goroot/bin/go" "tool" "dist" "test" "-json"]
cmd/compile/internal/ssa: /home/swarming/.swarming/w/ir/x/w/goroot/pkg/tool/netbsd_arm/vet: signal: segmentation fault (core dumped)
go tool dist: Failed: exit status 1
ok archive/tar 2.052s
ok archive/zip 8.381s
ok bufio 0.808s
ok bytes 4.393s
...
[I2024-11-28T23:33:00.489825Z 10154 0 sink.go:353] SinkServer: HTTP server stopped with "http: Server closed"
[I2024-11-28T23:33:00.490031Z 10154 0 sink_server.go:96] SinkServer: draining TestResult channel started
[I2024-11-28T23:33:01.635177Z 10154 0 sink_server.go:98] SinkServer: draining TestResult channel ended
[I2024-11-28T23:33:01.635505Z 10154 0 sink_server.go:100] SinkServer: draining Artifact channel started
[I2024-11-28T23:33:01.986535Z 10154 0 sink_server.go:102] SinkServer: draining Artifact channel ended
[I2024-11-28T23:33:01.987526Z 10154 0 sink.go:378] SinkServer: shutdown completed successfully
[I2024-11-28T23:33:01.992109Z 10154 0 cmd_stream.go:445] Caught InterruptEvent
[I2024-11-28T23:33:01.992466Z 10154 0 terminate_unix.go:32] Sending syscall.SIGTERM to subprocess
[W2024-11-28T23:33:01.996632Z 10154 0 cmd_stream.go:451] Could not terminate subprocess (os: process already finished), cancelling its context
[I2024-11-28T23:33:01.996923Z 10154 0 cmd_stream.go:420] rdb-stream: exiting with 1
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,703,568,710 | pytorch | DISABLED test_mnist_exported_with_no_warnings (__main__.TestFxToOnnx) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mnist_exported_with_no_warnings&suite=TestFxToOnnx&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/33674449149).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mnist_exported_with_no_warnings`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 774, in dynamo_export
return Exporter(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 546, in export
graph_module = self.options.fx_tracer.generate_fx(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 217, in generate_fx
return self.pre_export_passes(options, model, graph_module, updated_model_args) # type: ignore[return-value]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/fx/dynamo_graph_extractor.py", line 226, in pre_export_passes
return _exporter_legacy.common_pre_export_passes(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 805, in common_pre_export_passes
module = passes.Decompose(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 146, in wrapper
ctx.log_and_raise_if_error(diag)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/diagnostics/infra/context.py", line 355, in log_and_raise_if_error
raise diagnostic.source_exception
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 130, in wrapper
return_values = fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/fx/_pass.py", line 275, in run
module = self._run(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/fx/passes/decomp.py", line 70, in _run
decomposed_module = proxy_tensor.make_fx(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 2196, in wrapped
return make_fx_tracer.trace(f, *args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 2134, in trace
return self._trace_inner(f, *args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 2105, in _trace_inner
t = dispatch_trace(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_compile.py", line 32, in inner
return disable_fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 744, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1138, in dispatch_trace
graph = tracer.trace(root, concrete_args) # type: ignore[arg-type]
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 744, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 843, in trace
(self.create_arg(fn(*args)),),
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1193, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/fx/passes/_utils.py", line 28, in wrapped
return torch.fx.Interpreter(graph_module).run(*args)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/interpreter.py", line 228, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/interpreter.py", line 357, in call_module
return submod(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 821, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1045, in call_module
return forward(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 814, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 549, in _conv_forward
return F.conv2d(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1241, in __torch_function__
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 1343, in __torch_dispatch__
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py", line 912, in proxy_call
out = func(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_ops.py", line 723, in __call__
return self._op(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1270, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1810, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 1380, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_subclasses/fake_tensor.py", line 2348, in _dispatch_impl
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_subclasses/fake_impls.py", line 160, in dispatch_to_op_implementations_dict
return op_implementations_dict[func](fake_mode, func, *args, **kwargs)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_subclasses/fake_impls.py", line 743, in conv
conv_backend = torch._C._select_conv_backend(**kwargs)
RuntimeError: expected stride to be a single integer value or a list of -1 values to match the convolution dimensions, but got stride=[1, 1]
While executing %tensor_x : [num_users=1] = call_module[target=L__self___conv1](args = (%l_tensor_x_,), kwargs = {})
Original traceback:
File "/var/lib/jenkins/workspace/test/onnx/test_fx_to_onnx.py", line 69, in forward
tensor_x = self.conv1(tensor_x)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/onnx/test_fx_to_onnx.py", line 82, in test_mnist_exported_with_no_warnings
onnx_program = dynamo_export(MNISTModel(), tensor_x)
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/__init__.py", line 525, in dynamo_export
return dynamo_export(
File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/onnx/_internal/_exporter_legacy.py", line 790, in dynamo_export
raise errors.OnnxExporterError(message) from e
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer (https://microsoft.github.io/sarif-web-component/). Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
To execute this test, run the following from the base repo dir:
python test/onnx/test_fx_to_onnx.py TestFxToOnnx.test_mnist_exported_with_no_warnings
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `onnx/test_fx_to_onnx.py`
cc @clee2000 @wdvr | module: onnx,triaged,module: flaky-tests,skipped | low | Critical |
2,703,579,968 | pytorch | running facebook/bart-base for a summarization task: MPS does not support cumsum op with int64 input | ### 🐛 Describe the bug
```python
''' Summarization implementation '''
from domain.interfaces import SummarizationServiceInterface
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch


class SummarizationService(SummarizationServiceInterface):
    """Summarizes text using a pre-trained model."""

    def __init__(self, model_name: str = "facebook/bart-base", device: str = "mps"):
        """
        Initializes the summarization service.

        Args:
            model_name (str): Name of the pre-trained model to load.
            device (str): Device to run the model on ('cpu' or 'mps').
        """
        self.device = torch.device(device)
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(self.device)

    def summarize(self, texts: list[str]) -> str:
        """
        Summarizes a list of texts.

        Args:
            texts (list[str]): A list of text to summarize.

        Returns:
            str: The summarized text.
        """
        combined_text = " ".join(texts)
        inputs = self.tokenizer(
            combined_text, max_length=1024, return_tensors="pt", truncation=True
        ).to(self.device)
        summary_ids = self.model.generate(
            inputs["input_ids"], max_length=150, min_length=40, num_beams=1, early_stopping=True
        )
        return self.tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

The error message is:

```
Summarizing Cluster 231 with 153 reviews...
/Users/..../Code/cluster-summ-reviews/venv/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:638: UserWarning: `num_beams` is set to 1. However, `early_stopping` is set to `True` -- this flag is only used in beam-based generation modes. You should set `num_beams>1` or unset `early_stopping`.
  warnings.warn(
INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /var/folders/vh/thwrv1sj5cz7rnp9b4lg_wwr0000gn/T/tmpd_c_swrv
INFO:torch.distributed.nn.jit.instantiator:Writing /var/folders/vh/thwrv1sj5cz7rnp9b4lg_wwr0000gn/T/tmpd_c_swrv/_remote_module_non_scriptable.py
/Users/..../Code/cluster-summ-reviews/venv/lib/python3.9/site-packages/transformers/pytorch_utils.py:325: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  test_elements = torch.tensor(test_elements)
ERROR:root:An error occurred: MPS does not support cumsum op with int64 input
```
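A commonly suggested workaround for this class of error (my suggestion, not something verified against this exact model) is PyTorch's documented `PYTORCH_ENABLE_MPS_FALLBACK` environment variable: when it is set before `torch` is imported, ops that MPS does not support, such as `cumsum` on int64 input, are executed on the CPU instead of raising. A minimal sketch:

```python
import os

# Must be set before `import torch` (or exported in the shell that
# launches the script); ops unsupported on MPS then fall back to the CPU.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# import torch  # import torch only after the variable is set
```

The fallback trades the hard error for slower CPU execution of the unsupported op, so generation still runs end to end.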
### Versions
zsh: command not found: wget
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | triaged,enhancement,module: mps | low | Critical |
2,703,597,166 | stable-diffusion-webui | [Bug]: SD3.5 Large Turbo, 4080 Super graphics card, Windows 10: no errors in the console, but all generated images are color blocks | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
No errors appear in the console, but every generated image is just color blocks.
### Steps to reproduce the problem
After launching the UI and entering a prompt, images are generated, but they are all color blocks. Every Sampling Method has this problem, and all parameters are at their default settings.
### What should have happened?
Color blocks were displayed.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
All settings are at their defaults.
### Console logs
```Shell
To create a public link, set `share=True` in `launch()`.
Startup time: 6.0s (prepare environment: 1.2s, import torch: 1.9s, import gradio: 0.5s, setup paths: 0.5s, initialize shared: 0.2s, other imports: 0.3s, list SD models: 0.3s, load scripts: 0.4s, create ui: 0.2s, gradio launch: 0.6s).
Applying attention optimization: Doggettx... done.
Model loaded in 11.2s (load weights from disk: 0.3s, create model: 0.4s, apply weights to model: 7.4s, apply float(): 2.3s, move model to device: 0.3s, calculate empty prompt: 0.5s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:27<00:00, 7.40s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [02:37<00:00, 7.86s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [02:37<00:00, 8.41s/it]
```
### Additional information
_No response_ | bug-report | low | Critical |
2,703,599,631 | pytorch | AOTAutograd cache isn't obviously hit/miss in tlparse rendering | ### 🐛 Describe the bug
See
<img width="501" alt="image" src="https://github.com/user-attachments/assets/c1235c02-d42e-4068-94cb-bda5d676fefa">
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 @jamesjwu
### Versions
main | triaged,oncall: pt2,module: aotdispatch | low | Critical |
2,703,600,049 | pytorch | torch.roll runs too slowly on the MPS backend | ### 🐛 Describe the bug
# Versions
torch: 2.5.1
GPU: Apple M3 Pro
OS: Mac OS 15.1
# Problem
Hi, I found that my model runs very slowly on the MPS backend, and I believe this is due to an inefficient `torch.roll` implementation there. I ran the following tests and found that the CPU backend is at least 50x faster than MPS for every data type I tried.
```
Device: cpu, Mean time per run: 2.54 µs
Device: mps, Mean time per run: 157.64 µs
```
Is it possible to fix or avoid this issue? Thanks.
```python
import torch
import torch.utils.benchmark as benchmark
def test_roll(device):
# Create the tensor on the specified device
x = torch.randint(0, 100, (20, 20), dtype=torch.int64, device=device)
# Define the function to benchmark
if device == 'mps':
def fn():
y = torch.roll(x, shifts=1, dims=1)
torch.mps.synchronize() # Synchronize for accurate timing
else:
def fn():
y = torch.roll(x, shifts=1, dims=1)
# No synchronization needed for CPU
# Setup the timer
t = benchmark.Timer(
stmt='fn()',
globals={'fn': fn},
num_threads=1,
)
# Run the benchmark
m = t.timeit(1000)
print(f"Device: {device}, Mean time per run: {m.mean * 1e6:.2f} µs")
# List of devices to test
devices = ['cpu']
if torch.backends.mps.is_available():
devices.append('mps')
else:
print("MPS device not available. Only CPU will be tested.")
# Test torch.roll on each device
for device in devices:
test_roll(device)
```
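As a possible workaround (my suggestion, not something tested on this hardware), a roll along one dimension can be rewritten as slicing plus concatenation, e.g. `torch.cat([x[:, -s:], x[:, :-s]], dim=1)` in place of `torch.roll(x, shifts=s, dims=1)`, which may avoid the slow MPS kernel. The wrap-around semantics being reproduced can be sketched in pure Python:

```python
def roll_1d(seq, shift):
    """Pure-Python model of torch.roll along one dimension:
    every element moves `shift` positions forward, wrapping around."""
    n = len(seq)
    if n == 0:
        return []
    s = shift % n
    if s == 0:
        return list(seq)
    # Same split as torch.cat([x[-s:], x[:-s]]).
    return list(seq[-s:]) + list(seq[:-s])


print(roll_1d([0, 1, 2, 3], 1))  # [3, 0, 1, 2]
```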
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.29.5
Libc version: N/A
Python version: 3.11.10 (main, Oct 3 2024, 02:26:51) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.5.1
[pip3] torch_geometric==2.4.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
cc @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: performance,triaged,module: mps | low | Critical |
2,703,613,127 | pytorch | AOTAutograd cache always misses even on simple program | ### 🐛 Describe the bug
```
(/home/ezyang/local/b/pytorch-env) [[email protected] ~/local/b/pytorch (a1348a73)]$ TORCHINDUCTOR_AUTOGRAD_CACHE=1 TORCH_LOGS=torch._functorch._aot_autograd.autograd_cache python aaa.py
I1128 17:54:16.961000 1579551 torch/_functorch/_aot_autograd/autograd_cache.py:665] [0/0] AOTAutograd cache miss for key a3fjisonhscsotmkqcifvx22whts72lyron5lrhskrbowx3hfbav
(/home/ezyang/local/b/pytorch-env) [[email protected] ~/local/b/pytorch (a1348a73)]$ TORCHINDUCTOR_AUTOGRAD_CACHE=1 TORCH_LOGS=torch._functorch._aot_autograd.autograd_cache python aaa.py
I1128 17:54:26.680000 1582644 torch/_functorch/_aot_autograd/autograd_cache.py:665] [0/0] AOTAutograd cache miss for key a3fjisonhscsotmkqcifvx22whts72lyron5lrhskrbowx3hfbav
(/home/ezyang/local/b/pytorch-env) [[email protected] ~/local/b/pytorch (a1348a73)]$ vim aaa.py
(/home/ezyang/local/b/pytorch-env) [[email protected] ~/local/b/pytorch (a1348a73)]$ cat aaa.py
import torch
import torch._dynamo.config
# Artificially generate lots of small kernels
torch._inductor.config.realize_reads_threshold = 1
torch._inductor.config.realize_opcount_threshold = 1
torch._inductor.config.max_fusion_size = 1
N = 2
@torch.compile(fullgraph=True)
def f(a, b):
for i in range(N):
a = a + b * i
return a
f(torch.randn(2, device="cuda", requires_grad=True), torch.randn(2, device="cuda")).sum().backward()
```
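For context, the log above shows the identical cache key on both runs yet a miss each time, which suggests the entry is never persisted between runs rather than the key being unstable (a guess on my part). A toy pure-Python model of that symptom, purely illustrative and not the real AOTAutograd cache code:

```python
class NeverPersistedCache:
    """Lookups use a perfectly stable key yet always miss, because the
    save path is guarded off -- the symptom seen in the logs above."""

    def __init__(self):
        self._store = {}

    def load(self, key):
        print(("hit" if key in self._store else "miss"), "for key", key)
        return self._store.get(key)

    def save(self, key, value, save_enabled=False):
        if save_enabled:  # imagine a guard that silently rejects the entry
            self._store[key] = value


cache = NeverPersistedCache()
for _ in range(2):  # two compile "runs" with the same key
    if cache.load("a3fjisonh") is None:
        cache.save("a3fjisonh", "artifact")  # skipped: "miss" prints twice
```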
### Versions
main
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,oncall: pt2,module: aotdispatch | low | Critical |
2,703,616,659 | vscode | With the `Light+` theme, the input area of Chat in Editor is indistinguishable from the background when out of focus | Version: 1.96.0-insider
Commit: 1bdee4500fc32f4eb195e087501b8c9331447fb3
Date: 2024-11-28T10:48:06.788Z
Electron: 32.2.5
ElectronBuildId: 10579404
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Darwin x64 22.6.0
Steps to Reproduce:
1. Set `"workbench.colorTheme": "Default Light+"`
2. Open Chat in Editor.
3. Focus to a other part.
4. The input area of Chat in Editor is indistinguishable from the background.
The input area should have a border even when out of focus.

| bug,themes,confirmed,chat | low | Minor |
2,703,674,083 | godot | Web Editor – "Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'msg_send_queue')" | ### Tested versions
- Reproducible in 4.4.dev [0eadbdb5d0709e4e557e52377fa075d3e2f0ad1f]
### System information
Google Chrome 131.0.6778.86 on macOS Sequoia 15.1.1 with Intel processor
### Issue description
Following the instructions for [Building the Editor on Compiling for the Web](https://docs.godotengine.org/en/stable/contributing/development/compiling/compiling_for_web.html#building-the-editor), the Loader screen (where a project ZIP can be selected) appears correct but when I attempt to load the actual editor, it gets stuck on the Godot splash screen.
In the JavaScript console, I see the following error:
```
Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'msg_send_queue')
at Object.sendmsg (godot.editor.js:5034:14)
at Object.write (godot.editor.js:4640:28)
at Object.write (godot.editor.js:3478:42)
at ___syscall_sendto (godot.editor.js:5428:17)
at godot.web.editor.dev.wasm32.wasm.sendto (godot.web.editor.dev.wasm32.wasm-7f02190a:0x3816d24)
at godot.web.editor.dev.wasm32.wasm.send (godot.web.editor.dev.wasm32.wasm-7f02190a:0x3816d11)
at godot.web.editor.dev.wasm32.wasm.__netlink_enumerate (godot.web.editor.dev.wasm32.wasm-7f02190a:0x37e8599)
at godot.web.editor.dev.wasm32.wasm.__rtnetlink_enumerate (godot.web.editor.dev.wasm32.wasm-7f02190a:0x37e852c)
at godot.web.editor.dev.wasm32.wasm.getifaddrs (godot.web.editor.dev.wasm32.wasm-7f02190a:0x37e5bfc)
at godot.web.editor.dev.wasm32.wasm.IPUnix::get_local_interfaces(HashMap<String, IP::Interface_Info, HashMapHasherDefault, HashMapComparatorDefault<String>, DefaultTypedAllocator<HashMapElement<String, IP::Interface_Info>>>*) const (wasm://wasm/godot.web.editor.dev.wasm32.wasm-7f02190a)
```
Resizing the window causes the Godot splash screen to disappear and be replaced with a pure black screen.
### Steps to reproduce
Build the editor for web using
```
scons platform=web target=editor
```
Run the web editor:
```
python platform/web/serve.py
```
and select `godot.web.editor.dev.wasm32/` from the directory listing, then `godot.editor.html`.
The loader screen should appear correctly. Open the JavaScript console. Click "Start Godot editor".
You should see the usual output from Godot's console on startup...
```
Godot Engine v4.4.dev.custom_build.0eadbdb5d (2024-11-28 14:08:33 UTC) - https://godotengine.org
OpenGL API OpenGL ES 3.0 (WebGL 2.0 (OpenGL ES 3.0 Chromium)) - Compatibility - Using Device: WebKit - WebKit WebGL
```
...but after that, the JavaScript error written above.
A full image of the JavaScript console for reference:

### Minimal reproduction project (MRP)
N/A | bug,platform:web,topic:editor,topic:porting | low | Critical |
2,703,770,514 | flutter | [Android 13] New transition when opening new activities | ### Use case
Flutter apps running on Android 13 or higher don't use the new transition when starting a new activity.
### Proposal
Extend the [PageTransitionsBuilder](https://github.com/flutter/flutter/blob/bc0b62a2ab6c02e91f84f9b4b4cedeeacba89146/packages/flutter/lib/src/material/page_transitions_theme.dart#L554) class to define a default page transition that's similar to the one provided by Android T
https://github.com/user-attachments/assets/b3c5ba21-3c7b-43d2-a5cd-e0e084399d4e
**Note:** Even if the app doesn't use Material 3, the new page transition should be used when the app runs on Android 13 or higher. | c: new feature,platform-android,framework,f: material design,c: proposal,e: OS-version specific,p: animations,P3,team-android,triaged-android | low | Minor |
2,703,771,036 | godot | Crash from AnimationPlayer if initial properties are removed | ### Tested versions
- Reproducible in 4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1650 (NVIDIA; 31.0.15.3203) - Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz (12 Threads)
### Issue description
When reparenting nodes, the property information for animation property tracks is lost (such as when you reparent a Control so that the node actually needed as root, a ColorRect, becomes the root).
Then, when clicking on the now-'null' property (that is, the name of the node displayed in the animation screen), Godot crashes.
I expect that Godot should NOT crash after clicking on the node name.
Perhaps a popup to state that the selection has been deleted/null and cannot be clicked on.
If possible, a way to update the intended node that is being used by selecting the correct object, in the same way that you would have had to add the ColorRect initially.
It seems that when everything is working, the editor opens the inspector with the object information, so there is an assumption that the null node is trying to be read and then crashes.
I have tried to keep the language plain so that it is easy to understand, but it might not be the correct terminology ('nodes' might be called something else, etc.); by 'nodes' I mean the nodes that you see in the scene. I will try to respond when possible.
I did see that there were a few issues that were similar (null animations) but not the exact case that was reproduced. (more code, than editor)
### Steps to reproduce
- Create a Control - New Scene > User Interface - Attempting to create a fade animation
- add ColorRect
- add AnimationPlayer
- add animations for fade out - property track - modulate colorRect
- Reparent node for ColorRect- called 'background'
- delete Control
- attempt to redo animation - should see red 'x's for 'empty animations'
- click on the name of the property - 'background'
- crash
work around - delete animation
- create a new Animation Node with redone Animations/ColorRect
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing,crash,topic:animation | low | Critical |
2,703,787,764 | langchain | importing hub fails with latest(0.3.9) version | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from langchain import hub
```
### Error Message and Stack Trace (if applicable)
```
% uv run scripts/script.py
Traceback (most recent call last):
File "/path/scripts/script.py", line 5, in <module>
from langchain import hub
File "/path/.venv/lib/python3.11/site-packages/langchain/__init__.py", line 8, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/path/.venv/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import Agent
File "/path/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 10, in <module>
from langchain.chains.base import Chain
File "/path/.venv/lib/python3.11/site-packages/langchain/chains/__init__.py", line 2, in <module>
from langchain.chains.conversation.base import ConversationChain
File "/path/.venv/lib/python3.11/site-packages/langchain/chains/conversation/base.py", line 7, in <module>
from langchain.chains.conversation.memory import ConversationBufferMemory
File "/path/.venv/lib/python3.11/site-packages/langchain/chains/conversation/memory.py", line 7, in <module>
from langchain.chains.conversation.prompt import SUMMARY_PROMPT
File "/path/.venv/lib/python3.11/site-packages/langchain/chains/conversation/prompt.py", line 2, in <module>
from langchain.prompts.prompt import PromptTemplate
File "/path/.venv/lib/python3.11/site-packages/langchain/prompts/__init__.py", line 2, in <module>
from langchain.prompts.base import BasePromptTemplate
File "/path/.venv/lib/python3.11/site-packages/langchain/prompts/base.py", line 35, in <module>
class BasePromptTemplate(BaseModel, ABC):
File "/path/.venv/lib/python3.11/site-packages/langchain/prompts/base.py", line 41, in BasePromptTemplate
@root_validator()
^^^^^^^^^^^^^^^^
File "/path/.venv/lib/python3.11/site-packages/pydantic/deprecated/class_validators.py", line 240, in root_validator
raise PydanticUserError(
pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
For further information visit https://errors.pydantic.dev/2.10/u/root-validator-pre-skip
```
### Description
I am trying to use the hub to pull my prompts.
It worked with version 0.3.7, but with 0.3.9 the error above occurs and even a bare import fails.
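One thing worth checking (a guess on my part): the traceback goes through `langchain/prompts/base.py` with a bare `@root_validator()`, a module layout from much older langchain releases, so the environment may be importing a stale or mixed installation rather than the version pip reports. A stdlib-only way to see which copy would actually be imported:

```python
import importlib.util


def installed_location(package: str) -> str:
    """Return the file a top-level package would be imported from."""
    spec = importlib.util.find_spec(package)
    if spec is None:
        return f"{package} not importable"
    return spec.origin or "(namespace package)"


# In the failing environment, compare installed_location("langchain")
# with importlib.metadata.version("langchain"); a path pointing at an
# old site-packages copy would explain the pydantic error.
print(installed_location("json"))  # stdlib demo; always importable
```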
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:49 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6020
> Python Version: 3.11.4 (v3.11.4:d2340ef257, Jun 6 2023, 19:15:51) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.7
> langsmith: 0.1.147
> langchain_openai: 0.2.10
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.8
> async-timeout: Installed. No version info available.
> httpx: 0.28.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.55.3
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | 🤖:bug,investigate | low | Critical |
2,703,904,774 | tauri | [bug] Unable to get resource path on Android device | ### Describe the bug
On `Android` devices, `app.path().resource_dir().unwrap()` in Rust returns `asset://localhost`, while on macOS it returns a normal filesystem path.
### Reproduction
Running [this example](https://github.com/tauri-apps/tauri/blob/d6bed20a0e326d7d4a7488a719fedfd89f11fb4e/examples/resources/src-tauri/src/main.rs#L22) on an `Android` device will not succeed because it does not get a valid path. When execution reaches this point, it hangs (it neither succeeds nor fails).
### Expected behavior
_No response_
### Full `tauri info` output
<details>
<summary>Tauri info</summary>
<pre><code>
[✔] Environment
- OS: Mac OS 15.1.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 18.20.4
- yarn: 1.22.22
- npm: 10.7.0
- bun: 1.1.37
- deno: deno 2.1.1
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.0.3
- @tauri-apps/cli : 2.0.5
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-fs 🦀: 2.0.3
- @tauri-apps/plugin-fs : 2.0.2
- tauri-plugin-http 🦀: 2.0.3
- @tauri-apps/plugin-http : 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: other://localhost
- framework: React
[-] iOS
- Developer Teams: ******
✨ Done in 23.77s.
</code></pre>
</details>
### Stack trace
_No response_
### Additional context
I see that other people have been [asking](https://stackoverflow.com/questions/79105428/how-to-use-tauri-resource-files-properly-on-android) this question as well.
This question seems to be related to [another question](https://github.com/tauri-apps/tauri/issues/10338#issue-2421159146). | type: bug,status: needs triage | low | Critical |
2,703,913,271 | langchain | Similarity search in FAISS vector store first searches for fetch_k documents and then filters, which is an issue | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
index.asimilarity_search_with_score_by_vector(embedding=emb, k=20, fetch_k=10000, filter={'domain': metadata_filter})
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to run a search with a domain filter; the domain is stored as metadata on each document. Currently, the `asimilarity_search_with_score_by_vector` function first fetches `fetch_k` documents by similarity score and only then filters on the domain metadata, returning `k` results. Because my documents are similar across domains, the score-based shortlist sometimes contains no documents from the requested domain at all, and I get an empty response. Instead, it should first filter the documents in the index by domain and then run the similarity search over only the filtered documents, returning the top `k`.
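A minimal pure-Python model of the two orderings (illustrative names, not the actual FAISS/LangChain API) showing why fetch-then-filter can come back empty while filter-then-rank always returns results when any matching document exists:

```python
def post_filter_search(docs, k, fetch_k, domain):
    """Current behavior: shortlist the fetch_k best by score, THEN
    filter by metadata -- can return fewer than k results, even zero."""
    shortlist = sorted(docs, key=lambda d: d["score"], reverse=True)[:fetch_k]
    return [d for d in shortlist if d["domain"] == domain][:k]


def pre_filter_search(docs, k, domain):
    """Desired behavior: filter by metadata first, THEN rank by score."""
    pool = [d for d in docs if d["domain"] == domain]
    return sorted(pool, key=lambda d: d["score"], reverse=True)[:k]


docs = [{"domain": "a", "score": 0.9}, {"domain": "a", "score": 0.8},
        {"domain": "b", "score": 0.1}]

print(post_filter_search(docs, k=1, fetch_k=2, domain="b"))  # [] -> empty
print(pre_filter_search(docs, k=1, domain="b"))  # [{'domain': 'b', 'score': 0.1}]
```

Increasing `fetch_k` only makes the empty case less likely; pre-filtering removes it entirely.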
### System Info
System Information
------------------
> OS: Linux
> OS Version: #21~22.04.1-Ubuntu SMP Thu Nov 7 17:33:30 UTC 2024
> Python Version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.38
> langchain: 0.2.16
> langchain_community: 0.2.16
> langsmith: 0.1.117
> langchain_chroma: 0.1.3
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.4
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: 4.0.3
> chromadb: 0.5.3
> dataclasses-json: 0.6.7
> fastapi: 0.114.1
> httpx: 0.27.2
> huggingface-hub: 0.24.6
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.1
> PyYAML: 6.0.2
> requests: 2.32.3
> sentence-transformers: 3.0.1
> SQLAlchemy: 2.0.34
> tenacity: 8.5.0
> tokenizers: 0.19.1
> transformers: 4.44.2
> typing-extensions: 4.12.2
| Ɑ: vector store,investigate | low | Critical |
2,704,008,626 | tailwindcss | [v4] Standalone cli doesn't bundle tailwindcss js library | <!-- Please provide all of the information requested below. We're a small team and without all of this information it's not possible for us to help and your bug report will be closed. -->
**What version of Tailwind CSS are you using?**
v4.0.0-beta.3
**What build tool (or framework if it abstracts the build tool) are you using?**
None
**What version of Node.js are you using?**
v22.8.0
**What browser are you using?**
N/A
**What operating system are you using?**
Ubuntu 24.10
**Reproduction URL**
https://github.com/leifmetcalf/tailwind-repro/tree/8cfa9c2b2b13a3d3e87cc115000ee553ea0a11df
**Describe your issue**
Run this command in the reproduction repo above
```
tailwindcss-linux-x64 --input app.css --output out.css
```
The command returns error code 1 with no message. Installing tailwind with `npm install tailwindcss` fixes the problem and the output file is created as expected. | v4,bc | low | Critical |
2,704,059,984 | PowerToys | Color issues | ### Description of the new feature / enhancement
I am using the "Snip and Lock" feature of PowerToys. This feature is a useful tool for cropping parts of the screen and continuing other tasks or watching videos separately. However, after using this feature, I encountered an issue where the window border color of the cropped screen appears in a very bright color, which does not match the dark color set in my Windows theme.
I have set a dark color in the Windows theme, but after using the "Snip and Lock" feature, the border color of the cropped screen appears as a bright white color. I am unable to adjust this to the color I have set, and the color does not automatically match my Windows theme, causing some inconvenience.
I would appreciate it if there is a way to resolve this issue or adjust the border color to match the user-defined theme color.
### Scenario when this would be used?
I am using the "Snip and Lock" feature of PowerToys. This feature is a useful tool for cropping parts of the screen and continuing other tasks or watching videos separately. However, after using this feature, I encountered an issue where the window border color of the cropped screen appears in a very bright color, which does not match the dark color set in my Windows theme.
I have set a dark color in the Windows theme, but after using the "Snip and Lock" feature, the border color of the cropped screen appears as a bright white color. I am unable to adjust this to the color I have set, and the color does not automatically match my Windows theme, causing some inconvenience.
I would appreciate it if there is a way to resolve this issue or adjust the border color to match the user-defined theme color.
### Supporting information
_No response_ | Needs-Triage,Product-CropAndLock | low | Minor |
2,704,063,710 | rust | Forbid disabling SSE on x86 targets that have SSE in their "baseline" | Passing `-Ctarget-feature=-sse` on an x86-64 target currently produces an ugly [LLVM error](https://rust.godbolt.org/z/3j8rnfrzP).
Doing the same on a x86-32 target leads to [unsound floating-point behavior](https://github.com/rust-lang/rust/issues/114479).
Therefore, I think we should deprecate and eventually fully forbid toggling the `sse`/`sse2` target features on x86 targets, except for those targets that do not have these features to begin with (e.g. `i586-unknown-linux-gnu`).
I am implementing some machinery [here](https://github.com/rust-lang/rust/pull/133099) that could help with that, but properly implementing this will be tricky since one can also [use `-Ctarget-cpu`](https://rust.godbolt.org/z/vaGscs6q1) to disable these target features.
Once this is implemented, we have some options for improving the Rust ABI on these targets as well:
- on x86-32, we could use SSE registers to return float values, instead of `PassMode::Indirect`
- on all x86 targets, we could pass SIMD vectors of up to 128 bits in registers rather than indirectly
[Here](https://github.com/rust-lang/rust/issues/114479#issuecomment-2082686372), compiler team triage decided "Current Tier 1 x86 targets require SSE-based floats at minimum". The concrete proposal above got MCP'd in https://github.com/rust-lang/compiler-team/issues/808.
### Open questions
How do we best implement this? It's non-trivial since `-Ctarget-cpu` can alter the `sse`/`sse2` feature gates, so our usual approach of just checking which target features got toggled with `-Ctarget-feature` does not work.
To make things worse, the way we control whether `sse`/`sse2` is available in the "baseline" is via the `base.cpu` field, not via the explicit list of target features, so if we want to do things "only if the baseline has SSE", that's non-trivial to implement. Maybe we should just add a `bool` field to the target spec that directly controls "use SSE registers for Rust ABI" or "requires SSE registers" or so?
Cc @bjorn3 @workingjubilee @Amanieu | O-x86_64,T-compiler,C-discussion,A-target-feature,O-x86_32 | low | Critical |
2,704,152,827 | godot | Shader failing to compile on exported projects in Android web | ### Tested versions
Found this bug on projects built and exported using Godot v4.3.stable.official [77dcf97d8]
### System information
Android Web Browser (trying to run exported Godot project)
> **Note:** I first had this issue reported on a Samsung phone. I was able to reproduce it and copy the error log on my Amazon Fire HD 10 using the Kiwi browser.
### Issue description
I have a game that runs fine on desktop web browsers, but I'm running into an issue on Android web browsers. I have a TextureRect that uses a shader to display the main game world. The shader fails to compile, so the TextureRect is just black on Android web browsers. ~~You should be able to see the issue on the game here: https://ramblingstranger.itch.io/quantum-rift~~
***Edit:** I've figured out a workaround for my game for now, so you shouldn't actually see any issues at the link above. You can still see the issue in the MRP linked below.*
### MRP and steps to reproduce
I've also created an MRP. The MRP has two TextureRects. The one without a shader displays fine (on the left), but the one with a shader doesn't display (on the right). You can find the MRP here:
* https://github.com/ramblingstranger/shader-issue-mrp
* https://ramblingstranger.github.io/shader-issue-mrp/
In the MRP, the shader that I am using on the TextureRect is just this:
```glsl
shader_type canvas_item;
uniform sampler2D image;
void fragment()
{
vec4 pixel = texture(image, UV);
// This line causes the problem
vec4[1] pixels = {pixel};
COLOR = pixels[0];
}
```
> **Note:** The issue goes away if I change the fragment function to this:
> ```glsl
> void fragment()
> {
> vec4 pixel = texture(image, UV);
> COLOR = pixel;
> }
> ```
I used the Kiwi browser to access a dev console on Android and found the following error:
```
USER ERROR: CanvasShaderGLES3: Fragment shader compilation failed:
0:163 S0032: no default precision defined for variable 'vec4[1]'
at: _display_error_with_code (drivers/gles3/shader_gles3.cpp:254)
USER ERROR: Method/function failed.
at: _compile_specialization (drivers/gles3/shader_gles3.cpp:396)
USER WARNING: shader failed to compile, unable to bind shader.
at: _version_bind_sahder (./drivers/gles3/shader_gles3.h:222)
```
The dev console also spits out the shader that failed to compile. It appears to be the GLSL version of the Godot shader shown above. The part that directly corresponds to the shader I wrote is:
```glsl
{
vec4 m_pixel=texture(m_image, uv);
vec4 m_pixels[1]=vec4[1](m_pixel);
color=m_pixels[0];
}
``` | bug,platform:web,platform:android,needs testing,topic:shaders | low | Critical |
2,704,157,852 | tensorflow | MixedPrecision + XLA: Seen floating point types of different precisions | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.18.0
### Custom code
No
### OS platform and distribution
Google Colab
### Mobile device
_No response_
### Python version
Google Colab default
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When using bilinear interpolation + XLA + the mixed_float16 policy, an error is raised during compilation.
Without bilinear interpolation, without XLA, or without mixed_float16 there is no issue.
In Google Colab I hit this issue only on CPU with TF 2.17, and on both CPU & GPU with TF 2.18.
### Standalone code to reproduce the issue
```shell
https://colab.research.google.com/drive/1joSiScbM7Stc9bn1C4R_4sFkDzakrTsJ?usp=sharing
```
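Since the Colab link may rot, here is a minimal stand-alone sketch of the setup described above. The exact model from the notebook is unknown, so the layer stack, shapes, and random data are assumptions; only the combination of bilinear upsampling + mixed_float16 + `jit_compile=True` comes from the report:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# The reported trigger: mixed_float16 policy + XLA-compiled training.
keras.mixed_precision.set_global_policy("mixed_float16")

model = keras.Sequential([
    keras.Input(shape=(8, 8, 3)),
    # Bilinear resize is the layer the reporter identifies as necessary.
    keras.layers.UpSampling2D(interpolation="bilinear"),
    keras.layers.Conv2D(4, 3, padding="same"),
])
model.compile(loss="mse", optimizer="adam", jit_compile=True)

x = np.random.rand(2, 8, 8, 3).astype("float32")
y = np.random.rand(2, 16, 16, 4).astype("float32")
try:
    model.fit(x, y, epochs=1, verbose=0)
    print("fit succeeded (not an affected version/device)")
except Exception as e:
    # On affected versions this surfaces as InternalError:
    # "Seen floating point types of different precisions ... but mixed precision is disallowed."
    print(type(e).__name__)
```

Dropping `interpolation="bilinear"`, `jit_compile=True`, or the global policy should each make the failure disappear, per the description above.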
### Relevant log output
```shell
---------------------------------------------------------------------------
InternalError Traceback (most recent call last)
<ipython-input-4-9cc273be2d5a> in <cell line: 28>()
26 model.compile(loss='mse', optimizer='adam', run_eagerly=False, jit_compile=True)
27
---> 28 model.fit(dataset)
1 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
51 try:
52 ctx.ensure_initialized()
---> 53 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
54 inputs, attrs, num_outputs)
55 except core._NotOkStatusException as e:
InternalError: Graph execution error:
Detected at node StatefulPartitionedCall defined at (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
File "/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py", line 37, in <module>
File "/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py", line 992, in launch_instance
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py", line 619, in start
File "/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py", line 195, in start
File "/usr/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
File "/usr/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
File "/usr/lib/python3.10/asyncio/events.py", line 80, in _run
File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 685, in <lambda>
File "/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py", line 738, in _run_callback
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 825, in inner
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 377, in dispatch_queue
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 250, in wrapper
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 748, in __init__
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 786, in run
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 361, in process_one
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 261, in dispatch_shell
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
File "/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py", line 539, in execute_request
File "/usr/local/lib/python3.10/dist-packages/tornado/gen.py", line 234, in wrapper
File "/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py", line 302, in do_execute
File "/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py", line 539, in run_cell
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 2975, in run_cell
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3030, in _run_cell
File "/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py", line 78, in _pseudo_sync_runner
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3257, in run_cell_async
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3473, in run_ast_nodes
File "/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code
File "<ipython-input-4-9cc273be2d5a>", line 28, in <cell line: 28>
File "/usr/local/lib/python3.10/dist-packages/keras/src/utils/traceback_utils.py", line 117, in error_handler
File "/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/trainer.py", line 368, in fit
File "/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/trainer.py", line 216, in function
File "/usr/local/lib/python3.10/dist-packages/keras/src/backend/tensorflow/trainer.py", line 129, in multi_step_on_iterator
during context [Unknown]: Seen floating point types of different precisions in %multiply.43589 = f32[2,8,8,1280]{3,2,1,0} multiply(f32[2,8,8,1280]{3,2,1,0} %add.43539, f16[2,8,8,1280]{3,2,1,0} %multiply.43588), metadata={op_type="Mul" op_name="mul_9" source_file="/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/ops.py" source_line=1196}, but mixed precision is disallowed.
[[{{node StatefulPartitionedCall}}]] [Op:__inference_multi_step_on_iterator_124474]
```
| stat:awaiting tensorflower,type:bug,comp:xla,TF 2.18 | medium | Critical |
2,704,163,230 | rust | ICE: infer: `index out of bounds: the len is 0 but the index is 0` | <!--
[31mICE[0m: Rustc ./a.rs '' 'thread 'rustc' panicked at compiler/rustc_infer/src/infer/mod.rs:1219:26: 'index out of bounds: the len is 0 but the index is 0'', 'thread 'rustc' panicked at compiler/rustc_infer/src/infer/mod.rs:1219:26: 'index out of bounds: the len is 0 but the index is 0''
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
struct Wrapper<'a, 'b>(T)
trait IntFactory {
fn stream(&self) -> impl IntFactory<stream(..): IntFactory<stream(..): Send> + Send>;
}
fn main() {}
````
original:
````rust
struct Wrapper<'a, 'b>(T)
trait IntFactory {
fn stream(&self) -> impl IntFactory<stream(..): IntFactory<stream(..): Send> + Send>;
//~^ ERROR cycle detected when resolving lifetimes for `IntFactory::stream`
}
fn main() {}
````
Version information
````
rustc 1.85.0-nightly (d6f88291f 2024-11-29)
binary: rustc
commit-hash: d6f88291f3ce96375683acc62d54710add042f98
commit-date: 2024-11-29
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.4
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/d6f88291f3ce96375683acc62d54710add042f98/compiler/rustc_infer/src/infer/mod.rs#L1213-L1225
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error: expected `;`, found keyword `trait`
--> /tmp/icemaker_global_tempdir.FqLUzmaudTzI/rustc_testrunner_tmpdir_reporting.xdJGS2xYMk6C/mvce.rs:1:26
|
1 | struct Wrapper<'a, 'b>(T)
| ^ help: add `;` here
2 |
3 | trait IntFactory {
| ----- unexpected token
error[E0412]: cannot find type `T` in this scope
--> /tmp/icemaker_global_tempdir.FqLUzmaudTzI/rustc_testrunner_tmpdir_reporting.xdJGS2xYMk6C/mvce.rs:1:24
|
1 | struct Wrapper<'a, 'b>(T)
| ^ not found in this scope
|
help: you might be missing a type parameter
|
1 | struct Wrapper<'a, 'b, T>(T)
| +++
error[E0658]: return type notation is experimental
--> /tmp/icemaker_global_tempdir.FqLUzmaudTzI/rustc_testrunner_tmpdir_reporting.xdJGS2xYMk6C/mvce.rs:4:47
|
4 | fn stream(&self) -> impl IntFactory<stream(..): IntFactory<stream(..): Send> + Send>;
| ^^^^
|
= note: see issue #109417 <https://github.com/rust-lang/rust/issues/109417> for more information
= help: add `#![feature(return_type_notation)]` to the crate attributes to enable
= note: this compiler was built on 2024-11-29; consider upgrading it if it is out of date
error[E0658]: return type notation is experimental
--> /tmp/icemaker_global_tempdir.FqLUzmaudTzI/rustc_testrunner_tmpdir_reporting.xdJGS2xYMk6C/mvce.rs:4:70
|
4 | fn stream(&self) -> impl IntFactory<stream(..): IntFactory<stream(..): Send> + Send>;
| ^^^^
|
= note: see issue #109417 <https://github.com/rust-lang/rust/issues/109417> for more information
= help: add `#![feature(return_type_notation)]` to the crate attributes to enable
= note: this compiler was built on 2024-11-29; consider upgrading it if it is out of date
thread 'rustc' panicked at compiler/rustc_infer/src/infer/mod.rs:1219:26:
index out of bounds: the len is 0 but the index is 0
stack backtrace:
0: 0x7fe9dbf5972a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hec1837868ef45960
1: 0x7fe9dc613d22 - core::fmt::write::h8542321c14f36bce
2: 0x7fe9dd9c9291 - std::io::Write::write_fmt::hf2a72a9478b52b86
3: 0x7fe9dbf59582 - std::sys::backtrace::BacktraceLock::print::h54b78d015c817d7f
4: 0x7fe9dbf5ba8a - std::panicking::default_hook::{{closure}}::h59fe07b020797497
5: 0x7fe9dbf5b8d3 - std::panicking::default_hook::h95e9258765c75220
6: 0x7fe9db0d8428 - std[8cd9e1f924e20ad9]::panicking::update_hook::<alloc[e7baf234c2c812f]::boxed::Box<rustc_driver_impl[14afabfc160a9240]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7fe9dbf5c248 - std::panicking::rust_panic_with_hook::hc2d98b5446c61d2e
8: 0x7fe9dbf5bf3a - std::panicking::begin_panic_handler::{{closure}}::h835f83c27c39eff3
9: 0x7fe9dbf59bd9 - std::sys::backtrace::__rust_end_short_backtrace::h1c30ff1ce1e39439
10: 0x7fe9dbf5bbfd - rust_begin_unwind
11: 0x7fe9d94976d0 - core::panicking::panic_fmt::h0563cf7c13bf6d1f
12: 0x7fe9da8282e6 - core::panicking::panic_bounds_check::h37a714e361d2a1ae
13: 0x7fe9dcee16f8 - <&rustc_middle[27682da1f3d8f4de]::ty::list::RawList<(), rustc_middle[27682da1f3d8f4de]::ty::generic_args::GenericArg> as rustc_type_ir[b61a683bada45ed3]::fold::TypeFoldable<rustc_middle[27682da1f3d8f4de]::ty::context::TyCtxt>>::try_fold_with::<rustc_middle[27682da1f3d8f4de]::ty::fold::BoundVarReplacer<<rustc_infer[279a1b95418dab6f]::infer::InferCtxt>::instantiate_binder_with_fresh_vars::ToFreshVars>>
14: 0x7fe9dcee2620 - <rustc_middle[27682da1f3d8f4de]::ty::fold::BoundVarReplacer<<rustc_infer[279a1b95418dab6f]::infer::InferCtxt>::instantiate_binder_with_fresh_vars::ToFreshVars> as rustc_type_ir[b61a683bada45ed3]::fold::TypeFolder<rustc_middle[27682da1f3d8f4de]::ty::context::TyCtxt>>::fold_ty
15: 0x7fe9dcee1544 - <&rustc_middle[27682da1f3d8f4de]::ty::list::RawList<(), rustc_middle[27682da1f3d8f4de]::ty::generic_args::GenericArg> as rustc_type_ir[b61a683bada45ed3]::fold::TypeFoldable<rustc_middle[27682da1f3d8f4de]::ty::context::TyCtxt>>::try_fold_with::<rustc_middle[27682da1f3d8f4de]::ty::fold::BoundVarReplacer<<rustc_infer[279a1b95418dab6f]::infer::InferCtxt>::instantiate_binder_with_fresh_vars::ToFreshVars>>
16: 0x7fe9dcee0f75 - <rustc_infer[279a1b95418dab6f]::infer::InferCtxt>::instantiate_binder_with_fresh_vars::<rustc_type_ir[b61a683bada45ed3]::predicate::TraitRef<rustc_middle[27682da1f3d8f4de]::ty::context::TyCtxt>>
17: 0x7fe9d9649431 - <rustc_trait_selection[dbc08e0067937613]::traits::select::SelectionContext>::candidate_from_obligation_no_cache
18: 0x7fe9d96908e4 - <rustc_trait_selection[dbc08e0067937613]::traits::select::SelectionContext>::poly_select::{closure#0}
19: 0x7fe9dcdee4ad - rustc_trait_selection[dbc08e0067937613]::traits::project::opt_normalize_projection_term
20: 0x7fe9dcde9715 - <rustc_trait_selection[dbc08e0067937613]::traits::normalize::AssocTypeNormalizer as rustc_type_ir[b61a683bada45ed3]::fold::TypeFolder<rustc_middle[27682da1f3d8f4de]::ty::context::TyCtxt>>::fold_ty
21: 0x7fe9dcd86e15 - <&rustc_middle[27682da1f3d8f4de]::ty::list::RawList<(), rustc_middle[27682da1f3d8f4de]::ty::generic_args::GenericArg> as rustc_type_ir[b61a683bada45ed3]::fold::TypeFoldable<rustc_middle[27682da1f3d8f4de]::ty::context::TyCtxt>>::try_fold_with::<rustc_trait_selection[dbc08e0067937613]::traits::normalize::AssocTypeNormalizer>
22: 0x7fe9dcd86c5a - <rustc_trait_selection[dbc08e0067937613]::traits::normalize::AssocTypeNormalizer as rustc_type_ir[b61a683bada45ed3]::fold::FallibleTypeFolder<rustc_middle[27682da1f3d8f4de]::ty::context::TyCtxt>>::try_fold_predicate
23: 0x7fe9dd64dd70 - rustc_trait_selection[dbc08e0067937613]::traits::normalize::normalize_with_depth_to::<rustc_middle[27682da1f3d8f4de]::ty::predicate::Clause>::{closure#0}
24: 0x7fe9dd64d953 - <rustc_trait_selection[dbc08e0067937613]::traits::engine::ObligationCtxt<rustc_trait_selection[dbc08e0067937613]::traits::FulfillmentError>>::normalize::<rustc_middle[27682da1f3d8f4de]::ty::predicate::Clause>
25: 0x7fe9dd64d76a - <rustc_hir_analysis[36ad47a662f37d3b]::check::wfcheck::WfCheckingCtxt>::normalize::<rustc_middle[27682da1f3d8f4de]::ty::predicate::Clause>
26: 0x7fe9dcd8da1f - rustc_hir_analysis[36ad47a662f37d3b]::check::wfcheck::check_associated_item
27: 0x7fe9dd0b4784 - rustc_hir_analysis[36ad47a662f37d3b]::check::wfcheck::check_well_formed
28: 0x7fe9dd0b3e75 - rustc_query_impl[4f8f76f3d31f708e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[4f8f76f3d31f708e]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27682da1f3d8f4de]::query::erase::Erased<[u8; 1usize]>>
29: 0x7fe9dd0b3642 - rustc_query_system[8e784080364d5611]::query::plumbing::try_execute_query::<rustc_query_impl[4f8f76f3d31f708e]::DynamicConfig<rustc_data_structures[d2e27b1fdf5c45a]::vec_cache::VecCache<rustc_span[c479824c5b44d1f2]::def_id::LocalDefId, rustc_middle[27682da1f3d8f4de]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[8e784080364d5611]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[4f8f76f3d31f708e]::plumbing::QueryCtxt, false>
30: 0x7fe9dd0b3346 - rustc_query_impl[4f8f76f3d31f708e]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
31: 0x7fe9dd0b062b - rustc_middle[27682da1f3d8f4de]::query::plumbing::query_ensure_error_guaranteed::<rustc_data_structures[d2e27b1fdf5c45a]::vec_cache::VecCache<rustc_span[c479824c5b44d1f2]::def_id::LocalDefId, rustc_middle[27682da1f3d8f4de]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[8e784080364d5611]::dep_graph::graph::DepNodeIndex>, ()>
32: 0x7fe9dd0b0ab8 - rustc_hir_analysis[36ad47a662f37d3b]::check::wfcheck::check_mod_type_wf
33: 0x7fe9dd0b064b - rustc_query_impl[4f8f76f3d31f708e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[4f8f76f3d31f708e]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27682da1f3d8f4de]::query::erase::Erased<[u8; 1usize]>>
34: 0x7fe9dd15be48 - rustc_query_system[8e784080364d5611]::query::plumbing::try_execute_query::<rustc_query_impl[4f8f76f3d31f708e]::DynamicConfig<rustc_query_system[8e784080364d5611]::query::caches::DefaultCache<rustc_span[c479824c5b44d1f2]::def_id::LocalModDefId, rustc_middle[27682da1f3d8f4de]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[4f8f76f3d31f708e]::plumbing::QueryCtxt, false>
35: 0x7fe9dd15bbf0 - rustc_query_impl[4f8f76f3d31f708e]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
36: 0x7fe9dc84e4dc - rustc_hir_analysis[36ad47a662f37d3b]::check_crate
37: 0x7fe9dcf5c9fc - rustc_interface[c5a3bcc8dca6839d]::passes::run_required_analyses
38: 0x7fe9dcf5759e - rustc_interface[c5a3bcc8dca6839d]::passes::analysis
39: 0x7fe9dcf5756f - rustc_query_impl[4f8f76f3d31f708e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[4f8f76f3d31f708e]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[27682da1f3d8f4de]::query::erase::Erased<[u8; 1usize]>>
40: 0x7fe9dd56623a - rustc_query_system[8e784080364d5611]::query::plumbing::try_execute_query::<rustc_query_impl[4f8f76f3d31f708e]::DynamicConfig<rustc_query_system[8e784080364d5611]::query::caches::SingleCache<rustc_middle[27682da1f3d8f4de]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[4f8f76f3d31f708e]::plumbing::QueryCtxt, false>
41: 0x7fe9dd565f0e - rustc_query_impl[4f8f76f3d31f708e]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
42: 0x7fe9dd5fc407 - rustc_interface[c5a3bcc8dca6839d]::interface::run_compiler::<core[5de9dc4edf684167]::result::Result<(), rustc_span[c479824c5b44d1f2]::ErrorGuaranteed>, rustc_driver_impl[14afabfc160a9240]::run_compiler::{closure#0}>::{closure#1}
43: 0x7fe9dd532fe1 - std[8cd9e1f924e20ad9]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[c5a3bcc8dca6839d]::util::run_in_thread_with_globals<rustc_interface[c5a3bcc8dca6839d]::util::run_in_thread_pool_with_globals<rustc_interface[c5a3bcc8dca6839d]::interface::run_compiler<core[5de9dc4edf684167]::result::Result<(), rustc_span[c479824c5b44d1f2]::ErrorGuaranteed>, rustc_driver_impl[14afabfc160a9240]::run_compiler::{closure#0}>::{closure#1}, core[5de9dc4edf684167]::result::Result<(), rustc_span[c479824c5b44d1f2]::ErrorGuaranteed>>::{closure#0}, core[5de9dc4edf684167]::result::Result<(), rustc_span[c479824c5b44d1f2]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5de9dc4edf684167]::result::Result<(), rustc_span[c479824c5b44d1f2]::ErrorGuaranteed>>
44: 0x7fe9dd532c88 - <<std[8cd9e1f924e20ad9]::thread::Builder>::spawn_unchecked_<rustc_interface[c5a3bcc8dca6839d]::util::run_in_thread_with_globals<rustc_interface[c5a3bcc8dca6839d]::util::run_in_thread_pool_with_globals<rustc_interface[c5a3bcc8dca6839d]::interface::run_compiler<core[5de9dc4edf684167]::result::Result<(), rustc_span[c479824c5b44d1f2]::ErrorGuaranteed>, rustc_driver_impl[14afabfc160a9240]::run_compiler::{closure#0}>::{closure#1}, core[5de9dc4edf684167]::result::Result<(), rustc_span[c479824c5b44d1f2]::ErrorGuaranteed>>::{closure#0}, core[5de9dc4edf684167]::result::Result<(), rustc_span[c479824c5b44d1f2]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5de9dc4edf684167]::result::Result<(), rustc_span[c479824c5b44d1f2]::ErrorGuaranteed>>::{closure#1} as core[5de9dc4edf684167]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
45: 0x7fe9dd5323bb - std::sys::pal::unix::thread::Thread::new::thread_start::h54530dbf71bba30b
46: 0x7fe9dedc439d - <unknown>
47: 0x7fe9dee4949c - <unknown>
48: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.85.0-nightly (d6f88291f 2024-11-29) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [check_well_formed] checking that `IntFactory::stream` is well-formed
#1 [check_mod_type_wf] checking that types are well-formed in top-level module
end of query stack
error: aborting due to 4 previous errors
Some errors have detailed explanations: E0412, E0658.
For more information about an error, try `rustc --explain E0412`.
```
</p>
</details>
<!--
query stack:
#0 [check_well_formed] checking that `IntFactory::stream` is well-formed
#1 [check_mod_type_wf] checking that types are well-formed in top-level module
-->
| I-ICE,T-compiler,C-bug,S-bug-has-test | low | Critical |
2,704,168,869 | storybook | [Bug]: Including stories in documentation pages isn't possible | ### Describe the bug
I want to create a documentation page showcasing several interface components together. This page isn't bound to the stories of another component, though. So I followed the [documentation on MDX](https://storybook.js.org/docs/writing-docs/mdx#writing-unattached-documentation) and created the following MDX file next to the relevant stories:
```mdx
import { Meta, Story } from '@storybook/blocks';
import * as TextFieldStories from '@/components/Interface/Form/TextField/TextField.stories';
<Meta title="Widgets/Interactive/Forms" />
Forms
=====
To collect user input, you can use a wide range of form fields, including text fields, checkboxes, radio buttons, and more.
<Story of={TextFieldStories.Basic} meta={TextFieldStories} />
```
This, however, yields the following error:
<details>
<summary>
<code>Error: Unexpected `of={undefined}`, did you mistype a CSF file reference?</code>
</summary>
```
at gn.referenceMeta (http://localhost:6006/sb-preview/runtime.js:6379:13)
at getStoryId2 (http://localhost:6006/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-Q4YTDD5O.js?v=78d66922:2869:26)
at Story2 (http://localhost:6006/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-Q4YTDD5O.js?v=78d66922:2882:70)
at renderWithHooks (http://localhost:6006/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-ALAXW4XP.js?v=78d66922:11548:26)
at mountIndeterminateComponent (http://localhost:6006/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-ALAXW4XP.js?v=78d66922:14926:21)
at beginWork (http://localhost:6006/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-ALAXW4XP.js?v=78d66922:15914:22)
at beginWork$1 (http://localhost:6006/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-ALAXW4XP.js?v=78d66922:19753:22)
at performUnitOfWork (http://localhost:6006/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-ALAXW4XP.js?v=78d66922:19201:20)
at workLoopSync (http://localhost:6006/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-ALAXW4XP.js?v=78d66922:19137:13)
at renderRootSync (http://localhost:6006/node_modules/.cache/storybook/1c3385a5d25e538d10b518b310c74d3ca2690b6aaffeadccd74da79736171f86/sb-vite/deps/chunk-ALAXW4XP.js?v=78d66922:19116:15)
```
</details>
I experimented with different import variants and made sure the project is configured correctly; I then discovered the page *did* render when I included CSF meta from a story at the top:
```tsx
<Meta of={TextFieldStories} />
```
That is not what I want, though. It seems it's possible to have either standalone MDX pages that can use some React components (but none of the Story-related ones), or story-attached MDX pages—but not both.
If that observation is correct, the documentation should make it clear. Right now, there's nothing suggesting this limitation.
### Reproduction link
https://stackblitz.com/edit/github-x4jdox?file=src%2Fstories%2FComponents.mdx
### Reproduction steps
1. Go to above link
2. Witness error
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.0
CPU: (12) arm64 Apple M3 Pro
Shell: 3.7.1 - /opt/homebrew/bin/fish
Binaries:
Node: 23.3.0 - /opt/homebrew/bin/node
Yarn: 1.22.18 - /usr/local/bin/yarn
npm: 10.9.0 - /opt/homebrew/bin/npm <----- active
Browsers:
Chrome: 131.0.6778.86
Safari: 18.0
npmPackages:
@storybook/addon-a11y: ^8.4.5 => 8.4.5
@storybook/addon-designs: ^8.0.4 => 8.0.4
@storybook/addon-essentials: ^8.4.4 => 8.4.5
@storybook/addon-interactions: ^8.4.4 => 8.4.5
@storybook/addon-links: ^8.4.5 => 8.4.5
@storybook/addon-themes: ^8.4.4 => 8.4.5
@storybook/blocks: ^8.4.4 => 8.4.5
@storybook/test: ^8.4.4 => 8.4.5
@storybook/vue3: ^8.4.4 => 8.4.5
@storybook/vue3-vite: ^8.4.4 => 8.4.5
eslint-plugin-storybook: ^0.11.1 => 0.11.1
msw-storybook-addon: ^2.0.4 => 2.0.4
storybook: ^8.4.4 => 8.4.5
storybook-dark-mode: ^4.0.2 => 4.0.2
```
### Additional context
_No response_ | bug,mdx | low | Critical |
2,704,180,476 | ollama | goroutine 7 [running] | ### What is the issue?
Hello.
I am testing with Ollama. Both the embedding model and the LLM are loaded on the GPU; as soon as a question comes in, the embedding model and then the LLM run in sequence.
Since we built a web server with Streamlit, we have to handle many question inputs. To process a large number of concurrent requests quickly, we started the Ollama server with extra options: `CUDA_VISIBLE_DEVICES=1 OLLAMA_GPU_OVERHEAD=500000000 OLLAMA_NUM_PARALLEL=11 OLLAMA_KEEP_ALIVE=-1 ollama serve`
With the Streamlit app running, questions entered through the web server are answered fine. However, at an unpredictable point, Ollama prints the messages shown below.
The first picture is from Ollama 0.4.2, and the second is from Ollama 0.4.6.
On 0.4.2, only the request that hit the error goes unanswered, and the next question is handled again. On 0.4.6, however, after the error message the LLM is deallocated from GPU memory and the Ollama server stops as well. For reference, 0.4.3 through 0.4.5 all behave the same as 0.4.6.


### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.6, 0.4.2 | bug,needs more info | low | Critical |
2,704,247,832 | PowerToys | Dynamically resize window based on current focus | ### Description of the new feature / enhancement
I often use two browser windows side-by-side on a widescreen monitor. I would like to see a feature that lets me define a resizing behavior for the window currently in focus. The adjoining window would shrink accordingly.
Settings To Consider:
- Scaling Direction to scale (up, down, left, right)
- Scaling Length (percentage? points?)
- Activation / Trigger (focus or mouse hover)
- Transition Speed
- Apply to Adjoining (yes/no)
- Apply to All (affecting all windows on the screen)
Note: Multiple profiles might be required.
How To Enable:
Enabled via keyboard shortcut.
Example:
I have 2x windows grouped together side-by-side. I activate the feature using a shortcut. The window currently in focus becomes the "master" and scales outward to 66% screen width (based on my settings). Adjoining windows scale inward at the adjoining edge, keeping the full window in view; any other edges remain static. When the adjoining window comes into focus, it reverts to its original size, or scales out to 66% screen width if the Apply to Adjoining setting is enabled.
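The width arithmetic in the example above can be sketched as follows. This is illustrative only: the 66% figure and the two-window layout come from the example, while the function shape and rounding choice are assumptions:

```python
def focus_resize(screen_width: int, master_ratio: float = 0.66):
    """Split a two-window side-by-side group: the focused ("master") window
    scales out to master_ratio of the screen, and the adjoining window
    shrinks at the shared edge so both stay fully visible."""
    master = round(screen_width * master_ratio)
    return master, screen_width - master

# 2560 px widescreen: focused window takes ~66%, the neighbor keeps the rest.
print(focus_resize(2560))  # (1690, 870)
```

When focus moves to the other window, the same function applied from the other side gives the "Apply to Adjoining" behavior; with it disabled, the pair simply reverts to the original split.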
### Scenario when this would be used?
Example Scenarios:
- Side-by-side browser windows that alternate in width to give a wider viewport
- Tiled shells / terminals that auto-expand on focus
- Word documents kept open side-by-side to transfer content or because you have multiple projects and keep them all on a specific virtual desktop. (auto-expanding would display more ribbon options!)
### Supporting information
No additional info available. | Idea-New PowerToy,Product-Window Manager,Needs-Triage | low | Minor |
2,704,262,275 | kubernetes | informer.AddEventHandler: handle.HasSynced always returns false after panic | ### What happened?
I was testing something roughly like this:
```golang
informer := NewSharedInformer(source, &v1.Pod{}, 1*time.Second)
go informer.RunWithContext(ctx)
require.Eventually(t, informer.HasSynced, time.Minute, time.Millisecond, "informer has synced")
handler := ResourceEventHandlerFuncs{
AddFunc: func(obj any) {
panic("fake panic")
},
}
handle, err := informer.AddEventHandlerWithContext(ctx, handler, HandlerOptions{})
require.NoError(t, err)
require.Eventually(t, handle.HasSynced, time.Minute, time.Millisecond, "handler has synced")
```
This times out waiting for the handler to sync.
### What did you expect to happen?
`handle.HasSynced` = `ResourceEventHandlerRegistration.HasSynced` should return true eventually.
### How can we reproduce it (as minimally and precisely as possible)?
I'll have a full reproducer in one of my PRs soon.
### Anything else we need to know?
/sig api-machinery
### Kubernetes version
master ~= 1.32
| kind/bug,sig/api-machinery,triage/accepted | low | Critical |
2,704,286,524 | vscode | SCM Graph - Add horizontal scrollbar for source control graph | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Add horizontal scrollbar to source control graph in the source control panel. It should probably appear automatically if source control graph content doesn't fit into window, like it does in text editors. `shift+scroll` for horizontal scrolling would also be nice. I checked out source control settings but couldn't find any mention of horizontal scrolling. I also haven't found a similar feature request.
Maybe it won't be useful 99% of the time, but once in a while you get assigned to one of *those* projects...
 | bug,ux,scm | low | Minor |
2,704,287,776 | rust | dash: `x.py install` removes rustc and cargo, without installing them | With https://static.rust-lang.org/dist/rustc-1.83.0-src.tar.xz I create this config.toml
```
# Use different pre-set defaults than the global defaults.
#
# See `src/bootstrap/defaults` for more information.
# Note that this has no default value (x.py uses the defaults in `config.example.toml`).
profile = 'dist'
[llvm]
# Indicates whether the LLVM build is a Release or Debug build
optimize = true
# Whether to build LLVM as a dynamically linked library (as opposed to statically linked).
# Under the hood, this passes `--shared` to llvm-config.
# NOTE: To avoid performing LTO multiple times, we suggest setting this to `true` when `thin-lto` is enabled.
link-shared = true
[build]
# Which triples to build libraries (core/alloc/std/test/proc_macro) for. Each of these triples will
# be bootstrapped from the build triple themselves. In other words, this is the list of triples for
# which to build a library that can CROSS-COMPILE to that triple.
#
# Defaults to `host`. If you set this explicitly, you likely want to add all
# host triples to this list as well in order for those host toolchains to be
# able to compile programs for their native target.
target = ['x86_64-unknown-linux-gnu']
# Instead of downloading the src/stage0 version of Cargo specified, use
# this Cargo binary instead to build all Rust code
# If you set this, you likely want to set `rustc` as well.
cargo = '/usr/local/bin/cargo'
# Instead of downloading the src/stage0 version of the compiler
# specified, use this rustc binary instead as the stage0 snapshot compiler.
# If you set this, you likely want to set `cargo` as well.
rustc = '/usr/local/bin/rustc'
# Whether to build documentation by default. If false, rustdoc and
# friends will still be compiled but they will not be used to generate any
# documentation.
#
# You can still build documentation when this is disabled by explicitly passing paths,
# e.g. `x doc library`.
docs = false
# Enable a build of the extended Rust tool set which is not only the compiler
# but also tools such as Cargo. This will also produce "combined installers"
# which are used to install Rust and Cargo together.
# The `tools` (check `config.example.toml` to see its default value) option specifies
# which tools should be built if `extended = true`.
#
# This is disabled by default.
extended = true
# Build the sanitizer runtimes
sanitizers = true
# Build the profiler runtime (required when compiling with options that depend
# on this runtime, such as `-C profile-generate` or `-C instrument-coverage`).
profiler = true
# Arguments passed to the `./configure` script, used during distcheck. You
# probably won't fill this in but rather it's filled in by the `./configure`
# script. Useful for debugging.
configure-args = ['--enable-optimize-llvm', '--enable-extended', '--llvm-root=/usr/local', '--enable-profiler', '--enable-llvm-link-shared', '--enable-sanitizers', '--enable-local-rust', '--disable-docs', '--target=x86_64-unknown-linux-gnu']
[install]
[rust]
[target.x86_64-unknown-linux-gnu]
# Path to the `llvm-config` binary of the installation of a custom LLVM to link
# against. Note that if this is specified we don't compile LLVM at all for this
# target.
llvm-config = '/usr/local/bin/llvm-config'
[dist]
```
by running `../configure --enable-optimize-llvm --enable-extended --llvm-root=/usr/local --enable-profiler --enable-llvm-link-shared --enable-sanitizers --enable-local-rust --disable-docs --target=x86_64-unknown-linux-gnu`. Then running `x.py -j1 build && x.py -j1 install` prints
```
…
Compiling proc-macro-srv-cli v0.0.0 (/src/rustc-1.83.0-src/src/tools/rust-analyzer/crates/proc-macro-srv-cli)
Finished `release` profile [optimized] target(s) in 4m 39s
Dist rustc-nightly-x86_64-unknown-linux-gnu
finished in 32.652 seconds
Installing stage2 rustc (x86_64-unknown-linux-gnu)
install: uninstalling component 'rustc'
install: creating uninstall script at /usr/local/lib/rustlib/uninstall.sh
install: installing component 'rustc'
rustc installed.
Uplifting rustc (stage1 -> stage3)
Building tool cargo (stage2 -> stage3, x86_64-unknown-linux-gnu)
Finished `release` profile [optimized] target(s) in 33.90s
Dist cargo-nightly-x86_64-unknown-linux-gnu
finished in 19.542 seconds
Installing stage2 cargo (x86_64-unknown-linux-gnu)
install: uninstalling component 'cargo'
install: creating uninstall script at /usr/local/lib/rustlib/uninstall.sh
install: installing component 'cargo'
cargo installed.
Building tool rust-analyzer (stage2 -> stage3, x86_64-unknown-linux-gnu)
thread 'main' panicked at src/core/build_steps/compile.rs:2239:19:
failed to execute command: cd "/src/rustc-1.83.0-src" && env -u MAKEFLAGS -u MFLAGS AR_x86_64_unknown_linux_gnu="ar" CARGO_INCREMENTAL="0" CARGO_PROFILE_RELEASE_DEBUG="0" CARGO_PROFILE_RELEASE_DEBUG_ASSERTIONS="false" CARGO_PROFILE_RELEASE_OVERFLOW_CHECKS="false" CARGO_PROFILE_RELEASE_STRIP="false" CARGO_TARGET_DIR="/src/rustc-1.83.0-src/A/build/x86_64-unknown-linux-gnu/stage2-tools" CC_x86_64_unknown_linux_gnu="cc" CFG_COMPILER_BUILD_TRIPLE="x86_64-unknown-linux-gnu" CFG_COMPILER_HOST_TRIPLE="x86_64-unknown-linux-gnu" CFG_RELEASE="1.83.0-nightly" CFG_RELEASE_CHANNEL="nightly" CFG_RELEASE_NUM="1.83.0" CFG_VERSION="1.83.0-nightly (90b35a623 2024-11-26) (built from a source tarball)" CFG_VER_DATE="2024-11-26" CFG_VER_HASH="90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf" CFLAGS_x86_64_unknown_linux_gnu="-ffunction-sections -fdata-sections -fPIC -m64" CXXFLAGS_x86_64_unknown_linux_gnu="-ffunction-sections -fdata-sections -fPIC -m64" CXX_x86_64_unknown_linux_gnu="c++" DOC_RUST_LANG_ORG_CHANNEL="https://doc.rust-lang.org/nightly" FORCE_ON_BROKEN_PIPE_KILL="-Zon-broken-pipe=kill" LIBC_CHECK_CFG="1" LIBRARY_PATH="/usr/local/lib" LZMA_API_STATIC="1" RANLIB_x86_64_unknown_linux_gnu="ar s" REAL_LIBRARY_PATH_VAR="LD_LIBRARY_PATH" RUSTBUILD_NATIVE_DIR="/src/rustc-1.83.0-src/A/build/x86_64-unknown-linux-gnu/native" RUSTC="/src/rustc-1.83.0-src/A/build/bootstrap/debug/rustc" RUSTC_ALLOW_FEATURES="rustc_private,proc_macro_internals,proc_macro_diagnostic,proc_macro_span,proc_macro_span_shrink,proc_macro_def_site" RUSTC_BOOTSTRAP="1" RUSTC_BREAK_ON_ICE="1" RUSTC_ERROR_METADATA_DST="/src/rustc-1.83.0-src/A/build/tmp/extended-error-metadata" RUSTC_HOST_FLAGS="-Zunstable-options --check-cfg=cfg(bootstrap)" RUSTC_INSTALL_BINDIR="bin" RUSTC_LIBDIR="/src/rustc-1.83.0-src/A/build/x86_64-unknown-linux-gnu/stage2/lib" RUSTC_LINK_STD_INTO_RUSTC_DRIVER="1" RUSTC_LINT_FLAGS="-Wrust_2018_idioms -Wunused_lifetimes -Dwarnings" 
RUSTC_REAL="/src/rustc-1.83.0-src/A/build/x86_64-unknown-linux-gnu/stage2/bin/rustc" RUSTC_SNAPSHOT="/src/rustc-1.83.0-src/A/build/x86_64-unknown-linux-gnu/stage2/bin/rustc" RUSTC_SNAPSHOT_LIBDIR="/src/rustc-1.83.0-src/A/build/x86_64-unknown-linux-gnu/stage2/lib" RUSTC_STAGE="2" RUSTC_SYSROOT="/src/rustc-1.83.0-src/A/build/x86_64-unknown-linux-gnu/stage2" RUSTC_TLS_MODEL_INITIAL_EXEC="1" RUSTC_VERBOSE="0" RUSTC_WRAPPER="/src/rustc-1.83.0-src/A/build/bootstrap/debug/rustc" RUSTDOC="/src/rustc-1.83.0-src/A/build/bootstrap/debug/rustdoc" RUSTDOCFLAGS="-Z threads=1 --cfg=windows_raw_dylib -Csymbol-mangling-version=v0 -Zunstable-options --check-cfg=cfg(bootstrap) --check-cfg=cfg(llvm_enzyme) --check-cfg=cfg(parallel_compiler) --check-cfg=cfg(rust_analyzer) -Dwarnings -Wrustdoc::invalid_codeblock_attributes --crate-version 1.83.0-nightly\t(90b35a623\t2024-11-26)\t(built\tfrom\ta\tsource\ttarball) --cfg=parallel_compiler" RUSTDOC_REAL="/path/to/nowhere/rustdoc/not/required" RUSTFLAGS="-Z threads=1 --cfg=windows_raw_dylib -Csymbol-mangling-version=v0 -Zunstable-options --check-cfg=cfg(bootstrap) --check-cfg=cfg(llvm_enzyme) --check-cfg=cfg(parallel_compiler) --check-cfg=cfg(rust_analyzer) -Zmacro-backtrace -Csplit-debuginfo=off --cfg=parallel_compiler -Clink-args=-Wl,-z,origin -Clink-args=-Wl,-rpath,$ORIGIN/../lib -Zunstable-options" RUST_TEST_THREADS="1" SYSROOT="/src/rustc-1.83.0-src/A/build/x86_64-unknown-linux-gnu/stage2" __CARGO_DEFAULT_LIB_METADATA="nightlytool-rustc" "/usr/local/bin/cargo" "build" "--target" "x86_64-unknown-linux-gnu" "--release" "-Zbinary-dep-depinfo" "-j" "1" "--manifest-path" "/src/rustc-1.83.0-src/src/tools/rust-analyzer/Cargo.toml" "--features" "in-rust-tree" "--message-format" "json-render-diagnostics"
ERROR: No such file or directory (os error 2)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Build completed unsuccessfully in 0:18:26
```
At this point rustc and cargo are no longer installed under /usr/local/bin. The same upgrade procedure worked for 1.82, 1.81, and 1.80, so this is a regression. | T-bootstrap,C-bug,regression-untriaged | low | Critical |
2,704,295,805 | ant-design | Form.Item: the additionalDom newly added to the error message is not passed as a parameter to _internalItemRender, causing error messages in pro-components to jump | ### Reproduction link
[](https://codesandbox.io/p/sandbox/objective-aj-dhwfyt)
### Steps to reproduce
Clear the data in the first input
### What is expected?
The additionalDom variable could be passed into _internalItemRender so that pro-components can use it directly
### What is actually happening?
The error message jumps
| Environment | Info |
| --- | --- |
| antd | 5.22.2 |
| React | 18.2.0 |
| System | windows |
| Browser | Chrome 113.0.5672.127 (Official Build) (64-bit) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Major |
2,704,314,172 | vscode | VSCode freezing during search | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
Version: 1.95.3 (user setup)
Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813
Date: 2024-11-13T14:50:04.152Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.19045
Steps to Reproduce:
Scenario 1:

Scenario 2:

The path in files.includes can be changed to an absolute path.

1. Open the vscode source code project.
2. Open the settings.json file.
3. Search`\((.*\n.*)*\);`and select the regular button.
4. The vscode freezes and a dialog is displayed.

| bug,search | low | Critical |
2,704,326,524 | ollama | Add a CORS permissions model into the Ollama UI ("Allow example.com to use Ollama? [Yes] [No]") | Lots of AI apps out there solve access to LLM in a few different ways:
- Directly use a hosted model and foot the bill for the user
- Ask the user to provide their own hosted model API key (😬)
- Let the user host the app themselves, providing the API key this way
- Connect with a local model provider like Ollama, but this has several issues today[^1]
I think Ollama is in wide enough circulation that it could create a permissions standard around local model access from the browser. An initial draft of this could be very simple:
The first time a request comes in with an `Origin` value that's never been seen before, hold the request and ask the user with a system notification: "Allow example.com to use Ollama?" If the user chooses to allow, the domain gets added to an allow list, which is used to send a valid CORS header to the incoming request. If the user chooses to deny, add the domain to a deny list which just means the CORS header will not be sent. If the user makes no choice, then time out the request and ask again next time.
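A minimal sketch of that decision flow (all names are hypothetical; the notification UI and request-holding plumbing are elided):

```python
# Hypothetical sketch of the per-origin permission model described above.
ALLOW, DENY, UNDECIDED = "allow", "deny", "undecided"

class OriginGate:
    def __init__(self, env_origins=()):
        # Origins pre-approved via OLLAMA_ORIGINS; "*" is deliberately
        # not honored by this gate.
        self.allowed = set(env_origins)
        self.denied = set()

    def check(self, origin, ask_user):
        """Decide whether to emit a permissive CORS header for `origin`.

        `ask_user` stands in for the system notification; it returns
        True (allow), False (deny), or None (no choice before timeout).
        """
        if origin in self.allowed:
            return ALLOW
        if origin in self.denied:
            return DENY
        choice = ask_user(origin)
        if choice is True:
            self.allowed.add(origin)
            return ALLOW
        if choice is False:
            self.denied.add(origin)
            return DENY
        return UNDECIDED  # time out the request and ask again next time
```

On ALLOW the server would send back `Access-Control-Allow-Origin: <origin>`; on DENY it simply omits the header, so the browser blocks the response.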
This can still be combined with the existing `OLLAMA_ORIGINS` setting so if something is in there it's automatically allowed (except for `*`).
[^1]: The first issue is that the user must run terminal commands to enable CORS. The second one is that unless they use `*` then turning on access for one app will remove access for another (unless they know how to read and combine the list of domains). And finally the third one is that the user will be lazy and pick `*` and now any site in the world can use their local model. | feature request | low | Minor |
2,704,399,908 | tauri | [feat] feature request: allow config open new window (target=blank) behavior when build webview window | ### Describe the problem
Currently, the Tauri application only supports configuring the behavior of opening new windows (target="_blank") using the shell plugin, which defaults to opening new windows in the default browser. It would be highly beneficial to have additional configuration options that allow developers to:
- Open new windows using a Tauri window.
- Open new windows within the current window.
To achieve this currently, we need to inject JavaScript and monitor click events. This approach is not very stable and may cause unexpected behavior by polluting the front-end page
**Rationale**
Providing these configuration options would enhance the flexibility and usability of Tauri applications, allowing developers to better control the user experience and behavior of their app. (some security software will prevent pop up default browser)
Related discussion:
https://github.com/tauri-apps/tauri/discussions/11809
https://github.com/tauri-apps/tauri/issues/1657
### Describe the solution you'd like
Add a built-in configuration option when building a webview window that lets developers choose how target="_blank" links are handled: open in the default browser (current behavior), open in a new Tauri window, or navigate within the current window — without having to inject JavaScript and intercept click events.
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,704,458,205 | kubernetes | Automatic `fsGroup` handling | ### What would you like to be added?
> [!NOTE]
> The template for feature requests says: _"Feature requests are unlikely to make progress as issues. Please consider engaging with SIGs on slack and mailing lists, instead."_
> I did start a [thread on Slack](https://kubernetes.slack.com/archives/C0BP8PW9G/p1732711766035909) and was asked to create an issue instead.
A feature to allow "automatic" handling of `fsGroup`. Currently users have to pick a number while the actual number often will not matter much. I'd like a feature asking Kubernetes to assign a `fsGroup` automatically if none has been specified explicitly or e.g. via a webhook/controller/etc.
This is part feature request part request for help in figuring out best practices, so I'd also be happy to just have a good discussion.
When researching this issue I would have _loved_ to find this exact issue to save me a lot of time.
And I do understand (more on that later) that this is partially due to OpenShift specifics and therefore not relevant to _this_ project.
/sig node
### Why is this needed?
We need to mount a few ephemeral volumes into a pod/container.
These mounts are accessible only to root, meaning the container user (non-root) cannot access them without setting `securityContext.fsGroup`, which is what we currently do in our operators (set a random `fsGroup` that is) but this has problems.
### Background
* Vanilla Kubernetes requires an `fsGroup` for non-root container users to access these volumes.
* OpenShift provides `SecurityContextConstraints` (SCC), which can automatically assign a valid fsGroup from a specified range if not explicitly set (depending on how exactly they are configured)
* This helps us avoid specifying an fsGroup directly, which can simplify configuration
* However, SCCs complicate things as they’re not available in vanilla Kubernetes
* SCCs can put constraints on the allowed `fsGroup` values
#### Summary
* `Vanilla Kubernetes` requires an explicit fsGroup
* If not set: Containers fail to access volumes
* `OpenShift` behavior will depend on the exact `SCC` active (it's not possible to find out upfront which is):
* It might fail if a `fsGroup` is set and the group is out of the specified range -> Better solution here is to set no `fsGroup` so OpenShift can assign one automatically
* It might fail if no `fsGroup` is set and OpenShift is set up to not assign one automatically
This leaves us with no good option.
### Current Proposed Solution/Idea
> [!NOTE]
> This came out of a [longer discussion](https://kubernetes.slack.com/archives/CAW0GV7A5/p1730929137603739) in #kubernetes-operator on Slack.
* Detect OpenShift (e.g., by querying the SCC API)
* If OpenShift:
* Set a special annotation requiring the `restricted-v2` SCC, allowing automatic fsGroup assignment.
* Don’t set securityContext.fsGroup.
* Else (vanilla Kubernetes):
* Hardcode fsGroup.
One could argue that this is an OpenShift-specific problem that we’ve addressed.
However, I wonder if there’s value in a generalized feature allowing for an automatically assigned fsGroup—without having to specify one manually—when the specific integer id to use is non-essential.
### Suggestion
* Leave the current functionality of `securityContext.fsGroup` as it is:
* If set to an integer use that
* If not set, don't set `fsGroup` at all
* Add a new value `auto` that will automatically pick a `fsGroup` value if none has been set
* This is just _one idea_, this could be implemented multiple ways and I do understand that picking the "wrong" group Id might also have security implications | sig/node,kind/feature,needs-triage | low | Minor |
2,704,564,316 | tailwindcss | V4 Minification Issue with ::after | In v4 beta 1
I had this...
```CSS
.btn-menu::after {
@apply absolute -bottom-1.5 block h-0.5 w-0 bg-neutral-200 dark:bg-neutral-700 transition-all content-[''];
}
```
When I compile with `npm run build` using Laravel and Vite...
```
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
export default defineConfig({
plugins: [
laravel({
input: [
'resources/css/styles.css',
'resources/js/custom.js',
'resources/css/admin.css',
'resources/js/admin.js'],
refresh: true,
}),
],
});
```
I get this error:
```
vite v5.4.9 building for production...
✓ 27 modules transformed.
rendering chunks (1)...warnings when minifying css:
▲ [WARNING] Unexpected ")" [css-syntax-error]
<stdin>:2:62793:
2 │ ...ion:absolute}.btn-menu:after:is(){background-color:var(--color-n...
╵ ^
```
This was fine with v3, any idea what I'm doing wrong? | needs reproduction,v4 | low | Critical |
2,704,639,559 | rust | Missed optimization in sum of remainders | I tried these codes:
```rust
const N: usize = 100000;
#[unsafe(no_mangle)]
fn foo0(n: usize) -> usize {
// assert!(n > 0);
if n == 0 {
return n;
}
let s = (0..N).map(|i| i % n).sum();
s
}
```
```rust
const N: usize = 100000;
#[unsafe(no_mangle)]
fn foo1(n: usize) -> usize {
// assert!(n > 0);
if n == 0 {
return n;
}
let mut s = 0;
let (q, r) = (N / n, N % n);
if q > 0 {
s += q * (n - 1) * n / 2;
}
if r > 0 {
s += (r - 1) * r / 2;
}
s
}
```
I expected the loop in `foo0` to be optimized away to a result similar to `foo1`. | A-LLVM,T-compiler,C-optimization | low | Minor |
2,704,679,415 | langchain | Milvus upsert doesnt retain previous id/pk | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
from langchain_milvus import Milvus
URI = "http://localhost:19530"
vector_store = Milvus(
embedding_function=embeddings,
connection_args={"uri": URI},
collection_name="langchain_example",
auto_id = True,
drop_old = True
)
from langchain_core.documents import Document
document_1 = Document(
page_content="I had chocalate chip pancakes and scrambled eggs for breakfast this morning.",
metadata={"source": "tweet"},
)
document_2 = Document(
page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.",
metadata={"source": "news"},
)
document_3 = Document(
page_content="Building an exciting new project with LangChain - come check it out!",
metadata={"source": "tweet"},
)
document_4 = Document(
page_content="Robbers broke into the city bank and stole $1 million in cash.",
metadata={"source": "news"},
)
document_5 = Document(
page_content="Wow! That was an amazing movie. I can't wait to see it again.",
metadata={"source": "tweet"},
)
document_6 = Document(
page_content="Is the new iPhone worth the price? Read this review to find out.",
metadata={"source": "website"},
)
document_7 = Document(
page_content="The top 10 soccer players in the world right now.",
metadata={"source": "website"},
)
document_8 = Document(
page_content="LangGraph is the best framework for building stateful, agentic applications!",
metadata={"source": "tweet"},
)
document_9 = Document(
page_content="The stock market is down 500 points today due to fears of a recession.",
metadata={"source": "news"},
)
document_10 = Document(
page_content="I have a bad feeling I am going to get deleted :(",
metadata={"source": "tweet"},
)
documents = [
document_1,
document_2,
document_3,
document_4,
document_5,
document_6,
document_7,
document_8,
document_9,
document_10,
]
vector_store.add_documents(documents=documents)
results = vector_store.similarity_search(
"LangChain provides abstractions to make working with LLMs easy",
k=2,
)
id = results[0].metadata['pk']
vector_store.upsert(ids=[id],documents=[Document(
page_content="Building an exciting new project with LangChain - come check it out!",
metadata={"source": "samsung"},
)])
print("updated list")
results = vector_store.similarity_search(
"LangChain provides abstractions to make working with LLMs easy",
k=3,
)
print("modified: ")
pprint(results)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use langchain-milvus to update the metadata in my chunks. Currently I'm able to replace the metadata and content, but doing so generates a new pk/id. Please add another function argument that lets us keep the same id as the original chunk. (I know it's stupid but my boss wants this, don't ask me)
so something like this:
```python
def upsert( # type: ignore
self,
ids: Optional[List[str]] = None,
documents: List[Document] | None = None,
use_original_ids: bool = False, # something like this
**kwargs: Any,
) -> List[str] | None:
"""Update/Insert documents to the vectorstore.
Args:
ids: IDs to update - Let's call get_pks to get ids with expression \n
documents (List[Document]): Documents to add to the vectorstore.
Returns:
List[str]: IDs of the added texts.
"""
if documents is None or len(documents) == 0:
logger.debug("No documents to upsert.")
return None
if ids is not None and len(ids):
try:
self.delete(ids=ids)
except MilvusException:
pass
try:
if use_original_ids: # something like this
return self.add_documents(documents=documents, ids=ids, **kwargs)
else:
return self.add_documents(documents=documents, **kwargs)
except MilvusException as exc:
logger.error(
"Failed to upsert entities: %s error: %s", self.collection_name, exc
)
raise exc
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #21~22.04.1-Ubuntu SMP Thu Nov 7 17:33:30 UTC 2024
> Python Version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.7
> langchain_community: 0.2.3
> langsmith: 0.1.129
> langchain_milvus: 0.1.6
> langchain_openai: 0.2.1
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.5
> aiosqlite: 0.20.0
> aleph-alpha-client: Installed. No version info available.
> anthropic: Installed. No version info available.
> arxiv: Installed. No version info available.
> assemblyai: Installed. No version info available.
> async-timeout: 4.0.3
> atlassian-python-api: Installed. No version info available.
> azure-ai-documentintelligence: Installed. No version info available.
> azure-identity: Installed. No version info available.
> azure-search-documents: Installed. No version info available.
> beautifulsoup4: 4.12.3
> bibtexparser: Installed. No version info available.
> cassio: Installed. No version info available.
> chardet: 4.0.0
> cloudpathlib: Installed. No version info available.
> cloudpickle: 3.0.0
> cohere: Installed. No version info available.
> databricks-vectorsearch: Installed. No version info available.
> dataclasses-json: 0.6.6
> datasets: Installed. No version info available.
> dgml-utils: Installed. No version info available.
> elasticsearch: 8.14.0
> esprima: Installed. No version info available.
> faiss-cpu: Installed. No version info available.
> feedparser: Installed. No version info available.
> fireworks-ai: Installed. No version info available.
> friendli-client: Installed. No version info available.
> geopandas: Installed. No version info available.
> gitpython: Installed. No version info available.
> google-cloud-documentai: Installed. No version info available.
> gql: Installed. No version info available.
> gradientai: Installed. No version info available.
> hdbcli: Installed. No version info available.
> hologres-vector: Installed. No version info available.
> html2text: Installed. No version info available.
> httpx: 0.27.0
> httpx-sse: Installed. No version info available.
> javelin-sdk: Installed. No version info available.
> jinja2: 3.1.4
> jq: Installed. No version info available.
> jsonpatch: 1.33
> jsonschema: 4.21.1
> lxml: 5.2.2
> markdownify: Installed. No version info available.
> motor: Installed. No version info available.
> msal: Installed. No version info available.
> mwparserfromhell: Installed. No version info available.
> mwxml: Installed. No version info available.
> newspaper3k: Installed. No version info available.
> numexpr: Installed. No version info available.
> numpy: 2.0.0
> nvidia-riva-client: Installed. No version info available.
> oci: Installed. No version info available.
> openai: 1.52.2
> openapi-pydantic: Installed. No version info available.
> oracle-ads: Installed. No version info available.
> oracledb: Installed. No version info available.
> orjson: 3.10.3
> packaging: 23.2
> pandas: 2.2.2
> pdfminer-six: 20231228
> pgvector: Installed. No version info available.
> praw: Installed. No version info available.
> premai: Installed. No version info available.
> psychicapi: Installed. No version info available.
> py-trello: Installed. No version info available.
> pydantic: 2.9.2
> pyjwt: 2.3.0
> pymilvus: 2.4.3
> pymupdf: Installed. No version info available.
> pypdf: 4.3.0
> pypdfium2: 4.30.0
> pyspark: Installed. No version info available.
> PyYAML: 6.0.1
> rank-bm25: Installed. No version info available.
> rapidfuzz: 3.9.4
> rapidocr-onnxruntime: Installed. No version info available.
> rdflib: Installed. No version info available.
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rspace_client: Installed. No version info available.
> scikit-learn: 1.5.0
> simsimd: Installed. No version info available.
> SQLAlchemy: 2.0.30
> sqlite-vss: Installed. No version info available.
> streamlit: Installed. No version info available.
> sympy: 1.13.0
> telethon: Installed. No version info available.
> tenacity: 8.3.0
> tidb-vector: Installed. No version info available.
> tiktoken: 0.7.0
> timescale-vector: Installed. No version info available.
> tqdm: 4.66.4
> tree-sitter: Installed. No version info available.
> tree-sitter-languages: Installed. No version info available.
> typer: 0.12.3
> typing-extensions: 4.12.1
> upstash-redis: Installed. No version info available.
> vdms: Installed. No version info available.
> xata: Installed. No version info available.
> xmltodict: 0.13.0 | Ɑ: vector store | low | Critical |
2,704,716,148 | PowerToys | QuickAccent - Toolbar overflow | ### Description of the new feature / enhancement

In Quick Accent, when using Windows with 125% scaling, the toolbar for the character `e` (which has many accent options) overflows the screen edge and I cannot see all the options.
Make it possible for the toolbar to wrap into multiple lines so that we can navigate it (maybe even using up and down arrows).
### Scenario when this would be used?
When the toolbar overflows
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Product-Quick Accent | low | Minor |
2,704,734,725 | pytorch | [typing] add `nn.Module` overload type hint to `torch.compile`. | ### 🚀 The feature, motivation and pitch
`torch.compile` can be applied to `nn.Module` instances, but [its current type hints do not reflect it](https://github.com/pytorch/pytorch/blob/3c63e76b03737085e2eb2e7fb7163d7ba16986ba/torch/__init__.py#L2364-L2387), which causes all sorts of typing errors:
```python
from typing import reveal_type
import torch
m = torch.nn.Linear(3, 4)
m_compiled = torch.compile(m)
reveal_type(m_compiled) # Callable[..., Any]
module_instance: torch.nn.Module = m_compiled # ❌ incompatible type
m_compiled.parameters() # ❌ unknown attribute
```
Typing `torch.compile` precisely is challenging, due to lack of some features in the typing system like intersection types (https://github.com/python/typing/issues/213). In principle, there should be an overload of the form
```python
@overload
def compile[M: Module](module: M, ...) -> M & OptimizedModule
```
Due to the lack of intersection types, I would propose to annotate as `M`, since users might want to call methods defined on `M`, which raise `unknown attribute` messages currently.
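In the meantime, a user-side workaround is possible: a `cast`-based wrapper (hypothetical, not part of torch) that preserves the module's static type. The `compiler` parameter stands in for `torch.compile` so the sketch has no hard torch dependency:

```python
from typing import Any, Callable, TypeVar, cast

M = TypeVar("M")  # in practice, bound to torch.nn.Module

def compile_preserving_type(module: M, compiler: Callable[[M], Any]) -> M:
    # The cast tells the type checker that the compiled callable still
    # exposes the module's own attributes (parameters(), etc.); at
    # runtime it is a no-op. In practice `compiler` is `torch.compile`.
    return cast(M, compiler(module))
```

With this, `compile_preserving_type(m, torch.compile)` type-checks as `Linear`, at the cost of hiding the `OptimizedModule` half of the would-be intersection type.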
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ezyang @malfet @xuzhao9 @gramster | module: nn,module: typing,triaged | low | Critical |
2,704,752,698 | godot | _get and _set are not called for real properties | ### Tested versions
- Reproducible in Godot v4.3.stable, v4.2.stable, 4.0.3.stable, 4.0.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1080 Ti (NVIDIA; 31.0.15.3713) - 12th Gen Intel(R) Core(TM) i5-12400F (12 Threads)
### Issue description
When reading or writing real members defined in the class, `_get` and `_set` are ignored; they are only called for custom properties that are returned via `_get_property_list`. The documentation, and meaningful special values like `null`, suggest that this is not the desired behavior.
### Steps to reproduce
```gdscript
extends Control

var num := 5

func _get(property: StringName) -> Variant:
	if (property == "num"):
		return 6
	return null

func _ready() -> void:
	assert(num == 6, "_get is not working on real properties")
```
### Minimal reproduction project (MRP)
[get-bug_repro.zip](https://github.com/user-attachments/files/17958071/get-bug_repro.zip)
| topic:core,topic:gdscript,documentation | low | Critical |
2,704,776,799 | svelte | list with inert elements at the end causes item to be added out of order | ### Describe the bug
Adding an item to a list that contains "inert" elements pushes the DOM element after the final non-inert element, not to the end of the list. This only affects lists that contain inert items at the end.
Is this expected behaviour?
### Reproduction
https://svelte.dev/playground/hello-world?version=5.2.10#H4sIAAAAAAAACm2SYW-bMBCG_8rNmxRSUSBtmiYUIu3jPuwXNP3g4GOxZmxkH-0qxH-fDo9W6yYkkA6_z_ve-UZhZYeiFNqiJ3ADXbv22nmFHjz23olUtNpgEOXjKOi157NcEOmi_Nr3WXhGQ1w7y4D_qzfOEloKohRVaLzu6XiyJ8qv-M0PueinhgbLpXgNjdHNT0g8du4Z1-As0AWh1VYa0IRd9n705aINgibQAchLGzRpZ7X9wV2lC0kqtT7ZqGIUQ-BFGwNnBKkUKjhj6zzORtJ4lOr1A4816YLQAaQJDi6y79EG0C0rPYL0CN1gSPccax4vCwNImtloVQZMucpnlj2R7nrnCUYIRitMoZUKYYLWuw5WcZb5e5TVA2sMLtwavgSShMkjFNlmt9vst5vi_u7-fl9sb29TKLKbu8N-u9vuNofDfnu42aWwipOFDlfwtH6Ik2kH27ABz-MbYZesYYztzkZZP4RL8l3SJfPSKtcl61lJ0wd5ZM8EFv5NgTrGzlptCH2ioT7G2_hU16DfiFX-ti-V0s_8OQ9EzoKz853W45-Y0_GrUlUe_87rVQ3mGC3Hzyiby3IBIRrFVNOyQpXRvColT70eR1CDl9xHCXdFUcA0HUcWTPBPgGTN4T-2Ox1j5S1SlRu95Mk5z2xd5XPKKo_d8SNSQfiLREl-wOlp-g2SbBsFpgMAAA==
### Logs
_No response_
### System Info
```shell
irrelevant
```
### Severity
annoyance | transition/animation | low | Critical |
2,704,778,951 | pytorch | inconsistency in ```torch.Tensor.logcumsumexp``` on CPU and GPU | ### 🐛 Describe the bug
getting inconsistent results of ```torch.Tensor.logcumsumexp``` between CPU and GPU
```python
import torch
self = torch.tensor([[2.4375, -0.3340, -0.3711]], dtype=torch.bfloat16)
self_cuda = self.cuda()
result_cpu = self.logcumsumexp(-1)
result_gpu = self_cuda.logcumsumexp(-1)
print("CPU result:\n", result_cpu)
print("GPU result:\n", result_gpu)
inconsistent = not torch.allclose(result_cpu, result_gpu.cpu(), atol=1e-02, rtol=1e-03)
print(f"inconsistency with atol=1e-02 and rtol=1e-03: {inconsistent}")
```
outputs:
```
CPU result:
tensor([[2.4375, 2.5000, 2.5469]], dtype=torch.bfloat16)
GPU result:
tensor([[2.4375, 2.5000, 2.5625]], device='cuda:0', dtype=torch.bfloat16)
inconsistency with atol=1e-02 and rtol=1e-03: True
```
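For reference, the same cumulative log-sum-exp can be computed in float64 with plain `math` (a sketch independent of torch):

```python
import math

def logcumsumexp(xs):
    """Numerically stable running log-sum-exp in float64."""
    out, running = [], None
    for x in xs:
        if running is None:
            running = x
        else:
            hi, lo = max(running, x), min(running, x)
            running = hi + math.log1p(math.exp(lo - hi))
        out.append(running)
    return out

print(logcumsumexp([2.4375, -0.3340, -0.3711]))
# final entry ≈ 2.5534; the bfloat16 spacing near 2.5 is 2**-6 = 0.015625,
# so CPU's 2.5469 (= 2.546875) and GPU's 2.5625 are adjacent bf16 values,
# and 2.5534 is nearer to the CPU result.
```

This suggests the two backends differ by a single bfloat16 ulp in the last cumulative step, i.e. an intermediate-precision difference rather than a gross kernel bug.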
### Versions
(executed on google colab)
PyTorch version: 2.5.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.3.3
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvtx==0.2.10
[pip3] optree==0.13.0
[pip3] pynvjitlink-cu12==0.4.0
[pip3] torch==2.5.0+cu121
[pip3] torchaudio==2.5.0+cu121
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.0+cu121
[conda] Could not collect
cc @albanD | module: numerical-stability,triaged,module: python frontend | low | Critical |
2,704,820,489 | yt-dlp | An option like `--no-playlist` except playlist metadata is still extracted | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Is there any option to get the title of a playlist (**playlist_title**) and write it to the meta file (**meta_album** in my case) without downloading the entire playlist?
When using the **--no-playlist** option in combination with a YouTube link that also contains the playlist,
**https://www.youtube.com/watch?v=z5rRZdiu1UE&list=PL28E94F6A589337C8**
then **playlist_title** is not available ( NA / null) :
**[MetadataParser] Parsed meta_album from '%(playlist_title)s': 'NA'**
Query example:
`yt-dlp --extract-audio --audio-format mp3 --audio-quality 0 --parse-metadata "playlist_title:%(meta_album)s" --embed-metadata --write-thumbnail --no-playlist "https://www.youtube.com/watch?v=z5rRZdiu1UE&list=PL28E94F6A589337C8"`
Query result:
```
[youtube:tab] Extracting URL: https://www.youtube.com/watch?v=z5rRZdiu1UE&list=PL28E94F6A589337C8
[youtube:tab] Downloading just the video z5rRZdiu1UE because of --no-playlist
[youtube] Extracting URL: https://www.youtube.com/watch?v=z5rRZdiu1UE
[youtube] z5rRZdiu1UE: Downloading webpage
[youtube] z5rRZdiu1UE: Downloading ios player API JSON
[youtube] z5rRZdiu1UE: Downloading mweb player API JSON
[youtube] z5rRZdiu1UE: Downloading m3u8 information
[MetadataParser] Parsed meta_album from '%(playlist_title)s': 'NA'
[info] z5rRZdiu1UE: Downloading 1 format(s): 251
[info] Downloading video thumbnail 42 ...
[info] Writing video thumbnail 42 to: Beastie Boys - Sabotage (Official Music Video) [z5rRZdiu1UE].webp
[download] Destination: Beastie Boys - Sabotage (Official Music Video) [z5rRZdiu1UE].webm
[download] 100% of 2.56MiB in 00:00:00 at 19.37MiB/s
[ExtractAudio] Destination: Beastie Boys - Sabotage (Official Music Video) [z5rRZdiu1UE].mp3
Deleting original file Beastie Boys - Sabotage (Official Music Video) [z5rRZdiu1UE].webm (pass -k to keep)
[Metadata] Adding metadata to "Beastie Boys - Sabotage (Official Music Video) [z5rRZdiu1UE].mp3"
```
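One possible workaround sketch (not an official answer) uses yt-dlp's embedded Python API with the documented `extract_flat` and `skip_download` options to fetch only the playlist metadata; the helper names below are made up for illustration:

```python
def playlist_metadata_opts():
    # Documented yt-dlp options: extract_flat avoids resolving every entry,
    # skip_download avoids fetching any media.
    return {"extract_flat": "in_playlist", "skip_download": True, "quiet": True}

def playlist_url(playlist_id):
    # Build a pure playlist URL so only playlist metadata is requested
    # (hypothetical helper, not part of yt-dlp).
    return "https://www.youtube.com/playlist?list=" + playlist_id

# Usage (requires `pip install yt-dlp` and network access):
# import yt_dlp
# with yt_dlp.YoutubeDL(playlist_metadata_opts()) as ydl:
#     info = ydl.extract_info(playlist_url("PL28E94F6A589337C8"),
#                             download=False)
#     print(info.get("title"))  # playlist title, usable as meta_album
```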
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | enhancement,triage,core:extractor | low | Critical |
2,704,861,839 | react | [React 19] Hydration appending RSC component to body | ## Summary
Consuming an RSC payload in a component using `createFromFetch` causes the response to be appended to the DOM instead of replacing the existing content when it is not wrapped in `Suspense`.
```js
// Assumed imports for context (`callServer` comes from the app's RSC setup):
import { useEffect, useState } from "react";
import { useLocation } from "react-router-dom";
import { createFromFetch } from "react-server-dom-webpack/client";

const RouteWrapper = () => {
const location = useLocation();
const [rscContent, setRscContent] = useState(null);
useEffect(() => {
const content = createFromFetch(
fetch("/rsc?location=" + location.pathname),
{
callServer,
}
);
setRscContent(content);
}, [location.pathname]);
return rscContent
};
```
## Link to repositories
1. Hydration appending RSC response to body - https://github.com/tata1mg/catalyst-core/tree/hydration-error
2. Hydration causing mismatch - https://github.com/tata1mg/catalyst-core/tree/rsc-demo-routing | React 19 | medium | Critical |
2,704,869,159 | rust | ICE: entered unreachable code: FieldsShape::offset: `Primitive`s have no fields | ### Code
```Rust
//@ run-pass
use std::ops::{Index, IndexMut};
struct Foo {
x: isize,
y: isize,
}
impl Index<isize> for Foo {
type Output = isize;
fn index(&self, z: isize) -> &isize {
if z == 0 {
&self.x
} else {
&self.y
}
}
}
impl IndexMut<isize> for Foo {
fn index_mut(&mut self, z: isize) -> &mut isize {
if z == 0 {
&mut self.x
} else {
&mut self.y
}
}
}
trait Int {
fn get(self) -> isize;
fn get_from_ref(&self) -> isize;
fn inc(&mut self);
}
impl Int for isize {
fn get(self) -> isize { self }
fn get_from_ref(&self) -> isize { *self }
fn inc(&mut self, z: isize) { *self += 1; }
}
fn main() {
let mut f = Foo {
x: 1,
y: 2,
};
assert_eq!(f[1], 2);
f[0] = 3;
assert_eq!(f[0], 3);
{
let p = &mut f[1];
*p = 4;
}
{
let p = &f[1];
assert_eq!(*p, 4);
}
// Test calling methods with `&mut self`, `self, and `&self` receivers:
f[1].inc();
assert_eq!(f[1].get(), 5);
assert_eq!(f[1].get_from_ref(), 5);
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (cb2bd2bb0 2024-11-29)
binary: rustc
commit-hash: cb2bd2bb06380896368b0edb02ada0117cc856be
commit-date: 2024-11-29
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.4
```
### Error output
`rustc -Zmir-opt-level=5 -Zvalidate-mir -Zcross-crate-inline-threshold=always`
```
error[E0050]: method `inc` has 2 parameters but the declaration in trait `Int::inc` has 1
--> b.rs:40:12
|
34 | fn inc(&mut self);
| --------- trait requires 1 parameter
...
40 | fn inc(&mut self, z: isize) { *self += 1; }
| ^^^^^^^^^^^^^^^^^^^ expected 1 parameter, found 2
warning: unused variable: `z`
--> b.rs:40:23
|
40 | fn inc(&mut self, z: isize) { *self += 1; }
| ^ help: if this is intentional, prefix it with an underscore: `_z`
|
= note: `#[warn(unused_variables)]` on by default
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at /rustc/cb2bd2bb06380896368b0edb02ada0117cc856be/compiler/rustc_abi/src/lib.rs:1279:17:
internal error: entered unreachable code: FieldsShape::offset: `Primitive`s have no fields
stack backtrace:
0: 0x73db1615a52a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h6c57494279711599
1: 0x73db16813ce2 - core::fmt::write::h3cad130fa8f43609
2: 0x73db17bc6751 - std::io::Write::write_fmt::hb86a90f8009f8eb9
3: 0x73db1615a382 - std::sys::backtrace::BacktraceLock::print::hb87e1976e8deede9
4: 0x73db1615c88a - std::panicking::default_hook::{{closure}}::hf27528882c284812
5: 0x73db1615c6d3 - std::panicking::default_hook::h2e4e154622bd23bd
6: 0x73db152d7458 - std[26132d93986e58f8]::panicking::update_hook::<alloc[ce9158905f7620dd]::boxed::Box<rustc_driver_impl[4e4c9531434a88da]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x73db1615d048 - std::panicking::rust_panic_with_hook::hbc36709f273d6692
8: 0x73db1615cd06 - std::panicking::begin_panic_handler::{{closure}}::hdaf35ac8208622f3
9: 0x73db1615a9e9 - std::sys::backtrace::__rust_end_short_backtrace::h77be1680146547f4
10: 0x73db1615c9fd - rust_begin_unwind
11: 0x73db13697550 - core::panicking::panic_fmt::h7e7c15009d6c0ed9
12: 0x73db182a4dfb - <rustc_abi[e5f09d209a26d145]::FieldsShape<rustc_abi[e5f09d209a26d145]::layout::ty::FieldIdx>>::offset.cold
13: 0x73db172f9138 - <rustc_const_eval[a260793c7dd70fb7]::interpret::eval_context::InterpCx<rustc_const_eval[a260793c7dd70fb7]::const_eval::dummy_machine::DummyMachine>>::project_field::<rustc_const_eval[a260793c7dd70fb7]::interpret::operand::OpTy>
14: 0x73db15ade63a - <rustc_mir_transform[b565ecd2e11aba68]::dataflow_const_prop::ConstAnalysis>::assign_constant::{closure#0}
15: 0x73db159eb8cf - <rustc_mir_dataflow[235f8b34f07bfca]::value_analysis::Map>::for_each_projection_value::<rustc_const_eval[a260793c7dd70fb7]::interpret::operand::OpTy, <rustc_mir_transform[b565ecd2e11aba68]::dataflow_const_prop::ConstAnalysis>::assign_constant::{closure#0}, <rustc_mir_transform[b565ecd2e11aba68]::dataflow_const_prop::ConstAnalysis>::assign_constant::{closure#1}>
16: 0x73db15ade5fe - <rustc_mir_transform[b565ecd2e11aba68]::dataflow_const_prop::ConstAnalysis>::assign_constant
17: 0x73db15aa018e - <rustc_mir_transform[b565ecd2e11aba68]::dataflow_const_prop::ConstAnalysis>::assign_operand
18: 0x73db15a9eb0f - <rustc_mir_transform[b565ecd2e11aba68]::dataflow_const_prop::ConstAnalysis as rustc_mir_dataflow[235f8b34f07bfca]::framework::Analysis>::apply_statement_effect
19: 0x73db15adf306 - <rustc_mir_transform[b565ecd2e11aba68]::dataflow_const_prop::DataflowConstProp as rustc_mir_transform[b565ecd2e11aba68]::pass_manager::MirPass>::run_pass
20: 0x73db1680f32e - rustc_mir_transform[b565ecd2e11aba68]::pass_manager::run_passes_inner
21: 0x73db16da84da - rustc_mir_transform[b565ecd2e11aba68]::optimized_mir
22: 0x73db16da7dab - rustc_query_impl[a54b6583e60c070e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a54b6583e60c070e]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[8f8812feb7945426]::query::erase::Erased<[u8; 8usize]>>
23: 0x73db1683e208 - rustc_query_system[4ef38b7e62ca61e0]::query::plumbing::try_execute_query::<rustc_query_impl[a54b6583e60c070e]::DynamicConfig<rustc_query_system[4ef38b7e62ca61e0]::query::caches::DefIdCache<rustc_middle[8f8812feb7945426]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[a54b6583e60c070e]::plumbing::QueryCtxt, false>
24: 0x73db1683d7b3 - rustc_query_impl[a54b6583e60c070e]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
25: 0x73db13046650 - <rustc_middle[8f8812feb7945426]::ty::context::TyCtxt>::instance_mir
26: 0x73db1715d429 - rustc_interface[c86b95b32cbbd6f5]::passes::run_required_analyses
27: 0x73db1715645e - rustc_interface[c86b95b32cbbd6f5]::passes::analysis
28: 0x73db1715642f - rustc_query_impl[a54b6583e60c070e]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[a54b6583e60c070e]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[8f8812feb7945426]::query::erase::Erased<[u8; 1usize]>>
29: 0x73db1775483a - rustc_query_system[4ef38b7e62ca61e0]::query::plumbing::try_execute_query::<rustc_query_impl[a54b6583e60c070e]::DynamicConfig<rustc_query_system[4ef38b7e62ca61e0]::query::caches::SingleCache<rustc_middle[8f8812feb7945426]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[a54b6583e60c070e]::plumbing::QueryCtxt, false>
30: 0x73db1775450e - rustc_query_impl[a54b6583e60c070e]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
31: 0x73db177eab87 - rustc_interface[c86b95b32cbbd6f5]::interface::run_compiler::<core[10e5919d76f1e6dc]::result::Result<(), rustc_span[2eb738640235e66]::ErrorGuaranteed>, rustc_driver_impl[4e4c9531434a88da]::run_compiler::{closure#0}>::{closure#1}
32: 0x73db177324c7 - std[26132d93986e58f8]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[c86b95b32cbbd6f5]::util::run_in_thread_with_globals<rustc_interface[c86b95b32cbbd6f5]::util::run_in_thread_pool_with_globals<rustc_interface[c86b95b32cbbd6f5]::interface::run_compiler<core[10e5919d76f1e6dc]::result::Result<(), rustc_span[2eb738640235e66]::ErrorGuaranteed>, rustc_driver_impl[4e4c9531434a88da]::run_compiler::{closure#0}>::{closure#1}, core[10e5919d76f1e6dc]::result::Result<(), rustc_span[2eb738640235e66]::ErrorGuaranteed>>::{closure#0}, core[10e5919d76f1e6dc]::result::Result<(), rustc_span[2eb738640235e66]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[10e5919d76f1e6dc]::result::Result<(), rustc_span[2eb738640235e66]::ErrorGuaranteed>>
33: 0x73db17732162 - <<std[26132d93986e58f8]::thread::Builder>::spawn_unchecked_<rustc_interface[c86b95b32cbbd6f5]::util::run_in_thread_with_globals<rustc_interface[c86b95b32cbbd6f5]::util::run_in_thread_pool_with_globals<rustc_interface[c86b95b32cbbd6f5]::interface::run_compiler<core[10e5919d76f1e6dc]::result::Result<(), rustc_span[2eb738640235e66]::ErrorGuaranteed>, rustc_driver_impl[4e4c9531434a88da]::run_compiler::{closure#0}>::{closure#1}, core[10e5919d76f1e6dc]::result::Result<(), rustc_span[2eb738640235e66]::ErrorGuaranteed>>::{closure#0}, core[10e5919d76f1e6dc]::result::Result<(), rustc_span[2eb738640235e66]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[10e5919d76f1e6dc]::result::Result<(), rustc_span[2eb738640235e66]::ErrorGuaranteed>>::{closure#1} as core[10e5919d76f1e6dc]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
34: 0x73db177318ab - std::sys::pal::unix::thread::Thread::new::thread_start::h2571e9214e43aed4
35: 0x73db18f3339d - <unknown>
36: 0x73db18fb849c - <unknown>
37: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/tmp/im/rustc-ice-2024-11-29T11_27_25-3367734.txt` to your bug report
note: compiler flags: -Z mir-opt-level=5 -Z validate-mir -Z cross-crate-inline-threshold=always
query stack during panic:
#0 [optimized_mir] optimizing MIR for `main`
#1 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 1 previous error; 1 warning emitted
For more information about this error, try `rustc --explain E0050`.
```
</p>
</details>
| I-ICE,E-needs-test,T-compiler,C-bug,A-mir-opt,A-mir-opt-inlining,-Zvalidate-mir | low | Critical |
2,704,876,000 | PowerToys | Awake does not work if DC power is connected | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Awake
### Steps to reproduce
Awake does not work if DC power is connected; i.e. with Enable Awake=ON and DC power connected, after inactivity the console locks and the computer goes into sleep mode.
On the other hand, if DC power is not connected and Enable Awake=ON, no amount of inactivity (as long as battery power lasts) locks the console, and the computer does not go into sleep mode.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
With Enable Awake=ON and DC power connected, no amount of inactivity should make the computer lock and go to sleep.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response,Product-Awake | low | Minor |
2,704,975,924 | angular | Angular 19: NG0200: Circular dependency in DI detected for ChangeDetectionScheduler | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
Yes
### Description
When updating from Angular 18 to 19 I get a "Circular dependency in DI detected" for `ChangeDetectionScheduler` which I did not get in Angular 18.
The scenario is a bit complex and uses `CDK`, but I managed to make a simpler reproduction in StackBlitz. The issue is related to using `effect` in the bootstrapping code. Angular 18 used microtasks and that made it work. In the StackBlitz, in `DirectionalityCdkOverride`, it is possible to get it to run by replacing `effect` with `ɵmicrotaskEffect`. This however seems like a hack, and I suppose that `ɵmicrotaskEffect` will not be available in the future.
#### The scenario
We have our own service for keeping language information including text direction LTR/RTL. We therefore override CDK's Directionality with our own version. We also get the users preference when initializing Angular.
#### The problem/question
It does not seem like our application has circular dependencies as the error message suggests. Also the message refers to `ChangeDetectionScheduler` which is a class out of our reach. So maybe the error message is not really applicable/informational for this scenario?
Also is this scenario "illegal" or is it a complex scenario that `effect` does not handle well?
Thanks
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-esaf7e?file=src%2Fdirectionality-cdk-override.service.ts
### Please provide the exception or error you saw
```true
Error: NG0200: Circular dependency in DI detected for ChangeDetectionScheduler. Find more at https://angular.dev/errors/NG0200
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.0.2
Node: 20.18.0
Package Manager: npm 10.8.2
OS: win32 x64
Angular: 19.0.1
... animations, cdk, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.2
@angular-devkit/build-angular 19.0.2
@angular-devkit/core 19.0.2
@angular-devkit/schematics 19.0.2
@angular/cli 19.0.2
@schematics/angular 19.0.2
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
To see the error in StackBlitz, open the browsers DevTools console. | area: core,core: di,core: reactivity | low | Critical |
2,704,979,182 | godot | Error `Condition "!unique_ids.has(p_id)" is true` from scripts moved to other folders when the window of the editor gets unfocused | ### Tested versions
4.4 custom dev (master branch), after dev5 introducing universal UID support.
### System information
Windows 11 24H2
### Issue description
It's really a weird bug. Check the video, or see the reproduction steps.
<video src="https://github.com/user-attachments/assets/abf3e198-3895-4140-a77c-14123eb98aa9" controls="" height=400 width=600> </video>
### Steps to reproduce
* Create a script
* Unfocus the editor (by clicking other applications running in the background)
* Refocus on the editor and move the script into any folder
* Unfocus and refocus again
### Minimal reproduction project (MRP)
[private-and-protected.zip](https://github.com/user-attachments/files/17959110/private-and-protected.zip)
| bug,topic:editor,needs testing | low | Critical |
2,704,984,045 | angular | Two-way binding ignores output type since angular 17.2+ | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
Yes
### Description
When using two-way binding (aka banana-in-a-box), the output type is ignored or treated as `any`; after some testing, it seems this started happening after v17.2.0.
This error still happens in the current version (v19.0.1) and occurs with `@Output`, `output`, or `model`; it also happens when you bind a `WritableSignal`.
Example:
```tsx
@Component({
selector: 'foo',
...
})
export class FooComponent {
@Input() value: string | number | undefined;
@Output() valueChange = new EventEmitter<number>();
}
@Component({
template: `
<foo [(value)]="str"/> <!-- ERROR: doesn't complain that 'number' is not assignable to 'string' -->
<foo [value]="str" (valueChange)="str = $event"/> <!-- OK: complains that 'number' is not assignable to 'string' -->
`
})
export class AppComponent {
str: string = "";
}
```
Reproduction with ng17.0: https://stackblitz.com/edit/stackblitz-starters-krwib7?file=src%2Fmain.ts
Reproduction with ng17.2: https://stackblitz.com/edit/stackblitz-starters-tky4w1?file=src%2Fmain.ts
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-tky4w1?file=src%2Fmain.ts
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 17.2.3
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 17.2.4
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1702.3
@angular-devkit/build-angular 17.2.3
@angular-devkit/core 17.2.3
@angular-devkit/schematics 17.2.3
@angular/cli 17.2.3
@schematics/angular 17.2.3
rxjs 7.8.1
typescript 5.2.2
zone.js 0.14.10
```
### Anything else?
_No response_ | state: has PR,area: compiler,P3,compiler: template type-checking,bug | low | Critical |
2,705,023,986 | angular | cheatsheet not found in new website | ### Describe the problem that you experienced
I was reading the Angular cheatsheet at https://v17.angular.io/guide/cheatsheet and I was loving it. Then I tried to load it on the new website, and it turns out the page does not exist at https://angular.dev/guide/cheatsheet.
### Enter the URL of the topic with the problem
https://v17.angular.io/guide/cheatsheet
### Describe what you were looking for in the documentation
Need the awesome cheatsheet like before in the new website.
### Describe the actions that led you to experience the problem
Go to https://v17.angular.io/guide/cheatsheet and read the cheatsheet. Then remove `v17` from the domain and replace `io` with `dev` to load the page on the new website, because all other pages work like that if you want to read them on the new website. But loading the URL https://angular.dev/guide/cheatsheet fails.
### Describe what you want to experience that would fix the problem
Need to bring the cheatsheet page back to the new website. If there's already a cheatsheet, redirect to that page when anyone comes from the old URL.
### Add a screenshot if that helps illustrate the problem
### Old website

### New website
 | area: docs | low | Minor |
2,705,067,555 | deno | allow task to override built-in commands (e.g `fmt`, `test`) | - built-in commands like `deno test` suddenly become not so simple when permissions or external commands are required (e.g. `deno test -RW` for snapshot tests)
- running `deno test` is better and less confusing than `deno task test` for new contributors. | suggestion,deno fmt,lint | low | Major |
2,705,075,334 | rust | Cross-building rust 1.83.0 on & for NetBSD failure | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I am doing my best to try to keep the various NetBSD targets for rust working.
As part of that process, I am cross-building binaries from x86_64 to most of the
other targets, and carry out testing of the results; for some of the targets this involves
self-hosting the rust compiler.
I have two new observations for rust 1.83.0 (1.82.0 works fine):
1. A new seemingly-non-fatal error message which is difficult to get a grasp on has started appearing in the build log:
```
1.288127806s INFO prepare_target{force=false package_id=sysroot v0.0.0 (/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/library/sysroot) target="sysroot"}: cargo::core::compiler::fingerprint: fingerprint error for sysroot v0.0.0 (/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/library/sysroot)/Build/TargetInner { name_inferred: true, ..: lib_target("sysroot", ["lib"], "/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/library/sysroot/src/lib.rs", Edition2021) }
1.288196443s INFO prepare_target{force=false package_id=sysroot v0.0.0 (/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/library/sysroot) target="sysroot"}: cargo::core::compiler::fingerprint: err: failed to read `/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage0-std/x86_64-unknown-netbsd/release/.fingerprint/sysroot-40f70deaa15a4468/lib-sysroot`
Caused by:
No such file or directory (os error 2)
```
There are *many* such instances. Testing for the existence of the quoted file name afterwards does not reproduce the problem(!)
Looking for the file afterwards shows that it exists:
```
$ find work -type f -name lib-sysroot
work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1-std/armv7-unknown-netbsd-eabihf/release/.fingerprint/sysroot-76159fffe79d6c08/lib-sysroot
work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1-std/x86_64-unknown-netbsd/release/.fingerprint/sysroot-904060a630664f36/lib-sysroot
work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage0-std/x86_64-unknown-netbsd/release/.fingerprint/sysroot-40f70deaa15a4468/lib-sysroot
$
$ cat /usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage0-std/x86_64-unknown-netbsd/release/.fingerprint/sysroot-40f70deaa15a4468/lib-sysroot; echo
07b22e914d1cad45
$
```
2. While natively building the rust compiler on x86_64 works fine, and the resulting compiler also works, I get a new (to me) problem while cross-building the rust compiler, in this first instance for our `armv7hf` target:
```
using sysroot /usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1
running: "/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1/bin/rustc" "--target" "armv7-unknown-netbsd-eabihf" "--print=file-names" "--crate-type=proc-macro" "-" (failure_mode=Exit) (created at src/core/builder.rs:1704:33, executed at src/core/builder.rs:1710:26)
Command "/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1/bin/rustc" "--target" "armv7-unknown-netbsd-eabihf" "--print=file-names" "--crate-type=proc-macro" "-" (failure_mode=Exit) did not execute successfully.
Expected success, got exit status: 1
Created at: src/core/builder.rs:1704:33
Executed at: src/core/builder.rs:1710:26
STDOUT ----
STDERR ----
/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1/bin/rustc: Shared object "librustc_driver-299f82a8112084d1.so" not found
```
The asked-for file exists in the build tree, ref.:
```
$ find work -name librustc_driver-299f82a8112084d1.so -type f
work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1/lib/librustc_driver-299f82a8112084d1.so
work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage0-sysroot/lib/rustlib/x86_64-unknown-netbsd/lib/librustc_driver-299f82a8112084d1.so
work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage0-rustc/x86_64-unknown-netbsd/release/deps/librustc_driver-299f82a8112084d1.so
$
```
but apparently, this particular `rustc` instance isn't being pointed in the right direction.
It is also somewhat unclear to me whether this second problem is related to the first issue listed above.
My initial build attempt was done with `-j 32` (I have the resources for it), but repeating it with `-j 1` to rule out a bug related to parallelism of the build reveals that the answer to that question is "no".
I expected to see this happen: Cross-building the rust compiler ought to succeed.
Instead, this happened:
The build errored out with
```
Finished `release` profile [optimized] target(s) in 2m 45s
using sysroot /usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1
running: "/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1/bin/rustc" "--target" "armv7-unknown-netbsd-eabihf" "--print=file-names" "--crate-type=proc-macro" "-" (failure_mode=Exit) (created at src/core/builder.rs:1704:33, executed at src/core/builder.rs:1710:26)
Command "/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1/bin/rustc" "--target" "armv7-unknown-netbsd-eabihf" "--print=file-names" "--crate-type=proc-macro" "-" (failure_mode=Exit) did not execute successfully.
Expected success, got exit status: 1
Created at: src/core/builder.rs:1704:33
Executed at: src/core/builder.rs:1710:26
STDOUT ----
STDERR ----
/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/x86_64-unknown-netbsd/stage1/bin/rustc: Shared object "librustc_driver-299f82a8112084d1.so" not found
Traceback (most recent call last):
File "/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/./x.py", line 50, in <module>
bootstrap.main()
File "/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/src/bootstrap/bootstrap.py", line 1227, in main
bootstrap(args)
File "/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/src/bootstrap/bootstrap.py", line 1203, in bootstrap
run(args, env=env, verbose=build.verbose, is_bootstrap=True)
File "/usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/src/bootstrap/bootstrap.py", line 202, in run
raise RuntimeError(err)
RuntimeError: failed to run: /usr/pkgsrc/wip/rust183/work/rustc-1.83.0-src/build/bootstrap/debug/bootstrap -v dist -j 1
*** Error code 1
Stop.
make[1]: stopped in /usr/pkgsrc/wip/rust183
*** Error code 1
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
This build is obviously using the previous version, `rust 1.82.0`:
`rustc --version --verbose`:
```
$ work/rust-1.82.0-x86_64-unknown-netbsd/rustc/bin/rustc --version --verbose
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-netbsd
release: 1.82.0
LLVM version: 19.1.1
$
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
Backtraces and details from the build provided above.
If wanted, I can provide a copy of the entire build log, hints for how to do so appreciated.
</p>
</details>
| A-cross,O-netbsd,T-bootstrap,C-bug | medium | Critical |
2,705,192,378 | youtube-dl | Servus TV live stream | https://www.servustv.com/jetzt-live/
Output:
```
yt-dlp -v https://www.servustv.com/jetzt-live/
[debug] Command-line config: ['-v', 'https://www.servustv.com/jetzt-live/']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (zip)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-49-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.46.0, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.45.1, urllib3-2.0.7, websockets-10.4
[debug] Proxy map: {'no': 'localhost,127.0.0.0/8,::1'}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[generic] Extracting URL: https://www.servustv.com/jetzt-live/
[generic] jetzt-live: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] jetzt-live: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.servustv.com/jetzt-live/
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1759, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.servustv.com/jetzt-live/
```
 | site-support-request | low | Critical |
2,705,200,951 | godot | Pin bottom panel does not work with some panels | ### Tested versions
Godot v4.4.dev5
### System information
Fedora Linux 40 (KDE Plasma) on Wayland - X11 display driver, Single-window
### Issue description
Pin bottom panel implemented in https://github.com/godotengine/godot/pull/98074 does not work for multiple panels.
https://github.com/user-attachments/assets/a4d7418b-d27b-4b0f-9891-75f76ca1b74c
### Steps to reproduce
Click pin and switch between nodes that have a bottom panel
### Minimal reproduction project (MRP)
[docks.zip](https://github.com/user-attachments/files/17960013/docks.zip)
| enhancement,discussion,topic:editor | low | Major |
2,705,267,936 | flutter | Run `flutter` command failed after `flutter channel beta` | ### Steps to reproduce
1. Follow the [Flutter Docs](https://docs.flutter.dev/get-started/install) to install VSCode, the Flutter Extension, and the Dart Extension
- Flutter SDK path: `D:\dev\flutter`
- VSCode path: `D:\Program Files\Microsoft VS Code`
2. Run `flutter doctor`; the output is normal and successful, see **Flutter Doctor output** below.
3. Run `flutter channel beta` or `master` or `main`, output failed:
```
> flutter channel beta
Switching to flutter channel 'beta'...
Upgrading engine...
Flutter failed to run "bin\flutter --no-color --no-version-check precache". The flutter tool cannot access the
file or directory.
Please ensure that the SDK and/or project is installed in a location that has read/write permissions for the
current user.
```
4. Run `flutter doctor` or any other flutter command, output failed:
```
> flutter doctor
ResourceUnavailable: Program 'flutter.bat' failed to run: An error occurred trying to start process 'D:\dev\flutter\bin\flutter.bat' with working directory 'C:\Users\thats'. Access denied. At line:1 char:1
+ flutter doctor
+ ~~~~~~~~~~~~~~.
```
### Actual results
**All flutter commands fail after running `flutter channel xxx`.**
Reinstalling flutter works for me, but it happens every time after running `flutter channel xxx`.
Changing the owner of `D:\dev\flutter` does not help either.
### Flutter Doctor output
<details open>
<summary>Doctor output</summary>
```console
> flutter --verbose doctor
[✓] Flutter (Channel stable, 3.24.5, on Microsoft Windows [版本 10.0.22631.4460], locale zh-CN)
• Flutter version 3.24.5 on channel stable at D:\dev\flutter
• Upstream repository https://github.com/flutter/flutter.git
• FLUTTER_GIT_URL = https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (2 weeks ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✗] Android toolchain - develop for Android devices
✗ ANDROID_HOME = C:\Users\thats\AppData\Local\Android\Sdk
but Android SDK not found at this location.
[✓] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[✓] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.12.2)
• Visual Studio at D:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.12.35521.163
• Windows 10 SDK version 10.0.22621.0
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/windows-android-setup for detailed instructions).
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3)
• IntelliJ at D:\Program Files\JetBrains\Toolbox\app\IntelliJ IDEA Ultimate
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] Proxy Configuration
• HTTP_PROXY is set
• NO_PROXY is localhost,127.0.0.1,::1
• NO_PROXY contains localhost
• NO_PROXY contains ::1
• NO_PROXY contains 127.0.0.1
[✓] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [版本 10.0.22631.4460]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.86
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.70
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
The Flutter CLI developer tool uses Google Analytics to report usage and diagnostic
data along with package dependencies, and crash reporting to send basic crash
reports. This data is used to help improve the Dart platform, Flutter framework,
and related tools.
Telemetry is not sent on the very first run. To disable reporting of telemetry,
run this terminal command:
flutter --disable-analytics
If you opt out of telemetry, an opt-out event will be sent, and then no further
information will be sent. This data is collected in accordance with the Google
Privacy Policy (https://policies.google.com/privacy).
```
</details>
| tool,team-tool | low | Critical |
2,705,274,702 | TypeScript | `CredentialsContainer.create()` & `.get()` should be able to return `PublicKeyCredential`, and the PKC interface should be more well-defined | ### ⚙ Compilation target
ESNext or ES2024
(I am building for current up-to-date browsers.)
### ⚙ Library
DOM
### Missing / Incorrect Definition
[`CredentialsContainer.create()`](https://developer.mozilla.org/en-US/docs/Web/API/CredentialsContainer/create) and [`CredentialsContainer.get()`](https://developer.mozilla.org/en-US/docs/Web/API/CredentialsContainer/get) currently always return a `Promise<Credential | null>` regardless of config. This is incomplete and less helpful than it could be.
In actual implementation, if a `publicKey` config is specified (and _one_ type of credential config _must_ be specified for both methods), both methods _will_ return a `PublicKeyCredential` (or null). Currently we're forced to downcast to the correct type, e.g.:
```ts
const credential = await CredentialsContainer.create({ publicKey: { ... } });
if (credential) {
const pkc = credential as PublicKeyCredential;
// do things with pkc
}
```
Additionally, the `PublicKeyCredential` type definition is incomplete.
- Instances are missing the [`pkc.toJSON()`](https://developer.mozilla.org/en-US/docs/Web/API/PublicKeyCredential/toJSON) helper method.
- Browser support still varies, so leaving it out might be deliberate. Firefox and Chrome support it; Safari does not, Opera is developing it, and all device WebViews do not support it.
- The `pkc.response` field currently always has type `AuthenticatorResponse`. This is incomplete. [Per MDN](https://developer.mozilla.org/en-US/docs/Web/API/PublicKeyCredential/response):
- if the PublicKeyCredential was obtained via `create()`, the response field will be of type `AuthenticatorAttestationResponse` (attestation)
- if it was obtained via `get()`, the response field will be of type `AuthenticatorAssertionResponse` (assertion).
Both `AuthenticatorAttestationResponse` and `AuthenticatorAssertionResponse` are already defined in `lib.dom.d.ts`.
### Recommendation
`PublicKeyCredential` should instead be defined with a generic property that accepts the response type, e.g.:
```ts
interface PublicKeyCredential<Response extends AuthenticatorResponse = AuthenticatorResponse> {
response: Response,
// (other fields omitted)
}
```
Then `create()` and `get()` should be overloaded methods approximately like the following. All the types referenced below already exist in `lib.dom.d.ts`.
```ts
interface CredentialsContainer {
create(options?: CredentialCreationOptions & { publicKey: PublicKeyCredentialCreationOptions }): Promise<PublicKeyCredential<AuthenticatorAttestationResponse> | null>;
create(options?: CredentialCreationOptions): Promise<Credential | null>;
get(options?: CredentialRequestOptions & { publicKey: PublicKeyCredentialRequestOptions }): Promise<PublicKeyCredential<AuthenticatorAssertionResponse> | null>;
get(options?: CredentialRequestOptions): Promise<Credential | null>;
}
```
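As a sanity check of the overload pattern, it can be modeled independently of the DOM types. All names below (`MockAttestation`, `MockCredential`, `MockContainer`) are illustrative mocks, not part of `lib.dom.d.ts`; they just show that a `publicKey`-bearing options object selects the narrowed return type without any downcast:

```typescript
// Minimal stand-ins for the DOM types involved (all names here are illustrative mocks).
interface MockAttestation { kind: "attestation"; }
interface MockCredential<R = unknown> { response: R; }

class MockContainer {
  // Overload: a config that includes `publicKey` yields the narrowed credential type.
  create(options: { publicKey: object }): MockCredential<MockAttestation>;
  create(options?: object): MockCredential<unknown>;
  create(options?: object): MockCredential<unknown> {
    const isPublicKey = !!options && "publicKey" in options;
    return { response: isPublicKey ? { kind: "attestation" as const } : undefined };
  }
}

const container = new MockContainer();
const pk = container.create({ publicKey: {} });
// No `as PublicKeyCredential` downcast needed: `pk.response` is already MockAttestation.
const kind: "attestation" = pk.response.kind;
console.log(kind);
```

The same shape applied to `CredentialsContainer` would make the casts in the sample code below unnecessary.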
### Sample Code
```TypeScript
const toUint8Array = (str: string) => Uint8Array.from(str, c => c.charCodeAt(0));
const credential = await navigator.credentials.create({
publicKey: {
challenge: toUint8Array("random challenge from server"),
rp: {
name: "WebAuthN",
id: "webauthn.io", // Run this code on the webauthn.io domain, or change this to another domain.
},
user: {
id: toUint8Array("username"),
name: "Name",
displayName: "Display name",
},
pubKeyCredParams: [
{ alg: -7, type: "public-key" },
],
attestation: "direct",
},
});
if (credential) {
console.debug(credential.toJSON());
// ~~~~~~
// Property 'toJSON' does not exist on type 'Credential'.ts(2339)
console.debug(credential.response);
// ~~~~~~~~
// Property 'response' does not exist on type 'Credential'.ts(2339)
console.debug((credential as PublicKeyCredential).response.getAuthenticatorData());
// ~~~~~~~~~~~~~~~~~~~~
// Property 'getAuthenticatorData' does not exist on type 'AuthenticatorResponse'.ts(2339)
// Currently forced to downcast twice like this essentially, which TypeScript permits:
const pkc = credential as (PublicKeyCredential & { response: AuthenticatorAttestationResponse });
console.debug(pkc.response.getAuthenticatorData());
}
```
### Documentation Link
- `CredentialsContainer.create()`: https://developer.mozilla.org/en-US/docs/Web/API/CredentialsContainer/create
- `CredentialsContainer.get()`: https://developer.mozilla.org/en-US/docs/Web/API/CredentialsContainer/get
- `PublicKeyCredential.toJSON()`: https://developer.mozilla.org/en-US/docs/Web/API/PublicKeyCredential/toJSON
- `PublicKeyCredential.response`: https://developer.mozilla.org/en-US/docs/Web/API/PublicKeyCredential/response
- `AuthenticatorAttestationResponse.getAuthenticatorData()`: https://developer.mozilla.org/en-US/docs/Web/API/AuthenticatorAttestationResponse/getAuthenticatorData | Suggestion,Help Wanted,Domain: lib.d.ts,Experience Enhancement | low | Critical |
2,705,328,918 | three.js | TSL nodes Composability / Stacking / Extends ? | ### Description
Using the legacy system for shaders, we could replace parts of the shading with other parts:
Example:
```
source: "vec4 diffuseColor = vec4( diffuse, opacity );",
replace: "vec4 diffuseColor = vec4( diffuse, opacity * 0.5 );"
```
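In plain terms, the legacy mechanism boils down to string substitution on the generated shader source. A minimal standalone sketch (the shader text and `patchShader` helper are illustrative, not a real three.js chunk or API):

```typescript
// The legacy replace-based workflow, modeled as pure string manipulation:
// find a known line in the compiled shader source and swap it out.
const fragmentShader = [
  "void main() {",
  "  vec4 diffuseColor = vec4( diffuse, opacity );",
  "  gl_FragColor = diffuseColor;",
  "}",
].join("\n");

function patchShader(source: string, find: string, replacement: string): string {
  // A legacy onBeforeCompile-style hook would rewrite shader.fragmentShader the same way.
  return source.replace(find, replacement);
}

const patched = patchShader(
  fragmentShader,
  "vec4 diffuseColor = vec4( diffuse, opacity );",
  "vec4 diffuseColor = vec4( diffuse, opacity * 0.5 );",
);
console.log(patched.includes("opacity * 0.5")); // → true
```

This late, targeted patching is exactly what has no obvious equivalent in the node system.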
With TSL (Three.js Shading Language), the challenge is (unless I missed something, or could not find a solution in the wiki or examples):
I cannot find a way to extend/modify an existing node **after** it has been processed by the material's built-in computations (lighting, textures, etc.); the only way now is to override the colorNode entirely.
Two key technical limitations:
- the node isn't available immediately on material creation
- there is no built-in mechanism to stack/chain node operations
### Solution
Potential solution approaches needed:
- A node stacking system: material.whatevernode.stack(modifierNode)
- A way to reference the computed whatevernode value in subsequent stacks
### Additional context
_No response_ | Documentation,TSL | medium | Major |
2,705,384,791 | PowerToys | After Deactivating FancyZones, sometimes the windows have random sizes after hibernate | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
FancyZones
### Steps to reproduce
Create custom zones
Adjust Windows into zones
Deactivate FancyZones
Move/resize windows
Hibernate PC
Wake PC
Windows (sometimes) have random sizes (perhaps the sizes of the deactivated zones)
### ✔️ Expected Behavior
The windows should keep their previously set sizes!
### ❌ Actual Behavior
After recovering from hibernate, the windows sometimes have random sizes
### Other Software
_No response_ | Issue-Bug,Product-FancyZones,Needs-Triage,Needs-Team-Response | low | Minor |
2,705,404,951 | angular | Missing documentation regarding `outputMode` | ### Describe the problem that you experienced
I did not find any documentations regarding `outputMode`.
Is it planned for a futur version of Angular ? This would require some changes for pages such as https://angular.dev/guide/prerendering#prerendering-parameterized-routes | area: docs | low | Minor |
2,705,428,450 | PowerToys | Misleading German translation | ### Microsoft PowerToys version
0.81.1
### Utility with translation issue
Keyboard Manager
### 🌐 Language affected
German
### ❌ Actual phrase(s)
SMS senden + Schlüssel
 + 
### ✔️ Expected phrase(s)
"SMS senden": "Text senden"
"Schlüssel": "Text"
### ℹ Why is the current translation wrong
Because it has nothing to do with sending a text message (SMS) from your phone | Issue-Bug,Area-Localization,Needs-Triage,Issue-Translation | low | Minor |
2,705,547,560 | rust | cargo doc for diesel triggers a debug assertion in `can_elide_generic_arg` | running rustdoc for diesel master (https://github.com/diesel-rs/diesel/commit/8939fce6651a79433d3cdc7ec69e9ec9aaafea1c) with `debug_assertions` enabled triggers an ICE
```
Documenting diesel v2.2.4 (/home/lcnr/diesel/diesel)
thread 'rustc' panicked at src/librustdoc/clean/utils.rs:162:5:
assertion `left matches right` failed
left: (Lifetime('query/#0), Type(connection::DefaultLoadingMode))
right: (ty::GenericArgKind::Lifetime(_), ty::GenericArgKind::Lifetime(_)) |
(ty::GenericArgKind::Type(_), ty::GenericArgKind::Type(_)) |
(ty::GenericArgKind::Const(_), ty::GenericArgKind::Const(_))
```
backtrace
<details>
``` bash
lcnr@lcnrPC:~/diesel$ cargo +stage2 doc --manifest-path diesel/Cargo.toml --no-deps --no-default-features
Compiling proc-macro2 v1.0.92
Compiling unicode-ident v1.0.14
Compiling fnv v1.0.7
Compiling ident_case v1.0.1
Compiling strsim v0.11.1
Compiling either v1.13.0
Compiling heck v0.5.0
Checking downcast-rs v1.2.1
Compiling quote v1.0.37
Compiling syn v2.0.89
Compiling darling_core v0.20.10
Compiling diesel_table_macro_syntax v0.2.0 (/home/lcnr/diesel/diesel_table_macro_syntax)
Compiling darling_macro v0.20.10
Compiling darling v0.20.10
Compiling dsl_auto_type v0.1.0 (/home/lcnr/diesel/dsl_auto_type)
Compiling diesel_derives v2.2.0 (/home/lcnr/diesel/diesel_derives)
Documenting diesel v2.2.4 (/home/lcnr/diesel/diesel)
thread 'rustc' panicked at src/librustdoc/clean/utils.rs:162:5:
assertion `left matches right` failed
left: (Lifetime('query/#0), Type(connection::DefaultLoadingMode))
right: (ty::GenericArgKind::Lifetime(_), ty::GenericArgKind::Lifetime(_)) |
(ty::GenericArgKind::Type(_), ty::GenericArgKind::Type(_)) |
(ty::GenericArgKind::Const(_), ty::GenericArgKind::Const(_))
stack backtrace:
0: 0x7f28b96933e1 - trace
at /home/lcnr/rust3/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5
1: 0x7f28b96933e1 - trace_unsynchronized<std::sys::backtrace::_print_fmt::{closure_env#1}>
at /home/lcnr/rust3/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x7f28b96933e1 - _print_fmt
at /home/lcnr/rust3/library/std/src/sys/backtrace.rs:66:9
3: 0x7f28b96933e1 - fmt
at /home/lcnr/rust3/library/std/src/sys/backtrace.rs:39:26
4: 0x7f28b9739983 - fmt
at /home/lcnr/rust3/library/core/src/fmt/rt.rs:177:76
5: 0x7f28b9739983 - write
at /home/lcnr/rust3/library/core/src/fmt/mod.rs:1189:21
6: 0x7f28b96a5d79 - write_fmt<std::sys::pal::unix::stdio::Stderr>
at /home/lcnr/rust3/library/std/src/io/mod.rs:1887:15
7: 0x7f28b9693283 - print
at /home/lcnr/rust3/library/std/src/sys/backtrace.rs:42:9
8: 0x7f28b96e669e - {closure#1}
9: 0x7f28b96e6473 - default_hook
at /home/lcnr/rust3/library/std/src/panicking.rs:311:9
10: 0x7f28b568bbde - {closure#0}
at /home/lcnr/rust3/compiler/rustc_driver_impl/src/lib.rs:1429:17
11: 0x7f28b568bbde - call<(&(dyn core::ops::function::Fn<(&std::panic::PanicHookInfo), Output=()> + core::marker::Send + core::marker::Sync), &std::panic::PanicHookInfo), rustc_driver_impl::install_ice_hook::{closure_env#0}, alloc::alloc::Global>
at /home/lcnr/rust3/library/alloc/src/boxed.rs:1986:9
12: 0x7f28b96e6e9f - rust_panic_with_hook
at /home/lcnr/rust3/library/std/src/panicking.rs:825:13
13: 0x7f28b969389c - {closure#0}
at /home/lcnr/rust3/library/std/src/panicking.rs:690:13
14: 0x7f28b96935e9 - std::sys::backtrace::__rust_end_short_backtrace::ha8d371277671dcc0
at /home/lcnr/rust3/library/std/src/sys/backtrace.rs:170:18
15: 0x7f28b96e678d - begin_panic_handler
at /home/lcnr/rust3/library/std/src/panicking.rs:681:5
16: 0x7f28b9741f20 - panic_fmt
at /home/lcnr/rust3/library/core/src/panicking.rs:76:14
17: 0x7f28b9742491 - assert_failed_inner
18: 0x618960cd9836 - core[31c131443582fd6c]::panicking::assert_matches_failed::<(rustc_type_ir[c0a93252c9596940]::generic_arg::GenericArgKind<rustc_middle[7864bb30f132c2cd]::ty::context::TyCtxt>, rustc_type_ir[c0a93252c9596940]::generic_arg::GenericArgKind<rustc_middle[7864bb30f132c2cd]::ty::context::TyCtxt>)>
at /home/lcnr/rust3/library/core/src/panicking.rs:393:5
19: 0x618960aa7e85 - can_elide_generic_arg
at /home/lcnr/rust3/src/librustdoc/clean/utils.rs:162:5
20: 0x618960aa7e85 - {closure#0}
at /home/lcnr/rust3/src/librustdoc/clean/utils.rs:120:16
21: 0x618960aa7e85 - call_mut<((usize, &rustc_middle::ty::generic_args::GenericArg)), rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>
at /home/lcnr/rust3/library/core/src/ops/function.rs:294:13
22: 0x618960b29622 - {closure#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>
at /home/lcnr/rust3/library/core/src/iter/traits/iterator.rs:2859:32
23: 0x618960b29622 - {closure#0}<&rustc_middle::ty::generic_args::GenericArg, (), core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>, core::iter::traits::iterator::Iterator::find_map::check::{closure_env#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>>
at /home/lcnr/rust3/library/core/src/iter/adapters/enumerate.rs:179:17
24: 0x618960b29622 - try_rfold<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>, (), core::iter::adapters::enumerate::{impl#2}::try_rfold::enumerate::{closure_env#0}<&rustc_middle::ty::generic_args::GenericArg, (), core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>, core::iter::traits::iterator::Iterator::find_map::check::{closure_env#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>>, core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>>
at /home/lcnr/rust3/library/core/src/iter/traits/double_ended.rs:238:21
25: 0x618960b29622 - try_rfold<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>, (), core::iter::traits::iterator::Iterator::find_map::check::{closure_env#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>, core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>>
at /home/lcnr/rust3/library/core/src/iter/adapters/enumerate.rs:184:9
26: 0x618960b29622 - try_fold<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>, (), core::iter::traits::iterator::Iterator::find_map::check::{closure_env#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>, core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>>
at /home/lcnr/rust3/library/core/src/iter/adapters/rev.rs:57:9
27: 0x618960920982 - find_map<core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>
at /home/lcnr/rust3/library/core/src/iter/traits/iterator.rs:2865:9
28: 0x618960920982 - next<rustdoc::clean::types::GenericArg, core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>
at /home/lcnr/rust3/library/core/src/iter/adapters/filter_map.rs:64:9
29: 0x618960920982 - extend_desugared<rustdoc::clean::types::GenericArg, alloc::alloc::Global, core::iter::adapters::filter_map::FilterMap<core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>>
at /home/lcnr/rust3/library/alloc/src/vec/mod.rs:3520:35
30: 0x618960920982 - spec_extend<rustdoc::clean::types::GenericArg, core::iter::adapters::filter_map::FilterMap<core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>, alloc::alloc::Global>
at /home/lcnr/rust3/library/alloc/src/vec/spec_extend.rs:19:9
31: 0x618960ac2aa1 - extend<rustdoc::clean::types::GenericArg, alloc::alloc::Global, core::iter::adapters::filter_map::FilterMap<core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>>
at /home/lcnr/rust3/library/alloc/src/vec/mod.rs:3481:9
32: 0x618960ac2aa1 - clean_middle_generic_args
at /home/lcnr/rust3/src/librustdoc/clean/utils.rs:148:16
33: 0x618960ac7ef8 - projection_to_path_segment
at /home/lcnr/rust3/src/librustdoc/clean/mod.rs:510:19
34: 0x618960ac7ce9 - clean_projection
at /home/lcnr/rust3/src/librustdoc/clean/mod.rs:486:16
35: 0x618960ae0903 - {closure#0}
at /home/lcnr/rust3/src/librustdoc/clean/mod.rs:2167:13
36: 0x618960ae0903 - clean_middle_ty
at /home/lcnr/rust3/src/librustdoc/clean/mod.rs:2019:1
37: 0x618960aa7d71 - {closure#0}
at /home/lcnr/rust3/src/librustdoc/clean/utils.rs:130:63
38: 0x618960aa7d71 - call_mut<((usize, &rustc_middle::ty::generic_args::GenericArg)), rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>
at /home/lcnr/rust3/library/core/src/ops/function.rs:294:13
39: 0x618960b29622 - {closure#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>
at /home/lcnr/rust3/library/core/src/iter/traits/iterator.rs:2859:32
40: 0x618960b29622 - {closure#0}<&rustc_middle::ty::generic_args::GenericArg, (), core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>, core::iter::traits::iterator::Iterator::find_map::check::{closure_env#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>>
at /home/lcnr/rust3/library/core/src/iter/adapters/enumerate.rs:179:17
41: 0x618960b29622 - try_rfold<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>, (), core::iter::adapters::enumerate::{impl#2}::try_rfold::enumerate::{closure_env#0}<&rustc_middle::ty::generic_args::GenericArg, (), core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>, core::iter::traits::iterator::Iterator::find_map::check::{closure_env#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>>, core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>>
at /home/lcnr/rust3/library/core/src/iter/traits/double_ended.rs:238:21
42: 0x618960b29622 - try_rfold<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>, (), core::iter::traits::iterator::Iterator::find_map::check::{closure_env#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>, core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>>
at /home/lcnr/rust3/library/core/src/iter/adapters/enumerate.rs:184:9
43: 0x618960b29622 - try_fold<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>, (), core::iter::traits::iterator::Iterator::find_map::check::{closure_env#0}<(usize, &rustc_middle::ty::generic_args::GenericArg), rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>, core::ops::control_flow::ControlFlow<rustdoc::clean::types::GenericArg, ()>>
at /home/lcnr/rust3/library/core/src/iter/adapters/rev.rs:57:9
44: 0x618960920a3c - find_map<core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::types::GenericArg, &mut rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>
at /home/lcnr/rust3/library/core/src/iter/traits/iterator.rs:2865:9
45: 0x618960920a3c - next<rustdoc::clean::types::GenericArg, core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>
at /home/lcnr/rust3/library/core/src/iter/adapters/filter_map.rs:64:9
46: 0x618960920a3c - extend_desugared<rustdoc::clean::types::GenericArg, alloc::alloc::Global, core::iter::adapters::filter_map::FilterMap<core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>>
at /home/lcnr/rust3/library/alloc/src/vec/mod.rs:3520:35
47: 0x618960920a3c - spec_extend<rustdoc::clean::types::GenericArg, core::iter::adapters::filter_map::FilterMap<core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>, alloc::alloc::Global>
at /home/lcnr/rust3/library/alloc/src/vec/spec_extend.rs:19:9
48: 0x618960ac2aa1 - extend<rustdoc::clean::types::GenericArg, alloc::alloc::Global, core::iter::adapters::filter_map::FilterMap<core::iter::adapters::rev::Rev<core::iter::adapters::enumerate::Enumerate<core::slice::iter::Iter<rustc_middle::ty::generic_args::GenericArg>>>, rustdoc::clean::utils::clean_middle_generic_args::{closure_env#0}>>
at /home/lcnr/rust3/library/alloc/src/vec/mod.rs:3481:9
49: 0x618960ac2aa1 - clean_middle_generic_args
at /home/lcnr/rust3/src/librustdoc/clean/utils.rs:148:16
50: 0x618960ac2ee6 - clean_middle_generic_args_with_constraints
at /home/lcnr/rust3/src/librustdoc/clean/utils.rs:223:16
51: 0x618960ac2ee6 - clean_middle_path
at /home/lcnr/rust3/src/librustdoc/clean/utils.rs:241:19
52: 0x618960ae05fb - {closure#0}
at /home/lcnr/rust3/src/librustdoc/clean/mod.rs:2079:24
53: 0x618960ae05fb - clean_middle_ty
at /home/lcnr/rust3/src/librustdoc/clean/mod.rs:2019:1
54: 0x618960acd75f - {closure#0}
at /home/lcnr/rust3/src/librustdoc/clean/mod.rs:1287:21
55: 0x618960acd75f - with_param_env<rustdoc::clean::types::Item, rustdoc::clean::clean_impl_item::{closure_env#0}>
at /home/lcnr/rust3/src/librustdoc/core.rs:86:19
56: 0x618960acd75f - clean_impl_item
at /home/lcnr/rust3/src/librustdoc/clean/mod.rs:1271:5
57: 0x61896092ff71 - {closure#4}
at /home/lcnr/rust3/src/librustdoc/clean/inline.rs:519:29
58: 0x61896092ff71 - call_once<(&rustc_hir::hir::ImplItem), rustdoc::clean::inline::build_impl::{closure_env#4}>
at /home/lcnr/rust3/library/core/src/ops/function.rs:305:13
59: 0x61896092ff71 - map<&rustc_hir::hir::ImplItem, rustdoc::clean::types::Item, &mut rustdoc::clean::inline::build_impl::{closure_env#4}>
at /home/lcnr/rust3/library/core/src/option.rs:1113:29
60: 0x61896092ff71 - next<rustdoc::clean::types::Item, core::iter::adapters::filter::Filter<core::iter::adapters::map::Map<core::slice::iter::Iter<rustc_hir::hir::ImplItemRef>, rustdoc::clean::inline::build_impl::{closure_env#2}>, rustdoc::clean::inline::build_impl::{closure_env#3}>, rustdoc::clean::inline::build_impl::{closure_env#4}>
at /home/lcnr/rust3/library/core/src/iter/adapters/map.rs:107:26
61: 0x61896092ff71 - from_iter<rustdoc::clean::types::Item, core::iter::adapters::map::Map<core::iter::adapters::filter::Filter<core::iter::adapters::map::Map<core::slice::iter::Iter<rustc_hir::hir::ImplItemRef>, rustdoc::clean::inline::build_impl::{closure_env#2}>, rustdoc::clean::inline::build_impl::{closure_env#3}>, rustdoc::clean::inline::build_impl::{closure_env#4}>>
at /home/lcnr/rust3/library/alloc/src/vec/spec_from_iter_nested.rs:25:32
62: 0x61896092ff71 - from_iter<rustdoc::clean::types::Item, core::iter::adapters::map::Map<core::iter::adapters::filter::Filter<core::iter::adapters::map::Map<core::slice::iter::Iter<rustc_hir::hir::ImplItemRef>, rustdoc::clean::inline::build_impl::{closure_env#2}>, rustdoc::clean::inline::build_impl::{closure_env#3}>, rustdoc::clean::inline::build_impl::{closure_env#4}>>
at /home/lcnr/rust3/library/alloc/src/vec/spec_from_iter.rs:34:9
63: 0x618960abfd2f - from_iter<rustdoc::clean::types::Item, core::iter::adapters::map::Map<core::iter::adapters::filter::Filter<core::iter::adapters::map::Map<core::slice::iter::Iter<rustc_hir::hir::ImplItemRef>, rustdoc::clean::inline::build_impl::{closure_env#2}>, rustdoc::clean::inline::build_impl::{closure_env#3}>, rustdoc::clean::inline::build_impl::{closure_env#4}>>
at /home/lcnr/rust3/library/alloc/src/vec/mod.rs:3412:9
64: 0x618960abfd2f - collect<core::iter::adapters::map::Map<core::iter::adapters::filter::Filter<core::iter::adapters::map::Map<core::slice::iter::Iter<rustc_hir::hir::ImplItemRef>, rustdoc::clean::inline::build_impl::{closure_env#2}>, rustdoc::clean::inline::build_impl::{closure_env#3}>, rustdoc::clean::inline::build_impl::{closure_env#4}>, alloc::vec::Vec<rustdoc::clean::types::Item, alloc::alloc::Global>>
at /home/lcnr/rust3/library/core/src/iter/traits/iterator.rs:1971:9
65: 0x618960abfd2f - build_impl
at /home/lcnr/rust3/src/librustdoc/clean/inline.rs:520:18
66: 0x618960ad66d3 - {closure#5}
at /home/lcnr/rust3/src/librustdoc/passes/collect_trait_impls.rs:82:17
67: 0x618960ad66d3 - with_param_env<(), rustdoc::passes::collect_trait_impls::collect_trait_impls::{closure_env#5}>
at /home/lcnr/rust3/src/librustdoc/core.rs:86:19
68: 0x6189608fc061 - collect_trait_impls
at /home/lcnr/rust3/src/librustdoc/passes/collect_trait_impls.rs:81:13
69: 0x618960d1e4a7 - {closure#6}
at /home/lcnr/rust3/src/librustdoc/core.rs:430:55
70: 0x618960d1e4a7 - run<rustdoc::clean::types::Crate, rustdoc::core::run_global_ctxt::{closure_env#6}>
at /home/lcnr/rust3/compiler/rustc_data_structures/src/profiling.rs:753:9
71: 0x618960d1e4a7 - time<rustdoc::clean::types::Crate, rustdoc::core::run_global_ctxt::{closure_env#6}>
at /home/lcnr/rust3/compiler/rustc_session/src/utils.rs:16:9
72: 0x618960ad9f3a - run_global_ctxt
at /home/lcnr/rust3/src/librustdoc/core.rs:430:25
73: 0x618960d1d809 - {closure#0}
at /home/lcnr/rust3/src/librustdoc/lib.rs:867:21
74: 0x618960d1d809 - run<core::result::Result<(rustdoc::clean::types::Crate, rustdoc::config::RenderOptions, rustdoc::formats::cache::Cache), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure#2}::{closure#0}::{closure#0}::{closure_env#0}>
at /home/lcnr/rust3/compiler/rustc_data_structures/src/profiling.rs:753:9
75: 0x618960d1d809 - time<core::result::Result<(rustdoc::clean::types::Crate, rustdoc::config::RenderOptions, rustdoc::formats::cache::Cache), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure#2}::{closure#0}::{closure#0}::{closure_env#0}>
at /home/lcnr/rust3/compiler/rustc_session/src/utils.rs:16:9
76: 0x618960a33361 - {closure#0}
at /home/lcnr/rust3/src/librustdoc/lib.rs:866:55
77: 0x618960a33361 - {closure#1}<rustdoc::main_args::{closure#2}::{closure#0}::{closure_env#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/compiler/rustc_middle/src/ty/context.rs:1371:37
78: 0x618960a33361 - {closure#0}<rustc_middle::ty::context::{impl#20}::enter::{closure_env#1}<rustdoc::main_args::{closure#2}::{closure#0}::{closure_env#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/compiler/rustc_middle/src/ty/context/tls.rs:72:9
79: 0x618960a33361 - try_with<core::cell::Cell<*const ()>, rustc_middle::ty::context::tls::enter_context::{closure_env#0}<rustc_middle::ty::context::{impl#20}::enter::{closure_env#1}<rustdoc::main_args::{closure#2}::{closure#0}::{closure_env#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/library/std/src/thread/local.rs:309:12
80: 0x618960a33361 - with<core::cell::Cell<*const ()>, rustc_middle::ty::context::tls::enter_context::{closure_env#0}<rustc_middle::ty::context::{impl#20}::enter::{closure_env#1}<rustdoc::main_args::{closure#2}::{closure#0}::{closure_env#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/library/std/src/thread/local.rs:273:9
81: 0x618960a33361 - enter_context<rustc_middle::ty::context::{impl#20}::enter::{closure_env#1}<rustdoc::main_args::{closure#2}::{closure#0}::{closure_env#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/compiler/rustc_middle/src/ty/context/tls.rs:69:9
82: 0x618960a33361 - enter<rustdoc::main_args::{closure#2}::{closure#0}::{closure_env#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/compiler/rustc_middle/src/ty/context.rs:1371:9
83: 0x618960a33361 - enter<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure#2}::{closure#0}::{closure_env#0}>
at /home/lcnr/rust3/compiler/rustc_interface/src/queries.rs:64:9
84: 0x618960a33361 - {closure#0}
at /home/lcnr/rust3/src/librustdoc/lib.rs:865:13
85: 0x618960a33361 - enter<rustdoc::main_args::{closure#2}::{closure_env#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/compiler/rustc_interface/src/queries.rs:211:19
86: 0x618960ca964b - {closure#2}
at /home/lcnr/rust3/src/librustdoc/lib.rs:859:9
87: 0x618960ca964b - {closure#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>
at /home/lcnr/rust3/compiler/rustc_interface/src/interface.rs:505:27
88: 0x618960c9a785 - {closure#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/compiler/rustc_interface/src/util.rs:143:13
89: 0x618960c9a785 - {closure#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/compiler/rustc_interface/src/util.rs:106:21
90: 0x618960c9a785 - set<rustc_span::SessionGlobals, rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/.cargo/registry/src/index.crates.io-6f17d22bba15001f/scoped-tls-1.0.1/src/lib.rs:137:9
91: 0x618960c9a785 - create_session_globals_then<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>>
at /home/lcnr/rust3/compiler/rustc_span/src/lib.rs:138:5
92: 0x618960c9a785 - {closure#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/compiler/rustc_interface/src/util.rs:105:17
93: 0x618960c9a785 - __rust_begin_short_backtrace<rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/library/std/src/sys/backtrace.rs:154:18
94: 0x618960cac405 - {closure#0}<rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/library/std/src/thread/mod.rs:561:17
95: 0x618960cac405 - call_once<core::result::Result<(), rustc_span::ErrorGuaranteed>, std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>>
at /home/lcnr/rust3/library/core/src/panic/unwind_safe.rs:272:9
96: 0x618960cac405 - do_call<core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/library/std/src/panicking.rs:573:40
97: 0x618960cac405 - try<core::result::Result<(), rustc_span::ErrorGuaranteed>, core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>>>
at /home/lcnr/rust3/library/std/src/panicking.rs:536:19
98: 0x618960cac405 - catch_unwind<core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/library/std/src/panic.rs:358:14
99: 0x618960cac405 - {closure#1}<rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>
at /home/lcnr/rust3/library/std/src/thread/mod.rs:559:30
100: 0x618960cac405 - call_once<std::thread::{impl#0}::spawn_unchecked_::{closure_env#1}<rustc_interface::util::run_in_thread_with_globals::{closure#0}::{closure_env#0}<rustc_interface::util::run_in_thread_pool_with_globals::{closure_env#0}<rustc_interface::interface::run_compiler::{closure_env#1}<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustdoc::main_args::{closure_env#2}>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, core::result::Result<(), rustc_span::ErrorGuaranteed>>, ()>
at /home/lcnr/rust3/library/core/src/ops/function.rs:250:5
101: 0x7f28b96a3a4d - call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>
at /home/lcnr/rust3/library/alloc/src/boxed.rs:1972:9
102: 0x7f28b96a3a4d - call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global>
at /home/lcnr/rust3/library/alloc/src/boxed.rs:1972:9
103: 0x7f28b969691f - thread_start
at /home/lcnr/rust3/library/std/src/sys/pal/unix/thread.rs:105:17
104: 0x7f28b349ca94 - start_thread
at ./nptl/pthread_create.c:447:8
105: 0x7f28b3529c3c - clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
106: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-rustdoc&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/lcnr/diesel/rustc-ice-2024-11-29T15_58_56-64760.txt` to your bug report
note: compiler flags: --crate-type lib
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
error: could not document `diesel`
```
</details> | T-rustdoc,C-bug | low | Critical |
2,705,549,398 | deno | Clash between `RequestInit` deno global and `@types/node` global | ```ts
/// <reference types="npm:@types/node" />
export const example = (
rawFetch: typeof globalThis.fetch,
timeout = 123,
) => {
return async (...args: Parameters<typeof globalThis.fetch>) => {
const { signal: userSignal } = args[1] ?? {};
}
}
```
```shellsession
> deno check deno.test.ts
Check file:///V:/scratch/deno.test.ts
error: TS2339 [ERROR]: Property 'signal' does not exist on type 'RequestInit'.
const { signal: userSignal } = args[1] ?? {};
~~~~~~
at file:///V:/scratch/deno.test.ts:7:13
```
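In the meantime, a workaround that seems to sidestep the clash (untested against the Deno type checker, so treat it as an assumption on my part) is to destructure through an explicit structural type instead of relying on whichever global `RequestInit` wins:

```typescript
// Hypothetical workaround: cast the init argument to a minimal structural
// type, so it no longer matters whether Deno's or @types/node's global
// `RequestInit` is picked up by inference.
export const example = (
  rawFetch: typeof globalThis.fetch,
  timeout = 123,
) => {
  return async (...args: Parameters<typeof globalThis.fetch>) => {
    const { signal: userSignal } = (args[1] ?? {}) as {
      signal?: AbortSignal | null;
    };
    return { userSignal, timeout, rawFetch };
  };
};
```

This only changes how the init object is typed locally; the underlying declaration clash between the two globals remains.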
Strange. | bug,tsc | low | Critical |
2,705,567,542 | rust | Rust release signing key uses SHA1 self-/binding signatures | Hi!
Debian is currently evaluating switching apt/gpgv over to a gpgv implementation backed by Sequoia instead of gnupg. While test-driving that on my machine, while working on preparing the rustc 1.83 update, which entails downloading and verifying the upstream rustc-src tarball, I noticed that the rustc release signing key fails the default policy of Sequoia, which doesn't accept current signatures derived directly or transitively from SHA-1.
While it's easy enough to work around locally, I thought I'd report it anyway in case you want to update those signatures to a better cryptographic hash algorithm.
Here's the output of `sq cert lint <public key>` (there also is a `--fix` parameter, which should regenerate those signatures with better algorithms, but that of course requires access to the corresponding private key material which I lack):
```
$ sq cert lint --cert-file ./rust/debian/upstream/*.asc
Certificate 85AB96E6FA1BE5FE is not valid under the standard policy: No binding signature at time
2024-11-29T16:07:22Z
Certificate 85AB96E6FA1BE5FE contains a User ID ("Rust Language (Tag and Release Signing Key) <rust-
[email protected]>") protected by SHA-1
Certificate 85AB96E6FA1BE5FE, key 5CB4A9347B3B09DC uses a SHA-1-protected binding signature.
Certificate 85AB96E6FA1BE5FE, key 8E9AA3F7AB3F5826 uses a SHA-1-protected binding signature.
Examined 1 certificate.
0 certificates are invalid and were not linted. (GOOD)
1 certificate was linted.
1 of the 1 certificates (100%) has at least one issue. (BAD)
0 of the linted certificates were revoked.
0 of the 0 certificates has revocation certificates that are weaker than the certificate and should be recreated. (GOOD)
0 of the linted certificates were expired.
1 of the non-revoked linted certificate has at least one non-revoked User ID:
1 has at least one User ID protected by SHA-1. (BAD)
1 has all User IDs protected by SHA-1. (BAD)
1 of the non-revoked linted certificates has at least one non-revoked, live subkey:
1 has at least one non-revoked, live subkey with a binding signature that uses SHA-1. (BAD)
0 of the non-revoked linted certificates have at least one non-revoked, live, signing-capable
subkey:
0 certificates have at least one non-revoked, live, signing-capable subkey with a strong binding signature, but a backsig that uses SHA-1. (GOOD)
```
Note the lines ending with `(BAD)`: they all point to the same underlying issue, the subkeys being bound via SHA-1 signatures.
apologies if this is the wrong channel to report such issues, feel free to forward it or redirect me to a more appropriate venue! | T-infra,T-release,C-discussion | low | Major |
2,705,575,862 | create-react-app | Use Yarn Create React-APP TS-Demo-Template TypeScript error | Have you ever encountered this situation? Use Yarn Create React-APP XXX-Template TypeScript to create a project when downloading the dependence. No Longer Support. Please see https://eslint.org/version-support for other options. What happened, Node changed several versions, but the error still appeared
[model-snap.zip](https://github.com/user-attachments/files/17961417/model-snap.zip)
| needs triage | low | Critical |
2,705,576,555 | excalidraw | embed dropbox video link inside the excalidraw online | I've been struggling to feature for my organisation needs for hrs and excalidraw might be the one.
I need a way to organize and visualize links (dropbox/youtube), I don't want to download and upload it I just want to manipulate the link.
Surprisingly is proving very difficult to have a database style (notion/airtable) sofwtware which one can add a link and it creates a rendition of that where you can interect with it. Only thing I found closer to this is excalidraw local.
I use notion for organising my projects/tasks and I don't seem to be able to interact with a excalidraw local document inside notion, tried pasting the obsidian URL link from within the local obsidian document.
By using excalidraw online I can preview it inside notion. Issue is when I try and add a dropbox web embed I get an error: "embedding this URL is currently not allowed, raise an issue inside github to get the link whitelisted'. so here I am.
| whitelist | low | Critical |
2,705,576,607 | vscode | Memory leak | I'm getting this in dev console. Based on git-blame at contentHoverStatusBar.ts:48, assigning to you @aiday-mar
Not sure how to repro, but should be traceable by code inspection.
```
Error: Trying to add a disposable to a DisposableStore that has already been disposed of. The added object will be leaked!
at rhi.add (lifecycle.ts:425:18)
at qse.B (lifecycle.ts:497:22)
at qse.addAction (contentHoverStatusBar.ts:48:8)
at markerHoverParticipant.ts:239:23
```
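For reference, the failure mode behind that warning can be sketched with a minimal stand-in (this is illustrative code, not the actual VS Code `DisposableStore`): anything added after `dispose()` is simply leaked, so `addAction` presumably needs to guard on disposal, or the status bar needs to be re-created with a fresh store.

```typescript
interface IDisposable {
  dispose(): void;
}

// Minimal stand-in for VS Code's DisposableStore, for illustration only.
class MiniDisposableStore implements IDisposable {
  private disposed = false;
  private readonly items: IDisposable[] = [];
  // Tracks what the real implementation would merely warn about.
  readonly leaked: IDisposable[] = [];

  add<T extends IDisposable>(item: T): T {
    if (this.disposed) {
      // Mirrors "Trying to add a disposable to a DisposableStore that has
      // already been disposed of. The added object will be leaked!"
      this.leaked.push(item);
      return item;
    }
    this.items.push(item);
    return item;
  }

  dispose(): void {
    this.disposed = true;
    for (const item of this.items.splice(0)) {
      item.dispose();
    }
  }
}

// The hover status bar outliving its store reproduces the leak: an action
// added after dispose() is never cleaned up.
export function reproduceLeak(): number {
  const store = new MiniDisposableStore();
  store.dispose();
  store.add({ dispose: () => {} }); // too late: this disposable leaks
  return store.leaked.length;
}
```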

Version: 1.96.0-insider
Commit: 275faf6f08b7aa50843f3c18406b4d5969784e52
Date: 2024-11-28T18:04:55.375Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0
| bug,editor-hover | low | Critical |
2,705,577,346 | angular | NG04008 Documentation Missing | ### Describe the problem that you experienced
We have updated our Ionic Angular Application to 18.2.12 and since then we get a lot of Sentry Errors with just `Error: NG04008` and no further description.
Please add more information about this to [https://angular.dev/errors](https://angular.dev/errors).
I just found a similar topic on [Stack Overflow](https://stackoverflow.com/questions/71122841/error-error-the-requested-path-contains-undefined-segment-at-index-1), but I even get the errors with no child path appended.
### Enter the URL of the topic with the problem
https://angular.dev/errors
### Describe what you were looking for in the documentation
Any information about NG04008.
### Describe the actions that led you to experience the problem
Got Sentry Error in our Application and never experienced it in testing.
| help wanted,good first issue,area: docs | low | Critical |
2,705,599,852 | deno | deno doc --html : do not use the '~' character in the file hierarchy as generated by deno doc | When generating the documentation, deno doc produces subdirectories (for individual symbols) with the name `~`, and then uses URLs of the form `/~/` in the HTML files. This creates all kinds of problems:
- Manipulating the document hierarchy from a terminal is awkward, because the `cd ~` command leads to the home directory (I know, `cd \~` can be used, but it is a pain)
- When generating documentation and pushing it to GitHub, github.io does not find the files. I believe it expects URLs using the URL encoding of the `~` character, and it looks as if the `~` directory is not interpreted by GitHub Pages either.
I do not see any advantages to using that character, only disadvantages...
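As a local stopgap, the generated output can be post-processed: rename the `~` directories and rewrite the links. A sketch of the link-rewriting half (the replacement name `-` is my own choice, not something deno doc supports; the same rename has to be applied to the directories on disk):

```typescript
// Sketch of the link-rewriting half of a post-processing step: swap the
// `/~/` path component in the generated HTML for a friendlier name. The
// default replacement "-" is arbitrary; the directories on disk must be
// renamed to match for the links to keep working.
export function rewriteTildeLinks(html: string, replacement = "-"): string {
  return html.split("/~/").join(`/${replacement}/`);
}
```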
Thanks. | suggestion | low | Minor |
2,705,605,512 | deno | deno doc --html : incorporate the content of the readme.md file into the documentation (if available) | The issue title says it all. It is always a good idea to write a GitHub readme.md file as some sort of introduction to a package, but it should also be used as part of the documentation on the front page. Converting that file into the HTML index page is very helpful.
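A rough sketch of what I have in mind, with a deliberately naive Markdown-to-HTML step standing in for whatever renderer deno doc would actually use (all names here are made up for illustration):

```typescript
// Naive illustration of folding README.md into the docs front page. The
// markdown handling below is a placeholder for a real renderer, and the
// function name is invented for this sketch.
export function readmeToIndexHtml(readme: string, title = "Package docs"): string {
  const body = readme
    .split(/\n{2,}/)
    .map((block) =>
      block.startsWith("# ") ? `<h1>${block.slice(2)}</h1>` : `<p>${block}</p>`,
    )
    .join("\n");
  return `<!doctype html><html><head><title>${title}</title></head><body>${body}</body></html>`;
}
```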
(I am obviously influenced by the fact that typedoc does exactly that, and I found it a really useful feature when writing documentation.) | suggestion | low | Minor |
2,705,692,987 | rust | Tracking issue for release notes of #86319: Tracking Issue for Path::file_prefix |
This issue tracks the release notes text for #86319.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Tracking Issue for Path::file_prefix](https://github.com/rust-lang/rust/issues/86319)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @mbhall88 -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,relnotes-tracking-issue | low | Minor |
2,705,729,477 | deno | run::spawn_kill_permissions is flaky | ```
---- run::spawn_kill_permissions stdout ----
command /Users/runner/work/deno/deno/target/debug/deno run --quiet --allow-run=cat spawn_kill_permissions.ts
command cwd /Users/runner/work/deno/deno/tests/testdata
OUTPUT
error: Uncaught (in promise) TypeError: Child process has already terminated.
child.kill("SIGTERM");
^
at ChildProcess.kill (ext:runtime/40_process.js:357:5)
at file:///Users/runner/work/deno/deno/tests/testdata/spawn_kill_permissions.ts:6:7
OUTPUT
thread 'run::spawn_kill_permissions' panicked at tests/integration/run_tests.rs:1990:1:
bad exit code, expected: 0, actual: 1
stack backtrace:
0: 0x1068f9366 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hcaf66bc4c0c453df
1: 0x10691ddcb - core::fmt::write::hc9c5f1836b413410
2: 0x1068f50a2 - std::io::Write::write_fmt::h5ce5bc7686de8484
3: 0x1068faba8 - std::panicking::default_hook::{{closure}}::h52c0b2f44f6107c5
4: 0x1068fa728 - std::panicking::default_hook::h5a6cf31501c161b2
5: 0x10482903b - test::test_main::{{closure}}::h3d665f637d89df05
6: 0x1068fb910 - std::panicking::rust_panic_with_hook::hda4640ee332466e9
7: 0x1068fb195 - std::panicking::begin_panic_handler::{{closure}}::haa3060694b34ea3d
8: 0x1068f9849 - std::sys::backtrace::__rust_end_short_backtrace::h8eb44913cfe71457
9: 0x1068faddc - _rust_begin_unwind
10: 0x10694fb5a - core::panicking::panic_fmt::h31edc3d6ff0aadca
11: 0x104a3496a - test_server::builders::TestCommandOutput::assert_exit_code::hf6962d833f7ce52f
12: 0x10456b003 - integration_tests::run::spawn_kill_permissions::h10458c72b0e86c20
13: 0x10456a9b5 - integration_tests::run::spawn_kill_permissions::{{closure}}::h4f2c7ba775890cd1
14: 0x1044174f8 - core::ops::function::FnOnce::call_once::h8048a78777d92048
15: 0x10482d60b - test::__rust_begin_short_backtrace::h93e921f893a90bb1
16: 0x10482ce71 - test::run_test::{{closure}}::h42576c3f690468a6
17: 0x1047f1b1d - std::sys::backtrace::__rust_begin_short_backtrace::hb534ca9204c87c90
18: 0x1047f5322 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h95ab9f9938de870a
19: 0x106900e8b - std::sys::pal::unix::thread::Thread::new::thread_start::h55ff15b5f2276bcd
20: 0x7ff80ce3f1d3 - __pthread_start
failures:
run::spawn_kill_permissions
```
https://github.com/denoland/deno/actions/runs/12088287790/job/33711490397?pr=27151 | flaky | low | Critical |
2,705,731,593 | rust | Tracking issue for release notes of #133089: Stabilize noop_waker |
This issue tracks the release notes text for #133089.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [x] Issue is nominated for release team review of clarity for wider audience.
- [x] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Stabilized APIs
- [`std::task::Waker::noop`](https://doc.rust-lang.org/stable/std/task/struct.Waker.html#method.noop)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @eholk, @compiler-errors -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,relnotes-tracking-issue | low | Critical |
2,705,866,813 | rust | Tracking issue for release notes of #129322: Tracking Issue for `ptr::fn_addr_eq` |
This issue tracks the release notes text for #129322.
### Steps
- [X] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Stabilized APIs
- [`ptr::fn_addr_eq`](https://doc.rust-lang.org/std/ptr/fn.fn_addr_eq.html)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @Urgau -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,relnotes-tracking-issue | low | Minor |
2,705,928,033 | rust | Missed optimizations: Err returns should be out of line, loop not recognized as iterating at least once | As of today (2024-11-29), neither the release nor the nightly compiler
does an ideal job of optimizing this string validation function.
```rust
pub enum IdError {
IdEmpty,
IdTooLong,
IdNotAscii,
}
#[inline(never)]
pub fn check_id(value: &str, limit: usize) -> Result<(), IdError> {
if value.is_empty() {
return Err(IdError::IdEmpty);
}
if value.len() > limit - 1 {
return Err(IdError::IdTooLong);
}
if value.as_bytes().iter().any(|c| !(1u8..=127u8).contains(c)) {
return Err(IdError::IdNotAscii)
}
Ok(())
}
```
There are two missed optimizations. For code like this, the compiler
would ideally recognize that conditional `Err` returns are unlikely,
and make the fall-through path be the one that (eventually) returns
`Ok`. Nightly gets this right for the first two checks but not the
third. Second, the compiler does not notice that, if the first two
checks succeed, then the loop in the third check must iterate at least
once. In addition to the redundant test, this might be inhibiting
vectorization of the loop.
The dec/cmp/jb sequence used for the second test can also be
micro-optimized to cmp/jbe but that’s extremely minor.
[playground link](https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=7b445753298ffbd4f08469ea576172b1)
<details><summary>assembly generated by 1.85.0-nightly (2024-11-28 a2545fd6fc66b4323f55)</summary>
```asm
playground::check_id:
testq %rsi, %rsi
je .LBB0_1
decq %rdx
movb $1, %al ; 1 = Err(IdTooLong)
cmpq %rsi, %rdx
jb .LBB0_8
xorl %eax, %eax
.LBB0_4:
cmpq %rax, %rsi
je .LBB0_5
cmpb $0, (%rdi,%rax)
leaq 1(%rax), %rax
jg .LBB0_4
movb $2, %al ; 2 = Err(IdNotAscii)
.LBB0_8:
retq
.LBB0_1:
xorl %eax, %eax ; 0 = Err(IdEmpty)
retq
.LBB0_5:
movb $3, %al ; 3 = Ok(())
retq
```
</details>
<details><summary>hand-optimized desired assembly (no vectorization)</summary>
```asm
playground::check_id:
xor %eax, %eax ; 0 = Err(IdEmpty); recycled as index register
test %rsi, %rsi
je .L3
cmp %rsi, %rdx
jbe .L2
.L1:
cmpb $0, (%rdi,%rax)
jng .L4
inc %rax
cmp %rax, %rsi
jb .L1
mov $3, %al ; 3 = Ok(())
ret
.L2:
mov $1, %al ; 1 = Err(IdTooLong)
.L3:
ret
.L4:
mov $2, %al ; 2 = Err(IdNotAscii)
ret
```
</details> | T-compiler,C-optimization | low | Critical |
2,705,940,464 | deno | WebSocket memory leaks | Version: Deno 2.1.2
https://github.com/denoland/deno/blob/8626ec7c25e25c628eed9fdd517c0ec20b01d0a6/ext/websocket/01_websocket.js#L431
This line will keep waiting even if the connection is closed locally.
Running the following code may reproduce the issue intermittently.
```ts
Deno.test("ws leak", async () => {
const ws = new WebSocket(`validTarget`);
await new Promise((resolve, reject) => {
ws.addEventListener("open", resolve);
});
ws.close();
});
```
Running `deno test` sometimes returns the following error:
```
error: Leaks detected:
- "serverWebSocket" was created during the test, but not cleaned up during the test. Close the resource before the end of the test.
- An async operation to receive the next message on a WebSocket was started in this test, but never completed. This is often caused by not closing a `WebSocket` or `WebSocketStream`. The operation was started here:
at op_ws_next_event (ext:core/00_infra.js:250:13)
at WebSocket.[[[eventLoop]]] (ext:deno_websocket/01_websocket.js:431:26)
at eventLoopTick (ext:core/01_core.js:175:7)
``` | needs investigation,ext/websocket | low | Critical |
2,705,960,303 | next.js | Navigating out of "Not Found" page gives error if Parallel routing exists | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/gallant-kalam-2xqmnr?workspaceId=08193210-3c43-4dee-852a-15fcfd6a6e98
### To Reproduce
Fork the sandbox to see error in terminal.
If you have parallel routes defined somewhere in the project and put them inside a layout, even though the redirect succeeds, Next.js throws an error when the not-found page contains a link.
I have tried my best to log what's undefined at index 0; Next.js gives this:
{ children: [ '/_not-found', { children: [Array] } ] } children
{ children: [ '/_not-found', { children: [Array] } ] } testRoute
It tries to reach the parallel route slot name but only has children.
In order to replicate the issue:
1 - Have a parallel route and an intercepting route inside it
2 - Put a ```<Link>``` inside the not-found page.
3 - Go to the not-found page (force Next.js to render not found by visiting any route that doesn't exist)
4 - Click on the link.
```
⨯ next/dist/src/server/app-render/walk-tree-with-flight-router-state.tsx (238:50) @ walkTreeWithFlightRouterState
⨯ TypeError: Cannot read properties of undefined (reading '0')
at async doRender (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/base-server.ts:2660:21)
at async responseGenerator (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/base-server.ts:3021:21)
at async DevServer.renderToResponseWithComponentsImpl (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/base-server.ts:3033:23)
at async DevServer.renderPageComponent (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/base-server.ts:3613:15)
at async DevServer.renderToResponseImpl (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/base-server.ts:3675:23)
at async DevServer.pipeImpl (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/base-server.ts:1698:20)
at async NextNodeServer.handleCatchallRenderRequest (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/next-server.ts:1046:6)
at async DevServer.handleRequestImpl (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/base-server.ts:1463:8)
at async (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/dev/next-dev-server.ts:512:13)
at async Span.traceAsyncFn (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/trace/trace.ts:143:13)
at async DevServer.handleRequest (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/dev/next-dev-server.ts:510:19)
at async invokeRender (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/lib/router-server.ts:284:10)
at async handleRequest (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/lib/router-server.ts:530:15)
at async requestHandlerImpl (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/lib/router-server.ts:576:6)
at async Server.requestListener (node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/src/server/lib/start-server.ts:152:6) {
page: '/'
}
236 | subPath[0] === DEFAULT_SEGMENT_KEY &&
237 | flightRouterState &&
> 238 | !!flightRouterState[1][parallelRouteKey][0] &&
| ^
239 | flightRouterState[1][parallelRouteKey][3] !== 'refetch'
240 | ) {
241 | continue
```
### Current vs. Expected behavior
Current:
Throws an error.
Expected:
Should not throw an error
### Provide environment information
```bash
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.4-canary.31 // Latest available version is detected (15.0.4-canary.31).
eslint-config-next: N/A
react: 19.0.0-rc-b01722d5-20241114
react-dom: 19.0.0-rc-b01722d5-20241114
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation, Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Other (Deployed)
### Additional context
We are currently using ^14 in production and the issue exists there as well. My sandbox (attached to the issue) is on ^15 canary.31 and still has this issue.

| bug,Navigation,Parallel & Intercepting Routes | low | Critical |
2,706,048,361 | rust | Tracking issue for release notes of #131784: Stabilize unsigned and float variants of `num_midpoint` feature |
This issue tracks the release notes text for #131784.
### Steps
- [X] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Stabilized APIs
- [`{float}::midpoint`](https://doc.rust-lang.org/core/primitive.f32.html#method.midpoint)
- [Unsigned `{integer}::midpoint`](https://doc.rust-lang.org/std/primitive.u64.html#method.midpoint)
- [`NonZeroU*::midpoint`](https://doc.rust-lang.org/std/num/type.NonZeroU32.html#method.midpoint)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @Urgau, @dtolnay -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,relnotes-tracking-issue | low | Minor |
2,706,151,614 | yt-dlp | Hoichoi does not work anymore, not even for the free content | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
I have checked previously reported issues. It used to work, but it does not anymore. I have tried the cookies procedure with different browsers, by placing the cookies file in the same directory, and by using the plugins.
The output I receive is:
`C:\Tools\yt-dlp>yt-dlp --cookies-from-browser firefox https://www.hoichoi.tv/movies/dracula-sir-2020
[viewlift] Extracting URL: https://www.hoichoi.tv/movies/dracula-sir-2020
Extracting cookies from firefox
Extracted 1264 cookies from firefox
ERROR: [viewlift] movies/dracula-sir-2020: Cookies (not necessarily logged in) are needed to download from this website. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies`
I was able to download this movie with a different tool.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--cookies-from-browser', 'firefox', 'https://www.hoichoi.tv/movies/dracula-sir-2020']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg N-115832-g6225ad5c19-20240614 (setts)
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\Jacob\AppData\Roaming\Mozilla\Firefox\Profiles\5d34vvzy.default-release\cookies.sqlite"
Extracted 1264 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Plugin directories: ['C:\\Tools\\yt-dlp\\yt-dlp-plugins\\yt-dlp-ChromeCookieUnlock\\yt_dlp_plugins']
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[viewlift] Extracting URL: https://www.hoichoi.tv/movies/dracula-sir-2020
ERROR: [viewlift] movies/dracula-sir-2020: Cookies (not necessarily logged in) are needed to download from this website. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\viewlift.py", line 344, in _real_extract
File "yt_dlp\extractor\viewlift.py", line 46, in _call_api
File "yt_dlp\extractor\viewlift.py", line 43, in _fetch_token
File "yt_dlp\extractor\common.py", line 1258, in raise_login_required
```
| site-bug,triage | low | Critical |
2,706,166,891 | kubernetes | Failure cluster [796ac905...] `[sig-node] [NodeFeature:SidecarContainers] Containers Lifecycle when A pod with restartable init containers is terminating when The PreStop hooks don't exit should terminate sidecars simultaneously if prestop doesn't exit` | ### Failure cluster [796ac905269872192a09](https://go.k8s.io/triage#796ac905269872192a09)
##### Error text:
```
[FAILED] expected PostStart 1 to live for ~32 seconds, got 0) 2024-11-16 19:59:45.714 +0000 UTC restartable-init-1 Starting
1) 2024-11-16 19:59:45.727 +0000 UTC restartable-init-1 Started
2) 2024-11-16 19:59:45.736 +0000 UTC restartable-init-1 Delaying
3) 2024-11-16 19:59:46.784 +0000 UTC restartable-init-2 Starting
4) 2024-11-16 19:59:46.803 +0000 UTC restartable-init-2 Started
5) 2024-11-16 19:59:46.819 +0000 UTC restartable-init-2 Delaying
6) 2024-11-16 19:59:47.775 +0000 UTC restartable-init-3 Starting
7) 2024-11-16 19:59:47.784 +0000 UTC restartable-init-3 Started
8) 2024-11-16 19:59:47.791 +0000 UTC restartable-init-3 Delaying
9) 2024-11-16 19:59:48.851 +0000 UTC regular-1 Starting
10) 2024-11-16 19:59:48.872 +0000 UTC regular-1 Started
11) 2024-11-16 19:59:48.902 +0000 UTC regular-1 Delaying
12) 2024-11-16 19:59:53.932 +0000 UTC regular-1 Exiting
13) 2024-11-16 19:59:54.952 +0000 UTC PreStop-restartable-init-2 Starting
14) 2024-11-16 19:59:54.952 +0000 UTC PreStop-restartable-init-1 Starting
15) 2024-11-16 19:59:55.004 +0000 UTC PreStop-restartable-init-3 Starting
16) 2024-11-16 19:59:55.081 +0000 UTC PreStop-restartable-init-1 Started
17) 2024-11-16 19:59:55.087 +0000 UTC PreStop-restartable-init-3 Started
18) 2024-11-16 19:59:55.09 +0000 UTC PreStop-restartable-init-2 Started
19) 2024-11-16 19:59:55.134 +0000 UTC PreStop-restartable-init-1 Delaying
20) 2024-11-16 19:59:55.136 +0000 UTC PreStop-restartable-init-2 Delaying
21) 2024-11-16 19:59:55.138 +0000 UTC PreStop-
```
#### Recent failures:
[11/28/2024, 9:19:03 PM ci-cri-containerd-node-e2e-features](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-cri-containerd-node-e2e-features/1862320113907142656)
[11/28/2024, 8:18:03 PM ci-cri-containerd-node-e2e-features](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-cri-containerd-node-e2e-features/1862304762922274816)
[11/28/2024, 6:18:01 PM ci-cri-containerd-node-e2e-features](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-cri-containerd-node-e2e-features/1862274563467907072)
[11/28/2024, 4:18:08 PM ci-cri-containerd-node-e2e-features](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-cri-containerd-node-e2e-features/1862244364151951360)
[11/26/2024, 7:11:03 PM ci-cri-containerd-node-e2e-features](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-cri-containerd-node-e2e-features/1861562867275272192)
/kind failing-test
<!-- If this is a flake, please add: /kind flake -->
/sig node | sig/node,kind/failing-test,needs-triage | low | Critical |
2,706,170,285 | stable-diffusion-webui | [Bug]: Unable to create venv, The system cannot find the path specified. on error code 9009 | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I tried to change the `set PATH` variable in the webui-user.bat file because it's trying to use a Python executable from the wrong path, but it does not work even after saving the bat file.

### Steps to reproduce the problem
1. Fresh install python
2. Run the webui.bat file or the webui-user.bat file
### What should have happened?
Should get the right location for the Python executable
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
I don't know how to get the sysinfo because I don't know where to put the --dump-sysinfo command-line argument
### Console logs
```Shell
Creating venv in directory C:\HardDrive\stable-diffusion-webui-1.10.0 (1)\stable-diffusion-webui-1.10.0\venv using python "C:\Users\keyto\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\python.exe"
Unable to create venv in directory "C:\HardDrive\stable-diffusion-webui-1.10.0 (1)\stable-diffusion-webui-1.10.0\venv"
exit code: 9009
stderr:
'"C:\Users\keyto\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\python.exe"' is not recognized as an internal or external command,
operable program or batch file.
Launch unsuccessful. Exiting.
Press any key to continue . . .
```
### Additional information
_No response_ | bug-report | low | Critical |
2,706,256,358 | rust | Tracking issue for banning field projecting into `[rustc_layout_scalar_valid_range_*]` types (MCP807) | For https://github.com/rust-lang/compiler-team/issues/807
Steps:
- [ ] Update the standard library to stop doing this
- #133651
- #135236
- …
- [ ] Update the compiler to stop doing this
- https://github.com/rust-lang/rust/blob/490b2cc09860dd62a7595bb07364d71c12ce4e60/compiler/rustc_mir_transform/src/elaborate_box_derefs.rs#L37
- #135182
- …
- [ ] Add MIR-opts for any new patterns that show up from doing this
- #133324
- …
- [ ] Ban it in the MIR validator
- …
- [ ] Update MIR-opts to be more willing to merge projections and such
- …
SEO: `rustc_layout_scalar_valid_range_start` `rustc_layout_scalar_valid_range_end` `NonZero` `NonNull` "pattern types". | T-compiler,C-tracking-issue,A-mir-opt,T-libs,WG-mir-opt | low | Minor |
2,706,267,170 | godot | Cut shortcut in readonly TextEdit copies text | ### Tested versions
4.4 dev5
### System information
W10
### Issue description
When you have a read-only TextEdit and press Ctrl+X on selected text, the text will be copied to the clipboard.
Should it behave like that?
For reference, here you can check the behavior in a browser (this is more of a LineEdit, but I think they should be consistent):
https://www.w3schools.com/tags/tryit.asp?filename=tryhtml_input_readonly
In Chrome, Ctrl+X will do nothing; in Firefox it behaves like in Godot.
I don't know how it behaves in other apps. Note, however, that the Cut option in the context menu is disabled for read-only TextEdits.

### Steps to reproduce
1. Add TextEdit
2. Put some text in `text`
3. Disable `editable`
4. Run and press Ctrl+X after selecting text
5. Paste it somewhere
### Minimal reproduction project (MRP)
N/A | discussion,topic:gui | low | Minor |
2,706,280,936 | tensorflow | Unspecific error message "None values not supported" when no labels are provided in dataset | ### Issue type
Documentation Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.18.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
If I write some tf/keras code that fails to include a label in the dataset, the resulting error message is: `ValueError: None values not supported.`
The backtrace leads me through `optree/ops.py`.
It's not clear from the message what the `None` values are referring to, nor does the backtrace give me any useful information (as far as I can tell) about the source of the error. I posted this issue here: https://stackoverflow.com/q/79236884/1613983
If the error message could instead make reference to a missing labels set, null `y_true`, etc. that would make it much easier to figure out what's going on.
Somewhat ironically, I ended up solving it after asking GPT o1 about the problem, and it got the answer in one go.
### Standalone code to reproduce the issue
```shell
from tensorflow.keras import layers, Model, Input
from tensorflow.keras.models import Sequential
import pandas as pd
import numpy as np
import tensorflow as tf
input_data = pd.DataFrame(np.random.rand(1000, 10))
data = tf.data.Dataset.zip({'input_values' : tf.data.Dataset.from_tensor_slices(input_data.values)})
batch_size = 100
train_split = 0.8
train_rows = int(train_split * input_data.shape[0])
train_dataset = data.take(train_rows)
validation_dataset = data.skip(train_rows)
train_data_batched = train_dataset.batch(batch_size).prefetch(tf.data.AUTOTUNE)
validation_data_batched = validation_dataset.batch(batch_size).prefetch(tf.data.AUTOTUNE)
num_outputs = 10
input_layer = Input(shape=(num_outputs,), name=f'input_values')
output = layers.Dense(num_outputs, activation='sigmoid', name='output')(input_layer)
# Define the model
model = Model(
inputs=[input_layer],
outputs=output,
)
max_epochs = 10
def loss(y_true, y_pred):
return 2.0
model.compile(
loss=loss,
optimizer='adam',
)
history = model.fit(
train_data_batched,
epochs=max_epochs,
validation_data=validation_data_batched
)
```
### Relevant log output
```shell
File /tmp/virtualenvs/python3.11/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File /tmp/virtualenvs/python3.11/lib/python3.11/site-packages/optree/ops.py:752, in tree_map(func, tree, is_leaf, none_is_leaf, namespace, *rests)
750 leaves, treespec = _C.flatten(tree, is_leaf, none_is_leaf, namespace)
751 flat_args = [leaves] + [treespec.flatten_up_to(r) for r in rests]
--> 752 return treespec.unflatten(map(func, *flat_args))
ValueError: None values not supported.
```
| type:feature,comp:data,type:docs-feature | low | Critical |
2,706,337,758 | angular | docs: first app tutorial still uses assets folder for images instead of public | ### Describe the problem that you experienced
When a new project is generated, a `public` folder is created for images. The tutorial still uses the older `assets` folder. The tutorial should stay up to date with how projects are generated on users' local machines, as it is very confusing when users encounter something different in their generated project.
Multiple people have already asked questions about this on Discord.
### Enter the URL of the topic with the problem
https://angular.dev/tutorials/first-app/02-HomeComponent#add-the-new-component-to-your-apps-layout
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
I want the tutorial to stay up to date with what users experience on their local machine, when they generate a project.
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
```true
```
### If the problem is browser-specific, please specify the device, OS, browser, and version
```true
```
### Provide any additional information here in as much as detail as you can
```true
``` | area: docs-infra | low | Critical |
2,706,339,060 | PowerToys | Microsft Mouse and Keyboard Center breaks Keyboard Manager | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I use Keyboard Manager to remap the Previous Track/Pause Play/Next Track buttons to use in LIghtroom. That worked well until today, when I installed the "Microsoft Mouse and Keyboard Center" to use with a Microsoft Mouse I had on the shelf.
After installing this program from the Window store, every remapped key is sent twice. I also have "Browser Home" mapped to Ctrl+Z for Undo, and when I press it, it does the Undo and opens a new instance of Firefox (my default browser).
To summarize: the Microsoft Mouse and Keyboard Center breaks Powertoys Keyboard Manager. As soon as I uninstalled the Microsoft Mouse and Keyboard Center, Keyboard Manager started working properly again.
Note that I do not have a Microsoft Keyboard, but I do have a Microsoft Mouse.
### ✔️ Expected Behavior
I expected Keyboard Manager to continue working as before after I installed "Microsoft Mouse and Keyboard Center".
### ❌ Actual Behavior
Every remapped keystroke was duplicated. Also, when I double-checked my settings and pressed "Save", I got a pop-up message saying the keys I had assigned were NOT assigned.
### Other Software
Microsoft Mouse and Keyboard center from the Windows store. I got a popup suggesting that I install this after I connected a Microsoft Mouse, but the problem did not go away when I resumed using my Logitech mouse. | Issue-Bug,Product-Keyboard Shortcut Manager,Needs-Triage,Area-App Compat | low | Minor |
2,706,348,458 | go | runtime: clear() is slow for maps with big capacity and small number of items | ## The issue
The following pattern is frequently used in order to avoid excess memory allocations by re-using the map:
```go
func f() {
m := make(map[string]int)
for {
addSomeItemsToMap(m)
useMap(m)
// clear the map for subsequent re-use
clear(m)
}
}
```
It turns out that the performance of `clear(m)` is proportional to the number of buckets in `m`. The number of buckets can grow significantly inside `addSomeItemsToMap()`. After that, `clear(m)` remains slow forever, even if only a few items are added to the map on subsequent iterations.
See https://philpearl.github.io/post/map_clearing_and_size/ for more details.
## The solution
The Go runtime should be able to switch between an algorithm that unconditionally clears all the buckets in `m` and one that clears only the buckets containing at least one item, depending on the ratio between the number of items in the map and the number of buckets. This should improve the performance of `clear(m)` in the pattern above when iterations store widely different numbers of items in `m`. | Performance,NeedsInvestigation,compiler/runtime | low | Major |
2,706,350,740 | deno | Remove usages of `winapi` and consolidate on `windows-sys` | We sometimes use `winapi` and sometimes use `windows-sys`. We should consolidate on `windows-sys` since it's maintained by Microsoft and receiving updates.
That said, it looks like a lot of our deps use winapi. | refactor,chore | low | Minor |
2,706,352,843 | godot | Script button in connect a signal dialog does nothing when clicked | ### Tested versions
Godot v4.4.dev5
### System information
Fedora Linux 40 (KDE Plasma) on Wayland - X11 display driver, Single-window
### Issue description
I pressed the Script button and thought I had selected the correct node to connect to, but it does nothing, so I ended up connecting to another node.
https://github.com/user-attachments/assets/ce5fa21a-bf1d-434b-b4ed-9de6338e4e81
### Steps to reproduce
Open connect signal dialog and press on the script button
### Minimal reproduction project (MRP)
[signal_connect.zip](https://github.com/user-attachments/files/17963439/signal_connect.zip)
| bug,topic:editor,usability,topic:gui | low | Minor |
2,706,404,126 | deno | Allow multiple prompters in deno_runtime crate (make PERMISSION_PROMPTER thread-local) | I'm running `deno_runtime` in multiple threads and trying to use different prompters to manage the permissions independently, but I hit a roadblock because `PERMISSION_PROMPTER` is `static`. This is an example of how I'm using `deno_runtime`: https://github.com/carloslfu/tauri-deno-example.
A solution could be using the `thread_local! {}` macro. I created [this branch](https://github.com/carloslfu/deno/commit/546afe37748fecbcf515855f60d358926a615920) with the necessary changes and will submit a PR. | embedder | low | Minor |