id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,777,873,539 | rust | `adt_sized_constraint` called on non-struct type | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
Note: the error is in a big private project; I can follow up with a smaller example this weekend.
Sharing a rough outline now of what I think is relevant:
```Rust
// NOTE: the `use` statements below are inferred from context; the original
// outline omitted them. `ReadDbPool` and `crate::schema` are types from the
// private project.
use std::collections::HashMap;

use async_graphql::dataloader::Loader;
use diesel::prelude::*;
use uuid::Uuid;

#[derive(Debug, Clone)]
pub enum SourceDocument {
Document {
source: SourceDocumentModel,
},
}
#[async_graphql::Object]
impl SourceDocument {
async fn version_id(&self) -> Uuid {
match self {
SourceDocument::Document { source } => source.version_id.clone(),
}
}
}
pub struct SourceDocumentLoader {
pub pool: ReadDbPool,
}
impl Loader<Uuid> for SourceDocumentLoader {
type Value = SourceDocument;
type Error = LoaderError;
async fn load(&self, keys: &[Uuid]) -> Result<HashMap<Uuid, Self::Value>, Self::Error> {
// Uses diesel to read from a postgres db
todo!()
}
}
#[derive(thiserror::Error, Clone, Debug)]
pub enum LoaderError {
#[error("Error loading data: {message}")]
Error { message: String },
}
#[derive(Debug, Clone, Queryable, Selectable, Insertable)]
#[diesel(table_name = crate::schema::source_documents)]
#[diesel(check_for_backend(diesel::pg::Pg))]
#[diesel(primary_key(version_id))]
pub struct SourceDocumentModel {
pub version_id: Uuid,
}
```
Code uses [async_graphql](https://github.com/async-graphql/async-graphql), [diesel](https://github.com/diesel-rs/diesel), and [thiserror](https://docs.rs/thiserror/latest/thiserror/).
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (a580b5c37 2025-01-08)
binary: rustc
commit-hash: a580b5c379b4fca50dfe5afc0fc0ce00921e4e00
commit-date: 2025-01-08
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
```
### Error output
```
error: internal compiler error: compiler/rustc_ty_utils/src/ty.rs:93:9: `adt_sized_constraint` called on non-struct type: source_document::SourceDocument
```
[rustc-ice-2025-01-09T13_24_52-67690.txt](https://github.com/user-attachments/files/18362235/rustc-ice-2025-01-09T13_24_52-67690.txt)
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
thread 'rustc' panicked at compiler/rustc_ty_utils/src/ty.rs:93:9:
Box<dyn Any>
stack backtrace:
0: 0x75cb7b8e3a7a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hc48a3003e72a1fb5
1: 0x75cb7c013126 - core::fmt::write::h244b471aa97315bd
2: 0x75cb7cf22211 - std::io::Write::write_fmt::h63cacec38f2c7922
3: 0x75cb7b8e38d2 - std::sys::backtrace::BacktraceLock::print::h8e88b30882062ce8
4: 0x75cb7b8e5e77 - std::panicking::default_hook::{{closure}}::he2b2f35b61e5b6b4
5: 0x75cb7b8e5c60 - std::panicking::default_hook::h2b9fc43053d2ca15
6: 0x75cb7aa568a8 - std[c08ade969ea4a026]::panicking::update_hook::<alloc[ecf666bc700c7d64]::boxed::Box<rustc_driver_impl[7ccddf7307cff907]::install_ice_hook::{closure#1}>>::{closure#0}
7: 0x75cb7b8e66c3 - std::panicking::rust_panic_with_hook::h1126dbe971a7f919
8: 0x75cb7aa8ecc1 - std[c08ade969ea4a026]::panicking::begin_panic::<rustc_errors[1eaa06d4a73dafc5]::ExplicitBug>::{closure#0}
9: 0x75cb7aa83ea6 - std[c08ade969ea4a026]::sys::backtrace::__rust_end_short_backtrace::<std[c08ade969ea4a026]::panicking::begin_panic<rustc_errors[1eaa06d4a73dafc5]::ExplicitBug>::{closure#0}, !>
10: 0x75cb7aa83c63 - std[c08ade969ea4a026]::panicking::begin_panic::<rustc_errors[1eaa06d4a73dafc5]::ExplicitBug>
11: 0x75cb7aa98c21 - <rustc_errors[1eaa06d4a73dafc5]::diagnostic::BugAbort as rustc_errors[1eaa06d4a73dafc5]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x75cb7b06db03 - rustc_middle[bc4ae9fc2bf4235a]::util::bug::opt_span_bug_fmt::<rustc_span[cfc513d191a8d813]::span_encoding::Span>::{closure#0}
13: 0x75cb7b052b1a - rustc_middle[bc4ae9fc2bf4235a]::ty::context::tls::with_opt::<rustc_middle[bc4ae9fc2bf4235a]::util::bug::opt_span_bug_fmt<rustc_span[cfc513d191a8d813]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x75cb7b0529ab - rustc_middle[bc4ae9fc2bf4235a]::ty::context::tls::with_context_opt::<rustc_middle[bc4ae9fc2bf4235a]::ty::context::tls::with_opt<rustc_middle[bc4ae9fc2bf4235a]::util::bug::opt_span_bug_fmt<rustc_span[cfc513d191a8d813]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x75cb791d3d20 - rustc_middle[bc4ae9fc2bf4235a]::util::bug::bug_fmt
16: 0x75cb7d80725b - rustc_ty_utils[677f1c67437d8f87]::ty::adt_sized_constraint.cold
17: 0x75cb7c63787b - rustc_query_impl[9bbd9b92042adb73]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[9bbd9b92042adb73]::query_impl::adt_sized_constraint::dynamic_query::{closure#2}::{closure#0}, rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 8usize]>>
18: 0x75cb7c49f571 - rustc_query_system[7c1448542a492b7f]::query::plumbing::try_execute_query::<rustc_query_impl[9bbd9b92042adb73]::DynamicConfig<rustc_query_system[7c1448542a492b7f]::query::caches::DefIdCache<rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt, true>
19: 0x75cb7ce3b807 - rustc_query_impl[9bbd9b92042adb73]::plumbing::force_from_dep_node::<rustc_query_impl[9bbd9b92042adb73]::DynamicConfig<rustc_query_system[7c1448542a492b7f]::query::caches::DefIdCache<rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 8usize]>>, false, false, false>>
20: 0x75cb7b3fc6fd - <rustc_query_impl[9bbd9b92042adb73]::plumbing::query_callback<rustc_query_impl[9bbd9b92042adb73]::query_impl::adt_sized_constraint::QueryType>::{closure#0} as core[6b5cbebef9c0da3b]::ops::function::FnOnce<(rustc_middle[bc4ae9fc2bf4235a]::ty::context::TyCtxt, rustc_query_system[7c1448542a492b7f]::dep_graph::dep_node::DepNode)>>::call_once
21: 0x75cb7c02d6be - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
22: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
23: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
24: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
25: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
26: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
27: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
28: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
29: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
30: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
31: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
32: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
33: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
34: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
35: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
36: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
37: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
38: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
39: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
40: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
41: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
42: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
43: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
44: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
45: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
46: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
47: 0x75cb7c48ea50 - rustc_query_system[7c1448542a492b7f]::query::plumbing::try_execute_query::<rustc_query_impl[9bbd9b92042adb73]::DynamicConfig<rustc_query_system[7c1448542a492b7f]::query::caches::DefaultCache<rustc_type_ir[72683c14ceefff53]::canonical::CanonicalQueryInput<rustc_middle[bc4ae9fc2bf4235a]::ty::context::TyCtxt, rustc_middle[bc4ae9fc2bf4235a]::ty::ParamEnvAnd<rustc_middle[bc4ae9fc2bf4235a]::ty::predicate::Predicate>>, rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 2usize]>>, false, false, false>, rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt, true>
48: 0x75cb7c49ade1 - rustc_query_impl[9bbd9b92042adb73]::query_impl::evaluate_obligation::get_query_incr::__rust_end_short_backtrace
49: 0x75cb7c9aa9b5 - <rustc_trait_selection[3b93a0d4a4051551]::traits::fulfill::FulfillProcessor as rustc_data_structures[ad275ecca387d218]::obligation_forest::ObligationProcessor>::process_obligation
50: 0x75cb7c004e4b - <rustc_data_structures[ad275ecca387d218]::obligation_forest::ObligationForest<rustc_trait_selection[3b93a0d4a4051551]::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection[3b93a0d4a4051551]::traits::fulfill::FulfillProcessor>
51: 0x75cb7c36abe9 - <rustc_trait_selection[3b93a0d4a4051551]::traits::fulfill::FulfillmentContext<rustc_trait_selection[3b93a0d4a4051551]::traits::FulfillmentError> as rustc_infer[5a45bd38abbfece3]::traits::engine::TraitEngine<rustc_trait_selection[3b93a0d4a4051551]::traits::FulfillmentError>>::select_all_or_error
52: 0x75cb7ac43d62 - rustc_hir_analysis[f90ae88cc1f43808]::check::compare_impl_item::collect_return_position_impl_trait_in_trait_tys::{closure#0}
53: 0x75cb7b434a13 - rustc_query_impl[9bbd9b92042adb73]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[9bbd9b92042adb73]::query_impl::collect_return_position_impl_trait_in_trait_tys::dynamic_query::{closure#2}::{closure#0}, rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 8usize]>>
54: 0x75cb7c49f571 - rustc_query_system[7c1448542a492b7f]::query::plumbing::try_execute_query::<rustc_query_impl[9bbd9b92042adb73]::DynamicConfig<rustc_query_system[7c1448542a492b7f]::query::caches::DefIdCache<rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt, true>
55: 0x75cb7ce3b807 - rustc_query_impl[9bbd9b92042adb73]::plumbing::force_from_dep_node::<rustc_query_impl[9bbd9b92042adb73]::DynamicConfig<rustc_query_system[7c1448542a492b7f]::query::caches::DefIdCache<rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 8usize]>>, false, false, false>>
56: 0x75cb7b3fe0cd - <rustc_query_impl[9bbd9b92042adb73]::plumbing::query_callback<rustc_query_impl[9bbd9b92042adb73]::query_impl::collect_return_position_impl_trait_in_trait_tys::QueryType>::{closure#0} as core[6b5cbebef9c0da3b]::ops::function::FnOnce<(rustc_middle[bc4ae9fc2bf4235a]::ty::context::TyCtxt, rustc_query_system[7c1448542a492b7f]::dep_graph::dep_node::DepNode)>>::call_once
57: 0x75cb7c02d6be - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
58: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
59: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
60: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
61: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
62: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
63: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
64: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
65: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
66: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
67: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
68: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
69: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
70: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
71: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
72: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
73: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
74: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
75: 0x75cb7c02d62a - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
76: 0x75cb7c02cdf4 - <rustc_query_system[7c1448542a492b7f]::dep_graph::graph::DepGraphData<rustc_middle[bc4ae9fc2bf4235a]::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
77: 0x75cb7ca8977a - rustc_query_system[7c1448542a492b7f]::query::plumbing::ensure_must_run::<rustc_query_impl[9bbd9b92042adb73]::DynamicConfig<rustc_query_system[7c1448542a492b7f]::query::caches::DefaultCache<rustc_span[cfc513d191a8d813]::def_id::LocalModDefId, rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt>
78: 0x75cb7cf6b297 - rustc_query_impl[9bbd9b92042adb73]::query_impl::check_mod_type_wf::get_query_incr::__rust_end_short_backtrace
79: 0x75cb7c425038 - rustc_hir_analysis[f90ae88cc1f43808]::check_crate
80: 0x75cb7c2c84e8 - rustc_interface[9ea0a400d5371a28]::passes::run_required_analyses
81: 0x75cb7cf2609e - rustc_interface[9ea0a400d5371a28]::passes::analysis
82: 0x75cb7cf2606f - rustc_query_impl[9bbd9b92042adb73]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[9bbd9b92042adb73]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 0usize]>>
83: 0x75cb7cf2b83c - rustc_query_system[7c1448542a492b7f]::query::plumbing::try_execute_query::<rustc_query_impl[9bbd9b92042adb73]::DynamicConfig<rustc_query_system[7c1448542a492b7f]::query::caches::SingleCache<rustc_middle[bc4ae9fc2bf4235a]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[9bbd9b92042adb73]::plumbing::QueryCtxt, true>
84: 0x75cb7cf2b095 - rustc_query_impl[9bbd9b92042adb73]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
85: 0x75cb7cf5885e - rustc_interface[9ea0a400d5371a28]::passes::create_and_enter_global_ctxt::<core[6b5cbebef9c0da3b]::option::Option<rustc_interface[9ea0a400d5371a28]::queries::Linker>, rustc_driver_impl[7ccddf7307cff907]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
86: 0x75cb7cfcba64 - rustc_interface[9ea0a400d5371a28]::interface::run_compiler::<(), rustc_driver_impl[7ccddf7307cff907]::run_compiler::{closure#0}>::{closure#1}
87: 0x75cb7ce2d651 - std[c08ade969ea4a026]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[9ea0a400d5371a28]::util::run_in_thread_with_globals<rustc_interface[9ea0a400d5371a28]::util::run_in_thread_pool_with_globals<rustc_interface[9ea0a400d5371a28]::interface::run_compiler<(), rustc_driver_impl[7ccddf7307cff907]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
88: 0x75cb7ce2db08 - <<std[c08ade969ea4a026]::thread::Builder>::spawn_unchecked_<rustc_interface[9ea0a400d5371a28]::util::run_in_thread_with_globals<rustc_interface[9ea0a400d5371a28]::util::run_in_thread_pool_with_globals<rustc_interface[9ea0a400d5371a28]::interface::run_compiler<(), rustc_driver_impl[7ccddf7307cff907]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[6b5cbebef9c0da3b]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
89: 0x75cb7ce2f0af - std::sys::pal::unix::thread::Thread::new::thread_start::h2f2b51e924b57f78
90: 0x75cb7709ca94 - start_thread
at ./nptl/pthread_create.c:447:8
91: 0x75cb77129c3c - clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78:0
92: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/doug/Development/lobbying/rustc-ice-2025-01-09T13_24_52-67690.txt` to your bug report
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [adt_sized_constraint] computing the `Sized` constraint for `source_document::SourceDocument`
#1 [evaluate_obligation] evaluating trait selection obligation `{coroutine witness@async_graphql::dataloader::DataLoader<source_document::SourceDocumentLoader>::load_one<uuid::Uuid>::{closure#0}}: core::marker::Send`
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 TraitSelect(c36863bb4250ede5-f0dd5b3545e01e15)
#1 TraitSelect(4c24be15b2d6f3af-b372a1afbf71a4aa)
#2 TraitSelect(69ab8d1745af0068-b0944b0cab84b24)
#3 TraitSelect(c153f366ec878262-f3ae313c38edf5e7)
#4 TraitSelect(2f883a331de2a01c-9238e411f0403744)
#5 TraitSelect(6dca713fc985efa2-af0ad76122c8aa17)
#6 TraitSelect(64cfb3c63d150d75-1e1f5a402c317172)
#7 TraitSelect(f9af3800d44ff3e4-c672ae11c7e73560)
#8 TraitSelect(ef208155b0fe538e-b7da9cf6c85f2cd7)
#9 TraitSelect(4f5d3a840d9726c3-d1e1fb34a4f029c3)
#10 TraitSelect(2432ab797fc17b91-58047adae3b4e49c)
#11 TraitSelect(d77b66b9d9582d06-c5d1b5ac3ba18980)
#12 TraitSelect(9da3316b7c7fe753-e7982e0523071a7d)
#13 TraitSelect(90e7cc615586a753-4fa0d478fe37a4bc)
#14 TraitSelect(e353561ffe759df0-3da4a5ac9199f1ce)
#15 TraitSelect(b0438a1675ccac99-3c282d273a68e797)
#16 TraitSelect(8998c777296d1649-1dfc41a77f09603a)
#17 TraitSelect(f942389db325f641-b8f5c2ca965388ee)
#18 TraitSelect(cbd4e3a9281c1930-4262525b51832562)
#19 TraitSelect(684a1519b66b3e04-c5a996d7ec774d25)
#20 TraitSelect(3783c758b5bcb5ec-6b7963105e2658c4)
#21 TraitSelect(eeaa1c4e693c75e1-f0989f528411392)
#22 TraitSelect(de190a34dd4b679-4bc8f1e5638b02c3)
#23 TraitSelect(439355cf8e13ae6e-27d4890120885cbf)
#24 TraitSelect(4d717f5e53cbbdca-7536b26de4960f8c)
#25 evaluate_obligation(dd25ae4b162e8c5e-adc1b9760f7d6431)
end of try_mark_green dep node stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 type_of(lobbying_server[564e]::node::_#3::{impl#2}::{synthetic#0})
#1 TraitSelect(7731928422ddb96d-f76ce588425f3ba6)
#2 TraitSelect(471ed7e67a5e48b2-58428cd36fa43e7f)
#3 TraitSelect(a712978cb0ce4e44-675a7994a48a9d4)
#4 TraitSelect(7988b5f2f3494ece-fd576515aca27bf0)
#5 TraitSelect(108f37a0d93bb6de-f6e6be7f42115633)
#6 TraitSelect(4fa9c120236ed362-79730a0412cd8a6a)
#7 TraitSelect(e8fdbc73d73f4ba6-a1f4d1bd97cb19b7)
#8 TraitSelect(51e3c72960005881-e2d885b735040628)
#9 TraitSelect(508a4e880caee318-af4ed7ea6ef94577)
#10 TraitSelect(cab73f48aa018ede-7ffa95981727c00c)
#11 TraitSelect(dbf609b2611019a8-87b6693ebc9448d9)
#12 TraitSelect(3f53904974293e5b-2f4101f90ee3494c)
#13 TraitSelect(1ed16fe09dbace5-3b14393e81f122d5)
#14 evaluate_obligation(933304b9bf69f351-ab6b6da8258b8577)
#15 collect_return_position_impl_trait_in_trait_tys(lobbying_server[564e]::{impl#3}::resolve_field)
#16 type_of(lobbying_server[564e]::{impl#3}::{synthetic#0})
#17 check_well_formed(lobbying_server[564e]::{impl#3})
#18 check_mod_type_wf(lobbying_server[564e])
end of try_mark_green dep node stack
```

[rustc-ice-2025-01-09T13_24_52-67690.txt](https://github.com/user-attachments/files/18362124/rustc-ice-2025-01-09T13_24_52-67690.txt)
</p>
</details>
| I-ICE,T-compiler,A-incr-comp,C-bug,S-has-mcve | low | Critical |
2,777,876,819 | node | Use the core module without importing or requiring it. | ### What is the problem this feature will solve?
Utilize a core module directly in your application without the need for importing or requiring it.
### What is the feature you are proposing to solve the problem?
Directly using it, like `fs.readFile`, etc.
### What alternatives have you considered?
_No response_ | feature request | low | Minor |
2,777,887,480 | angular | Docs search doesn't find page by its title (Migrations) | ### Describe the problem that you experienced
I want to see this Migrations page https://angular.dev/reference/migrations for another version, but the website search does not find it. The only way the page is findable is by using Google or another external search engine. I have tried searching for `migrations`, `migrate`, and such, but the results are not relevant at all. I would be happy if it even gave me subpages as results.
If I search for "Migration to Control Flow syntax", it doesn't find any hits.
It would also be really nice if, when you change the website version, it kept you on the same page where possible.
### Enter the URL of the topic with the problem
https://angular.dev/reference/migrations
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
_No response_
### If the problem is browser-specific, please specify the device, OS, browser, and version
_No response_
### Provide any additional information here in as much detail as you can
_No response_ | area: docs-infra | low | Critical |
2,777,907,035 | rust | compiletest: path normalizations are sensitive to diagnostics width, can lead to confusing failures | See discussions at https://rust-lang.zulipchat.com/#narrow/channel/182449-t-compiler.2Fhelp/topic/.60tests.5Cui.5Ctraits.5Cfn-pointer.5Cbare-fn-no-impl-fn-ptr-99875.2Ers.60 about tests like `tests/ui/traits/fn-pointer/bare-fn-no-impl-fn-ptr-99875.rs`.
Example:
```diff
20 --> $DIR/bare-fn-no-impl-fn-ptr-99875.rs:14:11
21 |
22 LL | takes(|_: Argument| -> Return { todo!() });
- | ----- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ unsatisfied trait bound
+ | ----- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Trait` is not implemented for closure `{closure@$DIR/bare-fn-no-impl-fn-ptr-99875.rs:14:11: 14:34}`
24 | |
25 | required by a bound introduced by this call
26 |
- = help: the trait `Trait` is not implemented for closure `{closure@$DIR/bare-fn-no-impl-fn-ptr-99875.rs:14:11: 14:34}`
28 = help: the trait `Trait` is implemented for fn pointer `fn(Argument) -> Return`
29 note: required by a bound in `takes`
30 --> $DIR/bare-fn-no-impl-fn-ptr-99875.rs:9:18
Note: some mismatched output was normalized before being compared
- | ----- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Trait` is not implemented for closure `{closure@F:\rust\tests\ui\traits\fn-pointer\bare-fn-no-impl-fn-ptr-99875.rs:14:11: 14:34}`
+ | ----- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Trait` is not implemented for closure `{closure@$DIR/bare-fn-no-impl-fn-ptr-99875.rs:14:11: 14:34}`
```
```
Before compiletest path normalization, the length of the parent directories leading to the test file (e.g. `F:\rust\` vs `F:\Longer\rust\`) can influence the specific diagnostics shown, because it affects the rendered diagnostic width. | A-testsuite,T-compiler,T-bootstrap,C-bug,D-diagnostic-infra,A-compiletest,A-compiletest-normalizations | low | Critical |
2,777,930,220 | vscode | Add an option to control the default debugging terminal. | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
"debug.console.default"
- "externalTerminal"
- "integratedTerminal"
- "internalConsole"
Otherwise, I have to modify all my debugging launch files.
Thanks, VS Code team. | feature-request,debug | low | Critical |
2,777,961,980 | transformers | flash_attention_2 2.7.2.post1 seems to crash when using `torch.compile` and `DataCollatorWithFlattening` | ### System Info
- `transformers` version: 4.47.1
- Platform: Linux-6.6.20-aufs-1-x86_64-with-glibc2.36
- Python version: 3.11.2
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: yes
- GPU type: NVIDIA RTX A5000
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Update to the latest flash-attention version (2.7.2 as of writing). This should be `torch.compile`-compatible, as described in https://github.com/Dao-AILab/flash-attention.
2. Load a model with FA2 (tested with OPT and Qwen).
3. Use `Trainer` with `DataCollatorWithFlattening` and train (see the sketch below).
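For reference, a minimal sketch of this setup (the checkpoint name and the toy in-memory dataset are placeholders; my real training code is larger and private):
```python
import torch
from transformers import (AutoModelForCausalLM, DataCollatorWithFlattening,
                          Trainer, TrainingArguments)

# Placeholder checkpoint; both OPT and Qwen models reproduce the crash.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",
    attn_implementation="flash_attention_2",
    torch_dtype=torch.float16,
)

# Tiny toy dataset; the collator packs the examples into one long sequence
# and derives labels and per-sequence position_ids itself.
data = [{"input_ids": list(range(5, 5 + n))} for n in (8, 12, 16)]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", fp16=True, torch_compile=True,
                           per_device_train_batch_size=2),
    train_dataset=data,
    data_collator=DataCollatorWithFlattening(),
)
trainer.train()  # crashes with the traceback below; fine without torch_compile=True
```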
this causes a crash with the following stacktrace:
```
Traceback (most recent call last):
File "/cs/labs/oabend/avishai.elma/slm_eval/slm_eval/train.py", line 89, in main
trainer.train(resume_from_checkpoint=cfg.cont_training)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/trainer.py", line 2164, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/trainer.py", line 2524, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/slm_eval/trainer/slam_trainer.py", line 71, in training_step
return super().training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/trainer.py", line 3654, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/trainer.py", line 3708, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 823, in forward
return model_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 811, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/slm_eval/model/unit_lm.py", line 118, in forward
def forward(self,
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 1109, in forward
@add_start_docstrings_to_model_forward(QWEN2_INPUTS_DOCSTRING)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 895, in forward
layer_outputs = decoder_layer(
^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 584, in forward
def forward(
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 364, in forward
def forward(
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 419, in torch_dynamo_resume_in_forward_at_419
logger.warning_once(
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/modeling_flash_attention_utils.py", line 231, in _flash_attention_forward
def _flash_attention_forward(
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/modeling_flash_attention_utils.py", line 329, in torch_dynamo_resume_in__flash_attention_forward_at_329
max_length_q is not None or (query_length != 1 and not (torch.diff(position_ids, dim=-1) >= 0).all())
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1064, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 1680, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 1024, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 774, in call_method
return self.call_apply(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/misc.py", line 699, in call_apply
).call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 2015, in call_function
(fwd_out, _), fwd_graph, fwd_freevars = speculate_subgraph(
^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 462, in speculate_subgraph
output = f.call_function(tx, args, sub_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2279, in CALL
self._call(inst)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2273, in _call
self.call_function(fn, args, kwargs)
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/torch.py", line 897, in call_function
tensor_variable = wrap_fx_proxy(
^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2037, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/variables/builder.py", line 2124, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2082, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2017, in get_fake_value
ret_val = wrap_fake_exception(
^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 1574, in wrap_fake_exception
return fn()
^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2018, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2150, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2132, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/_ops.py", line 1116, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Failed running call_function flash_attn._flash_attn_varlen_forward(*(FakeTensor(..., device='cuda:0', size=(s3, s4, s5), dtype=torch.float16,
grad_fn=<AsStridedBackward0>), FakeTensor(..., device='cuda:0', size=(s6, s7, s8), dtype=torch.float16,
grad_fn=<Error>), FakeTensor(..., device='cuda:0', size=(s9, s10, s11), dtype=torch.float16,
grad_fn=<Error>), FakeTensor(..., device='cuda:0', size=(s13,), dtype=torch.int32), FakeTensor(..., device='cuda:0', size=(s13,), dtype=torch.int32), FakeTensor(..., device='cuda:0', size=(), dtype=torch.int64), FakeTensor(..., device='cuda:0', size=(), dtype=torch.int64), 0.0, FloatPow(ToFloat(s5), -0.5)), **{'causal': True, 'window_size_left': -1, 'window_size_right': -1, 'softcap': 0.0, 'alibi_slopes': None, 'return_softmax': False, 'block_table': None}):
flash_attn::_flash_attn_varlen_forward() Expected a value of type 'int' for argument 'max_seqlen_q' but instead found type 'FakeTensor'.
Position: 5
Value: FakeTensor(..., device='cuda:0', size=(), dtype=torch.int64)
Declaration: flash_attn::_flash_attn_varlen_forward(Tensor q, Tensor k, Tensor v, Tensor cu_seqlens_q, Tensor cu_seqlens_k, SymInt max_seqlen_q, SymInt max_seqlen_k, float dropout_p, float softmax_scale, bool causal, SymInt window_size_left=-1, SymInt window_size_right=-1, float softcap=0., Tensor? alibi_slopes=None, bool return_softmax=False, Tensor? block_table=None, Tensor? leftpad_k=None, Tensor? seqused_k=None) -> (Tensor, Tensor, Tensor, Tensor)
Cast error details: Unable to cast Python instance of type <class 'torch._subclasses.fake_tensor.FakeTensor'> to C++ type '?' (#define PYBIND11_DETAILED_ERROR_MESSAGES or compile in debug mode for details)
from user code:
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/modeling_flash_attention_utils.py", line 346, in torch_dynamo_resume_in__flash_attention_forward_at_335
attn_output = flash_attn_varlen_func(
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py", line 1412, in flash_attn_varlen_func
return FlashAttnVarlenFunc.apply(
File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py", line 901, in forward
out_padded, softmax_lse, S_dmask, rng_state = _wrapped_flash_attn_varlen_forward(
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
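For what it's worth, my reading of the trace (an assumption, not a confirmed diagnosis) is that the varlen arguments transformers derives from `position_ids` in `modeling_flash_attention_utils.py` are 0-dim tensors rather than Python ints, which only becomes a problem under compile:
```python
import torch

# DataCollatorWithFlattening emits position_ids that restart at 0 for every
# packed sequence, e.g. two sequences of lengths 3 and 2:
position_ids = torch.tensor([0, 1, 2, 0, 1])

# transformers derives the flash_attn varlen kwargs from these position_ids,
# and the max sequence length comes out as a 0-dim tensor, not a Python int:
max_seqlen_q = position_ids.max() + 1
print(type(max_seqlen_q))  # <class 'torch.Tensor'>, value tensor(3)

# In eager mode pybind11 silently casts tensor(3) to the int/SymInt that
# flash_attn's registered op declares for max_seqlen_q; under torch.compile
# the value is a FakeTensor, the cast fails, and dynamo raises the
# TorchRuntimeError shown above. An explicit int(...)/.item() would satisfy
# the op schema but introduces a graph break (see the warning further below).
```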
The code works fine when not using compile.
The code doesn't crash when using compile but **not** using `DataCollatorWithFlattening`.
When using compile and **not** using `DataCollatorWithFlattening`, I get the following graph break with Qwen2.5:
```
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] Graph break from `Tensor.item()`, consider setting:
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] torch._dynamo.config.capture_scalar_outputs = True
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] or:
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] env TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] to include these operations in the captured graph.
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0]
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] Graph break: from user code at:
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 823, in forward
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] return model_forward(*args, **kwargs)
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/accelerate/utils/operations.py", line 811, in __call__
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] return convert_to_fp32(self.model_forward(*args, **kwargs))
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] return func(*args, **kwargs)
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/slm_eval/model/unit_lm.py", line 138, in forward
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] outputs = self.lm(
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 1165, in forward
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] outputs = self.model(
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 864, in forward
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] causal_mask = self._update_causal_mask(
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] File "/cs/labs/oabend/avishai.elma/slm_eval/.slm_env2/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 943, in _update_causal_mask
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0] if attention_mask is not None and 0.0 in attention_mask:
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0]
W0109 16:15:01.606000 4138800 torch/_dynamo/variables/tensor.py:776] [0/0]
```
### Expected behavior
The training shouldn't crash. | bug | low | Critical |
2,777,997,113 | node | Expose cppgc::CppHeap::CollectStatistics() in v8 module | ### What is the problem this feature will solve?
Since we've been migrating things to use `CppHeap`, we have a bunch of extra GC-managed memory which has become a bit less mysterious to V8. This means we can take better advantage of V8 interfaces to get observability into this memory.
### What is the feature you are proposing to solve the problem?
It would likely be an easy win to expose `CppHeap` data via [`cppgc::CppHeap::CollectStatistics()`](https://v8docs.nodesource.com/node-22.4/d9/dc4/classv8_1_1_cpp_heap.html#a3a5d09567758e608fffde50eeabc2feb) in the `v8` module.
cc @nodejs/diagnostics
### What alternatives have you considered?
_No response_ | feature request,diag-agenda | low | Minor |
2,778,006,325 | flutter | Icon does not pay attention to emoji's real width | ### Steps to reproduce
Drawing an `Icon` with an emoji as its `IconData` argument inside a colored/bordered/etc. container leads to an icon misaligned relative to the horizontal center (tested on both Linux and Android).
I looked into the `Icon` implementation and tried to fix it myself just for fun. I achieved the expected result by multiplying both the `fontSize` and `height` values by 0.81 in the `fontStyle` `TextStyle` inside `Icon`'s `build()` method. Tested only on the material font used by the `Icon` class by default.
<details><summary>Code sample to compare default Icon's behavior and modified one's (you also can compare code below with lib/src/widgets/icon.dart)</summary>
```dart
import 'package:flutter/material.dart';
int getCodepointFromString(String s) {
assert([1, 2].contains(s.codeUnits.length));
if (s.codeUnits.length == 2) {
final (highCodeUnit, lowCodeUnit) = (s.codeUnitAt(0), s.codeUnitAt(1));
assert((highCodeUnit >= 0xD800 && highCodeUnit <= 0xDBFF) && (lowCodeUnit >= 0xDC00 && lowCodeUnit <= 0xDFFF));
return ((highCodeUnit - 0xD800) << 10) + (lowCodeUnit - 0xDC00) + 0x10000;
}
return s.codeUnitAt(0);
}
void main() {
runApp(const MyApp());
}
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
var index = 0;
@override
Widget build(BuildContext context) {
final emojiCode = getCodepointFromString('😀');
final size = MediaQuery.of(context).size.shortestSide / 2;
final color = Colors.indigoAccent;
final widgets = <Widget>[ Icon(IconData(emojiCode), size: size), LabelIcon(IconData(emojiCode), size: size) ];
return MaterialApp(
home: Scaffold(
body: GestureDetector(
onTap: () => setState(() => index ^= 1),
child: Center(
child: ColoredBox(
color: color,
child: widgets[index],
),
),
),
),
);
}
}
class LabelIcon extends StatelessWidget {
/// Creates an icon.
const LabelIcon(
this.icon, {
super.key,
this.size,
this.fill,
this.weight,
this.grade,
this.opticalSize,
this.color,
this.shadows,
this.semanticLabel,
this.textDirection,
this.applyTextScaling,
this.blendMode,
}) : assert(fill == null || (0.0 <= fill && fill <= 1.0)),
assert(weight == null || (0.0 < weight)),
assert(opticalSize == null || (0.0 < opticalSize));
final IconData? icon;
final double? size;
final double? fill;
final double? weight;
final double? grade;
final double? opticalSize;
final Color? color;
final List<Shadow>? shadows;
final String? semanticLabel;
final TextDirection? textDirection;
final bool? applyTextScaling;
final BlendMode? blendMode;
@override
Widget build(BuildContext context) {
assert(this.textDirection != null || debugCheckHasDirectionality(context));
final TextDirection textDirection = this.textDirection ?? Directionality.of(context);
final IconThemeData iconTheme = IconTheme.of(context);
final bool applyTextScaling = this.applyTextScaling ?? iconTheme.applyTextScaling ?? false;
final double tentativeIconSize = size ?? iconTheme.size ?? kDefaultFontSize;
final double iconSize = applyTextScaling ? MediaQuery.textScalerOf(context).scale(tentativeIconSize) : tentativeIconSize;
final double? iconFill = fill ?? iconTheme.fill;
final double? iconWeight = weight ?? iconTheme.weight;
final double? iconGrade = grade ?? iconTheme.grade;
final double? iconOpticalSize = opticalSize ?? iconTheme.opticalSize;
final List<Shadow>? iconShadows = shadows ?? iconTheme.shadows;
final IconData? icon = this.icon;
if (icon == null) {
return Semantics(
label: semanticLabel,
child: SizedBox(width: iconSize, height: iconSize),
);
}
final double iconOpacity = iconTheme.opacity ?? 1.0;
Color? iconColor = color ?? iconTheme.color!;
Paint? foreground;
if (iconOpacity != 1.0) {
iconColor = iconColor.withOpacity(iconColor.opacity * iconOpacity);
}
if (blendMode != null) {
foreground = Paint()
..blendMode = blendMode!
..color = iconColor;
// Cannot provide both a color and a foreground.
iconColor = null;
}
const fix = 0.81;
final TextStyle fontStyle = TextStyle(
fontVariations: <FontVariation>[
if (iconFill != null) FontVariation('FILL', iconFill),
if (iconWeight != null) FontVariation('wght', iconWeight),
if (iconGrade != null) FontVariation('GRAD', iconGrade),
if (iconOpticalSize != null) FontVariation('opsz', iconOpticalSize),
],
inherit: false,
color: iconColor,
fontSize: iconSize * fix,
fontFamily: icon.fontFamily,
package: icon.fontPackage,
fontFamilyFallback: icon.fontFamilyFallback,
shadows: iconShadows,
height: 1.0 * fix, // Makes sure the font's body is vertically centered within the iconSize x iconSize square.
leadingDistribution: TextLeadingDistribution.even,
foreground: foreground,
);
Widget iconWidget = RichText(
overflow: TextOverflow.visible, // Never clip.
textDirection: textDirection, // Since we already fetched it for the assert...
text: TextSpan(
text: String.fromCharCode(icon.codePoint),
style: fontStyle,
),
);
if (icon.matchTextDirection) {
switch (textDirection) {
case TextDirection.rtl:
iconWidget = Transform(
transform: Matrix4.identity()..scale(-1.0, 1.0, 1.0),
alignment: Alignment.center,
transformHitTests: false,
child: iconWidget,
);
case TextDirection.ltr:
break;
}
}
return Semantics(
label: semanticLabel,
child: ExcludeSemantics(
child: SizedBox(
width: iconSize,
height: iconSize,
child: Center(
child: iconWidget,
),
),
),
);
}
}
```
</details>
### Expected results
Emoji IconData should be centered horizontally.
### Actual results
Emoji IconData is not centered horizontally and overflows.
### Code sample
Minimal reproducible example.
<details><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
int getCodepointFromString(String s) {
assert([1, 2].contains(s.codeUnits.length));
if (s.codeUnits.length == 2) {
final (highCodeUnit, lowCodeUnit) = (s.codeUnitAt(0), s.codeUnitAt(1));
assert((highCodeUnit >= 0xD800 && highCodeUnit <= 0xDBFF) && (lowCodeUnit >= 0xDC00 && lowCodeUnit <= 0xDFFF));
return ((highCodeUnit - 0xD800) << 10) + (lowCodeUnit - 0xDC00) + 0x10000;
}
return s.codeUnitAt(0);
}
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
final emojiCode = getCodepointFromString('😀');
final size = MediaQuery.of(context).size.shortestSide / 2;
final color = Colors.indigoAccent;
return MaterialApp(
home: Scaffold(
body: Center(
child: ColoredBox(
color: color,
child: Icon(IconData(emojiCode), size: size),
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details>
<summary>Screenshots / Video demonstration</summary>
Actual result:

Expected result:

</details>
### Logs
_No response_
### Flutter Doctor output
<details><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on Manjaro Linux 6.12.4-1-MANJARO, locale en_US.UTF-8)
• Flutter version 3.27.1 on channel stable at /opt/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (3 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /home/serge/AndroidSDK
• Platform android-35, build-tools 34.0.0
• ANDROID_HOME = /home/serge/AndroidSDK
• Java binary at: /usr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.13+11)
• All Android licenses accepted.
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
• clang version 18.1.8
• cmake version 3.31.2
• ninja version 1.12.1
• pkg-config version 2.3.0
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-x64 • Manjaro Linux 6.12.4-1-MANJARO
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
| framework,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Critical |
2,778,008,373 | rust | ICE: `trimmed_def_paths` called, diagnostics were expected but none were emitted | beta triggered an ICE in Quinn's CI today:
https://github.com/quinn-rs/quinn/actions/runs/12691355015/job/35374229601?pr=2130
Maybe similar/related to:
- #134345
Maybe a fix that could be backported?
```
thread 'rustc' panicked at compiler/rustc_errors/src/lib.rs:646:17:
`trimmed_def_paths` called, diagnostics were expected but none were emitted. Use `with_no_trimmed_paths` for debugging. Backtraces are currently disabled: set `RUST_BACKTRACE=1` and re-run to see where it happened.
stack backtrace:
0: 0x7fc24a2de3ba - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h4877db58cd1f68da
1: 0x7fc24aa135e6 - core::fmt::write::ha314f3a66d347c48
2: 0x7fc24b903a11 - std::io::Write::write_fmt::h50f19c6ac271d942
3: 0x7fc24a2de212 - std::sys::backtrace::BacktraceLock::print::hf8fc8e4f6df76466
4: 0x7fc24a2e07b7 - std::panicking::default_hook::{{closure}}::hc1cbccbc75363945
5: 0x7fc24a2e05a0 - std::panicking::default_hook::ha6ac61b9282038d9
6: 0x7fc249453168 - std[a5195d1e2fc22c41]::panicking::update_hook::<alloc[20639d31ccb56773]::boxed::Box<rustc_driver_impl[5024ff76ded81fa1]::install_ice_hook::{closure#1}>>::{closure#0}
7: 0x7fc24a2e1003 - std::panicking::rust_panic_with_hook::h5663cbee266e1761
8: 0x7fc24a2e0cfa - std::panicking::begin_panic_handler::{{closure}}::hbbe6da1959d575ac
9: 0x7fc24a2de889 - std::sys::backtrace::__rust_end_short_backtrace::h8c70aabd292ca3dd
10: 0x7fc24a2e09bd - rust_begin_unwind
11: 0x7fc246f9ff80 - core::panicking::panic_fmt::h044f8a6ac8b759ab
12: 0x7fc24b850471 - <rustc_errors[697e64861a5c752e]::DiagCtxtInner as core[355654ca938a17d7]::ops::drop::Drop>::drop
13: 0x7fc24b850f9c - core[355654ca938a17d7]::ptr::drop_in_place::<rustc_errors[697e64861a5c752e]::DiagCtxt>
14: 0x7fc24b9a551a - core[355654ca938a17d7]::ptr::drop_in_place::<rustc_session[7408041faf332997]::parse::ParseSess>
15: 0x7fc24b9a64a0 - core[355654ca938a17d7]::ptr::drop_in_place::<rustc_interface[2b78e7c3f7f2a3ff]::interface::Compiler>
16: 0x7fc24b9ada79 - rustc_interface[2b78e7c3f7f2a3ff]::interface::run_compiler::<(), rustc_driver_impl[5024ff76ded81fa1]::run_compiler::{closure#0}>::{closure#1}
17: 0x7fc24b853b95 - std[a5195d1e2fc22c41]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[2b78e7c3f7f2a3ff]::util::run_in_thread_with_globals<rustc_interface[2b78e7c3f7f2a3ff]::util::run_in_thread_pool_with_globals<rustc_interface[2b78e7c3f7f2a3ff]::interface::run_compiler<(), rustc_driver_impl[5024ff76ded81fa1]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
18: 0x7fc24b854048 - <<std[a5195d1e2fc22c41]::thread::Builder>::spawn_unchecked_<rustc_interface[2b78e7c3f7f2a3ff]::util::run_in_thread_with_globals<rustc_interface[2b78e7c3f7f2a3ff]::util::run_in_thread_pool_with_globals<rustc_interface[2b78e7c3f7f2a3ff]::interface::run_compiler<(), rustc_driver_impl[5024ff76ded81fa1]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[355654ca938a17d7]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
19: 0x7fc24b855601 - std::sys::pal::unix::thread::Thread::new::thread_start::h535af24502242ac5
20: 0x7fc245a9ca94 - <unknown>
21: 0x7fc245b29c3c - <unknown>
22: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.85.0-beta.1 (e30eefff4 2025-01-08) running on x86_64-unknown-linux-gnu
note: compiler flags: -C embed-bitcode=no -C debuginfo=2
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
error: could not compile `quinn-proto` (lib test)
Caused by:
process didn't exit successfully: `/home/runner/.rustup/toolchains/beta-x86_64-unknown-linux-gnu/bin/rustc --crate-name quinn_proto --edition=2021 quinn-proto/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 --warn=unexpected_cfgs --check-cfg 'cfg(fuzzing)' --test --cfg 'feature="default"' --cfg 'feature="log"' --cfg 'feature="platform-verifier"' --cfg 'feature="ring"' --cfg 'feature="rustls-ring"' --check-cfg 'cfg(docsrs,test)' --check-cfg 'cfg(feature, values("arbitrary", "aws-lc-rs", "aws-lc-rs-fips", "default", "log", "platform-verifier", "ring", "rustls", "rustls-aws-lc-rs", "rustls-aws-lc-rs-fips", "rustls-log", "rustls-ring"))' -C metadata=e4cd9ebb7814c565 -C extra-filename=-875e02898ea8dd6b --out-dir /home/runner/work/quinn/quinn/target/debug/deps -L dependency=/home/runner/work/quinn/quinn/target/debug/deps --extern assert_matches=/home/runner/work/quinn/quinn/target/debug/deps/libassert_matches-ff215179dd669654.rlib --extern bytes=/home/runner/work/quinn/quinn/target/debug/deps/libbytes-a85ea1e0de84b215.rlib --extern hex_literal=/home/runner/work/quinn/quinn/target/debug/deps/libhex_literal-26f17529c9d73dcc.rlib --extern lazy_static=/home/runner/work/quinn/quinn/target/debug/deps/liblazy_static-b03541b63c18dba7.rlib --extern rand=/home/runner/work/quinn/quinn/target/debug/deps/librand-717c8c34580787c0.rlib --extern rcgen=/home/runner/work/quinn/quinn/target/debug/deps/librcgen-86629bfa01ffb0b4.rlib --extern ring=/home/runner/work/quinn/quinn/target/debug/deps/libring-b326a68ecfa48aef.rlib --extern rustc_hash=/home/runner/work/quinn/quinn/target/debug/deps/librustc_hash-311bd3609ea046c6.rlib --extern rustls=/home/runner/work/quinn/quinn/target/debug/deps/librustls-df6c75c11fb1ff22.rlib --extern rustls_platform_verifier=/home/runner/work/quinn/quinn/target/debug/deps/librustls_platform_verifier-11787e01b1085472.rlib --extern slab=/home/runner/work/quinn/quinn/target/debug/deps/libslab-3c03d7cf5b378de6.rlib --extern thiserror=/home/runner/work/quinn/quinn/target/debug/deps/libthiserror-efb27899156c3206.rlib --extern tinyvec=/home/runner/work/quinn/quinn/target/debug/deps/libtinyvec-1f34b1656bcd04e6.rlib --extern tracing=/home/runner/work/quinn/quinn/target/debug/deps/libtracing-908c8b1357dbbdf0.rlib --extern tracing_subscriber=/home/runner/work/quinn/quinn/target/debug/deps/libtracing_subscriber-d313c5fd47fc560a.rlib --extern wasm_bindgen_test=/home/runner/work/quinn/quinn/target/debug/deps/libwasm_bindgen_test-35a5da5beb78a7c9.rlib -L native=/home/runner/work/quinn/quinn/target/debug/build/ring-f079ed4dd15e693d/out` (exit status: 101)
``` | A-diagnostics,I-ICE,T-compiler,C-bug,WG-diagnostics,A-patterns,S-has-mcve | low | Critical |
2,778,020,400 | langchain | SurrealDBStore returns error `ImportError: cannot import name 'Surreal' from 'surrealdb'` | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
#!/usr/bin/env python3
from langchain_community.vectorstores import SurrealDBStore
dg = SurrealDBStore(
dburl="ws://10.0.0.2:28000/rpc",
embedding_function=None
)
```
### Error Message and Stack Trace (if applicable)
```
./surreal.py
Traceback (most recent call last):
File "/home/frja/.local/share/virtualenvs/index-docs-py-M1grQ13y/lib/python3.12/site-packages/langchain_community/vectorstores/surrealdb.py", line 59, in __init__
from surrealdb import Surreal
ImportError: cannot import name 'Surreal' from 'surrealdb' (/home/frja/.local/share/virtualenvs/index-docs-py-M1grQ13y/lib/python3.12/site-packages/surrealdb/__init__.py). Did you mean: 'SurrealDB'?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/frja/rust/avassa-chatbot/index-docs-py/./surreal.py", line 5, in <module>
dg = SurrealDBStore(
^^^^^^^^^^^^^^^
File "/home/frja/.local/share/virtualenvs/index-docs-py-M1grQ13y/lib/python3.12/site-packages/langchain_community/vectorstores/surrealdb.py", line 61, in __init__
raise ImportError(
ImportError: Cannot import from surrealdb.
please install with `pip install surrealdb`.
```
### Description
I believe the import should be `from surrealdb import SurrealDB`
### System Info
```
aiohappyeyeballs==2.4.4
aiohttp==3.11.11
aiosignal==1.3.2
annotated-types==0.7.0
anyio==4.8.0
attrs==24.3.0
cbor2==5.6.5
certifi==2024.12.14
charset-normalizer==3.4.1
dataclasses-json==0.6.7
frozenlist==1.5.0
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
httpx-sse==0.4.0
idna==3.10
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.14
langchain-community==0.3.14
langchain-core==0.3.29
langchain-text-splitters==0.3.5
langsmith==0.2.10
marshmallow==3.24.2
multidict==6.1.0
mypy-extensions==1.0.0
numpy==2.2.1
orjson==3.10.14
packaging==24.2
propcache==0.2.1
pydantic==2.10.5
pydantic-settings==2.7.1
pydantic_core==2.27.2
python-dotenv==1.0.1
PyYAML==6.0.2
requests==2.32.3
requests-toolbelt==1.0.0
sniffio==1.3.1
SQLAlchemy==2.0.36
surrealdb==0.4.1
tenacity==9.0.0
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.3.0
websockets==14.1
yarl==1.18.3
``` | 🤖:bug | low | Critical |
2,778,024,582 | react-native | Text component moves child element to left when using RTL in system | ### Description
If the phone has an RTL (Hebrew) language installed, then the Text component behaves incomprehensibly: the child element moves to the left. This bug does not occur on all devices. In the emulator, on a Galaxy Tab S6, on the web, or in Expo Go, everything is displayed correctly. I noticed this bug on the Galaxy A9. If you change the system language to LTR, then everything is fine and the shift disappears.
React components should be able to be placed anywhere in the application. In my case, I wanted to place a View in a Link from @react-navigation/native and noticed this bug. But this only applies to the View component.
my code:
```tsx
import React from 'react';
import {SafeAreaView, Text, View} from 'react-native';
function App(): React.JSX.Element {
return (
<SafeAreaView>
<View
style={{
backgroundColor: 'black',
paddingVertical: 10,
paddingHorizontal: 50,
}}>
<Text style={{backgroundColor: 'red', paddingVertical: 5}}>
<View
style={{
backgroundColor: 'yellow',
paddingVertical: 5,
width: '100%',
}}>
<Text style={{backgroundColor: 'green'}}>
{'\u05E9\u05DC\u05D5\u05DD'}
</Text>
</View>
</Text>
</View>
</SafeAreaView>
);
}
export default App;
```
### Steps to reproduce
1. Set language on mobile device Hebrew
2. Install app to device
3. Look at the block offset
### React Native Version
0.76.5
### Affected Platforms
Runtime - Android, Build - Linux
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: Linux 6.12 Arch Linux
CPU: (12) x64 Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
Memory: 16.67 GB / 31.26 GB
Shell:
version: "5.9"
path: /usr/bin/zsh
Binaries:
Node:
version: 20.10.0
path: ~/.nvm/versions/node/v20.10.0/bin/node
Yarn:
version: 1.22.22
path: /usr/bin/yarn
npm:
version: 10.9.2
path: ~/.nvm/versions/node/v20.10.0/bin/npm
Watchman: Not Found
SDKs:
Android SDK:
API Levels:
- "30"
- "34"
- "35"
Build Tools:
- 30.0.2
- 34.0.0
- 35.0.0
System Images:
- android-35 | Google Play Intel x86_64 Atom
Android NDK: Not Found
IDEs:
Android Studio: AI-242.23339.11.2421.12700392
Languages:
Java:
version: javac 23
path: /usr/bin/javac
Ruby: Not Found
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
not have error in logs
```
### Reproducer
https://github.com/whiskeycola/test-react-native-text
### Screenshots and Videos
This is what I got as a result:

Expected behavior: the child element must fit within the Text horizontally.

| Issue: Author Provided Repro,Newer Patch Available | low | Critical |
2,778,043,541 | ui | [bug]: Popover cannot be worked properly inside Dialog with React 19 | ### Describe the bug
just example code here:
```tsx
<Dialog>
  <DialogTrigger asChild>
    <Button>Open Dialog</Button>
  </DialogTrigger>
  <DialogContent>
    <DialogHeader>
      <DialogTitle>Are you absolutely sure?</DialogTitle>
      <Popover>
        <PopoverTrigger
          asChild
          className="flex-1 flex gap-2 flex-col justify-start"
        >
          <Button>
            <Input type="text" />
          </Button>
        </PopoverTrigger>
        <PopoverContent className="flex flex-col gap-2 w-80">
          <Input type="text" placeholder="more.." />
          <Input type="text" placeholder="more.." />
          <Input type="text" placeholder="more.." />
          <Input type="text" placeholder="more.." />
        </PopoverContent>
      </Popover>
    </DialogHeader>
  </DialogContent>
</Dialog>
```

### Affected component/components
Popover
### How to reproduce
Click on
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Browser
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,778,075,978 | kubernetes | FUSE mounts in emptyDir volumes cannot be cleaned | ### What happened?
This issue is to summarize some conversation towards the end of https://github.com/kubernetes/kubernetes/issues/7890, as requested by @thockin.
If an application in a privileged pod or container creates a FUSE mount in an emptyDir volume, but fails to unmount it before terminating (either due to that being a conscious choice by the application, or due to a SIGKILL from kubernetes), the kubelet will fail to clean up the pod. A recurring error will appear in the kubelet logs, and the pod will remain in API.
Here is an example error log from the kubelet during cleanup:
```
Jan 08 19:06:04 <hostname omitted> kubelet[12511]: E0108 19:06:04.507950 12511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/empty-dir/30b506e8-b18a-4d5c-bf7d-17fbae54a5d0-worker podName:30b506e8-b18a-4d5c-bf7d-17fbae54a5d0 nodeName:}" failed. No retries permitted until 2025-01-08 19:08:06.507933266 +0000 UTC m=+1437.970062341 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "worker" (UniqueName: "kubernetes.io/empty-dir/30b506e8-b18a-4d5c-bf7d-17fbae54a5d0-worker") pod "30b506e8-b18a-4d5c-bf7d-17fbae54a5d0" (UID: "30b506e8-b18a-4d5c-bf7d-17fbae54a5d0") : openfdat /var/lib/kubelet/pods/30b506e8-b18a-4d5c-bf7d-17fbae54a5d0/volumes/kubernetes.io~empty-dir/worker/build: transport endpoint is not connected
```
The offending code seems to be here: https://github.com/kubernetes/kubernetes/blob/release-1.31/pkg/volume/emptydir/empty_dir.go#L490-L495
When cleaning up emptyDirs, we start with `os.RemoveAll`, as it's recursing through the directory, it will eventually try to inspect the contents of the FUSE mount, which will result in an error.
### What did you expect to happen?
The kubelet is able to clean the pod up eventually.
### How can we reproduce it (as minimally and precisely as possible)?
1. Run a privileged container that can generate a FUSE mount within an emptydir
2. Configure the application to not clean the FUSE mount, or forcefully terminate the pod so the mount cannot be cleaned.
### Anything else we need to know?
I'd be happy to try and put a patch together to address this with a little guidance. It seems like we should be able to inspect for any mounts beneath the empty directory's `MetaDir` and `umount` them before we attempt to call the `os.RemoveAll`.
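A rough sketch of that idea using `k8s.io/mount-utils` (the function name and its exact placement in the emptyDir plugin are my assumptions, not existing code):
```go
package emptydir

import (
	"fmt"
	"os"
	"strings"

	mount "k8s.io/mount-utils"
)

// cleanupLeakedMounts unmounts anything still mounted beneath the emptyDir's
// directory before os.RemoveAll walks into it. A leaked FUSE mount cannot be
// stat'ed ("transport endpoint is not connected"), but it can still be
// unmounted.
func cleanupLeakedMounts(mounter mount.Interface, dir string) error {
	mountPoints, err := mounter.List()
	if err != nil {
		return fmt.Errorf("listing host mounts: %w", err)
	}
	for _, mp := range mountPoints {
		if strings.HasPrefix(mp.Path, dir+string(os.PathSeparator)) {
			if err := mounter.Unmount(mp.Path); err != nil {
				return fmt.Errorf("unmounting %s: %w", mp.Path, err)
			}
		}
	}
	return nil
}
```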
### Kubernetes version
<details>
```console
Client Version: v1.30.5
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.3-eks-56e63d8
```
</details>
### Cloud provider
<details>
AWS EKS
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME=Bottlerocket
ID=bottlerocket
VERSION="1.29.0 (aws-k8s-1.31)"
PRETTY_NAME="Bottlerocket OS 1.29.0 (aws-k8s-1.31)"
VARIANT_ID=aws-k8s-1.31
VERSION_ID=1.29.0
BUILD_ID=c55d099c
HOME_URL="https://github.com/bottlerocket-os/bottlerocket"
SUPPORT_URL="https://github.com/bottlerocket-os/bottlerocket/discussions"
BUG_REPORT_URL="https://github.com/bottlerocket-os/bottlerocket/issues"
DOCUMENTATION_URL="https://bottlerocket.dev"
$ uname -a
# paste output here
Linux <hostname omitted> 6.1.119 #1 SMP Thu Dec 12 20:00:51 UTC 2024 aarch64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,needs-triage | low | Critical |
2,778,092,242 | pytorch | Partitioner's auto-AC misbehaves with mixed dtypes | ### 🚀 The feature, motivation and pitch
When using the partitioner to do some automatic selective activation checkpointing, the default mode consists of estimating the runtime cost of the operators by computing the number of FLOPs they consume. Unfortunately, this is a bad proxy metric in cases where different operators are executed with different dtypes because, in a sense, not all FLOPs are equal.
Concretely, this occurs when using fp8 matmuls on H100 GPUs, because (in most/all current recipes) only the matmuls are converted to fp8, whereas the self-attention remains in bf16. Moreover, in some cases only _some_ matmuls get converted to fp8 (e.g., some layers, or some specific weights within layers).
If the partitioner just compares FLOPs without adjusting them by the time it takes to execute a FLOP in a given dtype this might lead to a suboptimal solution to the AC problem.
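As a rough illustration, a dtype-aware cost model could scale each operator's FLOP count by the relative throughput of its compute dtype; the throughput numbers below are illustrative placeholders, not measured values:
```python
# Sketch: weight FLOPs by per-dtype peak throughput so fp8 matmul FLOPs are
# not priced the same as bf16 attention FLOPs. Numbers are placeholders.
import torch

RELATIVE_THROUGHPUT = {
    torch.float8_e4m3fn: 2.0,  # e.g. H100 fp8 tensor cores ~2x bf16
    torch.bfloat16: 1.0,
    torch.float32: 0.5,
}

def estimated_cost(flops: float, dtype: torch.dtype) -> float:
    # Convert raw FLOPs into "bf16-equivalent" time units for the solver.
    return flops / RELATIVE_THROUGHPUT.get(dtype, 1.0)
```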
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu | feature,module: autograd,triaged,oncall: pt2,activation-checkpointing | low | Minor |
2,778,093,149 | ant-design | [Button] colorPrimary does not support CSS named colors | ### Reproduction link
[https://github.com/consistent-k/VodNext/blob/main/components/providers/ThemeProvider.tsx](https://github.com/consistent-k/VodNext/blob/main/components/providers/ThemeProvider.tsx)
https://stackblitz.com/edit/react-y7v5ygbf?file=demo.tsx
### Steps to reproduce
Inside ConfigProvider, with the token configured as `colorPrimary: 'skyblue'`, the button color renders as black; configuring a hexadecimal value works fine.
```ts
<ConfigProvider
theme={{
token: {
colorPrimary: 'red',
},
}}
>
<Flex gap="small" wrap>
<Button type="primary">Primary Button</Button>
<Button>Default Button</Button>
<Button type="dashed">Dashed Button</Button>
<Button type="text">Text Button</Button>
<Button type="link">Link Button</Button>
</Flex>
</ConfigProvider>
```
<img width="1374" alt="image" src="https://github.com/user-attachments/assets/00f69c65-efaa-42db-9e34-c487d4c5d199" />
Named color:
<img width="364" alt="image" src="https://github.com/user-attachments/assets/e8895460-b908-473b-9c57-92bf644d9ee9" />
Hexadecimal:
<img width="360" alt="image" src="https://github.com/user-attachments/assets/de7736f2-1799-4c42-aedf-292295a38de5" />
### What is expected?
CSS named colors should be supported.
### What is actually happening?
They are actually not supported.
| Environment | Info |
| --- | --- |
| antd | 5.23.0 |
| React | 19.0.0 |
| System | Mac OS |
| Browser | 131.0.6778.109 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug | low | Major |
2,778,116,984 | electron | Caret in focused text input stops blinking after context menu has been shown (until page is clicked) | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.3.1
### What operating system(s) are you using?
macOS
### Operating System Version
macOS Sonoma 14.7
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
Behavior was the same (not working) in `31.1.0`
### Expected Behavior
When a context menu is opened using `Menu.popup()`, then closed (either using the `Escape` key, or by clicking one of the menu options) and then a text input is focused, I would expect the input caret to render normally (e.g. blinking).
### Actual Behavior
When a context menu is opened using `Menu.popup()` and then closed (either using the `Escape` key, or by clicking one of the menu options), the caret of text inputs doesn't blink (even though the input has focus and it's possible to write in the text input). When the page is clicked, the caret blinking goes back to normal.
### Testcase Gist URL
https://gist.github.com/heyman/4917fd8871f6bb1f322960c31f1efcdd
### Additional Information
_No response_ | platform/macOS,bug :beetle:,blocked/upstream ❌,has-repro-gist,33-x-y | low | Critical |
2,778,152,484 | ant-design | The itemBg property of the Pagination component does not apply without using !important | ### Reproduction link
https://codesandbox.io/p/sandbox/gracious-brattain-5zm8km
### Steps to reproduce
1. Configure a custom theme using `ThemeConfig` in an Ant Design project.
2. Change the `Pagination.itemBg` property to a color like `"red"` (see the sketch after this list).
3. Notice that the background color is not applied to the component.
4. Apply `!important` to the `itemBg` value (e.g., `"red !important"`).
5. Observe that the background color is applied but overrides other styles
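A minimal sketch of the configuration from step 2:
```tsx
import { ConfigProvider, Pagination } from 'antd';

export default () => (
  <ConfigProvider
    theme={{
      components: {
        Pagination: {
          itemBg: 'red', // not applied unless written as 'red !important'
        },
      },
    }}
  >
    <Pagination total={50} />
  </ConfigProvider>
);
```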
### What is expected?
The background color (`itemBg`) should be applied correctly without the need to use `!important`.
### What is actually happening?
- Without `!important`: The background color defined in `itemBg` is not applied.
| Environment | Info |
| --- | --- |
| antd | 5.23.0 |
| React | 18 |
| System | Windows |
| Browser | Chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,778,156,265 | godot | Custom Resource Default settings not being set when game is exported | ### Tested versions
Reproducible in v4.3.stable.official.
MacOS version.
Demonstrated here: https://github.com/joelldt/GodotBugTest
### System information
MacBook Pro M1 2020, macOS Sonoma Version 14.5, 16GB memory
### Issue description
The best way I can explain it is with an example. Say you have a custom resource, Item:
```gdscript
extends Resource
class_name Item
@export var name: String
@export var sound: AudioStream = load("res://assets/sound/noisy.wav")
```
You create your first item, SWORD, and in the inspector you name it "Sword" and assign an AudioStream "res://assets/sound/noisy.wav" IN THE INSPECTOR.
You then create a second item, AXE, and in the inspector you name it "Axe" and don't assign any AudioStream to it in the inspector.
If you then run your project within Godot, both the SWORD and the AXE will have the AudioStream "res://assets/sound/noisy.wav" playing when they need to, which I guess is because the AXE gets it from the default value being `= load("res://assets/sound/noisy.wav")`...
However, when you come to export your project, ONLY THE SWORD will have the audio stream playing; the AXE will have no sound at all. I am not quite sure why this is. I think either the reference to the default value is lost completely (I'm not sure it always is, or I think more of my game would have broken... lol...) or the res:// part is no longer relevant in the exported version of the game (but this doesn't really make sense; again, I think more of my game would have broken if that was the case).
If anyone can explain what's happening better here I'm all ears! But this does feel very unexpected (different output in Godot playback versus exported game).
### Steps to reproduce
New project.
Add an audio file to the project.
Create new custom resource class.
Add this variable to the custom resource script:
```gdscript
@export var soundEffect: AudioStream = preload("your_audio_file")
```
Add two new custom resources of this class, in one assign the audio file to the soundEffect variable in the inspector, in the other leave the inspector blank.
Create a method of calling these (I use buttons in the MRP with an audio_stream_player_2d attached) that is the same scene that has an @export var blah: your_custom_resource
In the ready() function for that scene, assign the audio stream
```gdscript
func _ready() -> void:
audio_stream_player.stream = item.soundEffect
```
Export the game to .exe or .dmg.
This should work fine in the Godot editor when you run the project, but the custom resource without an assigned audio will have no audio in the exported version of the game.
### Minimal reproduction project (MRP)
https://github.com/joelldt/GodotBugTest
[testproject.zip](https://github.com/user-attachments/files/18363722/testproject.zip)
| topic:core,needs testing | low | Critical |
2,778,160,607 | vscode | Editor GPU: Double rendering after undo | Repro:
1. Enable editor gpu rendering, reload window
2. Go to this line in vscode: https://github.com/microsoft/vscode/blob/77027e72e5f497e72edeba40aa9360a0865eec08/extensions/terminal-suggest/src/terminalSuggestMain.ts#L243-L247
3. Delete that function
4. Undo, 🐛 bad render

| bug,editor-gpu | low | Major |
2,778,160,679 | go | proposal: text/templates: custom block preprocessor | ### Proposal Details
I wanted to share a feature request that I have been seeking for a long time. Would love to get feedback and see the appetite for this. The templating package is simple and yet powerful by design. Today the actions are hard coded in the parser (text/template/parse/parse.go). Templating languages like Jinja offer custom "nodes" - extensions for user defined action types instead of the fixed ones defined by the template parser. This would allow users to build custom functionality at parse time itself.
#### Example Usage:
```
// text/template/parse:
// Block take a parse tree and transforms/rewrites it.
type Block interface {
// TODO - Needs further if parent/root template is passed for sharing context etc is the right way
HandleTree(t *parse.Tree, root *parse.Node) *parse.Node
}
type BlockMap map[string]Block
// text.template to also have an BlockMap
```
```
func main() {
	actionMap := template.BlockMap{
		"myaction": func(root *parse.Tree, node *parse.Node) *parse.Node {
			// ... process/rewrite/transform body of custom action
		},
	}
	contents := `{{ myaction pipeline }} .... template body {{ end }}`
	t := template.Must(template.New("helloworld").Actions(actionMap).Parse(contents))
}
```
#### Goals:
1. Must be opt-in
2. Must not break existing templates
3. Must not break existing syntax
4. Functions to still be prioritized over actions (??)
5. Minimize exposure of the parser interface - currently the parser is pretty closed and we may not want to open up the top-down parser completely. E.g., should we add a new "custom node" type in the Tree?
6. [Added] Secure: As called out in #33449 html/template performs escaping of text/template before rendering. This must not be circumventable or incompatible.
In this proposal, when a custom action is encountered the child is parsed and passed to the action instead of the action driving its own parser - this is in line with (2). Doing so will incur an inefficiency. However, this is opt-in and parsed templates can be cached, making this a one-off.
To address (6): this proposal is only for the parser. By making the parser act only as a preprocessor and rewriting the parse tree, the standard exec phase can apply. For example, a "switch" action could rewrite the body in terms of "if/else-if" actions.
#### Use cases
##### "switch" action
```
{{ switch pipeline }}
{{ case cond1 }}
{{ end }}
{{ case cond2 }}
{{ end }}
{{ case cond3 }}
{{ end }}
{{ default }}
{{ end }}
{{ end }}
```
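Under the proposed interface, such a Block could be implemented roughly as follows; `findCases` and `buildIfChain` are hypothetical helpers, since today's `text/template/parse` does not export node constructors:
```go
// Sketch of a "switch" Block under the proposed interface.
type switchBlock struct{}

func (switchBlock) HandleTree(t *parse.Tree, root *parse.Node) *parse.Node {
	// Collect the {{ case ... }} / {{ default }} branches from the
	// already-parsed switch body.
	branches := findCases(root)
	// Rewrite them into a nested {{ if }}/{{ else if }}/{{ else }} chain so
	// the standard exec phase (and html/template escaping) applies
	// unchanged to the rewritten tree.
	return buildIfChain(t, branches)
}
```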
##### "import" action
Instead of loading all templates, we could only provide a root template that loads its dependencies providing better isolation:
```
{{ import HomeHeader }}
{{ import HomeFooter }}
{{ define HomePage }}
{{ template HomeHeader }}
.... Home Page template
{{ template HomeFooter }}
{{ end }}
```
| Proposal | low | Major |
2,778,250,250 | transformers | Inconsistent saving of tokenizer with custom code from HF hub vs. local directory | ### System Info
- `transformers` version: 4.47.1
- Platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: no
- GPU type: NVIDIA RTX A6000
### Who can help?
@ArthurZucker or @itazap
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1GPySa5JEclxzlVZbI1mg3ajWzjv3mf42?usp=sharing
When loading a tokenizer that requires custom code from HF hub and saving it, the custom Python file is not saved. In contrast, loading the same tokenizer from a local clone of the repository and saving it behaves as expected, including the custom Python file.
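A minimal sketch of the two code paths; the model id and directory names are placeholders for any repository that ships a custom tokenizer file:
```python
from transformers import AutoTokenizer

# Loading from the Hub: the custom tokenization_*.py file is NOT saved.
tok_hub = AutoTokenizer.from_pretrained(
    "org/model-with-custom-tokenizer", trust_remote_code=True
)
tok_hub.save_pretrained("./saved_from_hub")  # custom .py file missing

# Loading from a local clone: the custom .py file IS saved as expected.
tok_local = AutoTokenizer.from_pretrained(
    "./local-clone-of-repo", trust_remote_code=True
)
tok_local.save_pretrained("./saved_from_local")  # custom .py file included
```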
**Additional context**
The output during the `save_pretrained` method lists an `added_tokens.json` file, which does not exist after saving and is not part of the original repository files. This happens regardless of where the tokenizer was loaded from.
### Expected behavior
When saving the tokenizer loaded from the HF hub, the custom Python file should be consistently included, ensuring complete functionality of the tokenizer. | bug,Remote code | low | Minor |
2,778,282,499 | ui | [bug]: Toggle pressed error data-state always closed | ### Describe the bug
When the toggle's pressed state is true or false, `data-state` should be `on` or `off`, but here it is always `closed`.
<img width="581" alt="Jepretan Layar 2025-01-09 pukul 23 26 58" src="https://github.com/user-attachments/assets/ed54b3d8-649e-43ba-8c41-8a30302da7db" />
<img width="581" alt="Jepretan Layar 2025-01-09 pukul 23 26 25" src="https://github.com/user-attachments/assets/b8eef407-d516-4d74-9586-1462908bae6a" />
### Affected component/components
Toggle
### How to reproduce
Press the toggle and inspect the rendered button's `data-state` attribute.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
"next": "15.1.2",
"react": "^19.0.0",
"@radix-ui/react-toggle": "^1.1.1",
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,778,297,708 | tauri | [bug] Listen permission problem | ### Describe the bug
Hi. I'm having trouble with permission to use listen.
When I use listen in my code and run it for the first time, I get `event.listen not allowed. Command not found`
So I thought I should add the permission explicitly.
Documentation seems to indicate that `core:event:allow-listen` can be added as a permission, so I added that code to `src-tauri/capabilities/default.json`.
```
src-tauri/capabilities/default.json
{
"$schema": "../gen/schemas/desktop-schema.json",
"identifier": "default",
"description": "Capability for the main window",
"windows": ["main"],
"permissions": [
"core:default",
"core:event:allow-listen",
"shell:allow-open",
"fs:read-all",
"fs:write-all",
"fs:allow-read-file",
"fs:allow-write-file",
{
"identifier": "fs:scope",
"allow": ["**/*"]
}
]
}
```
However, this causes a new issue at build time: `Permission core:event:allow-listen not found`.
How can I use the listen function?
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.1.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 21.7.1
- pnpm: 8.15.4
- npm: 10.5.0
- deno: deno 1.46.3
[-] Packages
- tauri :crab:: 2.2.0
- tauri-build :crab:: 2.0.4
- wry :crab:: 0.48.0
- tao :crab:: 0.31.1
- @tauri-apps/api : 2.2.0
- @tauri-apps/cli : 2.2.2
[-] Plugins
- tauri-plugin-fs :crab:: 2.2.0
- @tauri-apps/plugin-fs : 2.2.0
- tauri-plugin-shell :crab:: 2.2.0
- @tauri-apps/plugin-shell : 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
[1]
1. npm create tauri-app@latest
2. added this code in FE React
```
useEffect(() => {
const unlisten = listen<string>("global-mouse-event", (event) => {
const { x, y } = JSON.parse(event.payload);
console.log(`Mouse moved to: X=${x}, Y=${y}`);
});
return () => {
unlisten.then((f) => f());
};
}, []);
```
3. It needed permission.
`Unhandled Promise Rejection: event. listen not allowed. Command not found`
+ so I added permission
```
{
"$schema": "../gen/schemas/desktop-schema.json",
"identifier": "default",
"description": "Capability for the main window",
"windows": ["main"],
"permissions": ["core:default", "core:event:allow-listen", "opener:default"]
}
```
-> Same problem
[2]
It has the same problem even if I `rm -rf .cargo`, remove `src-tauri/target`, and reinstall rustup.
2,778,313,721 | godot | `Window.move_to_center` moves the window 15 px lower than the centered initial window position. | ### Tested versions
Reproducable: 4.2-stable - 4.4.dev7
not Reproducable: 4.1-stable (function doesn't exist yet)
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 2060 SUPER (NVIDIA; 32.0.15.6636) - Intel(R) Core(TM) i7-9700K CPU @ 3.60GHz (8 threads)
### Issue description
`get_window().move_to_center()` does not put the window in the same position as the initial startup position of the window with default project settings.
This seems to have been the case ever since the function was added to the engine.
### Steps to reproduce
- Create a new project.
- Make a 2D scene and give it a script.
- Add a Button to it and connect its pressed signal to the script.
```gdscript
func _on_button_pressed() -> void:
print(get_window().position)
get_window().move_to_center()
print(get_window().position)
```
run the game and press the button.
```
printout
(2624, 381)
(2624, 396)
```
the window moves by 15 px for me.
### Minimal reproduction project (MRP)
N/A | bug,topic:gui | low | Minor |
2,778,314,034 | go | all: replace reflect.DeepEqual for error comparisons in test files | ### Proposal Details
## Description
Currently, several test files across the codebase use `reflect.DeepEqual` to compare errors. This is not the recommended way to compare errors in Go, as it can be both inefficient and potentially fragile. We should replace these comparisons with proper error comparison methods.
## Affected Files
At least the following:
- `src/strconv/atof_test.go`
- `src/strconv/atoi_test.go`
- `src/errors/join_test.go`
- `src/fmt/errors_test.go`
- `src/net/iprawsock_test.go`
- `src/net/mac_test.go`
- `src/net/tcpsock_test.go`
- `src/net/udpsock_test.go`
- `src/encoding/asn1/asn1_test.go`
- `src/encoding/base64/base64_test.go`
- `src/go/build/build_test.go`
## Proposed Solution
Replace `reflect.DeepEqual` error comparisons with either:
1. Direct error string comparison using `err.Error()` for simple cases
2. Custom error comparison functions that handle nil cases and string comparison (see the sketch after this list)
3. Use of `errors.Is()` where appropriate
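For example, a minimal helper for option 2 could look like this:
```go
// equalError reports whether got and want represent the same error,
// treating two nil errors as equal and otherwise comparing messages.
func equalError(got, want error) bool {
	if got == nil || want == nil {
		return got == want
	}
	return got.Error() == want.Error()
}
```
Tests would then assert `equalError(err, tt.wantErr)` instead of `reflect.DeepEqual(err, tt.wantErr)`.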
## Implementation Plan
The changes will be submitted as separate PRs grouped by package to make reviews more manageable.
Each PR will:
- Remove usage of `reflect.DeepEqual` for error comparisons
- Add appropriate error comparison functions where needed
- Maintain existing test coverage and functionality
- Include only the changes related to error comparison
## Benefits
- More idiomatic error comparison in tests
- More maintainable and clearer error comparison logic | NeedsInvestigation | low | Critical |
2,778,319,528 | PowerToys | Host File Editor - support for SSH client files | ### Description of the new feature / enhancement
It would be nice if Host File Editor provides one-click editing of Windows' SSH client configuration files:
%USERPROFILE%\.ssh\known_hosts
%USERPROFILE%\.ssh\config
### Scenario when this would be used?
Embedded developers often change/reflash targets, which leads to changed SSH keys. Windows' ssh.exe simply refuses to connect if the keys don't match. Editing these files currently requires either going through CMD or navigating to the user's profile folder.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,778,320,232 | vscode | Gear Icon Recommended Update Resulted In Loss of All Extensions |
Type: <b>Bug</b>
Top-Right gear icon had blue subicon with a 1 .
Clicked and saw recommendation to restart to update.
Clicked to restart to update.
When restarting, all extensions were gone.
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Darwin arm64 24.0.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 Max (12 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|1, 2, 2|
|Memory (System)|32.00GB (0.04GB free)|
|Process Argv|--crash-reporter-id fedf72f0-71db-42ab-97d2-e4f289d52194|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (4)</summary>
Extension|Author (truncated)|Version
---|---|---
codespaces|Git|1.17.3
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,778,333,031 | vscode | Command to install local workspace extension | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
There are existing commands to install an extension from either a given location or from a vsix file. However, those commands install the extension globally. Currently there is an option to install extensions under .vscode/extensions in the local workspace without installing them globally; however, it is only available through the UI. Please expose a command to install local workspace extensions.
| feature-request,extensions | low | Minor |
2,778,376,726 | pytorch | [dynamo] Support using UserDefinedFunction as argument (as_proxy). | ### 🚀 The feature, motivation and pitch
At the moment using UserDefainedFunction as_proxy() fails as unimplemented.
The original reason why it is needed - we want to dynamo trace through Subclasses constructor that receives python Callable as an argument (and it is not used inside the constructor).
Dynamo tries to inline it and fails on UserDefinedFunction. as_proxy()
The straightforward approach will be:
1. trace through UserDefinedFn with speculate_subgraph and register subgraph as attribute of output_graph
At the moment of using UserDefinedFn, arguments for this function are not specified.
One of the ideas - could we put some "stub", empty subgraph at the moment of creating an argument and replace it with real subgraph only when this UserDefinedFunciton is called?
The original testcase:
```
def test_unwrap_subclass_parameters_with_unused_callable_arg_in_ctor(self):
def fn(x):
return x
_test_fn = fn
class SC(WrapperSubclass):
@staticmethod
def __new__(cls, a, fn, outer_size=None, outer_stride=None):
return WrapperSubclass.__new__(cls, a, outer_size, outer_stride)
def __init__(self, a, fn, outer_size=None, outer_stride=None):
self.a = a
self.fn = fn
def __tensor_flatten__(self):
return ["a"], [self.fn]
@staticmethod
def __tensor_unflatten__(inner_tensors, meta, outer_size, outer_stride):
a = inner_tensors["a"]
fn = meta[0]
return SC(a, fn, outer_size, outer_stride)
@classmethod
def __torch_dispatch__(cls, func, types, args, kwargs):
if kwargs is None:
kwargs = {}
args_a = pytree.tree_map_only(cls, lambda x: x.a, args)
kwargs_a = pytree.tree_map_only(cls, lambda x: x.a, kwargs)
out_a = func(*args_a, **kwargs_a)
out_a_flat, spec = pytree.tree_flatten(out_a)
out_flat = [
cls(o_a, _test_fn) if isinstance(o_a, torch.Tensor) else o_a
for o_a in out_a_flat
]
out = pytree.tree_unflatten(out_flat, spec)
from torch._higher_order_ops.cond import cond_op
if func is cond_op:
return out
else:
return return_and_correct_aliasing(func, args, kwargs, out)
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.p1 = torch.nn.Parameter(torch.ones(3, 4))
self.p2 = torch.nn.Parameter(SC(torch.ones(3, 4), _test_fn))
def forward(self, x):
return x + 2 * self.p1 + self.p2
m = M()
from torch._functorch._aot_autograd.subclass_parametrization import (
unwrap_tensor_subclass_parameters,
)
unwrap_tensor_subclass_parameters(m)
x = torch.randn(3, 4)
comp_fn = torch.compile(m, backend="aot_eager", fullgraph=True)
out = comp_fn(x)
```
Error:
```
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/symbolic_convert.py", line 1685, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/symbolic_convert.py", line 921, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/variables/user_defined.py", line 600, in call_function
*proxy_args_kwargs(args, kwargs, tx=tx),
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/utils.py", line 976, in proxy_args_kwargs
unimplemented(
File "/data/users/ivankobzarev/a/pytorch/torch/_dynamo/exc.py", line 355, in unimplemented
raise Unsupported(msg, case_name=case_name) from from_exc
torch._dynamo.exc.Unsupported: call_function args: TensorVariable() UserFunctionVariable() ConstantVariable(NoneType: None) ConstantVariable(NoneType: None)
from user code:
File "/data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py", line 6421, in forward
return x + 2 * self.p1 + self.p2
File "/data/users/ivankobzarev/a/pytorch/torch/nn/utils/parametrize.py", line 407, in get_parametrized
return parametrization()
File "/data/users/ivankobzarev/a/pytorch/torch/nn/utils/parametrize.py", line 303, in forward
x = self[0](*originals)
File "/data/users/ivankobzarev/a/pytorch/torch/_functorch/_aot_autograd/subclass_parametrization.py", line 16, in forward
rebuilt = tp.__tensor_unflatten__(d, meta, None, None) # type: ignore[attr-defined]
File "/data/users/ivankobzarev/a/pytorch/test/functorch/test_aotdispatch.py", line 6391, in __tensor_unflatten__
return SC(a, fn, outer_size, outer_stride)
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | feature,triaged,oncall: pt2,module: dynamo | low | Critical |
2,778,400,745 | PowerToys | Text Extractor Does NOT Recognize Single Letters or Simple 2-letter Words | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
TextExtractor
### Steps to reproduce
1. Open any picture with words that have plenty of white space around them - ( I worked with https://www.nytimes.com/games/connections )
2. Ensure that the picture has single letters surrounded by white space ( e.g. I A )
3. Ensure that the picture has 2-letter words surrounded by white space ( e.g. IT OF IN AT )
4. Ensure that the picture has other larger words
5. Activate Text Extractor
6. Select all of the text for extraction
7. Paste into Notepad or other text based app
8. Notice that the single letter and or 2-letter words are completely missing
9. This seems to happen more with larger fonts. For example, if you copy the above samples in items 2 and 3, and paste them into a Word document, then enlarge the font (Calibri) to 36, you will see that using Text Extractor against that enlarged font will miss some of the letters (in particular "e.g." ) whereas the smaller font is better interpreted.
### ✔️ Expected Behavior
I was expecting single letter and 2-letter words to be recognized
### ❌ Actual Behavior
Single letter and or 2-letter words are completely missing from the extraction.
This is an issue that has happened for YEARS, which I would have expected someone would have reported by now.
If you can extract every letter of a word, then why can't you recognize individual letters or 2-letter words with the SAME letters in the SAME font????
### Other Software
N/A | Issue-Bug,Needs-Triage | low | Minor |
2,778,423,907 | flutter | [packages] Migrate package groupings to use workspaces | Now that the [workspace](https://dart.dev/tools/pub/workspaces) feature is stable, it makes sense to start using it. Sub-groupings of packages in the packages repo should be migrated to workspaces.
This migration will enable the following benefits:
- Dependency overrides will no longer be necessary during development of package updates
- Only one pub get will be necessary per workspace
- More efficient code analysis
Justification:
- Other Google-owned packages have already been migrated. See [json_serializable](https://github.com/google/json_serializable.dart)
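For a grouping such as `url_launcher`, the migration would roughly amount to a root `pubspec.yaml` like the following (a sketch — names abbreviated), plus `resolution: workspace` in each member package's pubspec:
```yaml
# packages/url_launcher/pubspec.yaml (workspace root, sketch)
name: url_launcher_workspace
environment:
  sdk: ^3.6.0 # pub workspaces require Dart 3.6+
workspace:
  - url_launcher
  - url_launcher_android
  - url_launcher_ios
  - url_launcher_platform_interface
  - url_launcher_web
```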
These are the groupings that should be consolidated into workspaces:
- [x] camera
- [x] file_selector
- [x] google_maps_flutter
- [x] google_sign_in
- [x] image_picker
- [x] in_app_purchase
- [x] local_auth
- [x] path_provider
- [x] pointer_interceptor
- [x] quick_actions
- [x] shared_preferences
- [x] url_launcher
- [x] video_player
- [x] webview_flutter | team,package,team-ecosystem,P3,triaged-ecosystem | low | Major |
2,778,439,295 | neovim | Inconsistent behavior of registers when clipboard provider not found | ### Problem
When the clipboard provider is not found, trying to write to the registers `"+` and `"*` shows an error, but these registers still get created and the new content gets written to them. We can prove this by checking the `:registers` output. But trying to paste from these registers results in pasting from `""`.
### Steps to reproduce
(`PATH=` used to avoid detecting clipboard provider. But better to also check manually that it is not set: `:checkhealth provider.clipboard`)
```
echo -e 'aaa\nbbb' | PATH= /bin/nvim --clean
"+yy
j
yy
"+p
:registers
```
At the end, `:registers` shows that `"+` contains `aaa`, but we clearly see that when we did `"+p`, it pasted the content of `""`, not `"+`.
### Expected behavior
#### Solution 1
Make `"+` and `"*` registers act as any other named register if clipboard provider not found. So it will write and read registers without falling back to `""` as it does now.
#### Solution 2
Prevent creating (and writing to) registers `"+` and `"*` when clipboard provider not found. So, after `"+yy`, doing `"+p` should not paste anything.
### Nvim version (nvim -v)
v0.10.3
### Vim (not Nvim) behaves the same?
no, vim 9.1 (with +clipboard compiled feature)
### Operating system/version
archlinux
### Terminal name/version
kitty
### $TERM environment variable
xterm-kitty
### Installation
pacman | bug,clipboard | low | Critical |
2,778,443,557 | flutter | [tech debt] unify/simplify engine golden tests | We have two separately maintained golden test suites in the engine:
- Impeller goldens.
- Web engine goldens.
This reduces test coverage as some web cases are not covered by Impeller and vice versa. The barrier to adding new goldens is higher due to the bespoke nature of engine golden infra.
The vast majority (if not all) of these tests can be expressed as normal `flutter test` widget tests that just import `dart:ui` and use the framework's existing golden infra for taking and submitting goldens.
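For instance, a drawing golden could be expressed as an ordinary widget test along these lines (a sketch, not an existing test):
```dart
import 'dart:ui' as ui;

import 'package:flutter/widgets.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  testWidgets('linear gradient golden', (tester) async {
    await tester.pumpWidget(CustomPaint(painter: _GradientPainter()));
    await expectLater(
      find.byType(CustomPaint),
      matchesGoldenFile('linear_gradient.png'),
    );
  });
}

class _GradientPainter extends CustomPainter {
  @override
  void paint(ui.Canvas canvas, ui.Size size) {
    // Exercise dart:ui directly, as the engine goldens do today.
    final paint = ui.Paint()
      ..shader = ui.Gradient.linear(
        ui.Offset.zero,
        ui.Offset(size.width, 0),
        [const ui.Color(0xFFFF0000), const ui.Color(0xFF0000FF)],
      );
    canvas.drawRect(ui.Offset.zero & size, paint);
  }

  @override
  bool shouldRepaint(covariant CustomPainter oldDelegate) => false;
}
```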
## Proposal
We unify as many engine goldens as possible by putting them all into one test suite that uses the framework golden infra. This is now possible with the monorepo. Ideally, we would be able to remove all the engine-specific golden infra.
| engine,P3,c: tech-debt,team-engine,triaged-engine | low | Minor |
2,778,462,857 | next.js | Wrong page when navigating from pages router to app router | ### Link to the code that reproduces this issue
https://github.com/pomber/bug-pages-router-to-app-router
### To Reproduce
In this app:
```
web/
├─ pages/
│ ├─ index.js
│ └─ [...slug].js
└─ app/
└─ [lang]/
└─ page.js
```
1. `npm run dev`
2. from `localhost:3000` click the `Go to App router` link (`href="/hello"`)
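For reference, a hypothetical `pages/index.js` for the link clicked in step 2 (the real code is in the linked repo):
```jsx
// pages/index.js — hypothetical stand-in for the reproduction's home page
import Link from 'next/link';

export default function Home() {
  return <Link href="/hello">Go to App router</Link>;
}
```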
### Current vs. Expected behavior
It renders `pages/[...slug].js` instead of rendering `app/[lang]/page.js`.
If instead we refresh the page with the same URL (`localhost:3000/hello`) it renders `app/[lang]/page.js` as expected.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Home
Available memory (MB): 16254
Available CPU cores: 8
Binaries:
Node: 22.11.0
npm: 10.9.0
Yarn: 1.22.18
pnpm: 9.7.1
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
_No response_ | Navigation | low | Critical |
2,778,506,074 | terminal | Autocomplete doesn't remove parenthesis before completing | ### Windows Terminal version
_No response_
### Windows build number
_No response_
### Other Software
_No response_
### Steps to reproduce
https://github.com/user-attachments/assets/dbf4e89f-d673-48bd-8af6-d64d29fdec78
1. Create a folder starting with a (
2. Type `cd (`
3. Hit tab
### Expected Behavior
Autocompletes to `cd "<folder name>"`
### Actual Behavior
Autocompletes to `cd ("<folder name>"` | Product-Cmd.exe,Issue-Bug,Needs-Tag-Fix,Priority-3,zInbox-Bug | low | Minor |
2,778,542,046 | langchain | Additional kwargs key prompt_tokens already exists in left dict and value has unsupported type <class 'int'> in langchain-core/utils/_merge.py merge_dict() function when running with anthropic.claude-3-sonnet | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Below is the method from _merge.py that raises the error.
In my code the prebuilt ReAct agent sits inside a LangGraph node.
[It won't be possible to include the entire code here, but the same type of agent executes fine the first time and fails the second.]
```
def merge_dicts(left: dict[str, Any], *others: dict[str, Any]) -> dict[str, Any]:
"""Merge many dicts, handling specific scenarios where a key exists in both
dictionaries but has a value of None in 'left'. In such cases, the method uses the
value from 'right' for that key in the merged dictionary.
Args:
left: The first dictionary to merge.
others: The other dictionaries to merge.
Returns:
The merged dictionary.
Raises:
TypeError: If the key exists in both dictionaries but has a different type.
TypeError: If the value has an unsupported type.
Example:
If left = {"function_call": {"arguments": None}} and
right = {"function_call": {"arguments": "{\n"}}
then, after merging, for the key "function_call",
the value from 'right' is used,
resulting in merged = {"function_call": {"arguments": "{\n"}}.
"""
merged = left.copy()
for right in others:
for right_k, right_v in right.items():
if right_k not in merged or right_v is not None and merged[right_k] is None:
merged[right_k] = right_v
elif right_v is None:
continue
elif type(merged[right_k]) is not type(right_v):
msg = (
f'additional_kwargs["{right_k}"] already exists in this message,'
" but with a different type."
)
raise TypeError(msg)
elif isinstance(merged[right_k], str):
# TODO: Add below special handling for 'type' key in 0.3 and remove
# merge_lists 'type' logic.
#
# if right_k == "type":
# if merged[right_k] == right_v:
# continue
# else:
# raise ValueError(
# "Unable to merge. Two different values seen for special "
# f"key 'type': {merged[right_k]} and {right_v}. 'type' "
# "should either occur once or have the same value across "
# "all dicts."
# )
merged[right_k] += right_v
elif isinstance(merged[right_k], dict):
merged[right_k] = merge_dicts(merged[right_k], right_v)
elif isinstance(merged[right_k], list):
merged[right_k] = merge_lists(merged[right_k], right_v)
elif merged[right_k] == right_v:
continue
else:
msg = (
f"Additional kwargs key {right_k} already exists in left dict and "
f"value has unsupported type {type(merged[right_k])}."
)
raise TypeError(msg)
return merged
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[12], line 7
2 userInput = '''
3 Develop Business Challenges and Opportunities (BCOs) for brand based on its Strategic Imperatives (SIs).
4 '''
5 message = HumanMessage(content = userInput )
----> 7 graph.invoke(
8 input = {"messages" : [message],
9 "brand" : 'brand',
10 "primary_competitors" : ["competitor 1", "competitor 2", "competitor 3"],
11 "brand_research" : [],
12 "strategic_imperatives" : ["SI-1",
13 "SI-2",
14 "SI-3",
15 "SI-4",
16 "SI-5",
17 "SI-6"],
18 "stratagic_imperatives_research" : [],
19 "plan" : [],
20 "next_actor" : '',
21 "next_task" : '',
22 "sender" : '',
23 },
24 config = {"configurable": {"thread_id": "42"}, "recursion_limit": 30} ,
25
26
27 )
28 # ToolMessage
File /opt/conda/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1940, in Pregel.invoke(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, **kwargs)
1938 else:
1939 chunks = []
-> 1940 for chunk in self.stream(
1941 input,
1942 config,
1943 stream_mode=stream_mode,
1944 output_keys=output_keys,
1945 interrupt_before=interrupt_before,
1946 interrupt_after=interrupt_after,
1947 debug=debug,
1948 **kwargs,
1949 ):
1950 if stream_mode == "values":
1951 latest = chunk
File /opt/conda/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1660, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
1654 # Similarly to Bulk Synchronous Parallel / Pregel model
1655 # computation proceeds in steps, while there are channel updates
1656 # channel updates from step N are only visible in step N+1
1657 # channels are guaranteed to be immutable for the duration of the step,
1658 # with channel updates applied only at the transition between steps
1659 while loop.tick(input_keys=self.input_channels):
-> 1660 for _ in runner.tick(
1661 loop.tasks.values(),
1662 timeout=self.step_timeout,
1663 retry_policy=self.retry_policy,
1664 get_waiter=get_waiter,
1665 ):
1666 # emit output
1667 yield from output()
1668 # emit output
File /opt/conda/lib/python3.11/site-packages/langgraph/pregel/runner.py:167, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
165 t = tasks[0]
166 try:
--> 167 run_with_retry(
168 t,
169 retry_policy,
170 configurable={
171 CONFIG_KEY_SEND: partial(writer, t),
172 CONFIG_KEY_CALL: partial(call, t),
173 },
174 )
175 self.commit(t, None)
176 except Exception as exc:
File /opt/conda/lib/python3.11/site-packages/langgraph/pregel/retry.py:40, in run_with_retry(task, retry_policy, configurable)
38 task.writes.clear()
39 # run the task
---> 40 return task.proc.invoke(task.input, config)
41 except ParentCommand as exc:
42 ns: str = config[CONF][CONFIG_KEY_CHECKPOINT_NS]
File /opt/conda/lib/python3.11/site-packages/langgraph/utils/runnable.py:408, in RunnableSeq.invoke(self, input, config, **kwargs)
404 config = patch_config(
405 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
406 )
407 if i == 0:
--> 408 input = step.invoke(input, config, **kwargs)
409 else:
410 input = step.invoke(input, config)
File /opt/conda/lib/python3.11/site-packages/langgraph/utils/runnable.py:184, in RunnableCallable.invoke(self, input, config, **kwargs)
182 else:
183 context.run(_set_config_context, config)
--> 184 ret = context.run(self.func, input, **kwargs)
185 if isinstance(ret, Runnable) and self.recurse:
186 return ret.invoke(input, config)
File ~/MUltiAgent_SI_to_BCO/graph/workflow.py:74, in workflow.init_graph.<locals>.<lambda>(state)
68 self.workflow = StateGraph(state)
70 self.workflow.add_node("brand_research_agent",lambda state: agent_node(state = state,
71 agent = self.agents["brand_research_agent"],
72 name = "brand_research_agent",))
---> 74 self.workflow.add_node("si_research_agent",lambda state: agent_node(state = state,
75 agent = self.agents["si_research_agent"],
76 name = "si_research_agent",))
78 self.workflow.add_node("bco_planning_agent",lambda state: agent_node(state = state,
79 agent = self.agents["bco_planning_agent"],
80 name = "bco_planning_agent",))
82 self.workflow.add_node("bco_formulation_agent",lambda state: agent_node(state = state,
83 agent = self.agents["bco_formulation_agent"],
84 name = "bco_formulation_agent",))
File ~/MUltiAgent_SI_to_BCO/graph/nodes.py:78, in agent_node(state, agent, name)
75 except Exception as e:
76 # Log and raise any exceptions that occur
77 logger.error(f"Error in executing {name} node: {str(e)}")
---> 78 raise e
File ~/MUltiAgent_SI_to_BCO/graph/nodes.py:39, in agent_node(state, agent, name)
36 logger.info(f"executing agent {name}")
38 # Invoke the agent with the current state
---> 39 response = agent.invoke(state)
41 # Extract the content from the response
42 content = response['output'] if isinstance(response, dict) and 'output' in response else response
File /opt/conda/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1940, in Pregel.invoke(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, **kwargs)
1938 else:
1939 chunks = []
-> 1940 for chunk in self.stream(
1941 input,
1942 config,
1943 stream_mode=stream_mode,
1944 output_keys=output_keys,
1945 interrupt_before=interrupt_before,
1946 interrupt_after=interrupt_after,
1947 debug=debug,
1948 **kwargs,
1949 ):
1950 if stream_mode == "values":
1951 latest = chunk
File /opt/conda/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1660, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
1654 # Similarly to Bulk Synchronous Parallel / Pregel model
1655 # computation proceeds in steps, while there are channel updates
1656 # channel updates from step N are only visible in step N+1
1657 # channels are guaranteed to be immutable for the duration of the step,
1658 # with channel updates applied only at the transition between steps
1659 while loop.tick(input_keys=self.input_channels):
-> 1660 for _ in runner.tick(
1661 loop.tasks.values(),
1662 timeout=self.step_timeout,
1663 retry_policy=self.retry_policy,
1664 get_waiter=get_waiter,
1665 ):
1666 # emit output
1667 yield from output()
1668 # emit output
File /opt/conda/lib/python3.11/site-packages/langgraph/pregel/runner.py:167, in PregelRunner.tick(self, tasks, reraise, timeout, retry_policy, get_waiter)
165 t = tasks[0]
166 try:
--> 167 run_with_retry(
168 t,
169 retry_policy,
170 configurable={
171 CONFIG_KEY_SEND: partial(writer, t),
172 CONFIG_KEY_CALL: partial(call, t),
173 },
174 )
175 self.commit(t, None)
176 except Exception as exc:
File /opt/conda/lib/python3.11/site-packages/langgraph/pregel/retry.py:40, in run_with_retry(task, retry_policy, configurable)
38 task.writes.clear()
39 # run the task
---> 40 return task.proc.invoke(task.input, config)
41 except ParentCommand as exc:
42 ns: str = config[CONF][CONFIG_KEY_CHECKPOINT_NS]
File /opt/conda/lib/python3.11/site-packages/langgraph/utils/runnable.py:408, in RunnableSeq.invoke(self, input, config, **kwargs)
404 config = patch_config(
405 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
406 )
407 if i == 0:
--> 408 input = step.invoke(input, config, **kwargs)
409 else:
410 input = step.invoke(input, config)
File /opt/conda/lib/python3.11/site-packages/langgraph/utils/runnable.py:176, in RunnableCallable.invoke(self, input, config, **kwargs)
174 context = copy_context()
175 context.run(_set_config_context, child_config)
--> 176 ret = context.run(self.func, input, **kwargs)
177 except BaseException as e:
178 run_manager.on_chain_error(e)
File /opt/conda/lib/python3.11/site-packages/langgraph/prebuilt/chat_agent_executor.py:560, in create_react_agent.<locals>.call_model(state, config)
558 def call_model(state: AgentState, config: RunnableConfig) -> AgentState:
559 _validate_chat_history(state["messages"])
--> 560 response = model_runnable.invoke(state, config)
561 has_tool_calls = isinstance(response, AIMessage) and response.tool_calls
562 all_tools_return_direct = (
563 all(call["name"] in should_return_direct for call in response.tool_calls)
564 if isinstance(response, AIMessage)
565 else False
566 )
File /opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py:3022, in RunnableSequence.invoke(self, input, config, **kwargs)
3020 input = context.run(step.invoke, input, config, **kwargs)
3021 else:
-> 3022 input = context.run(step.invoke, input, config)
3023 # finish the root run
3024 except BaseException as e:
File /opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py:4711, in RunnableLambda.invoke(self, input, config, **kwargs)
4697 """Invoke this Runnable synchronously.
4698
4699 Args:
(...)
4708 TypeError: If the Runnable is a coroutine function.
4709 """
4710 if hasattr(self, "func"):
-> 4711 return self._call_with_config(
4712 self._invoke,
4713 input,
4714 self._config(config, self.func),
4715 **kwargs,
4716 )
4717 else:
4718 msg = (
4719 "Cannot invoke a coroutine function synchronously."
4720 "Use `ainvoke` instead."
4721 )
File /opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py:1925, in Runnable._call_with_config(self, func, input, config, run_type, serialized, **kwargs)
1921 context = copy_context()
1922 context.run(_set_config_context, child_config)
1923 output = cast(
1924 Output,
-> 1925 context.run(
1926 call_func_with_variable_args, # type: ignore[arg-type]
1927 func, # type: ignore[arg-type]
1928 input, # type: ignore[arg-type]
1929 config,
1930 run_manager,
1931 **kwargs,
1932 ),
1933 )
1934 except BaseException as e:
1935 run_manager.on_chain_error(e)
File /opt/conda/lib/python3.11/site-packages/langchain_core/runnables/config.py:396, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
394 if run_manager is not None and accepts_run_manager(func):
395 kwargs["run_manager"] = run_manager
--> 396 return func(input, **kwargs)
File /opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py:4565, in RunnableLambda._invoke(self, input, run_manager, config, **kwargs)
4563 output = chunk
4564 else:
-> 4565 output = call_func_with_variable_args(
4566 self.func, input, config, run_manager, **kwargs
4567 )
4568 # If the output is a Runnable, invoke it
4569 if isinstance(output, Runnable):
File /opt/conda/lib/python3.11/site-packages/langchain_core/runnables/config.py:396, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
394 if run_manager is not None and accepts_run_manager(func):
395 kwargs["run_manager"] = run_manager
--> 396 return func(input, **kwargs)
File /opt/conda/lib/python3.11/site-packages/langchain_core/messages/utils.py:571, in merge_message_runs(messages, chunk_separator)
564 if (
565 isinstance(last_chunk.content, str)
566 and isinstance(curr_chunk.content, str)
567 and last_chunk.content
568 and curr_chunk.content
569 ):
570 last_chunk.content += chunk_separator
--> 571 merged.append(_chunk_to_msg(last_chunk + curr_chunk))
572 return merged
File /opt/conda/lib/python3.11/site-packages/langchain_core/messages/ai.py:395, in AIMessageChunk.__add__(self, other)
393 def __add__(self, other: Any) -> BaseMessageChunk: # type: ignore
394 if isinstance(other, AIMessageChunk):
--> 395 return add_ai_message_chunks(self, other)
396 elif isinstance(other, (list, tuple)) and all(
397 isinstance(o, AIMessageChunk) for o in other
398 ):
399 return add_ai_message_chunks(self, *other)
File /opt/conda/lib/python3.11/site-packages/langchain_core/messages/ai.py:412, in add_ai_message_chunks(left, *others)
409 raise ValueError(msg)
411 content = merge_content(left.content, *(o.content for o in others))
--> 412 additional_kwargs = merge_dicts(
413 left.additional_kwargs, *(o.additional_kwargs for o in others)
414 )
415 response_metadata = merge_dicts(
416 left.response_metadata, *(o.response_metadata for o in others)
417 )
419 # Merge tool call chunks
File /opt/conda/lib/python3.11/site-packages/langchain_core/utils/_merge.py:58, in merge_dicts(left, *others)
56 merged[right_k] += right_v
57 elif isinstance(merged[right_k], dict):
---> 58 merged[right_k] = merge_dicts(merged[right_k], right_v)
59 elif isinstance(merged[right_k], list):
60 merged[right_k] = merge_lists(merged[right_k], right_v)
File /opt/conda/lib/python3.11/site-packages/langchain_core/utils/_merge.py:68, in merge_dicts(left, *others)
63 else:
64 msg = (
65 f"Additional kwargs key {right_k} already exists in left dict and "
66 f"value has unsupported type {type(merged[right_k])}."
67 )
---> 68 raise TypeError(msg)
69 return merged
TypeError: Additional kwargs key prompt_tokens already exists in left dict and value has unsupported type <class 'int'>.
### Description
I am trying to use the prebuilt ReAct agent as nodes inside a LangGraph graph. The graph has 2 ReAct-based agent nodes. Both are exactly identical, just with different prompts.
Agent 1 runs fine but Agent 2 gives the error below.
The issue seems to be in the _merge.py utility within the langchain_core library.
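A minimal sketch of the failing merge, assuming two streamed `AIMessageChunk`s whose `additional_kwargs` both carry integer token counts with different values (which is what the Anthropic streaming path appears to produce here; the values are made up):
```python
from langchain_core.messages import AIMessageChunk

left = AIMessageChunk(
    content="", additional_kwargs={"usage": {"prompt_tokens": 10}})
right = AIMessageChunk(
    content="", additional_kwargs={"usage": {"prompt_tokens": 25}})

# __add__ routes through merge_dicts; the nested ints differ, so the merge
# falls through to the final else branch and raises the TypeError above.
merged = left + right
```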
Similar issue with Gemini is discussed in #23827
Thanks in advance for all the help
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Dec 3 14:36:00 UTC 2024
> Python Version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.7
> langchain_aws: 0.2.10
> langchain_chroma: 0.2.0
> langchain_text_splitters: 0.3.4
> langchainhub: 0.1.21
> langgraph_sdk: 0.1.48
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> boto3: 1.35.91
> chromadb: 0.5.23
> dataclasses-json: 0.6.7
> fastapi: 0.115.6
> httpx: 0.27.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> orjson: 3.10.13
> packaging: 23.2
> pydantic: 2.9.2
> pydantic-settings: 2.7.1
> PyYAML: 6.0.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.31
> tenacity: 8.4.1
> types-requests: 2.32.0.20241016
> typing-extensions: 4.1 | 🤖:bug | low | Critical |
2,778,565,052 | flutter | [in_app_purchase_storekit] StoreKit2 introductory pricing data is not available | ### Steps to reproduce
1. InAppPurchaseStoreKitPlatform.enableStoreKit2();
2. Configure introductory pricing in AppStore
3. After calling `final productDetailsResponse = await _inAppPurchase.queryProductDetails({productIds});`, if we take a look at the ProductDetails data from productDetailsResponse, we can't see any introductory pricing data. We only have `sk2Product.subscription?.promotionalOffers`.
This code does not retrieve introductory pricing offers
```
sk2Product.subscription?.promotionalOffers.firstWhereOrNull(
(promotionalOffer) =>
promotionalOffer.type == SK2SubscriptionOfferType.introductory,
)
```
We only see promotional offers with the current implementation.
The native iOS modal shows the introductory pricing amount, but the code mentioned at the top, which is used to show the in-app modal, does not contain the introductory pricing data.
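For comparison, a sketch of how this looks on the native side: in StoreKit 2 the introductory offer is a separate `introductoryOffer` field, not an entry in `promotionalOffers` (sketch only, not plugin code):
```swift
import StoreKit

// Reads the introductory offer that StoreKit 2 exposes natively (iOS 15+).
func logIntroductoryOffer(for product: Product) {
    guard let intro = product.subscription?.introductoryOffer else { return }
    print("Intro offer:", intro.displayPrice,
          "for", intro.period.value, intro.period.unit)
}
```
So the Dart-side subscription object would presumably need a matching introductory-offer field in addition to the promotional offers list.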
I'm using
`in_app_purchase_storekit: ^0.3.20+3`
### Expected results
sk2Product.subscription should have an introductoryPricingOffers list
### Actual results
sk2Product.subscription does not provide an introductoryPricingOffers list
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.1.1 24B2091 darwin-arm64, locale en-AR)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.3)
[✓] VS Code (version 1.96.2)
[✓] Connected device (5 available)
[✓] Network resources
• No issues found!
```
</details>
| platform-ios,p: in_app_purchase,package,P2,team-ios,triaged-ios | low | Minor |
2,778,594,910 | godot | Theme editor opens when saving a script in a external editor. | ### Tested versions
- Reproducible in: 4.3.stable
### System information
Godot v4.3.stable (77dcf97d8) - Arch Linux #1 SMP PREEMPT_DYNAMIC Thu, 02 Jan 2025 22:52:26 +0000 - Wayland - Vulkan (Forward+) - integrated Intel(R) HD Graphics 5500 (BDW GT2) - Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz (4 Threads)
### Issue description
After saving a script in an external editor, the theme editor will open; this does not happen with the built-in editor.
### Steps to reproduce
- Create a custom theme for a Control node, and add a script to it;
- Open a script in an external editor and save it, the theme editor will open.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Minor |
2,778,599,336 | go | x/crypto/acme/autocert: Manager.GetCertificate should check host policy before consulting cache | Manager.GetCertificate doesn't check the host policy if the request is a challenge, probing the cache if there isn't an entry in the certTokens map. While cache probes _should_ be cheap, an attacker may exploit any latency introduced by these probes by forcing repeated cache misses.
This can be fixed by simply moving the host policy check above the cache probe; the policy check, if one is set, is likely to be cheaper than a cache probe.
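A minimal sketch of that ordering, with `cacheGet` standing in for the manager's unexported cache probe (illustrative, not the real signature):
```go
package autocertfix

import (
	"context"
	"crypto/tls"

	"golang.org/x/crypto/acme/autocert"
)

// getForChallenge sketches the proposed order: policy first, cache second.
func getForChallenge(ctx context.Context, m *autocert.Manager, name string,
	cacheGet func(context.Context, string) (*tls.Certificate, error),
) (*tls.Certificate, error) {
	if m.HostPolicy != nil {
		// The policy check is in-memory work; running it before any cache
		// I/O means attacker-chosen names can't force repeated slow probes.
		if err := m.HostPolicy(ctx, name); err != nil {
			return nil, err
		}
	}
	return cacheGet(ctx, name)
}
```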
This is a [PUBLIC track](https://go.dev/security/policy#public) security issue, due to its minimal impact.
Thanks to Dimitri Stiliadis for reporting this issue. | Security,NeedsFix | low | Minor |
2,778,605,439 | flutter | Make it clear `Merge Queue Guard` is really building (required) engine artifacts to run the tests | One issue that comes up frequently when discussing pain points of the mono-repo is the "merge queue is slow".
There are lots of different ways to discuss that, and some optimizations that could happen.
_One_ concrete thing is a confusion (from at least me and @jmagman) that the `Merge Queue Guard` check, that runs on presubmit, is _part_ of the merge queue. From talking to @jtmcdole, what is _actually_ happening is we are building (local) engine artifacts (from source) so that we can actually run the presubmit.
We expect this to be fairly fast (i.e. RBE builds and what not).
It would be a nice quality of life change to rename this to `Building Engine Artifacts` to reflect what is actually happening, and if that is a problem, we can discuss "Building Engine Artifacts is too slow" instead of merging (heh) this with the concept of the Merge Queue.
/cc @zanderso | team-infra,P2,monorepo | low | Major |
2,778,649,452 | ui | [bug]: Not able to resize icons inside buttons | ### Describe the bug
Because of this change:

I am not able to change the size of the icon inside the button
### Affected component/components
Button
### How to reproduce
```
<Button variant="outline" className='text-white bg-red-400 rounded-full' size={'icon'}>
<HandIcon size={36} />
</Button>
```
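Until the component allows this, one workaround (assuming the usual shadcn import paths and that the default styles apply a `[&_svg]:size-4` utility, as the screenshot suggests) is to override that same selector from the consumer side; since the Button merges classes with tailwind-merge, the consumer's class wins:
```tsx
import { Button } from "@/components/ui/button";
import { Hand as HandIcon } from "lucide-react";

export function BigIconButton() {
  // "[&_svg]:size-9" targets the same selector as the default
  // "[&_svg]:size-4", and tailwind-merge keeps the consumer's class.
  return (
    <Button variant="outline" size="icon" className="rounded-full [&_svg]:size-9">
      <HandIcon />
    </Button>
  );
}
```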
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11, Edge browser
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,778,664,845 | go | text/template: consider adding recursion depth limit for deeply nested expressions | ### Go version
go version go1.23.4 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/ville/Library/Caches/go-build'
GOENV='/Users/ville/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/ville/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/ville/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.4/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.4/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/ville/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/cl/npk3dq855kxf2ns3qth9pv4m0000gn/T/go-build3530549792=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Note: This is a public issue after discussing with the Go security team.
Created a program to test template parsing with deeply nested parentheses: https://go.dev/play/p/659Ry2YDb4Z
```go
package main
import (
"fmt"
"runtime"
"strings"
"text/template"
)
func main() {
depth := 1000
expr := fmt.Sprintf("{{$x := %s1+2i%s}}",
strings.Repeat("(", depth),
strings.Repeat(")", depth))
fmt.Printf("Input size: %d bytes\n", len(expr))
var m1 runtime.MemStats
runtime.ReadMemStats(&m1)
_, err := template.New("test").Parse(expr)
if err != nil {
fmt.Printf("Error: %v\n", err)
}
var m2 runtime.MemStats
runtime.ReadMemStats(&m2)
allocated := m2.TotalAlloc - m1.TotalAlloc
fmt.Printf("Memory allocated: %d bytes\n", allocated)
fmt.Printf("Amplification factor: %.2fx\n", float64(allocated)/float64(len(expr)))
}
```
### What did you see happen?
1. With moderate nesting (1000 levels), we observe significant memory amplification:
```
Input size: 2014 bytes
Memory allocated: 172256 bytes
Amplification factor: 85.53x
```
2. Attempting deeper nesting (tested with depth = 500000) causes a stack overflow before reaching extreme memory allocation:
```
runtime: goroutine stack exceeds 1000000000-byte limit
runtime: sp=0x14020580340 stack=[0x14020580000, 0x14040580000]
fatal error: stack overflow
```
### What did you expect to see?
Two improvements would be helpful:
1. A reasonable limit on expression nesting depth to prevent accidental stack overflow. I'm open to helping with the implementation. Example:
```go
const maxParenDepth = 100
type Tree struct {
// ... existing fields ...
parenDepth int
}
func (t *Tree) term() Node {
	switch token := t.next(); token.typ {
	case itemLeftParen:
		if t.parenDepth >= maxParenDepth {
			t.errorf("expression too deeply nested (max %d)", maxParenDepth)
		}
		t.parenDepth++
		defer func() { t.parenDepth-- }()
		// ... rest of implementation
	}
	// ... other cases elided ...
}
```
2. Documentation clarity: While `html/template` documentation explicitly states "The security model used by this package assumes that template authors are trusted" in its [package documentation](https://pkg.go.dev/html/template), `text/template` lacks similar guidance. Adding this documentation would help users better understand the package's security model.
Both changes would align with common parser implementation practices while maintaining clarity about the trust model. | NeedsInvestigation | low | Critical |
2,778,673,167 | tauri | [bug] Cannot deploy to physical iOS device | ### Describe the bug
When attempting to debug a Tauri app on a physical iOS device, the build fails with:
`error: exportArchive No signing certificate "iOS Distribution" found`

A development profile is installed with a valid certificate and the device UDID is included in the profile.
Debugging to the same device from a native Xcode project works without any issues.
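The failing step is `exportArchive` asking for an "iOS Distribution" certificate even though this is a dev-device deployment. For comparison, a plain Xcode export against a development profile would use an export options plist along these lines (a sketch, not the file the Tauri CLI actually generates):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- "development" signs with an iOS Development certificate/profile
         instead of the "iOS Distribution" one the failing step demands. -->
    <key>method</key>
    <string>development</string>
</dict>
</plist>
```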
### Reproduction
Clone the example app in the plugins-workspace [repository](https://github.com/tauri-apps/plugins-workspace).
Confirm the app builds/deploys using the CLI to an iOS simulator.
Change the bundle ID to match an installed development profile.
Run the following command:
`pnpm tauri ios dev`
### Expected behavior
App should build and deploy to the physical iOS device.
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.2.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.12.0
- pnpm: 9.15.2
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1 (outdated, latest: 2.2.0)
- @tauri-apps/cli : 2.1.0 (outdated, latest: 2.2.1)
```
### Stack trace
_No response_
### Additional context
#10668
#11147 | type: bug,status: needs triage | low | Critical |
2,778,708,137 | kubernetes | Container remains ready causing Pod and EndpointSlice to Report False Ready State for the entire terminationGracePeriod | ### What happened?
containerStatuses.ready is not set to false immediately when a pod is deleted. This results in the pod reporting a ready status for the whole grace period, until it is finally deleted after terminationGracePeriodSeconds have passed.
### What did you expect to happen?
After pod termination starts, containerStatuses should set ready: false for all the containers in the pod, and hence the pod status should report ready: false.
### How can we reproduce it (as minimally and precisely as possible)?
```
apiVersion: v1
kind: Pod
metadata:
name: "b382559131"
namespace: "default"
spec:
terminationGracePeriodSeconds: 60
restartPolicy: Never
containers:
- name: c1
image: "k8s.gcr.io/busybox"
command: ["/bin/sh", "-c"]
args:
- |
_term() {
rm -f /tmp/ready
}
trap _term SIGTERM
touch /tmp/ready
while true; do
echo 'helloc1'
ls /tmp/die_now && echo 'dying in 5s...' && sleep 5 && exit 0
sleep 1
done
readinessProbe:
exec:
command:
- sh
- -c
- |
if [ -f "/tmp/ready" ]; then
exit 0
else
touch /tmp/die_now
exit 1
fi
- name: c2
image: "k8s.gcr.io/busybox"
command:
- sh
- -c
- "_term() { while true; do echo \"hello_term_c2\"; sleep 1; done } ; trap _term SIGTERM; while true; do echo \"helloc2\"; sleep 1; done"
```
Run the above pod and issue a delete call when the containers become ready
$ kubectl delete pod b382559131
Monitor the readiness status:
$ while true; do date && kubectl get pod b382559131 -o json | jq .status.containerStatuses.[].ready && sleep 1; done
Sat Dec 21 11:05:16 PM PST 2024
true
true
Sat Dec 21 11:05:17 PM PST 2024
true
true
< snip >
Sat Dec 21 11:06:21 PM PST 2024
true
true
Sat Dec 21 11:06:22 PM PST 2024
Error from server (NotFound): pods "b382559131" not found
Sat Dec 21 11:06:24 PM PST 2024
Error from server (NotFound): pods "b382559131" not found
The pod has a termination grace period of 60s. The delete was issued at 11:05:21 and we see that the ready status was true until the pod was deleted at 11:06:22 (60s later).
The pod reports ready status until the last container exits.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
v1.30.6
</details>
### Cloud provider
<details>
None
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,priority/backlog,sig/node,triage/accepted | low | Critical |
2,778,716,790 | PowerToys | [Settings] Deep link into Settings for a PowerToys Run Plugin | ### Description of the new feature / enhancement
It is possible to deep link into settings for a specific utility:
https://github.com/microsoft/PowerToys/blob/5ef918750d384b76d4db3b82fde8942396c6f3d5/src/common/Common.UI/SettingsDeepLink.cs#L90
Or via:
```
PowerToys.exe --open-settings=<SettingsWindowName>
```
As a PowerToys Run community plugin author, it would be nice to be able to deep link directly in to the settings of a plugin.
For example with:
```
PowerToys.exe --open-settings=Run --plugin-id=CEA0FDFC6D3B4085823D60DC76F28855
```
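A hypothetical sketch of how the existing deep-link helper could grow such an overload (the class, method, and parameter names here are illustrative, not the real `SettingsDeepLink` API):
```csharp
using System.Diagnostics;

public static class SettingsDeepLinkSketch
{
    // Hypothetical: forwards an optional plugin id alongside the window name.
    public static void OpenSettings(string settingsWindowName, string pluginId = null)
    {
        var args = $"--open-settings={settingsWindowName}";
        if (!string.IsNullOrEmpty(pluginId))
        {
            args += $" --plugin-id={pluginId}";
        }

        Process.Start(new ProcessStartInfo("PowerToys.exe", args));
    }
}
```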
### Scenario when this would be used?
A plugin could display a result to the user to open the Settings and edit the plugin configuration.
This would be useful for:
- A newly installed plugin that require configuration
- A plugin that is in an invalid configuration state
- A plugin that has a specific query for settings
If the plugin settings are opened, the plugin should be scrolled into position, expanded and focused:

### Supporting information
I would like to use this feature here:
- https://github.com/hlaueriksson/Community.PowerToys.Run.Plugin.Install/blob/e6610898a773456e9805222535dfeb439e1fe447/src/Community.PowerToys.Run.Plugin.Install/Main.cs#L103
- https://github.com/hlaueriksson/Community.PowerToys.Run.Plugins/blob/fff45bb452dcf3c4c5b0bcf68022c9ab71b4f2de/src/Community.PowerToys.Run.Plugin.Twitch/Main.cs#L185 | Product-PowerToys Run,Needs-Triage,Run-Plugin | low | Minor |
2,778,748,564 | flutter | Consider increasing `runDartTest` parallelization past `2` cores | In https://github.com/flutter/flutter/pull/161392, I remove the (now unused) `CPU` environment variable parsing.
For now, we'll continue to do what we did before - default to `2` concurrent tasks unless multi-core is explicitly disabled. At one point though this number was higher - we should experiment with trying `Platform.numberOfProcessors` (or some fraction of that number). | a: tests,team-infra,P3 | low | Minor |
2,778,774,780 | yt-dlp | [Audible] An extractor error has occurred. (caused by KeyError('media')) | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Unable to download a sample audio clip using a URL to an MPD file on audible.com; it fails with this error:
`An extractor error has occurred. (caused by KeyError('media'))`
When I download the content of the MPD URL directly, I get the following output.
```xml
<?xml version='1.0' encoding='utf-8'?>
<MPD minBufferTime='PT20S' type='static' mediaPresentationDuration='PT3M50.156S'
profiles='urn:mpeg:dash:profile:isoff-main:2011'
xmlns='urn:mpeg:dash:schema:mpd:2011'
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
xsi:schemaLocation='urn:mpeg:DASH:schema:MPD:2011 http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-DASH_schema_files/DASH-MPD.xsd'>
<Period id='0' duration='PT3M50.156S'>
<AdaptationSet id='0' contentType='audio' lang='und' segmentAlignment='true' bitstreamSwitching='true'>
<Representation id='0' mimeType='audio/mp4' codecs='mp4a.40.2' bandwidth='32768' audioSamplingRate='22050'>
<AudioChannelConfiguration schemeIdUri='urn:mpeg:dash:23003:3:audio_channel_configuration:2011'
value='1'/>
<ContentProtection xmlns:cenc='urn:mpeg:cenc:2013'
cenc:default_KID='f3886098-1aaa-54db-3353-31f77e16f9f9'
schemeIdUri='urn:mpeg:dash:mp4protection:2011' value='cenc'/>
<ContentProtection xmlns:cenc='urn:mpeg:cenc:2013'
schemeIdUri='urn:uuid:edef8ba9-79d6-4ace-a3c8-27dcd51d21ed'>
<cenc:pssh>
AAAAhnBzc2gAAAAA7e+LqXnWSs6jyCfc1R0h7QAAAGYSEPOIYJgaqlTbM1Mx934W+fkSECXQB/4NCU8DM6/PnzWmA+8aB0F1ZGlibGUiN2NpZDoNCjg0aGdtQnFxVk5zelV6SDNmaGI1K1E9PSxKZEFIL2cwSlR3TXpyOCtmTmFZRDd3PT0=
</cenc:pssh>
</ContentProtection>
<ContentProtection xmlns:cenc='urn:mpeg:cenc:2013'
schemeIdUri='urn:uuid:9a04f079-9840-4286-ab92-e65be0885f95'>
<cenc:pssh>
AAAARHBzc2gBAAAAmgTweZhAQoarkuZb4IhflQAAAALziGCYGqpU2zNTMfd+Fvn5JdAH/g0JTwMzr8+fNaYD7wAAAAA=
</cenc:pssh>
</ContentProtection>
<BaseURL>
../../../../../4cb82855-f620-4da6-aa99-88d537e8e479.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc24iOiI0Y2I4Mjg1NS1mNjIwLTRkYTYtYWE5OS04OGQ1MzdlOGU0NzkiLCJuYmYiOjE3MzY0NTQzNjAzNjcsImV4cCI6MTczNjU0MDc2MDM2NywicGF0aHMiOlsiL3BlX3BlcmlfMDAwMDQ1LzkyNTA1L2NlbmMvZzEvcGVfcGVyaV8wMDAwNDVfMjJfMzIubXA0Il0sInNyIjoxMDM5LCJlciI6MTAyMzE0NX0.RXFE7W9PQ-iFID__Y0Wr3vAv--8hVQSQIMxxZ4f3x_U/pe_peri_000045/92505/cenc/g1/pe_peri_000045_22_32.mp4
</BaseURL>
<SegmentList timescale='22050' duration='430080'>
<Initialization
sourceURL='../../../../../4cb82855-f620-4da6-aa99-88d537e8e479.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc24iOiI0Y2I4Mjg1NS1mNjIwLTRkYTYtYWE5OS04OGQ1MzdlOGU0NzkiLCJuYmYiOjE3MzY0NTQzNjAzNjcsImV4cCI6MTczNjU0MDc2MDM2NywicGF0aHMiOlsiL3BlX3BlcmlfMDAwMDQ1LzkyNTA1L2NlbmMvZzEvcGVfcGVyaV8wMDAwNDVfMjJfMzIubXA0Il0sInNyIjowLCJlciI6MTAzOH0.fyIZBDZ3EeKsckM4eNO4VjNStEpJruo3h9EXWTo05S4/pe_peri_000045/92505/cenc/g1/pe_peri_000045_22_32.mp4'
range='0-1038'/>
<SegmentURL mediaRange='1039-87720'/>
<SegmentURL mediaRange='87721-174277'/>
<SegmentURL mediaRange='174278-261104'/>
<SegmentURL mediaRange='261105-347606'/>
<SegmentURL mediaRange='347607-434212'/>
<SegmentURL mediaRange='434213-520838'/>
<SegmentURL mediaRange='520839-607477'/>
<SegmentURL mediaRange='607478-694032'/>
<SegmentURL mediaRange='694033-780658'/>
<SegmentURL mediaRange='780659-867329'/>
<SegmentURL mediaRange='867330-953848'/>
<SegmentURL mediaRange='953849-1023144'/>
</SegmentList>
</Representation>
<Representation id='1' mimeType='audio/mp4' codecs='mp4a.40.2' bandwidth='65536' audioSamplingRate='22050'>
<AudioChannelConfiguration schemeIdUri='urn:mpeg:dash:23003:3:audio_channel_configuration:2011'
value='2'/>
<ContentProtection xmlns:cenc='urn:mpeg:cenc:2013'
cenc:default_KID='25d007fe-0d09-4f03-33af-cf9f35a603ef'
schemeIdUri='urn:mpeg:dash:mp4protection:2011' value='cenc'/>
<ContentProtection xmlns:cenc='urn:mpeg:cenc:2013'
schemeIdUri='urn:uuid:edef8ba9-79d6-4ace-a3c8-27dcd51d21ed'>
<cenc:pssh>
AAAAhnBzc2gAAAAA7e+LqXnWSs6jyCfc1R0h7QAAAGYSEPOIYJgaqlTbM1Mx934W+fkSECXQB/4NCU8DM6/PnzWmA+8aB0F1ZGlibGUiN2NpZDoNCjg0aGdtQnFxVk5zelV6SDNmaGI1K1E9PSxKZEFIL2cwSlR3TXpyOCtmTmFZRDd3PT0=
</cenc:pssh>
</ContentProtection>
<ContentProtection xmlns:cenc='urn:mpeg:cenc:2013'
schemeIdUri='urn:uuid:9a04f079-9840-4286-ab92-e65be0885f95'>
<cenc:pssh>
AAAARHBzc2gBAAAAmgTweZhAQoarkuZb4IhflQAAAALziGCYGqpU2zNTMfd+Fvn5JdAH/g0JTwMzr8+fNaYD7wAAAAA=
</cenc:pssh>
</ContentProtection>
<BaseURL>
../../../../../4cb82855-f620-4da6-aa99-88d537e8e479.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc24iOiI0Y2I4Mjg1NS1mNjIwLTRkYTYtYWE5OS04OGQ1MzdlOGU0NzkiLCJuYmYiOjE3MzY0NTQzNjAzNjcsImV4cCI6MTczNjU0MDc2MDM2NywicGF0aHMiOlsiL3BlX3BlcmlfMDAwMDQ1LzkyNTA1L2NlbmMvZzEvcGVfcGVyaV8wMDAwNDVfMjJfNjQubXA0Il0sInNyIjoxMDM5LCJlciI6MTk0Mzc2OX0.WXJi4ZRlT_0MY-uSSMx5wG8sskI37AlZXRfHJ-gHBJI/pe_peri_000045/92505/cenc/g1/pe_peri_000045_22_64.mp4
</BaseURL>
<SegmentList timescale='22050' duration='430080'>
<Initialization
sourceURL='../../../../../4cb82855-f620-4da6-aa99-88d537e8e479.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc24iOiI0Y2I4Mjg1NS1mNjIwLTRkYTYtYWE5OS04OGQ1MzdlOGU0NzkiLCJuYmYiOjE3MzY0NTQzNjAzNjcsImV4cCI6MTczNjU0MDc2MDM2NywicGF0aHMiOlsiL3BlX3BlcmlfMDAwMDQ1LzkyNTA1L2NlbmMvZzEvcGVfcGVyaV8wMDAwNDVfMjJfNjQubXA0Il0sInNyIjowLCJlciI6MTAzOH0.s1Zl9uJGivBqyH7pwCIsKOy9TVD-xzobLVKe4fKVgfo/pe_peri_000045/92505/cenc/g1/pe_peri_000045_22_64.mp4'
range='0-1038'/>
<SegmentURL mediaRange='1039-165678'/>
<SegmentURL mediaRange='165679-330315'/>
<SegmentURL mediaRange='330316-495266'/>
<SegmentURL mediaRange='495267-659808'/>
<SegmentURL mediaRange='659809-824256'/>
<SegmentURL mediaRange='824257-988928'/>
<SegmentURL mediaRange='988929-1153655'/>
<SegmentURL mediaRange='1153656-1318178'/>
<SegmentURL mediaRange='1318179-1482855'/>
<SegmentURL mediaRange='1482856-1647618'/>
<SegmentURL mediaRange='1647619-1812058'/>
<SegmentURL mediaRange='1812059-1943768'/>
</SegmentList>
</Representation>
</AdaptationSet>
</Period>
</MPD>
```
I extracted the MPD URL from the webpage manually, by inspecting it in the developer tools on an audiobook sample URL like this:
`https://www.audible.com/webplayer?asin=B00NWS13PI&isSample=true`
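Reading the MPD above together with the traceback suggests a likely cause: every `<SegmentURL>` here carries only a `mediaRange` attribute (byte ranges against the `BaseURL`), while the parser indexes `attrib['media']` unconditionally. A tiny distillation of the failure, with the element copied from the manifest:
```python
import xml.etree.ElementTree as ET

segment = ET.fromstring("<SegmentURL mediaRange='1039-87720'/>")
# Effectively what extract_multisegment_info does, per the traceback:
segment.attrib['media']  # KeyError: 'media'
```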
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU "https://d1jobzhhm62zby.cloudfront.net/4cb82855-f620-4da6-aa99-88d537e8e479.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc24iOiI0Y2I4Mjg1NS1mNjIwLTRkYTYtYWE5OS04OGQ1MzdlOGU0NzkiLCJuYmYiOjE3MzY0NTQwNzUxMzksImV4cCI6MTczNjU0MDQ3NTEzOSwicGF0aHMiOlsiXlxcL3BlX3BlcmlfMDAwMDQ1XFwvOTI1MDVcXC9jZW5jXFwvZzFcXC9wZV9wZXJpXzAwMDA0NV8yMlxcLDMyXFwsNjRfdjhcXC5tYXN0ZXJcXC5tcGQkIl0sInFzIjp7InNzX3NlYyI6MjAsInN0YXJ0X3NlYyI6IjAiLCJlbmRfc2VjIjoiMjMwLjI0NiIsImlkIjoiNGNiODI4NTUtZjYyMC00ZGE2LWFhOTktODhkNTM3ZThlNDc5In0sImludHNpZyI6IlVGNXVwM3NmRHBXT3U2WHBpTTBaRUVKVDBFSXhQMzJpN3FZVFFxWktyN1EifQ.kSPhsdxYb_1ViwFpY8Arufk7oQWwsnSSImXWiwQCr1o/pe_peri_000045/92505/cenc/g1/pe_peri_000045_22,32,64_v8.master.mpd?ss_sec=20&start_sec=0&end_sec=230.246&id=4cb82855-f620-4da6-aa99-88d537e8e479"
[debug] Command-line config: ['-vU', 'https://d1jobzhhm62zby.cloudfront.net/4cb82855-f620-4da6-aa99-88d537e8e479.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc24iOiI0Y2I4Mjg1NS1mNjIwLTRkYTYtYWE5OS04OGQ1MzdlOGU0NzkiLCJuYmYiOjE3MzY0NTQwNzUxMzksImV4cCI6MTczNjU0MDQ3NTEzOSwicGF0aHMiOlsiXlxcL3BlX3BlcmlfMDAwMDQ1XFwvOTI1MDVcXC9jZW5jXFwvZzFcXC9wZV9wZXJpXzAwMDA0NV8yMlxcLDMyXFwsNjRfdjhcXC5tYXN0ZXJcXC5tcGQkIl0sInFzIjp7InNzX3NlYyI6MjAsInN0YXJ0X3NlYyI6IjAiLCJlbmRfc2VjIjoiMjMwLjI0NiIsImlkIjoiNGNiODI4NTUtZjYyMC00ZGE2LWFhOTktODhkNTM3ZThlNDc5In0sImludHNpZyI6IlVGNXVwM3NmRHBXT3U2WHBpTTBaRUVKVDBFSXhQMzJpN3FZVFFxWktyN1EifQ.kSPhsdxYb_1ViwFpY8Arufk7oQWwsnSSImXWiwQCr1o/pe_peri_000045/92505/cenc/g1/pe_peri_000045_22,32,64_v8.master.mpd?ss_sec=20&start_sec=0&end_sec=230.246&id=4cb82855-f620-4da6-aa99-88d537e8e479']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [65cf46cdd] (pip)
[debug] Python 3.12.4 (CPython x86_64 64bit) - Linux-6.8.0-51-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg N-116415-g2128c17739-20240725 (setts), ffprobe N-116415-g2128c17739-20240725
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, mutagen-1.47.0, requests-2.31.0, sqlite3-3.45.1, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[generic] Extracting URL: https://d1jobzhhm62zby.cloudfront.net/4cb82855-f620-4da6-aa99-88d537e8e479.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzc24iOiI0Y2I4Mjg1NS1mNjIwLTRkYTYtYWE5OS04OGQ1MzdlOGU0NzkiLCJuYmYiOjE3MzY0NTQwNzUxMzksImV4cCI6MTczNjU0MDQ3NTEzOSwicGF0aHMiOlsiXlxcL3BlX3BlcmlfMDAwMDQ1XFwvOTI1MDVcXC9jZW5jXFwvZzFcXC9wZV9wZXJpXzAwMDA0NV8yMlxcLDMyXFwsNjRfdjhcXC5tYXN0ZXJcXC5tcGQkIl0sInFzIjp7InNzX3NlYyI6MjAsInN0YXJ0X3NlYyI6IjAiLCJlbmRfc2VjIjoiMjMwLjI0NiIsImlkIjoiNGNiODI4NTUtZjYyMC00ZGE2LWFhOTktODhkNTM3ZThlNDc5In0sImludHNpZyI6IlVGNXVwM3NmRHBXT3U2WHBpTTBaRUVKVDBFSXhQMzJpN3FZVFFxWktyN1EifQ.kSPhsdxYb_1ViwFpY8Arufk7oQWwsnSSImXWiwQCr1o/pe_peri_000045/92505/cenc/g1/pe_peri_000045_22,32,64_v8.master.mpd?ss_sec=20&start_sec=0&end_sec=230.246&id=4cb82855-f620-4da6-aa99-88d537e8e479
[generic] pe_peri_000045_22,32,64_v8.master.mpd?ss_sec=20&start_sec=0&end_sec=230: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] pe_peri_000045_22,32,64_v8.master.mpd?ss_sec=20&start_sec=0&end_sec=230: Extracting information
ERROR: An extractor error has occurred. (caused by KeyError('media')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/home/k/.pyenv/versions/3.12.4/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/k/.pyenv/versions/3.12.4/lib/python3.12/site-packages/yt_dlp/extractor/generic.py", line 2520, in _real_extract
info_dict['formats'], info_dict['subtitles'] = self._parse_mpd_formats_and_subtitles(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/k/.pyenv/versions/3.12.4/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 2683, in _parse_mpd_formats_and_subtitles
return self._merge_mpd_periods(periods)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/k/.pyenv/versions/3.12.4/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 2691, in _merge_mpd_periods
for period in periods:
File "/home/k/.pyenv/versions/3.12.4/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 2893, in _parse_mpd_periods
representation_ms_info = extract_multisegment_info(representation, adaption_set_ms_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/k/.pyenv/versions/3.12.4/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 2776, in extract_multisegment_info
ms_info['segment_urls'] = [segment.attrib['media'] for segment in segment_urls_e]
~~~~~~~~~~~~~~^^^^^^^^^
KeyError: 'media'
```
| bug,triage | low | Critical |
2,778,776,951 | pytorch | c10/util/BFloat16-math.h has undefined behavior | ### 🐛 Describe the bug
Per https://en.cppreference.com/w/cpp/language/extending_std:
> It is undefined behavior to add declarations or definitions to namespace std or to any namespace nested within std, with a few exceptions noted below.
The "exceptions noted below" do not seem to include what we're doing in [BFloat16-math.h](https://github.com/pytorch/pytorch/blob/main/c10/util/BFloat16-math.h), and specifically don't include adding overloads of functions that take program-defined types.
This problem is currently "theoretical" in that I am not aware of practical issues resulting from this header at this time.
To fix this, we would need to at least put the functions in BFloat16-math.h into a namespace other than `std` (either `c10` or a new one, like say `c10_math`). Then, we could either:
- have callers do `using std::pow` and all the other cmath functions, and rely on [ADL](https://quuxplusone.github.io/blog/2019/04/26/what-is-adl/) to select the c10/c10_math version for half/BFloat16 (sketched below)
- `using` all the std:: functions into our namespace (which IMO argues toward that namespace being a new one like `c10_math`).
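A minimal sketch of the first option; the `BFloat16` here is a stand-in with just enough surface to demonstrate the lookup, not the real class:
```cpp
#include <cmath>

namespace c10 {

// Stand-in for c10::BFloat16.
struct BFloat16 {
  float value;
  BFloat16() = default;
  explicit BFloat16(float v) : value(v) {}
};

// Lives in c10 (not std), so no UB; ADL still finds it via the argument type.
inline BFloat16 pow(BFloat16 base, BFloat16 exp) {
  return BFloat16(std::pow(base.value, exp.value));
}

}  // namespace c10

template <typename T>
T square(T x) {
  using std::pow;       // brings the std overloads in for float/double
  return pow(x, T(2));  // ADL picks c10::pow when T is c10::BFloat16
}
```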
### Versions
N/A
cc @malfet @seemethere @manuelcandales @SherlockNoMad @angelayi | module: build,triaged,module: core aten | low | Critical |
2,778,784,642 | vscode | Determine launch.json configuration field from result of task | I have a program that I run via a launch.json config, where the program's path gets determined by the preLaunchTask. The task writes a file out with the path, and the launch config reads from that path:
tasks.json
```
{
"version": "2.0.0",
"tasks": [
{
"label": "gen_file_path",
"type": "shell",
"command": "/path/to/gen_file_path",
"problemMatcher": []
}
],
}
```
launch.json
```
{
"version": "0.2.0",
"configurations": [
{
"name": "Run C++",
"type": "cppdbg",
"request": "launch",
"cwd": "${workspaceRoot}",
"program": "${input:fileContent}",
"preLaunchTask": "gen_file_path"
},
],
"inputs": [
{
"id": "fileContent",
"type": "command",
"command": "extension.commandvariable.file.content",
"args": {
"fileName": "/path/to/output_from_task"
}
}
]
}
```
However, this fails because VSCode reads from the file before the preLaunchTask runs, so it can't get the new output. Is there any way to populate a field in a launch.json configuration from the output of a task? | feature-request,debug | low | Minor |
2,778,801,473 | go | cmd/compile: no automatic use of fused multiply-add on amd64 even with GOAMD64=v3 | ### Go version
go version go1.23.4 linux/amd64
### Output of `go env` in your module/workspace:
```shell
-
```
### What did you do?
Compile the following program with `GOARCH=amd64 GOAMD64=v3 go build -gcflags=-S`
```
package pkg
import "math"
func fooImplicit(x, y, z float64) float64 {
return x*y + z
}
func fooExplicit(x, y, z float64) float64 {
return math.FMA(x, y, z)
}
```
### What did you see happen?
```
command-line-arguments.fooImplicit STEXT nosplit size=9 args=0x18 locals=0x0 funcid=0x0 align=0x0
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:5) TEXT command-line-arguments.fooImplicit(SB), NOSPLIT|NOFRAME|ABIInternal, $0-24
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:5) FUNCDATA $0, gclocals·g2BeySu+wFnoycgXfElmcg==(SB)
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:5) FUNCDATA $1, gclocals·g2BeySu+wFnoycgXfElmcg==(SB)
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:5) FUNCDATA $5, command-line-arguments.fooImplicit.arginfo1(SB)
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:5) FUNCDATA $6, command-line-arguments.fooImplicit.argliveinfo(SB)
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:5) PCDATA $3, $1
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:6) MULSD X1, X0
0x0004 00004 (/home/dominikh/prj/src/example.com/bar.go:6) ADDSD X2, X0
0x0008 00008 (/home/dominikh/prj/src/example.com/bar.go:6) RET
0x0000 f2 0f 59 c1 f2 0f 58 c2 c3 ..Y...X..
command-line-arguments.fooExplicit STEXT nosplit size=9 args=0x18 locals=0x0 funcid=0x0 align=0x0
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:9) TEXT command-line-arguments.fooExplicit(SB), NOSPLIT|NOFRAME|ABIInternal, $0-24
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:9) FUNCDATA $0, gclocals·g2BeySu+wFnoycgXfElmcg==(SB)
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:9) FUNCDATA $1, gclocals·g2BeySu+wFnoycgXfElmcg==(SB)
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:9) FUNCDATA $5, command-line-arguments.fooExplicit.arginfo1(SB)
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:9) FUNCDATA $6, command-line-arguments.fooExplicit.argliveinfo(SB)
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:9) PCDATA $3, $1
0x0000 00000 (/home/dominikh/prj/src/example.com/bar.go:10) VFMADD231SD X1, X0, X2
0x0005 00005 (/home/dominikh/prj/src/example.com/bar.go:10) MOVUPS X2, X0
0x0008 00008 (/home/dominikh/prj/src/example.com/bar.go:10) RET
0x0000 c4 e2 f9 b9 d1 0f 10 c2 c3 .........
```
### What did you expect to see?
I expected fooImplicit and fooExplicit to generate identical code when setting GOAMD64=v3.
On arm64, the compiler detects the `x*y + z` pattern and automatically uses FMA. On amd64, math.FMA uses runtime feature detection unless the GOAMD64 environment variable is set to v3 or higher, in which case calls to math.FMA compile directly to VFMADD231SD. However, `x*y + z` isn't detected, regardless of the value of GOAMD64. | Performance,NeedsInvestigation,compiler/runtime | low | Minor |
2,778,811,884 | flutter | `flutter-devicelab-mac` bots have been failing since 12/16 | They are failing in "Prepare iOS device|Dismiss iOS dialogs|Run app to dismiss dialogs"
## reproduction
@LouiseHsu is using `led` to test device lab PR's on CI.
## example affected bots
- https://chromium-swarm.appspot.com/bot?id=flutter-devicelab-mac-1
- https://chromium-swarm.appspot.com/bot?id=flutter-devicelab-mac-14
## example failures
- https://ci.chromium.org/ui/p/flutter/builders/try/Mac_x64_ios%20hot_mode_dev_cycle_ios__benchmark/47/overview
- https://luci-milo.appspot.com/ui/p/flutter/builders/try/Mac_x64_ios%20hot_mode_dev_cycle_ios__benchmark/46/overview
## error logs
Sometimes it says:
```
GatherProvisioningInputs
CreateBuildDescription
Build description signature: a1e66c9dc7529a0d8971349d7ac5ac98
Build description path: /Users/swarming/Library/Developer/Xcode/DerivedData/infra-dialog-hbrdbwazhkutrwccpcbhibiotqak/Build/Intermediates.noindex/XCBuildData/a1e66c9dc7529a0d8971349d7ac5ac98.xcbuilddata
/opt/s/w/ir/cache/cocoon/cipd_packages/device_doctor/tool/infra-dialog/infra-dialog.xcodeproj: error: Provisioning profile "match Development *" has platforms "watchOS and iOS", which does not match the current platform "macOS". (in target 'infra-dialogUITests' from project 'infra-dialog')
/opt/s/w/ir/cache/cocoon/cipd_packages/device_doctor/tool/infra-dialog/infra-dialog.xcodeproj: error: Provisioning profile "match Development *" doesn't include the com.apple.application-identifier,
com.apple.security.app-sandbox,
com.apple.security.get-task-allow,
com.apple.security.network.client,
com.apple.security.temporary-exception.files.absolute-path.read-only,
com.apple.security.temporary-exception.mach-lookup.global-name,
com.apple.security.temporary-exception.mach-lookup.local-name,
and com.apple.security.temporary-exception.sbpl entitlements. Profile qualification is using entitlement definitions that may be out of date. Connect to network to update. (in target 'infra-dialogUITests' from project 'infra-dialog')
/opt/s/w/ir/cache/cocoon/cipd_packages/device_doctor/tool/infra-dialog/infra-dialog.xcodeproj: error: Provisioning profile "match Development *" doesn't include the currently selected device "flutter-devicelab-mac-14" (identifier F4B0151D-6DFF-5AC6-A28F-CD6072DC35D4). (in target 'infra-dialogUITests' from project 'infra-dialog')
Test session results, code coverage, and logs:
/Users/swarming/Library/Developer/Xcode/DerivedData/infra-dialog-hbrdbwazhkutrwccpcbhibiotqak/Logs/Test/Test-infra-dialog-2024.12.17_10-01-17--0800.xcresult
Testing failed:
Provisioning profile "match Development *" has platforms "watchOS and iOS", which does not match the current platform "macOS".
Provisioning profile "match Development *" doesn't include the com.apple.application-identifier,
```
Usually it says:
```
2024-12-17 09:57:12.131 xcodebuild[1931:14509] Writing error result bundle to /var/folders/87/q7tyxqlx5sv13g_xnv41976h0000gp/T/ResultBundle_2024-17-12_09-57-0012.xcresult
xcodebuild: error: Unable to find a destination matching the provided destination specifier:
{ id:00008030-001C498C022B402E }
Available destinations for the "infra-dialog" scheme:
{ platform:macOS, arch:x86_64, variant:Mac Catalyst, id:F4B0151D-6DFF-5AC6-A28F-CD6072DC35D4 }
{ platform:iOS, id:dvtdevice-DVTiPhonePlaceholder-iphoneos:placeholder, name:Any iOS Device }
{ platform:iOS Simulator, id:dvtdevice-DVTiOSDeviceSimulatorPlaceholder-iphonesimulator:placeholder, name:Any iOS Simulator Device }
{ platform:macOS, variant:Mac Catalyst, name:Any Mac }
{ platform:iOS Simulator, id:92FE5F12-D57C-46FD-A575-73838DE2DD02, OS:17.0, name:iPad (10th generation) }
{ platform:iOS Simulator, id:96556E12-0764-434B-A12F-20FCB8FA28BC, OS:18.0, name:iPad (10th generation) }
{ platform:iOS Simulator, id:E742930F-E2C8-4B85-B41D-8812EC0E92C1, OS:17.0, name:iPad Air (5th generation) }
{ platform:iOS Simulator, id:E51B66FB-D85E-4EF5-B19D-66B4008EEFC0, OS:18.0, name:iPad Air (5th generation) }
{ platform:iOS Simulator, id:F9E91713-2B70-4504-900C-4E5E65766D67, OS:17.0, name:iPad Pro (11-inch) (4th generation) }
{ platform:iOS Simulator, id:17A8CF74-68C6-48E6-844C-A6C52A1A6DB6, OS:18.0, name:iPad Pro (11-inch) (4th generation) }
{ platform:iOS Simulator, id:8C0C729F-4CB8-4BDC-A82D-329C51275B3D, OS:17.0, name:iPad Pro (12.9-inch) (6th generation) }
{ platform:iOS Simulator, id:8A79CEF4-2146-4BB5-8946-93C5223C57C3, OS:18.0, name:iPad Pro (12.9-inch) (6th generation) }
{ platform:iOS Simulator, id:2D9AC777-9142-416D-8000-BCCBA0E9319D, OS:17.0, name:iPad mini (6th generation) }
{ platform:iOS Simulator, id:75D79BB3-2CA0-44EC-9301-EF383590EA9D, OS:18.0, name:iPad mini (6th generation) }
{ platform:iOS Simulator, id:2A24430E-D906-4983-98A4-A0D10E2BFFB5, OS:17.0, name:iPhone 14 }
{ platform:iOS Simulator, id:2572052B-E8A6-4E7E-899A-0015DAE7942F, OS:17.0, name:iPhone 14 Plus }
{ platform:iOS Simulator, id:D5F5DC29-AD5F-4CC1-B4E3-4F60DD392BAC, OS:17.0, name:iPhone 14 Pro }
{ platform:iOS Simulator, id:F081F76D-AD23-40C1-BEB1-ADAF280B3BE8, OS:17.0, name:iPhone 14 Pro Max }
{ platform:iOS Simulator, id:D668C823-10C0-4095-B86F-1F0D9E6E1012, OS:17.0, name:iPhone 15 }
{ platform:iOS Simulator, id:2BF967A0-A6ED-4E41-B703-A2C40CA16680, OS:18.0, name:iPhone 15 }
{ platform:iOS Simulator, id:F245C248-854D-4396-81F0-CB150B780939, OS:17.0, name:iPhone 15 Plus }
{ platform:iOS Simulator, id:C8831D11-4476-4A46-B7ED-75BDE6A801B2, OS:18.0, name:iPhone 15 Plus }
{ platform:iOS Simulator, id:DB9E81C0-FD2E-4806-9CAF-CAC31CC32FD5, OS:17.0, name:iPhone 15 Pro }
{ platform:iOS Simulator, id:EF6DBC83-A1F2-4003-B74E-DBEAD39A868E, OS:18.0, name:iPhone 15 Pro }
{ platform:iOS Simulator, id:971C0588-405C-4F7B-BE6B-28BBDEF6B5A9, OS:17.0, name:iPhone 15 Pro Max }
{ platform:iOS Simulator, id:52D2A073-8E7A-422F-BC71-DA427CEEE134, OS:18.0, name:iPhone 15 Pro Max }
{ platform:iOS Simulator, id:9042CBE5-BE5E-4446-AEDA-3520EB0AB799, OS:17.0, name:iPhone SE (3rd generation) }
{ platform:iOS Simulator, id:C8088DA4-3708-48C7-89DE-2375A26FF165, OS:18.0, name:iPhone SE (3rd generation) }
Ineligible destinations for the "infra-dialog" scheme:
{ platform:iOS, arch:arm64e, id:00008030-001C498C022B402E, name:swarming’s iPhone, error:swarming’s iPhone is not available because it is unpaired Pair with the device in the Xcode Devices Window, and respond to any pairing prompts on the device. }
``` | team-infra | medium | Critical |
2,778,825,408 | terminal | Support the language override for unpackaged distribution (ZIP, Portable) | ### Description of the new feature
A language setting for the unpackaged distributions (ZIP, Portable) that allows the language to be set explicitly.
### Proposed technical implementation details
The specified issues [16018](https://github.com/microsoft/terminal/issues/16018), [18336](https://github.com/microsoft/terminal/issues/18336) related to support for language settings in unpackaged distributions were closed due to **limitations of the Windows platform**. These limitations seem to have already been addressed in the Windows platform as of **Windows App SDK version 1.6.240701003**. This is discussed in [[MRTCore] Add Microsoft.Windows.Globalization.ApplicationLanguages class #4523](https://github.com/microsoft/WindowsAppSDK/pull/4523) :
> To **support language change for un-packaged applications** Microsoft.Windows.Globalization.ApplicationLanguages.ApplicationLanguages type is introduced in MRTCore. Microsoft.Windows.Globalization.ApplicationLanguages.**PrimaryLanguageOverride property is supported both for packaged and un-packaged WindowsAppSDK apps**. | Issue-Feature,Product-Terminal,Needs-Tag-Fix,Priority-3,External-Blocked-WinUI3 | low | Minor |
2,778,832,136 | pytorch | Release 2.6.0 validations checklist and cherry-picks | ### 🐛 Describe the bug
Similar to: https://github.com/pytorch/pytorch/issues/134694
We need to make sure that:
- [x] Python 3.13 wheel validate - https://github.com/pytorch/test-infra/actions/runs/12898326715/job/35965201816#step:14:3798
- [x] Validate Metadata section of wheels - make sure python versions are set
- [x] PyTorch 2.5.0 exposes statically linked libstdc++ CXX11 ABI symbols : https://github.com/pytorch/pytorch/issues/133437
- [ ] CUDA
- [x] pypi binaries with slimmed dependencies are usable in standard AWS containers (Amazon Linux 2023; regression in 1.13) - https://github.com/pytorch/test-infra/actions/runs/12936813683/job/36083269755#step:14:125
- [ ] Check cuda 1.12.1 update issue: https://github.com/pytorch/pytorch/issues/94772 with small wheels. Passes on GPU but fails on CPU; new issue: https://github.com/pytorch/pytorch/issues/145801
- [ ] `torch.compile`
- [x] Basic test works (for example see test mentioned in https://github.com/openai/triton/pull/1176 ) in PyTorch docker container
- [x] `torch.compile` raises an error if used on Windows. Test (part of torchvision): https://github.com/pytorch/test-infra/actions/runs/12935566485/job/36079281843#step:9:486
- [x] `torch.compile` works on 3.13 : Test: https://github.com/pytorch/test-infra/actions/runs/12873664387/job/35891677345#step:14:3604
- [x] `torch.compile` raises error on 3.13t: https://github.com/pytorch/test-infra/actions/runs/12873664387/job/35891678653#step:14:2811
- MPS
- [x] Resnet is usable out of the box (https://github.com/pytorch/test-infra/actions/runs/12898326715/job/35965216838#step:9:1996)
- Is torchvision usable? True. German shepherd (cpu): 37.6%, German shepherd (mps): 34.1%
- [x] Validate docker release builds
Issues/Milestone validation
- [x] https://github.com/pytorch/pytorch/issues/137597
- [x] https://github.com/pytorch/pytorch/issues/140797 @atalman
- [x] https://github.com/pytorch/pytorch/pull/144358 @justinchuby
- [ ] https://github.com/pytorch/pytorch/pull/143242 @jithunnair-amd
- [x] https://github.com/pytorch/pytorch/issues/142203 @atalman
- [x] https://github.com/pytorch/pytorch/issues/143933 @atalman
- [x] https://github.com/pytorch/pytorch/issues/142266 @atalman
- [x] https://github.com/pytorch/pytorch/issues/141909 @malfet
- [x] https://github.com/pytorch/pytorch/issues/142344 @atalman
- [x] https://github.com/pytorch/pytorch/issues/141770 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/141046 @kit1980
- [x] https://github.com/pytorch/pytorch/issues/139722 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/142448 @atalman
- [x] https://github.com/pytorch/pytorch/pull/142113 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/141230 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/141949 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/142043 @atalman
- [x] https://github.com/pytorch/pytorch/pull/141948 @atalman
- [x] https://github.com/pytorch/pytorch/pull/141800 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/141333 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/141471 @atalman
- [x] https://github.com/pytorch/pytorch/issues/135867 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/141569 @justinchuby
- [x] https://github.com/pytorch/pytorch/pull/141658 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138049 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/140873 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/140865 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/137886 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/141729 @justinchuby
- [x] https://github.com/pytorch/pytorch/pull/141413 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/141260 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/141080 @justinchuby
- [x] https://github.com/pytorch/pytorch/pull/137428 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/137374 @atalman
- [ ] https://github.com/pytorch/pytorch/issues/138340 @drisspg
- [x] https://github.com/pytorch/pytorch/pull/137966 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138802 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/134666 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138186 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138727 @chuanqi129
- [ ] https://github.com/pytorch/pytorch/pull/138354 @nWEIdia
- [x] https://github.com/pytorch/pytorch/issues/138391 @justinchuby
- [x] https://github.com/pytorch/pytorch/issues/136559 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137338 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138992 @chuanqi129
- [x] https://github.com/pytorch/pytorch/issues/138478 @kit1980
- [x] https://github.com/pytorch/pytorch/issues/138851 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137394 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138189 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/138543 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/138331 @chuanqi129
- [x] https://github.com/pytorch/pytorch/pull/137890 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137889 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137745 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137267 @kit1980
- [x] https://github.com/pytorch/pytorch/pull/137199 @yifuwang
- [ ] https://github.com/pytorch/pytorch/issues/134929 @atalman
Amazon Linux 2023
- [x] https://github.com/pytorch/pytorch/issues/138482
- [x] https://github.com/pytorch/pytorch/issues/144433 - https://github.com/pytorch/test-infra/actions/runs/12936813683/job/36083269755#step:14:125
XPU Binaries Validations:
- [x] https://github.com/pytorch/pytorch/issues/145290
### Cherry-Picks to validate
### Versions
2.6.0 | oncall: releng | low | Critical |
2,778,835,866 | pytorch | Confusing documentation for torch.bucketize | ### 📚 The doc issue
I was reading through this page describing `torch.bucketize`, and it took me a few passes to understand the behavior of the `right` argument.
https://pytorch.org/docs/stable/generated/torch.bucketize.html
It says:
> right ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – if False, return the first suitable location that is found. If True, return the last such index. If no suitable index found, return 0 for non-numerical value (eg. nan, inf) or the size of boundaries (one pass the last index). In other words, if False, gets the lower bound index for each value in input from boundaries. If True, gets the upper bound index instead. Default value is False.
This seems to suggest that there can be multiple suitable buckets for a value. In reality, there is only ever one suitable bucket, and `right` defines the behavior when a value in `input` equals one of the `boundaries`. The previous section describes it much more clearly.
> right | returned index satisfies
> -- | --
> False | boundaries[i-1] < input[m][n]...[l][x] <= boundaries[i]
> True | boundaries[i-1] <= input[m][n]...[l][x] < boundaries[i]
### Suggest a potential alternative/fix
Would it be more precise to say something like this?
> right ([bool](https://docs.python.org/3/library/functions.html#bool), optional) – controls which bucket is assigned when a value in `input` equals a value in `boundaries`. Say `input[i] == boundaries[j]` for some indices `i` and `j`. If `right == False`, `out[i] == j`. Else, `out[i] == j+1`.
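For illustration, a tiny runnable example of the equality case (values chosen arbitrarily):
```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
v = torch.tensor([3])

print(torch.bucketize(v, boundaries))              # tensor([1]): 1 < 3 <= 3
print(torch.bucketize(v, boundaries, right=True))  # tensor([2]): 3 <= 3 < 5
```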
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @albanD | module: docs,triaged,actionable,module: python frontend | low | Minor |
2,778,836,490 | tauri | [bug] fs-plugin exists() results in "Uncaught (in promise) forbidden path" when used with the path api | ### Describe the bug
Calling `exists()` with a path results in "Uncaught (in promise) forbidden path". Using a baseDir does not result in this issue.
### Reproduction
Will check if the file `test/exists.json` exists.
Create the following file in src-tauri: `test/exists.json`
Bundle the file:
```
"resources": [
"test/**/*"
]
```
Set the permissions in default.json (`fs:allow-resource-read-recursive` and `fs:allow-resource-meta-recursive` were set as an attempted workaround):
```JSON
"permissions": [
{
"identifier": "fs:scope-resource",
"allow": [
{
"path": "$RESOURCE"
},
{
"path": "$RESOURCE/test/**"
}
]
},
"core:default",
"opener:default",
"core:window:allow-hide",
"core:window:allow-show",
"core:window:allow-set-size",
"fs:default",
"fs:allow-resource-read-recursive",
"fs:allow-resource-meta-recursive"
]
```
Try to see if the file exists in a Vue component:
```JS
import { onMounted } from 'vue'
import { path } from '@tauri-apps/api'
import { exists, BaseDirectory } from '@tauri-apps/plugin-fs'

async function loadFilesTest() {
const resource_dir = await path.resourceDir()
const exist_string = "test/exists.json"
const does_not_exist_string = "test/does_not_exist.json"
const exist_path = await path.join(resource_dir, exist_string)
const does_not_exist_path = await path.join(resource_dir, does_not_exist_string)
const file_exists_base = await exists(exist_string, {
baseDir: BaseDirectory.Resource
})
const file_does_not_exist_base = await exists(does_not_exist_string, {
baseDir: BaseDirectory.Resource
})
const file_exists_path = await exists(exist_path)
// const file_does_not_exist_path = await exists(does_not_exist_path) //Uncaught (in promise) forbidden path:
console.log(exist_path, "exists with base dir: ", file_exists_base)
console.log(does_not_exist_path, "exists with base dir: ", file_does_not_exist_base)
console.log(exist_path, "exists with path api: ", file_exists_path)
// console.log(does_not_exist_path, "doesn't work with path api: ", file_does_not_exist_path)
}
onMounted(() => {
loadFilesTest()
})
```
Console output:
```
C:\...\debug\test\exists.json exists with base dir: true
C:\...\debug\test\does_not_exist.json exists with base dir: false
C:\...\debug\test\exists.json exists with path api: true
```
Uncommenting the `file_does_not_exist_path` parts result in `Uncaught (in promise) forbidden path:`.
### Expected behavior
The expected behavior is for it to return `false`, the same as when using `baseDir`.
### Full `tauri info` output
```text
> [email protected] tauri
> tauri info
[✔] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
✔ WebView2: 131.0.2903.112
✔ MSVC:
- Visual Studio Build Tools 2019
- Visual Studio Community 2022
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.3.1
- pnpm: 8.6.3
- npm: 9.6.7
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.2.0
- @tauri-apps/cli : 2.2.2
[-] Plugins
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs : 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
Thanks to @FabianLars I found out this same issue had been raised previously (https://github.com/tauri-apps/tauri/issues/11614), but was marked as closed. | type: bug,status: needs triage | low | Critical |
2,778,848,424 | opencv | RuntimeError: _ARRAY_API is not PyCapsule object on Python 3.13 with no GIL | ### System Information
* OpenCV 4.10.0
* Ubuntu 22.04
* Python 3.13 with no GIL
* Numpy 2.2.1
### Detailed description
I am trying to build OpenCV with bindings using Python 3.13 with no GIL. I am able to build OpenCV, but when trying to import `cv2` I get an error: `RuntimeError: _ARRAY_API is not PyCapsule object`.
### Steps to reproduce
Build a docker container using the following dockerfile:
```
FROM nvidia/cuda:12.3.2-devel-ubuntu22.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get install -y cmake build-essential zip python3.13-nogil python3.13-dev
ADD https://bootstrap.pypa.io/get-pip.py /
RUN python3.13t /get-pip.py
RUN python3.13t -m pip install numpy
ADD https://github.com/opencv/opencv/archive/refs/tags/4.10.0.zip /opencv.zip
RUN unzip /opencv.zip
ADD https://github.com/opencv/opencv_contrib/archive/refs/tags/4.10.0.zip /opencv_contrib.zip
RUN unzip /opencv_contrib.zip
RUN mkdir build
RUN bash -c "cmake \
-S /opencv-4.10.0 \
-B /build \
-DOPENCV_EXTRA_MODULES_PATH='/opencv_contrib-4.10.0/modules/cudev;/opencv_contrib-4.10.0/modules/cudaarithm;/opencv_contrib-4.10.0/modules/cudaimgproc;/opencv_contrib-4.10.0/modules/cudawarping' \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DBUILD_opencv_apps=OFF \
-DWITH_OPENCL=OFF \
-DWITH_PNG=OFF \
-DWITH_JPEG=OFF \
-DWITH_WEBP=OFF \
-DWITH_OPENJPEG=OFF \
-DWITH_JASPER=OFF \
-DWITH_OPENEXR=OFF \
-DWITH_JPEGXL=OFF \
-DWITH_V4L=OFF \
-DWITH_FFMPEG=OFF \
-DWITH_GSTREAMER=OFF \
-DWITH_ANDROID_MEDIANDK=OFF \
-DVIDEOIO_ENABLE_PLUGINS=OFF \
-DWITH_GTK=OFF \
-DPARALLEL_ENABLE_PLUGINS=OFF \
-DHIGHGUI_ENABLE_PLUGINS=OFF \
-DWITH_PROTOBUF=OFF \
-DBUILD_PROTOBUF=OFF \
-DOPENCV_DNN_OPENCL=OFF \
-DENABLE_CCACHE=OFF \
-DBUILD_JAVA=OFF \
-DBUILD_opencv_python2=OFF \
-DBUILD_opencv_dnn=OFF \
-DBUILD_opencv_gapi=OFF \
-DBUILD_opencv_highgui=OFF \
-DBUILD_opencv_flann=ON \
-DBUILD_opencv_objdetect=OFF \
-DBUILD_opencv_videoio=OFF \
-DBUILD_opencv_video=OFF \
-DBUILD_opencv_photo=OFF \
-DBUILD_opencv_stitching=OFF \
-DBUILD_opencv_world=OFF \
-DBUILD_opencv_ml=OFF \
-DBUILD_opencv_calib3d=OFF \
-DBUILD_opencv_python3=ON \
-DPYTHON_EXECUTABLE=$(which python3.13t) \
-DOPENCV_PYTHON3_INSTALL_PATH=lib/python3.13t/dist-packages \
-DPYTHON3_LIBRARIES=/usr/lib/x86_64-linux-gnu/libpython3.13t.so \
-DWITH_CUDA=OFF 2>&1 \
| tee /cmake.log"
RUN bash -c "make -C /build -j $(nproc) 2>&1 | tee /build.log"
RUN bash -c "make -C /build install 2>&1 | tee /install.log"
```
Then, inside the container run `python3.13t -c "import cv2"`
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | category: 3rdparty | low | Critical |
2,778,850,031 | godot | RayCast3D moves inside CharacterBody3D if CharacterBody3D movement is called from another script | ### Tested versions
Godot v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Ti SUPER (NVIDIA; 32.0.15.6614) - AMD Ryzen 7 7700 8-Core Processor (16 Threads)
### Issue description
A typical player controller looks like this (RayCast3D doesn't move in it):
https://github.com/user-attachments/assets/b029766d-8dc5-40b9-a8e0-1ee012191993
The player controller in which RayCast3D moves: here the player movement is called not in the player script, but from another node's script through a reference to the player controller. In this case, RayCast3D and ShapeCast3D move.
https://github.com/user-attachments/assets/c3de6c21-8fb1-45d4-b443-3c741f0dec1d
This bug is reproduced only if you call the function for player movement not in the player script, but from another script.
### Steps to reproduce
Good player controller:

**player_good.gd**
```
extends CharacterBody3D

const SPEED = 5.0
const JUMP_VELOCITY = 4.5

func _physics_process(delta: float) -> void:
    move(delta)

func move(delta):
    # Add the gravity.
    if not is_on_floor():
        velocity += get_gravity() * delta

    # Handle jump.
    if Input.is_action_just_pressed("ui_accept") and is_on_floor():
        velocity.y = JUMP_VELOCITY

    # Get the input direction and handle the movement/deceleration.
    # As good practice, you should replace UI actions with custom gameplay actions.
    var input_dir := Input.get_vector("ui_left", "ui_right", "ui_up", "ui_down")
    var direction := (transform.basis * Vector3(input_dir.x, 0, input_dir.y)).normalized()
    if direction:
        velocity.x = direction.x * SPEED
        velocity.z = direction.z * SPEED
    else:
        velocity.x = move_toward(velocity.x, 0, SPEED)
        velocity.z = move_toward(velocity.z, 0, SPEED)

    move_and_slide()
```
Bad player controller

**player_bad.gd**
```
extends CharacterBody3D

const SPEED = 5.0
const JUMP_VELOCITY = 4.5

func _physics_process(delta: float) -> void:
    move(delta)

func move(delta):
    # Add the gravity.
    if not is_on_floor():
        velocity += get_gravity() * delta

    # Handle jump.
    if Input.is_action_just_pressed("ui_accept") and is_on_floor():
        velocity.y = JUMP_VELOCITY

    # Get the input direction and handle the movement/deceleration.
    # As good practice, you should replace UI actions with custom gameplay actions.
    var input_dir := Input.get_vector("ui_left", "ui_right", "ui_up", "ui_down")
    var direction := (transform.basis * Vector3(input_dir.x, 0, input_dir.y)).normalized()
    if direction:
        velocity.x = direction.x * SPEED
        velocity.z = direction.z * SPEED
    else:
        velocity.x = move_toward(velocity.x, 0, SPEED)
        velocity.z = move_toward(velocity.z, 0, SPEED)

    move_and_slide()
```
**state_controller.gd**
```
extends Node

@onready var player_bad: CharacterBody3D = $"../PlayerBad"

# Called every frame. 'delta' is the elapsed time since the previous frame.
func _physics_process(delta: float) -> void:
    player_bad.move(delta)
```
### Minimal reproduction project (MRP)
[bug_move_cas.zip](https://github.com/user-attachments/files/18368041/bug_move_cas.zip)
| topic:physics,needs testing,topic:3d | low | Critical |
2,778,855,027 | go | x/tools/cmd/signature-fuzzer/fuzz-runner: TestRunner/Minimization2 failures | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/cmd/signature-fuzzer/fuzz-runner" && test == "TestRunner/Minimization2"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8726206230344596065)):
=== RUN TestRunner/Minimization2
=== PAUSE TestRunner/Minimization2
=== CONT TestRunner/Minimization2
rnr_test.go:131: ... begin iteration 0 with current seed 55909
error executing cmd /home/swarming/.swarming/w/ir/x/t/fuzzrun641481422/fuzzTest/fzTest: exit status 2
starting main
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x86d28]
goroutine 19 [running]:
...
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x86d28]
goroutine 19 [running]:
fmt.(*buffer).writeString(...)
/home/swarming/.swarming/w/ir/x/w/goroot/src/fmt/print.go:108
fmt.(*fmt).padString(0x4b8028, {0x0, 0x7})
/home/swarming/.swarming/w/ir/x/w/goroot/src/fmt/format.go:113 +0x288
fmt.(*fmt).fmtS(0x4b8028, {0x0, 0x7})
/home/swarming/.swarming/w/ir/x/w/goroot/src/fmt/format.go:362 +0x4c
...
main.main.func2(0x496040)
/home/swarming/.swarming/w/ir/x/t/fuzzrun641481422/fuzzTest/Main.go:30 +0x54
created by main.main in goroutine 1
/home/swarming/.swarming/w/ir/x/t/fuzzrun641481422/fuzzTest/Main.go:27 +0x1a0
... starting minimization for failed directory /home/swarming/.swarming/w/ir/x/t/fuzzrun641481422/fuzzTest
error executing cmd /home/swarming/.swarming/w/ir/x/t/fuzzrun641481422/fuzzTest/fzTest: exit status 1
package minimization succeeded: found bad pkg 1
error executing cmd /home/swarming/.swarming/w/ir/x/t/fuzzrun641481422/fuzzTest/fzTest: exit status 1
function minimization succeeded: found bad fcn 1
--- FAIL: TestRunner/Minimization2 (25.80s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,Tools | low | Critical |
2,778,872,634 | go | proposal: crypto/tls: add support for NIST curve based ML-KEM hybrids | ### Proposal Details
The current version of the [draft-kwiatkowski-tls-ecdhe-mlkem](https://datatracker.ietf.org/doc/draft-kwiatkowski-tls-ecdhe-mlkem/03/) draft includes two hybrid ML-KEM groups that use NIST curves:
* SecP256r1MLKEM768
* SecP384r1MLKEM1024
As explained in the draft, they are interesting for environments that require compliance, either with FIPS in general, or with higher security standards, like the Common Criteria Protection Profile v4.3 or CNSA 2.0.
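For illustration, a sketch of how this could surface in `crypto/tls`; the `CurveID` constants below use the draft's codepoints but are hypothetical until the library defines them:
```go
package main

import "crypto/tls"

// Hypothetical constants, codepoints per draft-kwiatkowski-tls-ecdhe-mlkem-03;
// not currently defined in crypto/tls.
const (
	SecP256r1MLKEM768  tls.CurveID = 0x11EB
	SecP384r1MLKEM1024 tls.CurveID = 0x11ED
)

func main() {
	cfg := &tls.Config{
		// Prefer the NIST-curve hybrids for compliance-constrained deployments.
		CurvePreferences: []tls.CurveID{SecP256r1MLKEM768, SecP384r1MLKEM1024, tls.CurveP256},
	}
	_ = cfg
}
```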
I'd like to ask for their inclusion in a future Go release. | Proposal,Proposal-Crypto,LibraryProposal | low | Minor |
2,778,897,445 | storybook | [Bug]: Generated CSS is erroneously cached | ### Describe the bug
I want Cloudflare to cache **CSS**, JS, and images from the site.
This currently works for JS because of the cache busting **contenthash**
`filename: '[name].[contenthash:8].iframe.bundle.js',` in the default storybook webpack settings.
Other resources are similarly cached
```
{
test: /\.(svg|ico|jpg|jpeg|png|apng|gif|eot|otf|webp|ttf|woff|woff2|cur|ani|pdf)(\?.*)?$/,
type: 'asset/resource',
generator: { filename: 'static/media/[name].[contenthash:8][ext]' }
},
{
test: /\.(mp4|webm|wav|mp3|m4a|aac|oga)(\?.*)?$/,
...
generator: { filename: 'static/media/[name].[contenthash:8][ext]' }
},
```
**However**, CSS does not have the contenthash added into the MiniCssExtractPlugin filename.
```
MiniCssExtractPlugin {
...
options: {
filename: '[name].css',
...
chunkFilename: '[name].css'
},
```
This means that when I update the CSS used by my storybook, it does not update and instead uses the outdated cached version of `main.css`.
Please update the default MiniCssExtractPlugin settings to use something like
```
filename: '[name].[contenthash].css',
...
chunkFilename: '[name].iframe.[contenthash].css'
```
### Reproduction link
Can't reproduce easily due to Cloudflare caching dependency
### Reproduction steps
1. Host a storybook behind Cloudflare
2. Enable caching on the CSS resources served by the storybook (I used nginx but do it however you like)
3. Access the Storybook so Cloudflare caches the `main.css`
4. Update the CSS in some obvious way in the Storybook and deploy that
5. Attempt to access the updated storybook and observe that the old cached CSS is instead used
### System
```bash
Storybook Environment Info:
System:
OS: Linux 6.8 Ubuntu 22.04.5 LTS 22.04.5 LTS (Jammy Jellyfish)
CPU: (16) x64 AMD Ryzen 7 PRO 5850U with Radeon Graphics
Shell: 5.8.1 - /usr/bin/zsh
Binaries:
Node: 20.18.0 - ~/.nvm/versions/node/v20.18.0/bin/node
Yarn: 1.22.22 - /usr/bin/yarn
npm: 10.8.2 - ~/.nvm/versions/node/v20.18.0/bin/npm
pnpm: 9.12.2 - ~/.nvm/versions/node/v20.18.0/bin/pnpm <----- active
Browsers:
Chrome: 131.0.6778.204
npmPackages:
@storybook/addon-essentials: ^8.4.7 => 8.4.7
@storybook/addon-interactions: ^8.4.7 => 8.4.7
@storybook/addon-onboarding: ^8.4.7 => 8.4.7
@storybook/angular: ^8.4.7 => 8.4.7
@storybook/blocks: ^8.4.7 => 8.4.7
@storybook/test: ^8.4.7 => 8.4.7
storybook: ^8.4.7 => 8.4.7
```
### Additional context
I am currently fixing this by manually modifying the webpack config
```
// e.g. inside `webpackFinal` in .storybook/main.ts, where `config` is the webpack config
const miniCssPlugin = config?.plugins?.find(
  (plugin: any) => plugin?.constructor?.name === 'MiniCssExtractPlugin',
) as any;
if (miniCssPlugin && 'options' in miniCssPlugin) {
  // Mirror the JS bundle naming: include a content hash so caches bust on change
  miniCssPlugin.options.filename = '[name].[contenthash].css';
  miniCssPlugin.options.chunkFilename = '[name].iframe.[contenthash].css';
}
``` | bug,angular | low | Critical |
2,778,898,542 | PowerToys | [MWB] Consistent lag and visual artifacts when hovering over a specific part of my screen | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
Issue occurs both with and without admin permissions
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Unsure how exactly to reproduce -- may simply be caused by leaving the program open for a while with my specific system configuration
(ROG Ally (AMD Ryzen Z1), Windows 11 Home v10.0.26100)
Hovering over specific portions of my screen using MWB causes consistent visual artifacting (a white rectangle appears beside the mouse, cursor silhouettes are briefly visible)
This area includes the bottom of the Twitch player when using Arc Browser, or the entire window when inside an (Unreal Engine based?) game (the one I've experienced being Tetris Effect: Connected). Could be something to do with high GPU usage?
The cursor also experiences heavy jitter / lag when under this effect.
Afterwards, some visual artifacts are left behind (last image), but they seem to only be visible briefly, or when using ShareX to capture a screenshot in specific cases. In Arc, the artifacts only briefly appear in the window title bar when retriggering the bug (though I have also seen cases where the artifacts are simply left behind in the title bar indefinitely...)
[PowerToysReport_2025-01-09-22-26-01.zip](https://github.com/user-attachments/files/18368108/PowerToysReport_2025-01-09-22-26-01.zip)
https://github.com/user-attachments/assets/330c4290-981c-4279-8f59-54f39a82cef8
https://github.com/user-attachments/assets/c04d852c-b683-4905-a653-eb072107729c

https://github.com/user-attachments/assets/5e426ea8-37c7-401a-90cf-e952f5504361
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
Arc 1.33.0
Tetris Effect: Connected (latest version)
ShareX (latest version) | Issue-Bug,Needs-Triage | low | Critical |
2,778,909,919 | flutter | Expose a getter to `ScrollbarPainter._thumbRect` | ### Use case
I'd like to write tests that involve dragging the `Scrollbar` thumb. However, the thumb isn't a dedicated widget; it's just painted inside the `ScrollbarPainter`, and the thumb rect is private. Currently, there is no way to find where the `Scrollbar` thumb is.
### Proposal
Provide a way to allow querying the scrollbar's thumb rectangle. | framework,f: scrolling,c: proposal,P3,team-framework,triaged-framework | low | Minor |
2,778,959,601 | pytorch | Illegal Memory Access when Using Trainable Biases in Flex Attention | ### 🐛 Describe the bug
I'm back with another insane flex attention bug report.
Recently, I was playing around with the learnable biases in flex attention when I started hitting an illegal memory access in the backward pass.
After using `CUDA_LAUNCH_BLOCKING` I found it was happening in the following kernel during autotuning:
```python
File "/tmp/torchinductor_ubuntu/jy/cjy56z23prjyls6tnwn4ay4mmmb6vvergqxm4wmnv5l7zlfzk66e.py", line 3784, in call
triton_poi_fused_zeros_1.run(buf0, 100663296, grid=grid(100663296), stream=stream0)
```
```python
import triton
import triton.language as tl
from triton.compiler.compiler import AttrsDescriptor
from torch._inductor.runtime import triton_helpers, triton_heuristics
from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math
from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties
triton_helpers.set_driver_to_gpu()
@triton_heuristics.pointwise(
size_hints={'x': 134217728},
filename=__file__,
triton_meta={'signature': {'out_ptr0': '*fp32', 'xnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, multi_processor_count=114, cc=90, major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1), equal_to_1=())]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_zeros_1', 'mutated_arg_names': [], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 0, 'num_reduction': 0, 'backend_hash': 'BC5F52D6E7923B9DC1733AF7005D933F35B86508E4142BC7F067F48E9C59404B', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_zeros_1(out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 100663296
xoffset = tl.program_id(0) * XBLOCK
xindex = xoffset + tl.arange(0, XBLOCK)[:]
xmask = tl.full([XBLOCK], True, tl.int1)
x0 = xindex
tmp0 = 0.0
tl.store(out_ptr0 + (x0), tmp0, None)
```
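For reference, the kernel above was isolated by forcing synchronous CUDA launches, along these lines (repro script name hypothetical):
```shell
# Serialize kernel launches so the failing kernel is attributed accurately.
CUDA_LAUNCH_BLOCKING=1 python repro.py
```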
What might also be related are the following inductor warnings emitted beforehand:
```
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] Failed to reorder for topological_sort_lpmf: SchedulerBuffer(scheduler=<torch._inductor.scheduler.Scheduler object at 0x750d4019caf0>, node=ComputedBuffer(name='buf0', layout=FixedLayout('cuda:0', torch.float32, size=[1, 1, 384, 256], stride=[98304, 98304, 1, 384]), data=Reduction(
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] 'cuda',
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] torch.float32,
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] def inner_fn(index, rindex):
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] _, _, i2, i3 = index
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] r0_0 = rindex
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp0 = ops.load(tangents_1, i2 + 384 * ModularIndexing(r0_0 + 256 * i3, 1, 4096) + 1572864 * ModularIndexing(r0_0 + 256 * i3, 4096, 16))
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp1 = ops.load(add_24, i2 + 384 * ModularIndexing(r0_0 + 256 * i3, 1, 4096) + 1572864 * ModularIndexing(r0_0 + 256 * i3, 4096, 16))
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp2 = ops.load(rsqrt_6, 4096 * ModularIndexing(r0_0 + 256 * i3, 4096, 16) + ModularIndexing(r0_0 + 256 * i3, 1, 4096))
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp3 = tmp1 * tmp2
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] tmp4 = tmp0 * tmp3
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] return tmp4
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] ,
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] ranges=[1, 1, 384, 256],
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] reduction_ranges=[256],
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] reduction_type=sum,
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] origin_node=None,
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] origins=OrderedSet([sum_21, mul_52, mul_49])
[rank0]:E0109 15:24:39.114000 535215 torch/_inductor/memory.py:646] [0/1] )), defining_op=SchedulerNode(name='op0'), users=[NodeUser(node=SchedulerNode(name='op1'), can_inplace=False, is_weak=False), NodeUser(node=SchedulerNode(name='op27'), can_inplace=False, is_weak=False)], mpi_buffer=MemoryPlanningInfoForBuffer(size_alloc=0, size_free=0, succ_nodes=OrderedSet([])))
```
I know this likely isn't enough to fix the issue and I'd love to provide a repro as soon as possible; I'm just having trouble tracking down what exactly is going wrong here.
### Versions
2.7.0.dev20250109+cu126
cc @ptrblck @msaroufim @eqy @Chillee @drisspg @yanboliang @BoyuanFeng @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 | module: crash,module: cuda,triaged,module: flex attention | low | Critical |
2,778,981,552 | go | proposal: net/http/pprof: function to register all profiles to a mux | ## User Story
- I have an application with multiple `mux` instances.
- I want to enable `net/http/pprof` on a specific `mux` instance.
- `net/http/pprof` does not provide an ergonomic interface for attaching the handlers to a specific mux.
## Current Options / Alternatives Considered
The `net/http/pprof` package `init` function is the recommended path for enabling the `pprof` handler.[^1] This method uses the `DefaultServeMux`:
https://github.com/golang/go/blob/46b576be724b6e64359fd872b9bd5109aba93cc0/src/net/http/pprof/pprof.go#L95-L105
If the user wants to mount the `pprof` handlers using a non-default mux, they must do this by manually enumerating all of the available profilers[^2]. For example:
```
mux := http.NewServeMux()
mux.HandleFunc("/debug/pprof/", pprof.Index)
mux.HandleFunc("/debug/pprof/cmdline/", pprof.Cmdline)
mux.HandleFunc("/debug/pprof/profile/", pprof.Profile)
mux.HandleFunc("/debug/pprof/symbol/", pprof.Symbol)
mux.HandleFunc("/debug/pprof/trace/", pprof.Trace)
```
## Proposal
This experience could be made better for users by moving the logic in the `init` function into a separate method (with the `mux` as an argument), then invoking this within the default package init function.
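A minimal sketch of that shape (the exported name `Register` is hypothetical; the handler set mirrors the current `init`):
```
// In net/http/pprof (sketch; exported name hypothetical):
func Register(mux *http.ServeMux) {
	mux.HandleFunc("/debug/pprof/", Index)
	mux.HandleFunc("/debug/pprof/cmdline", Cmdline)
	mux.HandleFunc("/debug/pprof/profile", Profile)
	mux.HandleFunc("/debug/pprof/symbol", Symbol)
	mux.HandleFunc("/debug/pprof/trace", Trace)
}

func init() {
	Register(http.DefaultServeMux)
}
```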
[^1]: https://pkg.go.dev/net/http/pprof#:~:text=To%20use%20pprof%2C%20link%20this%20package%20into%20your%20program%3A
[^2]: https://pkg.go.dev/net/http/pprof#:~:text=If%20you%20are%20not%20using%20DefaultServeMux%2C%20you%20will%20have%20to%20register%20handlers%20with%20the%20mux%20you%20are%20using. | Proposal,LibraryProposal | low | Critical |
2,778,986,451 | godot | Duplicating TileMapLayers with scene tiles causes them to double on the copy | ### Tested versions
- Reproducible in: `v4.3.stable.mono.arch_linux`
### System information
Godot v4.3.stable.mono unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Thu, 02 Jan 2025 22:52:26 +0000 - Wayland - GLES3 (Compatibility) - AMD Radeon 780M (radeonsi, gfx1103_r1, LLVM 18.1.8, DRM 3.59, 6.12.8-arch1-1) - AMD Ryzen 7 7840HS with Radeon 780M Graphics (16 Threads)
### Issue description
Whenever a `TileMapLayer` is duplicated with the `Node.duplicate(flags)` method, all its children are, as expected, also duplicated. However, scene tiles are instantiated as children, so their instances get copied too; then, when the duplicate `TileMapLayer` performs its first update, it instantiates all of its scene tiles again. The result is that every scene tile exists twice on the copy: one instance from `duplicate()` and one created by the new tilemap itself.
### Steps to reproduce
1. Create a `TileMapLayer` node with scene tiles;
2. Access it from a script and call its `duplicate()` method (see the sketch below).
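A minimal GDScript sketch of the two steps (node name assumed):
```
# Assumes `layer` is a TileMapLayer that uses scene tiles.
var copy := layer.duplicate()
add_child(copy)
# After the copy's first internal update, each scene tile on it exists twice:
# once copied by duplicate() and once re-instantiated by the layer itself.
```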
### Minimal reproduction project (MRP)
[double-duplicate-tilemap.zip](https://github.com/user-attachments/files/18368665/double-duplicate-tilemap.zip)
| bug,needs testing,topic:2d | low | Minor |
2,779,011,154 | next.js | Parallel routes with intercepting route child, causing 500 error on vercel from dynamic route. | ### Link to the code that reproduces this issue
https://github.com/okcoker/next-js-parallel-route-bug
### To Reproduce
1. Start application with `next dev`
2. Click on any example links
3. Notice 500 error with prefetched intercepting route
Note: The example with the dynamic segment `[username]` works in dev, but not on Vercel
### Current vs. Expected behavior
Current behavior: Functionality that works in dev does not work on Vercel with Next.js 15 (the dynamic segment example works on Next.js 14)
Expected behavior:
- Deploying intercepting route should work on both dev and production.
- Using the (...) marker should be equivalent to using the (..) marker if we are one level deep (see the folder sketch below)
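For reference, a folder layout along the lines the markers imply (names hypothetical; per the intercepting-routes convention, markers count route segments, and slots like `@modal` are not segments):
```
app/
  layout.tsx             # renders children plus the @modal slot
  photos/
    [id]/
      page.tsx           # the route being intercepted
  @modal/
    default.tsx
    (.)photos/           # (.) = same level, since @modal is a slot
      [id]/
        page.tsx
    # (...)photos/ would match from the app root regardless of depth
```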
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:17:33 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6031
Available memory (MB): 49152
Available CPU cores: 16
Binaries:
Node: 22.10.0
npm: 10.9.0
Yarn: 1.22.21
pnpm: 9.11.0
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed)
### Additional context
I tested with next.js v14 and the dynamic segment example worked (linked in the [readme](https://github.com/okcoker/next-js-parallel-route-bug/blob/main/README.md))
In our production app, we recently upgraded from next 14 to 15 and these intercepted routes were working as expected. Non dynamic segments work with the intercepting routes on next 15 for us, but for some reason do not work on my example. Will post updates as they come, although there are some clear differences in functionality showcased in the example repo currently. | Parallel & Intercepting Routes | low | Critical |
2,779,019,924 | go | x/tools/go/packages: empty result from `go list -f "{{context.GOARCH}} {{context.Compiler}}"` (netbsd) | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/go/packages" && test == "TestErrorMissingFile/Modules"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8726203353397902385)):
=== RUN TestErrorMissingFile/Modules
=== PAUSE TestErrorMissingFile/Modules
=== CONT TestErrorMissingFile/Modules
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modcache GO111MODULE=off GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/fake go list -e -f {{context.ReleaseTags}} -- unsafe
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/fake go list -f "{{context.GOARCH}} {{context.Compiler}}" -- unsafe
invoke.go:205: 38.170914ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modcache GO111MODULE=off GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/fake go list -e -f {{context.ReleaseTags}} -- unsafe
invoke.go:205: starting GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/fake go list -e -json=Name,ImportPath,Error,Dir,GoFiles,IgnoredGoFiles,IgnoredOtherFiles,CFiles,CgoFiles,CXXFiles,MFiles,HFiles,FFiles,SFiles,SwigFiles,SwigCXXFiles,SysoFiles,CompiledGoFiles,Export,DepOnly,Imports,ImportMap,Module -compiled=true -test=false -export=true -deps=true -find=false -pgo=off -- missing.go
invoke.go:205: 71.302184ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/fake go list -f "{{context.GOARCH}} {{context.Compiler}}" -- unsafe
invoke.go:205: 54.686548ms for GOROOT= GOPATH=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modcache GO111MODULE=on GOPROXY=file:///home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/modproxy PWD=/home/swarming/.swarming/w/ir/x/t/TestErrorMissingFile_Modules2887077684/fake go list -e -json=Name,ImportPath,Error,Dir,GoFiles,IgnoredGoFiles,IgnoredOtherFiles,CFiles,CgoFiles,CXXFiles,MFiles,HFiles,FFiles,SFiles,SwigFiles,SwigCXXFiles,SysoFiles,CompiledGoFiles,Export,DepOnly,Imports,ImportMap,Module -compiled=true -test=false -export=true -deps=true -find=false -pgo=off -- missing.go
packages_test.go:1669: could not parse GOARCH and Go compiler in format "<GOARCH> <compiler>":
stdout: <<>>
stderr: <<>>
--- FAIL: TestErrorMissingFile/Modules (0.34s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,Tools | low | Critical |
2,779,031,129 | transformers | LlavaNextVideoProcessor -> TypeError: LlavaNextVideoProcessor.__call__() got an unexpected keyword argument 'legacy' (I have the fix) | ### System Info
The problem's root cause is in the `ImageTextToTextPipeline` class in the `image_text_to_text.py` pipeline.
Line `438`
```py
model_inputs = self.processor(
images=images, text=text, return_tensors=self.framework, legacy=False, **processing_kwargs
).to(dtype=self.torch_dtype)
```
Notice how legacy is always specified as False?
If you use this model (`llava-hf/LLaVA-NeXT-Video-7B-32K-hf`) on `transformers==4.47.1` you will get this error because its config specifies the class `LlavaNextVideoProcessor` from `processing_llava_next_video.py`, and its `__call__` method is not expecting that kwarg.
The quick fix is this:
Modify `__call__` (line `101`) in `processing_llava_next_video.py`
from this:
```py
def __call__(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
images: ImageInput = None,
videos: VideoInput = None,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = None,
max_length: int = None,
return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
) -> BatchFeature:
```
to this:
```py
def __call__(
self,
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
images: ImageInput = None,
videos: VideoInput = None,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = None,
max_length: int = None,
return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
**kwargs, # <-- this guy
) -> BatchFeature:
```
Notice the unused kwargs at the end. This reflects the pattern used for `__init__`
which looks like this:
```py
def __init__(
self,
video_processor=None,
image_processor=None,
tokenizer=None,
chat_template=None,
patch_size=None,
vision_feature_select_strategy=None,
video_token="<video>",
image_token="<image>",
num_additional_image_tokens=0,
**kwargs, # <-- this guy
):
```
I ain't got time to step through the PR process, so I hope this helps the HF staff either make this quick patch, or solve the problem at a higher level in the code for `image_text_to_text.py`.
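Until a release carries the fix, a monkey-patch sketch that strips the kwarg at runtime (apply before building the pipeline; this is a stopgap, not the proper fix):
```py
from transformers import LlavaNextVideoProcessor

_orig_call = LlavaNextVideoProcessor.__call__

def _patched_call(self, *args, legacy=None, **kwargs):
    # Drop the `legacy` kwarg that ImageTextToTextPipeline always passes.
    return _orig_call(self, *args, **kwargs)

LlavaNextVideoProcessor.__call__ = _patched_call
```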
### Who can help?
HF staff
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) (`image-text-to-text`)
### Reproduction
```py
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="llava-hf/LLaVA-NeXT-Video-7B-32K-hf")
messages = {'role': 'user', 'content': [{'type': 'text', 'text': "What's in this image?"}, {'type': 'video'}]}
videos = ["https://huggingface.co/datasets/raushan-testing-hf/videos-test/resolve/main/sample_demo_1.mp4"]
out = pipe(text=messages, videos=videos)
```
### Expected behavior
No exception raised due to an unexpected kwarg. | Core: Pipeline,bug,VLM | low | Critical |
2,779,031,201 | godot | Converting visual shader to shader loses subresources | ### Tested versions
- Reproducible in: v4.4.dev.mono.custom_build [61d0ff4b9] / our fork that closely tracks master
### System information
Godot v4.4.dev.mono (61d0ff4b9) - Pop!_OS 22.04 LTS on Wayland - X11 display driver, Multi-window, 3 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (nvidia; 565.77) - AMD Ryzen 7 5800X 8-Core Processor (16 threads)
### Issue description
When converting the visual shader to a shader, the shader breaks for specific visual shaders; in our case, our water shader. We also use the shaderV plugin, which I include in the MRP: https://github.com/arkology/ShaderV
Visual Shader ✅:

Converted Shader ❌

Now the extremely strange thing is the shader code in the visual shader and the converted shader are identical:

This leads me to believe it must be an error in either visual_shader.cpp or shader.cpp when it comes to generating the actual shader code the GPU uses, but this is just speculation. In reality, I am very new to the internals of Godot's shader generation.
I discovered this because I found a bug in the visual shader that causes a crash on exported projects; changing to a shader at export time fixes it. I'll be opening a separate issue and a PR enabling (at least via custom plugins) support for converting visual shaders at export time, as I have found this to have significant benefits for game size, on top of acting as a workaround for an unrelated bug I still need to file.
### Steps to reproduce
1. Open the attached MRP
2. See that there are 2 scenes in the root: normal_shader.tscn and visual_shader.tscn
3. See that the water renders correct in visual_shader.tscn
4. See that the water does not render correctly in normal_shader.tscn
5. For fun, convert the visual_shader to a shader via right clicking on the asset and see that it breaks

### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/18368949/mrp.zip)
| bug,topic:editor,topic:shaders | low | Critical |
2,779,041,274 | tensorflow | Memory Allocation Issues | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0
### Custom code
Yes
### OS platform and distribution
Ubuntu 24.04.1 LTS on WSL2
### Mobile device
_No response_
### Python version
3.12.8
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
90300
### GPU model and memory
RTX 4070 12GB
### Current behavior?
I create a virtual GPU with a hard limit of 10GB. I start training the network and it works for a bit, but then it reports out of memory and tries to allocate more than the set limit. What I expect to happen is that it stays within the 10GB limit and can train the network successfully.
### Standalone code to reproduce the issue
```shell
import numpy as np
import keras
from keras import layers
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
%matplotlib inline
tfds.disable_progress_bar()
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only allocate 10GB of memory on the first GPU
try:
tf.config.set_logical_device_configuration(
gpus[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=10240)])
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
train_ds, validation_ds, test_ds = tfds.load(
"cats_vs_dogs",
# Reserve 10% for validation and 10% for test
split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"],
as_supervised=True, # Include labels
)
resize_fn = keras.layers.Resizing(150, 150)
train_ds = train_ds.map(lambda x, y: (resize_fn(x), y))
validation_ds = validation_ds.map(lambda x, y: (resize_fn(x), y))
test_ds = test_ds.map(lambda x, y: (resize_fn(x), y))
augmentation_layers = [
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
]
def data_augmentation(x):
for layer in augmentation_layers:
x = layer(x)
return x
train_ds = train_ds.map(lambda x, y: (data_augmentation(x), y))
from tensorflow import data as tf_data
batch_size = 16
train_ds = train_ds.batch(batch_size).prefetch(tf_data.AUTOTUNE).cache()
validation_ds = validation_ds.batch(batch_size).prefetch(tf_data.AUTOTUNE).cache()
test_ds = test_ds.batch(batch_size).prefetch(tf_data.AUTOTUNE).cache()
base_model = keras.applications.Xception(
weights="imagenet", # Load weights pre-trained on ImageNet.
input_shape=(150, 150, 3),
include_top=False,
) # Do not include the ImageNet classifier at the top.
# Freeze the base_model
base_model.trainable = False
# Create new model on top
inputs = keras.Input(shape=(150, 150, 3))
# Pre-trained Xception weights requires that input be scaled
# from (0, 255) to a range of (-1., +1.), the rescaling layer
# outputs: `(inputs * scale) + offset`
scale_layer = keras.layers.Rescaling(scale=1 / 127.5, offset=-1)
x = scale_layer(inputs)
# The base model contains batchnorm layers. We want to keep them in inference mode
# when we unfreeze the base model for fine-tuning, so we make sure that the
# base_model is running in inference mode here.
x = base_model(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x) # Regularize with dropout
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.summary(show_trainable=True)
model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()],
)
epochs = 2
print("Fitting the top layer of the model")
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
```
### Relevant log output
```shell
2025-01-09 19:39:57.953074: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1736469597.967544 14431 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1736469597.971752 14431 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-09 19:39:57.986195: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
1 Physical GPUs, 1 Logical GPUs
I0000 00:00:1736469600.052169 14431 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 10240 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4070, pci bus id: 0000:0a:00.0, compute capability: 8.9
Fitting the top layer of the model
Epoch 1/2
2025-01-09 19:40:04.479339: I tensorflow/core/kernels/data/tf_record_dataset_op.cc:376] The default buffer size is 262144, which is overridden by the user specified `buffer_size` of 8388608
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1736469604.583055 14486 service.cc:148] XLA service 0x7f8df8002230 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
I0000 00:00:1736469604.583109 14486 service.cc:156] StreamExecutor device (0): NVIDIA GeForce RTX 4070, Compute Capability 8.9
2025-01-09 19:40:04.722034: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
I0000 00:00:1736469605.234339 14486 cuda_dnn.cc:529] Loaded cuDNN version 90300
7/582 ━━━━━━━━━━━━━━━━━━━━ 12s 22ms/step - binary_accuracy: 0.5658 - loss: 0.6950
I0000 00:00:1736469608.599510 14486 device_compiler.h:188] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
243/582 ━━━━━━━━━━━━━━━━━━━━ 16s 48ms/step - binary_accuracy: 0.8715 - loss: 0.2755
2025-01-09 19:40:20.475756: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 1073741824 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469620.475821 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 1073741824
2025-01-09 19:40:20.615636: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 966367744 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469620.615700 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 966367744
2025-01-09 19:40:20.769033: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 869731072 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469620.769095 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 869731072
2025-01-09 19:40:20.906909: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 782758144 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469620.906973 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 782758144
2025-01-09 19:40:21.048863: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 704482304 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469621.048940 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 704482304
2025-01-09 19:40:21.229614: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 634034176 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469621.229682 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 634034176
2025-01-09 19:40:21.371940: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 570630912 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469621.372000 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 570630912
2025-01-09 19:40:21.510751: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 513568000 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469621.510817 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 513568000
2025-01-09 19:40:21.650945: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 462211328 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469621.651034 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 462211328
2025-01-09 19:40:21.814945: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 415990272 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469621.815035 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 415990272
2025-01-09 19:40:21.954790: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 374391296 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469621.954851 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 374391296
2025-01-09 19:40:22.094150: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 336952320 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469622.094219 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 336952320
2025-01-09 19:40:22.267664: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 303257088 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
...
2025-01-09 19:40:23.128022: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 161164032 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469623.128090 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 161164032
2025-01-09 19:40:23.296856: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1253] failed to alloc 145047808 bytes on host: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
W0000 00:00:1736469623.296940 14537 device_host_allocator.h:61] could not allocate pinned host memory of size: 145047808
Output is truncated.
```
| stat:awaiting response,type:bug,stale,TF 2.18 | low | Critical |
2,779,047,925 | pytorch | quantize_fx.prepare_qat_fx: `get_default_qat_qconfig_mapping` is imported but unused in the docs example | ### 📚 The doc issue
https://pytorch.org/docs/stable/_modules/torch/ao/quantization/quantize_fx.html#prepare_qat_fx
```python
from torch.ao.quantization import get_default_qat_qconfig_mapping
```
In the example, it is imported but never used in the code. Not sure which import is correct.
### Suggest a potential alternative/fix
```python
from torch.ao.quantization import get_default_qconfig
```
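For reference, a minimal sketch of how the QAT mapping import could actually be exercised (not the docs' exact example; the module and shapes here are hypothetical, and `fbgemm` is just one backend choice):
```python
import torch
from torch.ao.quantization import get_default_qat_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_qat_fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(5, 10)

    def forward(self, x):
        return self.linear(x)

model = M().train()  # QAT expects the model in train mode
qconfig_mapping = get_default_qat_qconfig_mapping("fbgemm")  # the import is used here
example_inputs = (torch.randn(1, 5),)
prepared = prepare_qat_fx(model, qconfig_mapping, example_inputs)
```
Either way, the docs example currently imports a symbol the code never calls, so the import and the usage should be made consistent.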
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | module: docs,oncall: quantization | low | Minor |
2,779,064,533 | next.js | TypeScript Plugin Incorrectly Errors on `PromiseLike` Return Types in "use server" Files | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/prod-shape-c4z7k2?file=%2Fapp%2Factions.ts%3A10%2C1
### To Reproduce
Put the following script inside a "use server" file:
```ts
function something(v: any): () => Promise<any> & { __errorType?: Error } {
return {} as any
}
export const actionL = something(async () => {
})
```
Here’s the error I encountered:

> **Note**: It also happens when the function returns `PromiseLike<any>`
### Current vs. Expected behavior
The current Next.js TypeScript plugin prevents me from using `PromiseLike` for values exported from server actions. I expect it to allow exporting functions that return a `PromiseLike` without throwing any errors.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #52-Ubuntu SMP PREEMPT_DYNAMIC Thu Dec 5 13:09:44 UTC 2024
Available memory (MB): 7755
Available CPU cores: 8
Binaries:
Node: 18.20.5
npm: 10.8.2
Yarn: N/A
pnpm: 9.15.2
Relevant Packages:
next: 15.2.0-canary.3 // Latest available version is detected (15.2.0-canary.3).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Linting, TypeScript
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | Linting,TypeScript | low | Critical |
2,779,080,539 | godot | Mesh normals have sudden pops when using non-orthonormal basis | ### Tested versions
Reproducible in 4.3.stable and 4.4.dev7
### System information
Godot v4.4.dev7 - Windows 11 (build 22631) - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 32.0.15.6636) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 threads)
### Issue description
When a mesh has a non-orthonormal basis (e.g. non-uniform scale or a skewed transform), its normals experience sudden "pops" (abrupt changes) at certain thresholds. I'm not sure what these thresholds actually are. For example, while you are adjusting only the Z component of the scale, or while you are skewing the mesh, at some point the normals will suddenly change from one version to another.

The expected behaviour is for the normals to be adjusted smoothly. I imagine there is a bug in the code that "renormalizes" mesh normals.
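For reference (this is not Godot's actual renderer code), a minimal numpy sketch of the standard rule: normals under a non-orthonormal basis are transformed by the inverse transpose of the 3x3 basis and then renormalized. Both steps vary continuously with the basis, so smooth scaling or skewing should never produce discontinuous normals:
```python
import numpy as np

def normal_matrix(basis: np.ndarray) -> np.ndarray:
    # normals transform by the inverse transpose of the 3x3 linear part
    return np.linalg.inv(basis).T

basis = np.diag([1.0, 1.0, 0.5])      # non-uniform scale on Z only
n = np.array([0.0, 0.7071, 0.7071])   # a unit normal

good = normal_matrix(basis) @ n
good /= np.linalg.norm(good)          # ~[0.000, 0.447, 0.894]

bad = basis @ n                       # transforming a normal like a direction
bad /= np.linalg.norm(bad)            # ~[0.000, 0.894, 0.447]

print(good, bad)                      # visibly different shading results
```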
### Steps to reproduce
- Open the attached project/scene.
- Select the `mesh` node.
- Slide its `xy` value in the inspector until the normals "pop".
- Note that this is also possible to reproduce if the node's `Rotation Edit Mode` is set to `Euler`, by scaling it along the Z axis only. But the popping effect in that case is a little more subtle.
### Minimal reproduction project (MRP)
[bug-report.zip](https://github.com/user-attachments/files/18369321/bug-report.zip)
| bug,topic:rendering,confirmed,topic:3d | low | Critical |
2,779,081,450 | excalidraw | Bug: Png Rasterization of arrow cropping issue | # What
Multi-point curved lines can cause issues where the edge of the curve is cropped out of a copied PNG.
# Steps to recreate
Paste the following into excalidraw
```json
{"type":"excalidraw/clipboard","elements":[{"id":"Q3bdXGLR926979Hx2xWGH","type":"diamond","x":4440,"y":36620,"width":400,"height":230,"angle":0,"strokeColor":"#1e1e1e","backgroundColor":"transparent","fillStyle":"hachure","strokeWidth":4,"strokeStyle":"solid","roughness":1,"opacity":100,"groupIds":[],"frameId":null,"index":"bMj","roundness":{"type":2},"seed":845291952,"version":377,"versionNonce":1321108400,"isDeleted":false,"boundElements":[{"id":"fskGWWP84zFZyVTRK_Iak","type":"text"},{"id":"KOskzei7KeGXVVonpFGaD","type":"arrow"}],"updated":1736472330181,"link":null,"locked":false},{"id":"fskGWWP84zFZyVTRK_Iak","type":"text","x":4606.276119232178,"y":36717.5,"width":67.44776153564453,"height":35,"angle":0,"strokeColor":"#1e1e1e","backgroundColor":"transparent","fillStyle":"hachure","strokeWidth":4,"strokeStyle":"solid","roughness":1,"opacity":100,"groupIds":[],"frameId":null,"index":"bMk","roundness":null,"seed":1058171824,"version":278,"versionNonce":1117195184,"isDeleted":false,"boundElements":[],"updated":1736472338512,"link":null,"locked":false,"text":"Test","fontSize":28,"fontFamily":5,"textAlign":"center","verticalAlign":"middle","containerId":"Q3bdXGLR926979Hx2xWGH","originalText":"Test","autoResize":true,"lineHeight":1.25},{"id":"9_u4yiULNh8omBdeFsdfc","type":"rectangle","x":3540,"y":37280,"width":220.00000000000003,"height":120.00000000000001,"angle":0,"strokeColor":"#1e1e1e","backgroundColor":"transparent","fillStyle":"hachure","strokeWidth":4,"strokeStyle":"solid","roughness":1,"opacity":100,"groupIds":[],"frameId":null,"index":"bMr","roundness":{"type":3},"seed":702640560,"version":241,"versionNonce":931055024,"isDeleted":false,"boundElements":[{"id":"9OkuYG4z98TOxhAMifAKx","type":"text"},{"id":"KOskzei7KeGXVVonpFGaD","type":"arrow"}],"updated":1736472335994,"link":null,"locked":false},{"id":"9OkuYG4z98TOxhAMifAKx","type":"text","x":3600.8880615234375,"y":37322.5,"width":98.223876953125,"height":35,"angle":0,"strokeColor":"#1e1e1e","backgroundColor":"transparent","fillStyle":"hachure","strokeWidth":4,"strokeStyle":"solid","roughness":1,"opacity":100,"groupIds":[],"frameId":null,"index":"bMs","roundness":null,"seed":707247024,"version":132,"versionNonce":375743920,"isDeleted":false,"boundElements":[],"updated":1736472341515,"link":null,"locked":false,"text":"Test 2","fontSize":28,"fontFamily":5,"textAlign":"center","verticalAlign":"middle","containerId":"9_u4yiULNh8omBdeFsdfc","originalText":"Test 
2","autoResize":true,"lineHeight":1.25},{"id":"KOskzei7KeGXVVonpFGaD","type":"arrow","x":4840.000000000002,"y":36740,"width":1240.2000000000025,"height":539,"angle":0,"strokeColor":"#1e1e1e","backgroundColor":"transparent","fillStyle":"hachure","strokeWidth":4,"strokeStyle":"solid","roughness":1,"opacity":100,"groupIds":[],"frameId":null,"index":"bN4","roundness":{"type":2},"seed":1408349104,"version":248,"versionNonce":550774608,"isDeleted":false,"boundElements":[{"type":"text","id":"vLZfmQtWY-MVjvSLEoP1T"}],"updated":1736472344624,"link":null,"locked":false,"points":[[0,0],[59.99999999999818,460],[-1040.0000000000018,440],[-1180.2000000000044,539]],"lastCommittedPoint":null,"startBinding":{"elementId":"Q3bdXGLR926979Hx2xWGH","focus":-0.9967391304347918,"gap":4.334531515288575,"fixedPoint":null},"endBinding":{"elementId":"9_u4yiULNh8omBdeFsdfc","focus":-0.39280903533314404,"gap":1,"fixedPoint":null},"startArrowhead":null,"endArrowhead":"arrow","elbowed":false},{"id":"vLZfmQtWY-MVjvSLEoP1T","type":"text","x":4301.505852517401,"y":37192.27202562778,"width":61.17910385131836,"height":35,"angle":0,"strokeColor":"#1e1e1e","backgroundColor":"transparent","fillStyle":"hachure","strokeWidth":4,"strokeStyle":"solid","roughness":1,"opacity":100,"groupIds":[],"frameId":null,"index":"bN5","roundness":null,"seed":1826602416,"version":15,"versionNonce":1533632944,"isDeleted":false,"boundElements":[],"updated":1736472343758,"link":null,"locked":false,"text":"test","fontSize":28,"fontFamily":5,"textAlign":"center","verticalAlign":"middle","containerId":"KOskzei7KeGXVVonpFGaD","originalText":"test","autoResize":true,"lineHeight":1.25}],"files":{}}
```
Select all elements and select "Copy as png"
See rendered result:

# Browser details:
User Agent: `Mozilla/5.0 (X11; Linux x86_64; rv:133.0) Gecko/20100101 Firefox/133.0`
Os: `Ubuntu 24.04`
In-browser Excalidraw with no extensions enabled. | bug | low | Critical |
2,779,141,605 | flutter | boundary.toImage().toByteData() handles transparency differently on physical iphone devices with 3.27.1 | ### Steps to reproduce
Paint on a canvas wrapped in a RepaintBoundary but leave some area unpainted (transparent).
Call `boundary.toImage().toByteData(format: ui.ImageByteFormat.png)` and then save the resulting image as a PNG file.
On Flutter 3.24.x and prior, the unpainted pixel RGBA values are 0,0,0,0 (black transparent) on all devices. I have tested the Android emulator, Android devices, web, the iOS simulator, and iOS devices.
On Flutter 3.27.1, physical iOS devices produce unpainted pixel values of 255,255,255,0 (white transparent). This happens *only* on physical iOS devices; iOS simulators, Android, and web all still produce 0,0,0,0.
Why does the color of a transparent pixel matter? If you call `canvas.drawImage(img, Offset(0,0), paint)`, the paint gets applied to all non-zero pixels. It also affects anything else that inspects the pixel values beyond the alpha channel.
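To illustrate the point, a small Pillow sketch (not Flutter code): fully transparent black and white composite identically, but they diverge as soon as raw channel values are read:
```python
from PIL import Image  # pip install pillow

black_t = Image.new("RGBA", (1, 1), (0, 0, 0, 0))        # black, fully transparent
white_t = Image.new("RGBA", (1, 1), (255, 255, 255, 0))  # white, fully transparent

# Plain alpha compositing cannot tell them apart...
base = Image.new("RGBA", (1, 1), (0, 0, 255, 255))
print(Image.alpha_composite(base, black_t).getpixel((0, 0)))  # (0, 0, 255, 255)
print(Image.alpha_composite(base, white_t).getpixel((0, 0)))  # (0, 0, 255, 255)

# ...but anything reading raw channel values sees different data:
print(black_t.getpixel((0, 0)))  # (0, 0, 0, 0)
print(white_t.getpixel((0, 0)))  # (255, 255, 255, 0)
```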
### Expected results
Run the code below and press the 'toImage' button in the lower right to run the test.
The output should be:

The top is the canvas, the bottom are saved versions of the output. The last one shows the image with the alpha channel removed.
The expected value of the unpainted area around the red circle is black (historically and on all devices except physical iphones)
### Actual results
On physical iPhone devices (not the simulator) with Flutter 3.27.1, the unpainted pixels are 255,255,255,0.
This results in the output below.

The image quality is also terrible on a physical iPhone device, in addition to the pixel values being different.
This is not the case on an iPhone simulator of the same device type:

Note that the printed pixel values are correct/expected, because they are pulled from `toImage` (rawRGBA). So I suspect this issue is specific to the PNG encoding process on iPhone.
Edit: I also tried some 'hacks' like calling `drawColor(0x00000000, BlendMode.src)` on the canvas, and that did not fix it on the iPhone.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:typed_data';
import 'dart:ui' as ui;
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
import 'package:image/image.dart' as imglib;
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'PixelTest',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'RepaintBoundary Pixel Test'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
static final GlobalKey repaintBoundaryKey = GlobalKey();
String _hex = "";
int _red = -1;
int _green = -1;
int _blue = -1;
int _alpha = -1;
ui.Image? _image;
Uint8List? _png1;
Uint8List? _png2;
Uint8List? _png3;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(widget.title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Container(
decoration: BoxDecoration(border: Border.all(color: Colors.blueAccent)),
child: RepaintBoundary(
key: repaintBoundaryKey,
child: ClipRect(
child: CustomPaint(
size: Size(100, 100),
painter: MyCustomPainter(),
),
),
),
),
SizedBox(height: 20),
Text('Background Color:'),
Text('Red: $_red Green: $_green Blue: $_blue Alpha: $_alpha'),
Text('Hex: $_hex'),
SizedBox(height: 20),
_image != null
? Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text("Raw ui.Image "),
Container(
decoration: BoxDecoration(border: Border.all(color: Colors.blueAccent), color: Colors.grey),
child: RawImage(image: _image),
),
],
)
: Container(),
SizedBox(height: 20),
_png1 != null
? Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text("toByteData(format: png) "),
Container(
decoration: BoxDecoration(border: Border.all(color: Colors.blueAccent), color: Colors.grey),
child: Image.memory(_png1!),
),
],
)
: Container(),
SizedBox(height: 20),
_png2 != null
? Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text("Reencoded "),
Container(
decoration: BoxDecoration(border: Border.all(color: Colors.blueAccent), color: Colors.grey),
child: Image.memory(_png2!),
),
],
)
: Container(),
SizedBox(height: 20),
_png3 != null
? Row(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text("No Alpha "),
Container(
decoration: BoxDecoration(border: Border.all(color: Colors.blueAccent), color: Colors.grey),
child: Image.memory(_png3!),
),
],
)
: Container(),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: () async {
RenderRepaintBoundary boundary = repaintBoundaryKey.currentContext!.findRenderObject() as RenderRepaintBoundary;
ui.Image image = await boundary.toImage();
ByteData? byteData = await image.toByteData(format: ui.ImageByteFormat.rawRgba);
if (byteData == null) {
return;
}
var uintlist = byteData!.buffer.asUint32List();
if (uintlist.isEmpty) {
return;
}
// png1 - just raw output of toByteData()
ByteData? pngData = await image.toByteData(format: ui.ImageByteFormat.png);
_png1 = pngData!.buffer.asUint8List();
// png2 - convert to image.Image and back
imglib.Image? img = imglib.decodePng(_png1!);
_png2 = imglib.encodePng(img!);
// png3 - remove alpha channel
img = img.convert(format: imglib.Format.uint8, numChannels: 3);
img = img.convert(format: imglib.Format.uint8, numChannels: 4, alpha: img.maxChannelValue);
_png3 = imglib.encodePng(img);
setState(() {
_image = image;
_hex = "0x${uintlist[0].toRadixString(16).padLeft(8, '0')}";
_red = (uintlist[0] & 0x000000ff);
_green = (uintlist[0] & 0x0000ff00) >> 8;
_blue = (uintlist[0] & 0x00ff0000) >> 16;
_alpha = (uintlist[0] & 0xff000000) >> 24;
});
},
child: const Text("toImage"),
),
);
}
}
// paints nothing
class MyCustomPainter extends CustomPainter {
MyCustomPainter();
@override
void paint(Canvas canvas, Size size) async {
// canvas.drawColor(const Color(0xff000000), BlendMode.src); // black
// canvas.drawColor(const Color(0xffff0000), BlendMode.src); // red
// canvas.drawColor(const Color(0xff00ff00), BlendMode.src); // green
// canvas.drawColor(const Color(0xff0000ff), BlendMode.src); // blue
// canvas.drawColor(const Color(0x00ffffff), BlendMode.src); // white transparent
// canvas.drawColor(const Color(0x00000000), BlendMode.src); // black transparent
Paint p = Paint()..color = Colors.red;
canvas.drawCircle(Offset(50, 50), 25.0, p);
}
@override
bool shouldRepaint(MyCustomPainter oldDelegate) => true;
}
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.27.1, on Debian GNU/Linux 12 (bookworm) 6.1.0-28-amd64, locale en_US.UTF-8)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0-rc2)
[✓] Chrome - develop for the web
[✓] Linux toolchain - develop for Linux desktop
[✓] Android Studio (version 2023.2)
[✓] Connected device (3 available)
[✓] Network resources
• No issues found!
```
</details>
| platform-ios,engine,dependency: skia,has reproducible steps,P2,team-engine,triaged-engine,found in release: 3.27,found in release: 3.28 | low | Major |
2,779,151,277 | ant-design | The Tabs type design, tab layout, and edit configuration could be improved | ## What problem does this feature solve?
### Unify the layout across both types, and keep the first and last tabs from hugging the left and right edges, which looks more elegant
This is the layout when type is card:

This is the layout when type is line. Every tab is a sibling, so shouldn't their styles be the same?

This is the style difference between the two:

This is the Tabs layout in arco:

### Why does type offer line, card, and editable-card, but no editable-line?
type should only have two values, line and card, describing the visual style of the tabs. Whether the tabs can be edited is a feature of the component and should not be tied to type. Moreover, editable-card has a card flavor baked into it, which seems unreasonable for mutually exclusive options. This kind of style control should work like the `optionType` prop on Radio.
### When type is editable-card, the related props all live at the same level
For example `addIcon`, `hideAdd`, `removeIcon`, and so on. Could these props be grouped together?
### Since tabs are editable, could editing the title be supported too?
Something like this: when the mouse hovers over a tab, an icon for editing the title is shown

## What does the proposed API look like?
Rename type to tabType:
```
"tabType“: 'card' | 'line'
```
Use an editable prop (or some other name) that groups the edit-related props; setting it marks the Tabs as editable:
```
editable?: {
"addIcon"?: ReactNode,
"hideAdd"?: boolean,
  // ...plus the other props currently associated with editable-card
  // callback fired after a tab title is edited
  "onTitleChange"?: (targetKey: string, newTitle: string) => void
}
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,779,155,814 | PowerToys | Shortcut remapping for Keyboard Manager is not working | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update, Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Win + Shift + S

### ✔️ Expected Behavior
No action
### ❌ Actual Behavior
Windows screenshot shortcut was active
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,779,166,455 | godot | Animating 'frame' Property of AnimatedSprite2D Without 'animation' Track Causes Errors in Console | ### Tested versions
- Reproducible in: 4.3.stable.mono, 4.0.stable, 3.5.stable, 3.2.4.beta4
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - GLES3 (Compatibility) - AMD Radeon RX 580 2048SP (Advanced Micro Devices, Inc.; 31.0.21921.1000) - Genuine Intel(R) CPU 0000 @ 2.60GHz (16 Threads)
### Issue description
When attempting to animate only the **frame** property of an **AnimatedSprite2D** using an **AnimationPlayer** (without including a track for the **animation** property), the following errors are repeatedly logged in the console:
```
scene/resources/animation.cpp:1496 - Index p_track = -1 is out of bounds (tracks.size() = 1).
scene/resources/animation.cpp:1850 - Index p_track = -1 is out of bounds (tracks.size() = 1).
Animation '<null>' doesn't exist
```
**Additional Notes**
I suspect the issue might be related to the way frame previews are handled in the frame property track. If this is the case, the AnimationPlayer could be trying to access invalid data when the animation property track is missing.
### Steps to reproduce
1. Add an **AnimatedSprite2D** node to the scene and create a **SpriteFrames** resource for it.
2. Add at least two animations to the **SpriteFrames**, each containing at least one frame (you can use the default **logo.svg** for both).
3. Add an **AnimationPlayer** node as a child of the **AnimatedSprite2D**.
4. In the **AnimationPlayer**, create a new animation and add a track for the **frame** property of the **AnimatedSprite2D**.
5. Add a keyframe to the **frame** property track.
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/18370060/MRP.zip)
| topic:animation | low | Critical |
2,779,193,519 | rust | Inconsistent vtable layout with projections in supertrait bounds, making upcasting unsound | ```rust
#![feature(trait_upcasting)]
trait Supertrait<T> {
fn _print_numbers(&self, mem: &[usize; 100]) {
println!("{mem:?}");
}
}
impl<T> Supertrait<T> for () {}
trait Identity {
type Selff;
}
impl<Selff> Identity for Selff {
type Selff = Selff;
}
trait Middle<T>: Supertrait<()> + Supertrait<T> {
fn say_hello(&self, _: &usize) {
println!("Hello!");
}
}
impl<T> Middle<T> for () {}
trait Trait: Middle<<() as Identity>::Selff> {}
impl Trait for () {}
fn main() {
(&() as &dyn Trait as &dyn Middle<()>).say_hello(&0);
}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=943f2b5193470b431dcfe96301b09238))
example output (shortened):
```
[0, 2338324182462507040, 7738151096083899748, 438881233243824, 439624262586284, 439667212259195, 439765996507179…
```
So apparently we're actually calling `_print_numbers`, because the vtables of `dyn Trait` and `dyn Middle<()>` aren't compatible.
@rustbot label F-trait_upcasting, I-unsound, T-compiler, requires-nightly, A-trait-objects, A-coercions | T-compiler,I-unsound,C-bug,A-coercions,requires-nightly,F-trait_upcasting,T-types,A-trait-objects | low | Critical |
2,779,202,760 | rust | Inconsistent vtable layout with HRTBs | ```rust
trait Supertrait<T> {
fn _print_numbers(&self, mem: &[usize; 100]) {
println!("{mem:?}");
}
}
impl<T> Supertrait<T> for () {}
trait Trait<T, U>: Supertrait<T> + Supertrait<U> {
fn say_hello(&self, _: &usize) {
println!("Hello!");
}
}
impl<T, U> Trait<T, U> for () {}
fn main() {
(&() as &'static dyn for<'a> Trait<&'static (), &'a ()>
as &'static dyn Trait<&'static (), &'static ()>)
.say_hello(&0);
}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=362a1cb4d4a24d38775ac9e5ba21d3c7))
example output (shortened):
```
[0, 2338324182462507040, 7738151096083899748, 438881233243824, 439624262586284, 439667212259195, 439765996507179…
```
So apparently we're actually calling `_print_numbers`, because the vtables of `dyn for<'a> Trait<&'static (), &'a ()>` and `dyn Trait<&'static (), &'static ()>` aren't compatible.
@rustbot label I-unsound, T-compiler, A-trait-objects, A-coercions, A-higher-ranked
---
This code example is technically a regression, starting to misbehave somewhere between `1.55` and `1.56` (haven't further bisected yet…) | P-high,T-compiler,I-unsound,C-bug,A-coercions,T-types,A-trait-objects,A-higher-ranked | low | Critical |
2,779,240,978 | tauri | [bug] Self Signed Certificate Hosting https `devUrl` | ### Describe the bug
We are building a password manager of sorts and have the requirement of using (subtle) WebCrypto in our app. My [understanding](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto) is that the only way the browser is able to access WebCrypto is by being in a secure context (i.e. HTTPS).
I can run my dev server locally with a self-signed certificate (from `mkcert`) and it works with no issues across all browsers. When I configure the `devUrl` in Tauri to use `https:`, I receive an error along the lines of:

If I remove the ssl from both my dev server and tauri `devUrl` it works across all browsers an emulators.
**Does tauri support `https:` `devUrl`'s**? If not, is there an other way to enable the secure context so that we can leverage (subtle) WebCrypto?
### Reproduction
_No response_
### Expected behavior
I should be able to specify an `https:` `devUrl` and it should work.
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.1.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.11.0
- yarn: 4.5.3
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.2.2
[-] Plugins
- tauri-plugin-log 🦀: 2.0.2
- @tauri-apps/plugin-log : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../out
- devUrl: https://localhost:5030/
- framework: React (Next.js)
- bundler: Webpack
[-] iOS
- Developer Teams: None
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,779,254,168 | ollama | ollama not working | ### What is the issue?
I installed Ollama on Ubuntu via the curl command, but it is not working when I use `ollama`. So I checked the Ollama version: `ollama -v` prints `ollama version is 0.0.0 Warning: client version is 0.5.0`.
### OS
Linux
### GPU
_No response_
### CPU
AMD
### Ollama version
_No response_ | bug,needs more info | low | Minor |
2,779,260,369 | pytorch | [Pipelining] PP+DDP does not work for Zero Bubble | Zero bubble's backward implementation bypasses torch.autograd.backward() in favor of calling torch.autograd.grad() directly, which skips the hooks DDP relies on for gradient reduction.
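A minimal sketch of the mechanism (assuming PyTorch >= 2.1 for `register_post_accumulate_grad_hook`; DDP's real reducer hooks are registered differently, but the effect is the same):
```python
import torch

p = torch.randn(3, requires_grad=True)
fired = []
# stand-in for the reducer hook DDP attaches per parameter
p.register_post_accumulate_grad_hook(lambda t: fired.append("reduce"))

loss = (p * 2).sum()
torch.autograd.backward(loss)  # accumulates into p.grad -> hook fires
print(fired, p.grad)           # ['reduce'] tensor([2., 2., 2.])

fired.clear()
p.grad = None
(g,) = torch.autograd.grad((p * 2).sum(), (p,))  # grads returned, nothing accumulated
print(fired, p.grad)           # [] None -> a DDP-style reducer would never run
```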
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | oncall: distributed,triaged,bug,module: pipelining | low | Minor |
2,779,263,010 | flutter | Update engine README and PR template to explain the monorepo | It's not clear from looking at https://github.com/flutter/engine that I shouldn't be making new PRs there, and instead should be making changes in https://github.com/flutter/flutter/tree/master/engine.
Update the engine README, [PR template](https://github.com/flutter/engine/blob/main/.github/PULL_REQUEST_TEMPLATE.md), and other engine repo documentation to explain that the repo is archived (even though it can't literally be GitHub-archived yet). | engine,P3,team-engine,triaged-engine,d: docs/,monorepo | low | Minor |
2,779,294,696 | flutter | ANR Pre-launch report issue in Flutter, only on Android 15 | <img width="1182" alt="image" src="https://github.com/user-attachments/assets/22049ed9-4d74-40a1-a54f-d30e3827650b" />
```console
"main" tid=1 Native
#00 pc 0x00000000000b26f7 /apex/com.android.runtime/lib64/bionic/libc.so (read+7)
#01 pc 0x0000000000011798 /vendor/lib64/libOpenglSystemCommon.so (QemuPipeStream::commitBufferAndReadFully(unsigned long, void*, unsigned long)+232)
#02 pc 0x000000000015a16a /vendor/lib64/libvulkan_enc.so (goldfish_vk::VulkanStreamGuest::read(void*, unsigned long)+74)
#03 pc 0x000000000019f939 /vendor/lib64/libvulkan_enc.so (goldfish_vk::VkEncoder::vkCreateImageView(VkDevice_T*, VkImageViewCreateInfo const*, VkAllocationCallbacks const*, VkImageView_T**, unsigned int)+553)
#04 pc 0x00000000001715ca /vendor/lib64/libvulkan_enc.so (goldfish_vk::ResourceTracker::on_vkCreateImageView(void*, VkResult, VkDevice_T*, VkImageViewCreateInfo const*, VkAllocationCallbacks const*, VkImageView_T**)+234)
#05 pc 0x000000000024c8a7 /vendor/lib64/libvulkan_enc.so (goldfish_vk::entry_vkCreateImageView(VkDevice_T*, VkImageViewCreateInfo const*, VkAllocationCallbacks const*, VkImageView_T**)+103)
#06 pc 0x0000000000881674 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#07 pc 0x0000000000880bed /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#08 pc 0x0000000000533fbe /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#09 pc 0x0000000000896d76 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#10 pc 0x000000000084d381 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#11 pc 0x00000000008b8cf3 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#12 pc 0x0000000000491027 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#13 pc 0x0000000000498ff6 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#14 pc 0x00000000004994cd /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#15 pc 0x000000000048f1f6 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#16 pc 0x00000000008bea2f /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#17 pc 0x00000000008bdb06 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#18 pc 0x000000000049b4a7 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/split_config.x86_64.apk!libflutter.so
#19 pc 0x00000000003a03ab /apex/com.android.art/lib64/libart.so (art_quick_generic_jni_trampoline+219)
#20 pc 0x000000000038c945 /apex/com.android.art/lib64/libart.so (nterp_helper+3837)
#21 pc 0x00000000003b5288 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/base.apk (io.flutter.embedding.engine.FlutterJNI.performNativeAttach+30826496)
#22 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#23 pc 0x00000000003b539e /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/base.apk (io.flutter.embedding.engine.FlutterJNI.attachToNative+30)
#24 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#25 pc 0x00000000003b4d52 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/base.apk (io.flutter.embedding.engine.FlutterEngine.attachToJni+18)
#26 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#27 pc 0x00000000003b4bde /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/base.apk (io.flutter.embedding.engine.FlutterEngine.<init>+458)
#28 pc 0x000000000038d096 /apex/com.android.art/lib64/libart.so (nterp_helper+5710)
#29 pc 0x00000000003b45ee /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/base.apk (io.flutter.embedding.engine.FlutterEngineGroup.createEngine+22)
#30 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#31 pc 0x00000000003b4564 /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/base.apk (io.flutter.embedding.engine.FlutterEngineGroup.createAndRunEngine+108)
#32 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#33 pc 0x00000000003a6aac /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/base.apk (io.flutter.embedding.android.FlutterActivityAndFragmentDelegate.setUpFlutterEngine+396)
#34 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#35 pc 0x00000000003a617a /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/base.apk (io.flutter.embedding.android.FlutterActivityAndFragmentDelegate.onAttach+14)
#36 pc 0x0000000000395094 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+756)
#37 pc 0x000000000041da7a /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+218)
#38 pc 0x00000000005a346c /apex/com.android.art/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+428)
#39 pc 0x000000000059ddb7 /apex/com.android.art/lib64/libart.so (bool art::interpreter::DoCall<false, true>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+2023)
#40 pc 0x000000000096e8f8 /apex/com.android.art/lib64/libart.so (MterpInvokeVirtual+2936)
#41 pc 0x000000000037e799 /apex/com.android.art/lib64/libart.so (mterp_op_invoke_virtual+25)
#42 pc 0x00000000003a75da /data/app/~~V8oC50YRgLJcpP6QsIi90A==/com.geniai.family.friends.tracker.location-wl0UqGAA0N-aTIuxQx6UIA==/base.apk (io.flutter.embedding.android.FlutterActivity.onCreate+26)
#43 pc 0x0000000000594c52 /apex/com.android.art/lib64/libart.so (art::interpreter::Execute(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame&, art::JValue, bool, bool)+306)
#44 pc 0x0000000000959a2f /apex/com.android.art/lib64/libart.so (artQuickToInterpreterBridge+1007)
#45 pc 0x00000000003a053c /apex/com.android.art/lib64/libart.so (art_quick_to_interpreter_bridge+140)
#46 pc 0x000000000038c945 /apex/com.android.art/lib64/libart.so (nterp_helper+3837)
#47 pc 0x00000000001d0672 /system/framework/framework.jar (android.app.Activity.performCreate+158)
#48 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#49 pc 0x00000000001d05ba /system/framework/framework.jar (android.app.Activity.performCreate+2)
#50 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#51 pc 0x000000000023f49a /system/framework/framework.jar (android.app.Instrumentation.callActivityOnCreate+6)
#52 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#53 pc 0x00000000001bf608 /system/framework/framework.jar (android.app.ActivityThread.performLaunchActivity+892)
#54 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#55 pc 0x00000000001bf212 /system/framework/framework.jar (android.app.ActivityThread.handleLaunchActivity+126)
#56 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#57 pc 0x00000000002c91d8 /system/framework/framework.jar (android.app.servertransaction.LaunchActivityItem.execute+24)
#58 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#59 pc 0x00000000002cb72e /system/framework/framework.jar (android.app.servertransaction.TransactionExecutor.executeCallbacks+154)
#60 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#61 pc 0x00000000002cb66a /system/framework/framework.jar (android.app.servertransaction.TransactionExecutor.execute+146)
#62 pc 0x000000000038c8e0 /apex/com.android.art/lib64/libart.so (nterp_helper+3736)
#63 pc 0x00000000001be522 /system/framework/framework.jar (android.app.ActivityThread$H.handleMessage+254)
#64 pc 0x000000000207d1a7 /memfd:jit-zygote-cache (android.os.Handler.dispatchMessage+183)
#65 pc 0x000000000038c945 /apex/com.android.art/lib64/libart.so (nterp_helper+3837)
#66 pc 0x00000000004595f6 /system/framework/framework.jar (android.os.Looper.loopOnce+334)
#67 pc 0x0000000002099866 /memfd:jit-zygote-cache (android.os.Looper.loop+550)
#68 pc 0x000000000038baed /apex/com.android.art/lib64/libart.so (nterp_helper+165)
#69 pc 0x00000000001c8a1e /system/framework/framework.jar (android.app.ActivityThread.main+202)
#70 pc 0x00000000003953f6 /apex/com.android.art/lib64/libart.so (art_quick_invoke_static_stub+806)
#71 pc 0x000000000041da89 /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+233)
#72 pc 0x00000000008194c2 /apex/com.android.art/lib64/libart.so (_jobject* art::InvokeMethod<(art::PointerSize)8>(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jobject*, _jobject*, unsigned long)+1442)
#73 pc 0x0000000000772698 /apex/com.android.art/lib64/libart.so (art::Method_invoke(_JNIEnv*, _jobject*, _jobject*, _jobjectArray*)+56)
at io.flutter.embedding.engine.FlutterJNI.nativeAttach (Native method)
at io.flutter.embedding.engine.FlutterJNI.performNativeAttach (FlutterJNI.java:432)
at io.flutter.embedding.engine.FlutterJNI.attachToNative (FlutterJNI.java:424)
at io.flutter.embedding.engine.FlutterEngine.attachToJni (FlutterEngine.java:399)
at io.flutter.embedding.engine.FlutterEngine.<init> (FlutterEngine.java:369)
at io.flutter.embedding.engine.FlutterEngineGroup.createEngine (FlutterEngineGroup.java:206)
at io.flutter.embedding.engine.FlutterEngineGroup.createAndRunEngine (FlutterEngineGroup.java:158)
at io.flutter.embedding.android.FlutterActivityAndFragmentDelegate.setUpFlutterEngine (FlutterActivityAndFragmentDelegate.java:332)
at io.flutter.embedding.android.FlutterActivityAndFragmentDelegate.onAttach (FlutterActivityAndFragmentDelegate.java:194)
at io.flutter.embedding.android.FlutterActivity.onCreate (FlutterActivity.java:638)
at android.app.Activity.performCreate (Activity.java:8074)
at android.app.Activity.performCreate (Activity.java:8054)
at android.app.Instrumentation.callActivityOnCreate (Instrumentation.java:1341)
at android.app.ActivityThread.performLaunchActivity (ActivityThread.java:3688)
at android.app.ActivityThread.handleLaunchActivity (ActivityThread.java:3864)
at android.app.servertransaction.LaunchActivityItem.execute (LaunchActivityItem.java:103)
at android.app.servertransaction.TransactionExecutor.executeCallbacks (TransactionExecutor.java:135)
at android.app.servertransaction.TransactionExecutor.execute (TransactionExecutor.java:95)
at android.app.ActivityThread$H.handleMessage (ActivityThread.java:2253)
at android.os.Handler.dispatchMessage (Handler.java:106)
at android.os.Looper.loopOnce (Looper.java:201)
at android.os.Looper.loop (Looper.java:288)
at android.app.ActivityThread.main (ActivityThread.java:7870)
at java.lang.reflect.Method.invoke (Native method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run (RuntimeInit.java:548)
at com.android.internal.os.ZygoteInit.main (ZygoteInit.java:1003)
```
| waiting for customer response,in triage | low | Minor |
2,779,328,143 | tauri | [help] Page not found handling | ### Describe the bug
I am using Next.js as my React framework. I have a `not-found.tsx` in the root of my project, and it works without issue during development.
When I build / bundle / distribute my application, how will not-found handling work? (i.e. how are missing paths handled outside of a Next.js web server?)
I know this is really straightforward to test, but for some reason my built `apk` is not working. It attempts to load `http://localhost:5030`. That address is valid in dev; I just thought it would default to `http://localhost:8080`. So I need to figure that out, but thought I would ask here in the meantime.
### Reproduction
_No response_
### Expected behavior
**Expected behavior on a missing path:**
Not sure
**Expected behavior when loading the built / bundled assets:**
Should load without issue and use localhost:8080
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.1.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.11.0
- yarn: 4.5.3
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.2.2
[-] Plugins
- tauri-plugin-log 🦀: 2.0.2
- @tauri-apps/plugin-log : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../out
- devUrl: http://localhost:5030/
- framework: React (Next.js)
- bundler: Webpack
[-] iOS
- Developer Teams: None
```
### Stack trace
_No response_
### Additional context
_No response_ | type: question | low | Critical |
2,779,335,001 | material-ui | useMediaQuery causes error in Firefox browser extension content script | ### Steps to reproduce
Steps:
1. Download this this minimal reproduction browser extension https://github.com/Mr-Quin/mui-firefox-debug/releases/tag/1.0.0
2. Load it into Firefox and Chrome
3. Go to https://www.google.com/ and compare the behavior between the browser
### Current behavior
The example extension calls `useMediaQuery("(prefers-color-scheme: dark)")` and renders "Prefer dark: Yes/No" on the page
Behavior in Chrome:

In Firefox, the content script crashes with error
```
Uncaught TypeError: 'matchMedia' called on an object that does not implement interface Window.
<anonymous> moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:3134
useMemo moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:9316
useMemo moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:584
useMediaQueryNew moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:3129
useMediaQuery2 moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:3168
App moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:16771
renderWithHooks moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:8411
updateFunctionComponent moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:9908
beginWork moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:10563
performUnitOfWork moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:13628
workLoopSync moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:13527
renderRootSync moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:13511
performWorkOnRoot moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:13213
performWorkOnRootViaSchedulerTask moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:14053
performWorkUntilDeadline moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js:5668
[index.tsx-CQJLccjk.js:3134:39](moz-extension://614e6a10-4d1a-4c77-93fa-2e3a5dd1fd2e/assets/index.tsx-CQJLccjk.js)
```

### Expected behavior
Using useMediaQuery should not crash the script in Firefox
### Context
Offending line
```js
function useMediaQueryNew(query, defaultMatches, matchMedia2, ssrMatchMedia, noSsr) {
const getDefaultSnapshot = reactExports.useCallback(() => defaultMatches, [defaultMatches]);
const getServerSnapshot = reactExports.useMemo(() => {
if (noSsr && matchMedia2) {
return () => matchMedia2(query).matches;
}
if (ssrMatchMedia !== null) {
const {
matches
} = ssrMatchMedia(query);
return () => matches;
}
return getDefaultSnapshot;
}, [getDefaultSnapshot, query, ssrMatchMedia, noSsr, matchMedia2]);
const [getSnapshot, subscribe] = reactExports.useMemo(() => {
if (matchMedia2 === null) {
return [getDefaultSnapshot, () => () => {
}];
}
const mediaQueryList = matchMedia2(query); // <----------------- this one
return [() => mediaQueryList.matches, (notify) => {
mediaQueryList.addEventListener("change", notify);
return () => {
mediaQueryList.removeEventListener("change", notify);
};
}];
}, [getDefaultSnapshot, matchMedia2, query]);
const match2 = maybeReactUseSyncExternalStore(subscribe, getSnapshot, getServerSnapshot);
return match2;
}
```
This might be a fairly niche use case, but a similar issue can be found here https://github.com/facebook/react/issues/16606.
I was able to fix this by replacing this line in `mui-system/src/useMediaQuery/useMediaQuery.ts`
```js
matchMedia = supportMatchMedia ? window.matchMedia : null,
```
with
```js
matchMedia = supportMatchMedia ? window.matchMedia.bind(window) : null,
```
I don't know exactly why only Firefox is affected, but it looks like it has something to do with saving a `window` method to a variable and then calling it detached from `window`: the call loses its `this` binding, and `matchMedia` must be invoked on a `Window` (hence the "called on an object that does not implement interface Window" error).
### Your environment
System:
OS: Windows 11 10.0.26100
Binaries:
Node: 20.12.1 - C:\Program Files\nodejs\node.EXE
npm: 10.5.0 - C:\Program Files\nodejs\npm.CMD
pnpm: 9.15.2 - C:\Program Files\nodejs\pnpm.CMD
Browsers:
Google Chrome 131.0.6778.265
Firefox Developer Edition 135.0b2
npmPackages:
@emotion/react: ^11.14.0 => 11.14.0
@emotion/styled: ^11.14.0 => 11.14.0
@mui/icons-material: ^6.3.0 => 6.3.1
@mui/material: ^6.3.0 => 6.3.1
@mui/system: ^6.3.0 => 6.3.1
@types/react: ^19.0.2 => 19.0.4
react: 19.0.0 => 19.0.0
react-dom: 19.0.0 => 19.0.0
typescript: ^5.7.2 => 5.7.3
**Search keywords**: extension firefox useMediaQuery matchMedia | hook: useMediaQuery,status: waiting for maintainer | low | Critical |
2,779,342,607 | pytorch | [bug report template format] Simplify version information with HTML tags | ### 🚀 The feature, motivation and pitch
When I looked at the bug report, I found the version information **too long and redundant**.
Many reporters are following the instructions here:

Reporters run the downloaded script and get the environment information. They paste the information in the bug report.
Unfortunately, I think the information are **too redundant** like below:
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
Actually, in most time, we just need **pytorch version**, **OS**, **CPU** and **GPU** information is enough! The rest of the infomation **can be folded** and viewed when needed like below, using some **html tags** (i.e., `<details>`and `<summary>`). That way, version information doesn't take up too much space on the browser page space. Refer this #144183
PyTorch version: 20241230
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241230+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-204-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.998
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.6.0.dev20241230+cu126
[pip3] torchaudio==2.6.0.dev20241230+cu126
[pip3] torchvision==0.22.0.dev20241230+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241230+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241230+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
I think there are two possible solutions:
**solution1**: We can modify the [issue format here](https://github.com/pytorch/pytorch/tree/main/.github/ISSUE_TEMPLATE), preconfiguring these HTML tags.
**solution2**: I think a more efficient approach for bug reporters is to modify the [collect_env script](https://github.com/pytorch/pytorch/blob/main/torch/utils/collect_env.py) so that it wraps the redundant information in collapsible HTML tags itself.
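A minimal sketch of what solution2 could look like inside `collect_env.py` (the helper name and the integration point are hypothetical, not existing code):
```python
# Hypothetical helper for collect_env.py: wrap a verbose section in a
# collapsible <details> block so GitHub renders it folded by default.
def wrap_details(summary: str, body: str) -> str:
    fence = "`" * 3  # avoids embedding a literal triple backtick in this snippet
    return (
        f"<details>\n<summary>{summary}</summary>\n\n"
        f"{fence}\n{body}\n{fence}\n</details>\n"
    )

# Illustrative usage when assembling the report:
print(wrap_details("Versions of relevant libraries", "[pip3] numpy==1.26.4\n..."))
```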
### Alternatives
_No response_
### Additional context
_No response_ | module: collect_env.py,triaged,needs design | low | Critical |
2,779,381,252 | vscode | Git Issue |
Type: <b>Bug</b>
Bug Title: Files from the new branch persist after switching back to main
Description:
When switching from a newly created branch back to the main branch, files added in the new branch remain in the working directory. These files should not persist in the main branch unless explicitly committed and merged.
Steps to Reproduce:
1. Check out a new branch: git checkout -b new-branch.
2. Add new files to the working directory in the new branch.
3. Push the changes to the origin: git push origin new-branch.
4. Switch back to the main branch: git checkout main.
Expected Result:
The files added in the new branch should not appear in the working directory when back on the main branch.
Actual Result:
Files from the new branch remain in the working directory after switching back to the main branch.
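For reference, a scripted version of the steps above (a sketch only; it assumes `git` is on PATH, uses a throwaway directory, and skips the push step since that needs a remote). One likely explanation: if the new files were never committed on `new-branch`, Git intentionally carries them over as untracked files when switching branches:
```python
# Reproduces the checkout sequence from the steps above in a temporary repo.
import pathlib
import subprocess
import tempfile

repo = pathlib.Path(tempfile.mkdtemp())

def git(*args: str) -> None:
    subprocess.run(["git", *args], cwd=repo, check=True)

git("init", "-b", "main")
git("-c", "user.name=tester", "-c", "user.email=tester@example.com",
    "commit", "--allow-empty", "-m", "initial commit")
git("checkout", "-b", "new-branch")
(repo / "new-file.txt").write_text("added on new-branch\n")  # added, never committed
git("checkout", "main")
# Prints True: untracked files are not removed by `git checkout`.
print("file still present on main:", (repo / "new-file.txt").exists())
```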
Environment:
Git Version: 2.47.1
Operating System: macOS Version 14.6.1 (23G93)
VS Code version: Code 1.96.2 (Universal) (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 (8 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|2, 2, 2|
|Memory (System)|8.00GB (0.17GB free)|
|Process Argv|--crash-reporter-id 9df8cd10-70de-4a2e-a4ff-e4dd6be4fad6|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (35)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-tailwindcss|bra|0.12.17
dart-code|Dar|3.102.0
flutter|Dar|3.102.0
prettier-vscode|esb|11.0.0
figma-vscode-extension|fig|0.4.0
copilot|Git|1.256.0
copilot-chat|Git|0.23.2
vscode-github-actions|git|0.27.0
go|gol|0.44.0
terraform|has|2.34.2
azure-dev|ms-|0.8.4
vscode-azureappservice|ms-|0.25.4
vscode-azurecontainerapps|ms-|0.7.1
vscode-azurefunctions|ms-|1.16.1
vscode-azureresourcegroups|ms-|0.10.2
vscode-azurestaticwebapps|ms-|0.12.2
vscode-azurestorage|ms-|0.16.1
vscode-azurevirtualmachines|ms-|0.6.6
vscode-cosmosdb|ms-|0.24.1
vscode-docker|ms-|1.29.3
black-formatter|ms-|2024.4.0
debugpy|ms-|2024.14.0
isort|ms-|2023.10.1
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
remote-containers|ms-|0.394.0
azure-account|ms-|0.12.0
vscode-node-azure-pack|ms-|1.2.0
vscode-typescript-next|ms-|5.8.20250109
vsliveshare|ms-|1.0.5948
vscode-react-native|msj|1.13.0
inline-html|pus|0.3.10
LiveServer|rit|5.7.9
vscode-gradle|vsc|3.16.4
vscode-todo-highlight|way|1.0.5
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | info-needed,git | low | Critical |
2,779,422,839 | ollama | GPU using | ### What is the issue?
I installed Ollama and it shows the GPU as installed, but I can only use the CPU, not the GPU. Why?
### OS
_No response_
### GPU
_No response_
### CPU
_No response_
### Ollama version
_No response_ | bug,needs more info | low | Major |
2,779,435,711 | pytorch | FlexAttention uses much more GPU memory than FlashAttention-2 | ### 🐛 Describe the bug
Thank you for the outstanding work on PyTorch FlexAttention! I am currently trying to integrate FlexAttention with the Hugging Face Transformers framework for training. However, I noticed that FlexAttention seems to consume more GPU memory compared to FlashAttention-2. The issue can be reproduced using the following demo scripts:
## Reproduction
You need the following files to reproduce my observations; the two Python files below live in the same folder.
1. memory_test.py
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments, default_data_collator, TrainerCallback
import argparse
from transformers.models.llama.modeling_llama import LLAMA_ATTENTION_CLASSES
from datasets import Dataset
from flex_attention import LlamaFlexAttention, llama_model_forward
import os
class ProfilerCallback(TrainerCallback):
def __init__(self, prof):
self.prof = prof
def on_step_end(self, args, state, control, **kwargs):
self.prof.step()
def train_with_profiler(trainer, args=None):
with torch.profiler.profile(
activities=[torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA],
schedule=torch.profiler.schedule(skip_first=1, wait=1, warmup=1, active=trainer.args.max_steps-3),
on_trace_ready=torch.profiler.tensorboard_trace_handler(f'{trainer.args.output_dir}/profiler_log'),
profile_memory=True,
with_stack=False,
record_shapes=True
) as prof:
trainer.add_callback(ProfilerCallback(prof))
trainer.train()
local_rank = int(os.environ.get("LOCAL_RANK", -1))
if local_rank == 0:
prof.export_memory_timeline(f"./{args.attention_type}.html", device="cuda:0")
parser = argparse.ArgumentParser()
parser.add_argument("--model_name_or_path", type=str, default="meta-llama/Llama-3.2-3B")
parser.add_argument("--attention_type", type=str, default="flex")
parser.add_argument("--train_length", type=int, default=2048)
parser.add_argument("--dataset_size", type=int, default=8192)
args = parser.parse_args()
if __name__ == "__main__":
assert args.attention_type in ["flash_attention_2", "flex", "sdpa", "eager"], "Invalid attention type"
torch.compiler.reset()
tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
if args.attention_type == "flex":
LLAMA_ATTENTION_CLASSES["flash_attention_2"] = LlamaFlexAttention
attn_implementation = "flash_attention_2"
else:
attn_implementation = args.attention_type
model = AutoModelForCausalLM.from_pretrained(args.model_name_or_path, torch_dtype=torch.bfloat16, attn_implementation=attn_implementation)
model.model.forward = llama_model_forward.__get__(model.model)
random_input_ids = torch.randint(low=0, high=tokenizer.vocab_size, size=(args.dataset_size, args.train_length))
train_dataset = Dataset.from_dict({"input_ids": random_input_ids.tolist(), "labels": random_input_ids.tolist()})
training_args = TrainingArguments(
output_dir=f"./tmp-{args.attention_type}",
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=1,
save_steps=500,
save_total_limit=1,
max_steps=10,
logging_steps=1,
logging_dir="./logs",
logging_first_step=True,
report_to="none",
do_train=True,
gradient_checkpointing=True,
gradient_checkpointing_kwargs={"use_reentrant": False},
gradient_accumulation_steps=2,
deepspeed="../../config/deepspeed/stage2-offload.json",
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
tokenizer=tokenizer,
data_collator=default_data_collator,
)
# train_with_profiler(trainer, args)
trainer.train()
```
2. flex_attention.py
```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask
from transformers.models.llama.modeling_llama import LlamaAttention, StaticCache, apply_rotary_pos_emb, repeat_kv, Cache, logger, DynamicCache, BaseModelOutputWithPast, FlashAttentionKwargs, Unpack, LlamaModel, add_start_docstrings_to_model_forward, LLAMA_INPUTS_DOCSTRING
from typing import Optional, Tuple, Union, List, Callable
from functools import lru_cache
def flex_causal_mask(b, h, q_idx, kv_idx):
return q_idx >= kv_idx
def score_mod(score, b, h, q_idx, kv_idx):
return score
flex_attention = torch.compile(flex_attention, mode="max-autotune")
@lru_cache
def create_block_mask_cached(mask_mod: Optional[Callable] = None, B: int = 1, H: int = 1, Q_LEN: int = 1, KV_LEN: int = 1, device: Optional[torch.device] = None):
return create_block_mask(mask_mod=mask_mod, B=B, H=H, Q_LEN=Q_LEN, KV_LEN=KV_LEN, device=device, BLOCK_SIZE=(128, 64))
class LlamaFlexAttention(LlamaAttention):
"""
Llama flex attention module. This module inherits from `LlamaAttention` as the weights of the module stays
untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
flex attention and deal with padding tokens in case the input contains any of them.
"""
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.LongTensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_value: Optional[Cache] = None,
output_attentions: bool = False,
use_cache: bool = False,
cache_position: Optional[torch.LongTensor] = None,
position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # will become mandatory in v4.45
**kwargs,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
if isinstance(past_key_value, StaticCache):
raise ValueError(
"`static` cache implementation is not compatible with `attn_implementation==flash_attention_2` "
"make sure to use `sdpa` in the mean time, and open an issue at https://github.com/huggingface/transformers"
)
output_attentions = False
bsz, q_len, _ = hidden_states.size()
query_states = self.q_proj(hidden_states)
key_states = self.k_proj(hidden_states)
value_states = self.v_proj(hidden_states)
# Flash attention requires the input to have the shape
# batch_size x seq_length x head_dim x hidden_dim
# therefore we just need to keep the original shape
query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
if position_embeddings is None:
logger.warning_once(
"The attention layers in this model are transitioning from computing the RoPE embeddings internally "
"through `position_ids` (2D tensor with the indexes of the tokens), to using externally computed "
"`position_embeddings` (Tuple of tensors, containing cos and sin). In v4.45 `position_ids` will be "
"removed and `position_embeddings` will be mandatory."
)
cos, sin = self.rotary_emb(value_states, position_ids)
else:
cos, sin = position_embeddings
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
if past_key_value is not None:
# sin and cos are specific to RoPE models; cache_position needed for the static cache
cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
# TODO: These transpose are quite inefficient but Flash Attention requires the layout [batch_size, sequence_length, num_heads, head_dim]. We would need to refactor the KV cache
# to be able to avoid many of these transpose/reshape/view.
# query_states = query_states.transpose(1, 2)
# key_states = key_states.transpose(1, 2)
# value_states = value_states.transpose(1, 2)
key_states = repeat_kv(key_states, self.num_key_value_groups)
value_states = repeat_kv(value_states, self.num_key_value_groups)
dropout_rate = self.attention_dropout if self.training else 0.0
# In PEFT, usually we cast the layer norms in float32 for training stability reasons
# therefore the input hidden states gets silently casted in float32. Hence, we need
# cast them back in the correct dtype just to be sure everything works as expected.
# This might slowdown training & inference so it is recommended to not cast the LayerNorms
# in fp32. (LlamaRMSNorm handles it correctly)
input_dtype = query_states.dtype
if input_dtype == torch.float32:
if torch.is_autocast_enabled():
target_dtype = torch.get_autocast_gpu_dtype()
# Handle the case where the model is quantized
elif hasattr(self.config, "_pre_quantization_dtype"):
target_dtype = self.config._pre_quantization_dtype
else:
target_dtype = self.q_proj.weight.dtype
logger.warning_once(
f"The input hidden states seems to be silently casted in float32, this might be related to"
f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
f" {target_dtype}."
)
query_states = query_states.to(target_dtype)
key_states = key_states.to(target_dtype)
value_states = value_states.to(target_dtype)
attn_output = flex_attention(
query_states,
key_states,
value_states,
block_mask=kwargs["block_mask"] if "block_mask" in kwargs else None,
score_mod=None if "block_mask" in kwargs else score_mod,
)
attn_output = attn_output.transpose(1, 2).reshape(bsz, q_len, -1).contiguous()
attn_output = self.o_proj(attn_output)
if not output_attentions:
attn_weights = None
return attn_output, attn_weights, past_key_value
@add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING)
def llama_model_forward(
self,
input_ids: torch.LongTensor = None,
attention_mask: Optional[torch.Tensor] = None,
position_ids: Optional[torch.LongTensor] = None,
past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
inputs_embeds: Optional[torch.FloatTensor] = None,
use_cache: Optional[bool] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
cache_position: Optional[torch.LongTensor] = None,
**flash_attn_kwargs: Unpack[FlashAttentionKwargs],
) -> Union[Tuple, BaseModelOutputWithPast]:
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
output_hidden_states = (
output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
)
use_cache = use_cache if use_cache is not None else self.config.use_cache
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
if (input_ids is None) ^ (inputs_embeds is not None):
raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
if self.gradient_checkpointing and self.training and use_cache:
logger.warning_once(
"`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
)
use_cache = False
if inputs_embeds is None:
inputs_embeds = self.embed_tokens(input_ids)
# kept for BC (non `Cache` `past_key_values` inputs)
return_legacy_cache = False
if use_cache and not isinstance(past_key_values, Cache):
return_legacy_cache = True
if past_key_values is None:
past_key_values = DynamicCache()
else:
past_key_values = DynamicCache.from_legacy_cache(past_key_values)
logger.warning_once(
"We detected that you are passing `past_key_values` as a tuple of tuples. This is deprecated and "
"will be removed in v4.47. Please convert your cache or use an appropriate `Cache` class "
"(https://huggingface.co/docs/transformers/kv_cache#legacy-cache-format)"
)
if cache_position is None:
past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
cache_position = torch.arange(
past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
)
if position_ids is None:
position_ids = cache_position.unsqueeze(0)
causal_mask = self._update_causal_mask(
attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
)
hidden_states = inputs_embeds
# create position embeddings to be shared across the decoder layers
position_embeddings = self.rotary_emb(hidden_states, position_ids)
# decoder layers
all_hidden_states = () if output_hidden_states else None
all_self_attns = () if output_attentions else None
next_decoder_cache = None
# block_mask
if isinstance(self.layers[0].self_attn, LlamaFlexAttention):
block_mask = create_block_mask_cached(mask_mod=flex_causal_mask, B=1, H=1, Q_LEN=hidden_states.size(1), KV_LEN=hidden_states.size(1), device=hidden_states.device)
flash_attn_kwargs["block_mask"] = block_mask
if "num_items_in_batch" in flash_attn_kwargs:
flash_attn_kwargs.pop("num_items_in_batch")
for decoder_layer in self.layers[: self.config.num_hidden_layers]:
if output_hidden_states:
all_hidden_states += (hidden_states,)
if self.gradient_checkpointing and self.training:
layer_outputs = self._gradient_checkpointing_func(
decoder_layer.__call__,
hidden_states,
causal_mask,
position_ids,
past_key_values,
output_attentions,
use_cache,
cache_position,
position_embeddings,
**flash_attn_kwargs,
)
else:
layer_outputs = decoder_layer(
hidden_states,
attention_mask=causal_mask,
position_ids=position_ids,
past_key_value=past_key_values,
output_attentions=output_attentions,
use_cache=use_cache,
cache_position=cache_position,
position_embeddings=position_embeddings,
**flash_attn_kwargs,
)
hidden_states = layer_outputs[0]
if use_cache:
next_decoder_cache = layer_outputs[2 if output_attentions else 1]
if output_attentions:
all_self_attns += (layer_outputs[1],)
hidden_states = self.norm(hidden_states)
# add hidden states from the last decoder layer
if output_hidden_states:
all_hidden_states += (hidden_states,)
next_cache = next_decoder_cache if use_cache else None
if return_legacy_cache:
next_cache = next_cache.to_legacy_cache()
if not return_dict:
return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
return BaseModelOutputWithPast(
last_hidden_state=hidden_states,
past_key_values=next_cache,
hidden_states=all_hidden_states,
attentions=all_self_attns,
)
```
3. stage2-offload.json
```json
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"zero_allow_untested_optimizer": true,
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true,
"round_robin_gradients": true
}
}
```
## Usage
```shell
torchrun --nproc_per_node=8 memory_test.py --attention_type flex # FlexAttention
torchrun --nproc_per_node=8 memory_test.py --attention_type flash_attention_2 # FlashAttention-2
```
The experiments are conducted on 8*A100-40G.
## Observations
I have noticed that FlexAttention uses approximately 28GB of GPU memory across 8 devices, whereas FlashAttention-2 requires only around 23GB. I'm currently unsure whether this discrepancy arises from the internal implementation of FlexAttention or the block mask. Changing the block mask to score_mod did not resolve the issue either.
I would appreciate any insights or explanations regarding this matter! Thank you!
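One way to quantify the gap per rank (a minimal sketch; `trainer` refers to the object constructed in memory_test.py above, and `torch.cuda.max_memory_allocated` only counts allocations made through PyTorch's caching allocator):
```python
# Compare peak GPU memory of a training run; run once per attention backend.
import torch

torch.cuda.reset_peak_memory_stats()
trainer.train()
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"rank peak allocated memory: {peak_gib:.2f} GiB")
```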
### Versions
```shell
torch==2.6.0.dev20241218+cu118
transformers==4.47.1
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | module: memory usage,triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,779,460,862 | vscode | Right click menu not showing go to definition and also F12 is not working | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.2
- OS Version: MacOS 15.2
Steps to Reproduce:
1. The right-click menu does not show Go to Definition; F12 and Cmd+F12 also do not work.
2. When I disable all extensions, it works.
| triage-needed,stale | low | Critical |
2,779,476,357 | godot | No input detected from PS4 controller when connected wirelessly | ### Tested versions
- Reproducible in v4.4.dev7.official [46c8f8c5c].
- Likely reproducible in much older versions, as per this [Reddit post](https://www.reddit.com/r/godot/comments/khw4cj/different_bluetooth_and_wired_controller_inputs/)
- Someone commented a possible explanation for this issue; it might be worth checking out when debugging
### System information
Godot v4.4.dev7 - Windows 11 (build 26100) - Multi-window, 3 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4080 (NVIDIA; 32.0.15.6636) - 13th Gen Intel(R) Core(TM) i7-13700F (24 threads)
### Issue description
When using a _wirelessly_ connected PS4 controller, no input is detected by Godot. Using this testing code block:
```gdscript
for device in Input.get_connected_joypads():
print("Device ID: ", device)
print("Device Name: ", Input.get_joy_name(device))
```
I can confirm that there is only one device connected, called "PS4 Controller". However, Godot does not receive any input, which I confirmed by seeing no controller events from this function:
```gdscript
func _input(evt):
print(evt)
```
Everything does work as intended when the PS4 controller is connected directly via a wire.
### Steps to reproduce
- Create a new Godot project, and add a Node. Attach a new script to it, and inside that script add this function:
```gdscript
func _input(evt):
print(evt)
```
- Connect a PS4 Controller to your system _wirelessly_
- Run the scene. You will notice no controller events are printed.
### Minimal reproduction project (MRP)
N/A | bug,topic:input | low | Critical |
2,779,482,893 | transformers | Prompt_ids feature causing repetitions and hallucinations | ### System Info
Hi @sanchit-gandhi and @gante
Using the prompt feature as described here (https://github.com/huggingface/transformers/issues/22395) causes the model output to contain excessive repetitions and hallucinations.
I recorded an audio clip and passed it to the Whisper ASR model with a prompt, as shown below.
More details:
Transformers Commit: https://github.com/huggingface/transformers/commit/1c7e5e236823cd38faac8115f96205a82c17fff9
Test case (steps to reproduce the issue):
Audio contents: "The full name of Donald is Donald J. Trump Jr"
```python
# Assumes `model_dir` points at a Whisper checkpoint and `audio` holds 16 kHz samples.
from transformers import WhisperFeatureExtractor, WhisperForConditionalGeneration, WhisperProcessor

prompt = "Donald Duck"
model = WhisperForConditionalGeneration.from_pretrained(model_dir).to("cuda")
feature_extractor = WhisperFeatureExtractor.from_pretrained(model_dir)
processor = WhisperProcessor.from_pretrained(model_dir)
prompt_ids = processor.get_prompt_ids(prompt)
input_features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt").input_features
predicted_ids = model.generate(input_features.to("cuda"), prompt_ids=prompt_ids, num_beams=4)
text = [processor.decode(predicted_id, skip_special_tokens=True) for predicted_id in predicted_ids]
transcript = text[0]
```
Output: The full name of Donald is Donald J. Trump Jr. Donald Duck Donald Duck Donal Donald Duck Donald Duck Donald Duck Donal Donald Duck Donald Duck Donald Duck Donald Duck Donal Donald Duck Donald Duck Donald Duck Donald Duck Donald Duck Donal Donald Duck
Link to the audio: https://drive.google.com/file/d/1ud-B0uepD8Sk6ArkvJdqPmFWYpCmAooi/view?usp=drive_link
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The reproduction is identical to the test case shown under "System Info" above and produces the same repetitive output.
### Expected behavior
It should output either "The full name of Donald is Donald J. Trump" or "The full name of Donald is Donald Duck", not an endless stream of the prompt keywords.
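A possible decoding-side mitigation, not a fix for the underlying bug (this reuses the variables from the test case above; `no_repeat_ngram_size` and `repetition_penalty` are standard `generate()` kwargs in transformers):
```python
# Constrain repetition during beam search; this may mask, not fix, the prompt leakage.
predicted_ids = model.generate(
    input_features.to("cuda"),
    prompt_ids=prompt_ids,
    num_beams=4,
    no_repeat_ngram_size=3,
    repetition_penalty=1.2,
)
transcript = processor.decode(predicted_ids[0], skip_special_tokens=True)
```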
| bug | low | Minor |
2,779,538,992 | rust | Confusing error message when closures return references based on their arguments | ### Code
```Rust
struct Val {
inner: u32
}
fn main() {
let get_str = |val: &Val| &val.inner;
println!("{}", get_str(&Val { inner: 1 }));
}
```
### Current output
```Shell
error: lifetime may not live long enough
--> src/main.rs:6:31
|
6 | let get_str = |val: &Val| &val.inner;
| - - ^^^^^^^^^^ returning this value requires that `'1` must outlive `'2`
| | |
| | return type of closure is &'2 u32
| let's call the lifetime of this reference `'1`
```
### Desired output
```Shell
```
### Rationale and extra context
I'm not sure what this specific limitation would be called, but I ran into it recently and thought that the error could give some advice on how to solve it. I'm not sure what the best way to do that is, so I haven't filled out the desired output section.
I was a bit surprised to learn that
```rust
struct Val {
inner: u32
}
fn main() {
let get_str = |val: &Val| -> &u32 { &val.inner };
println!("{}", get_str(&Val { inner: 1 }));
}
```
still has the same error (namely, `&Val` and `&u32` are assigned two separate lifetimes), whereas
```rust
struct Val {
inner: u32
}
fn main() {
let get_str: fn(&Val) -> &u32 = |val: &Val| &val.inner;
println!("{}", get_str(&Val { inner: 1 }));
}
```
fixes the issue. I'm guessing there are some subtle differences in the way a type is created for a closure (even when the return type is explicitly written out) versus specifying the type manually; presumably the closure's annotated signature gets independent inferred lifetimes for the argument and the returned reference, whereas coercing to the `fn(&Val) -> &u32` pointer type applies lifetime elision and links the two as `for<'a> fn(&'a Val) -> &'a u32`. I'm honestly not sure, though. If there's some documentation regarding this, it might be a good idea to mention that in the error message.
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,779,544,211 | PowerToys | The Ctrl Key Function gets freezes | ### Microsoft PowerToys version
Release v0.87.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
General
### Steps to reproduce
[PowerToysReport_2025-01-10-13-07-48.zip](https://github.com/user-attachments/files/18372523/PowerToysReport_2025-01-10-13-07-48.zip)
### ✔️ Expected Behavior
The Ctrl key should behave normally and not act as if it were constantly pressed while PowerToys is running.
### ❌ Actual Behavior
Whenever PowerToys is running, the Ctrl key sometimes behaves, especially in MS Word, as if it were stuck or constantly pressed. I thought the physical keyboard was faulty and had it replaced, but the problem remained the same and only resolves when I terminate the PowerToys program.
### Other Software
MS word 365 | Issue-Bug,Needs-Triage | low | Critical |
2,779,575,763 | PowerToys | Request for Adding Regex Support to the New+ Feature for Template Folder Creation | ### Description of the new feature / enhancement
I would like to request the ability to use regular expressions (regex) when creating a template folder via the "New +" option.
For example, it would be helpful to automatically add the creation date or location-specific names to file names.
### Scenario when this would be used?
In our work, we frequently create files, and to make file management easier, we include the creation date in the file names.
If template file names could be set as something like $YYYY$MM$DD_powertoys, it would eliminate the need for manual input and greatly improve efficiency.
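A small illustration of the requested expansion (a sketch only; the `$YYYY`/`$MM`/`$DD` tokens follow the example above and are not an existing PowerToys feature):
```python
# Expand date tokens in a New+ template filename at creation time.
from datetime import date
from typing import Optional

def expand_template(name: str, today: Optional[date] = None) -> str:
    d = today or date.today()
    return (name.replace("$YYYY", f"{d.year:04d}")
                .replace("$MM", f"{d.month:02d}")
                .replace("$DD", f"{d.day:02d}"))

print(expand_template("$YYYY$MM$DD_powertoys"))  # e.g. 20250110_powertoys
```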
### Supporting information
_No response_ | Needs-Triage | medium | Minor |
2,779,580,269 | PowerToys | PowerToys crashes frequently (Faulting application name: PowerToys.AdvancedPaste.exe) | ### Microsoft PowerToys version
0.87.1.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Advanced Paste
### Steps to reproduce
Windows 11 Enterprise 23H2
Just start PowerToys. I have only Awake enabled, yet the error is caused by AdvancedPaste.exe, which I find strange.
### ✔️ Expected Behavior
PowerToys should not crash frequently.
### ❌ Actual Behavior
PowerToys crashes frequently
Faulting application name: PowerToys.AdvancedPaste.exe, version: 0.87.1.0, time stamp: 0x67200000
Faulting module name: Microsoft.UI.Xaml.dll, version: 3.1.6.0, time stamp: 0xe09b717f
Exception code: 0xc000027b
Fault offset: 0x0000000000009125
Faulting process id: 0x0x5B78
Faulting application start time: 0x0x1DB633772A37978
Faulting application path: C:\Program Files\PowerToys\WinUI3Apps\PowerToys.AdvancedPaste.exe
Faulting module path: C:\Program Files\PowerToys\WinUI3Apps\Microsoft.UI.Xaml.dll
Report Id: ad7c45d0-ba6c-4696-ae07-76930e437905
Faulting package full name:
Faulting package-relative application ID:
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,779,587,203 | PowerToys | PowerToys Run - Get \\server results | ### Description of the new feature / enhancement
Let's say I have a server called `MYSERVER` on my network. It would be very convenient if I could just enter `myserver` in PowerToys Run and have `\\MYSERVER` or `\\myserver` ranked at the top and preselected for me so I can just hit enter. I can't seem to get this working currently, so maybe it's not a feature today?
At the same time I'd like to completely get rid of `https://myserver/` as a suggestion. I don't find this useful at all.
I would assume this falls under the "URI Handler" plugin? Perhaps it's just me not knowing how to configure it properly?
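To illustrate the kind of matching I have in mind (purely a sketch; the regex and the behavior are hypothetical, not how the URI Handler plugin actually works):
```python
# Turn a bare hostname query into a UNC-path suggestion.
import re

HOSTNAME_RE = re.compile(r"[A-Za-z][A-Za-z0-9-]{0,14}")  # rough NetBIOS-style name

def unc_suggestion(query: str):
    if HOSTNAME_RE.fullmatch(query):
        return rf"\\{query}"
    return None

print(unc_suggestion("myserver"))  # \\myserver
```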
### Scenario when this would be used?
I frequently use Windows Run to go to servers on my network. It would be convenient to be able to do this from PowerToys Run as well.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,779,606,668 | tauri | [bug] CloseRequested:: prevent_close() is invalid | ### Describe the bug
I use Tauri to listen for window close events. I capture the close request in main_window.on_window_event and call api.prevent_close(); to prevent the default close behavior, but the program closes anyway.
permissions
```
{
"$schema": "../gen/schemas/desktop-schema.json",
"identifier": "default",
"description": "Capability for the main window",
"windows": [
"main"
],
"permissions": [
"core:default",
"shell:allow-open",
"fs:default",
"clipboard-manager:default",
"dialog:default",
{
"identifier": "http:default",
"allow": [{ "url": "https://*.tauri.app" }],
"deny": [{ "url": "https://private.tauri.app" }]
},
{
"identifier": "fs:allow-exists",
"allow": [
{
"path": "$APPDATA/*"
}
]
}
]
}
```
run
```
#[cfg_attr(mobile, tauri::mobile_entry_point)]
pub fn run() {
    // Initialize logging
sys_log::init_logger();
log::info!("init system start");
    let current_dir = std::env::current_dir().expect("invalid path");
let data_path = current_dir.join(DATA_PATH);
if !data_path.exists() {
        let _ = std::fs::create_dir_all(&data_path).expect("failed to create the data directory");
}
// let data_path = data_path.as_path();
#[allow(unused_mut)]
let mut builder = tauri::Builder::default();
#[allow(unused_mut)]
let mut app = builder
.plugin(tauri_plugin_dialog::init())
.plugin(tauri_plugin_clipboard_manager::init())
.plugin(tauri_plugin_shell::init())
.invoke_handler(tauri::generate_handler![greet])
.setup(move |app| {
register_database(app, data_path.clone())?;
// The shutdown request event is registered here
close_dialog::dialog(app.handle());
Ok(())
})
.build(tauri::generate_context!())
.expect("error while building tauri application");
log::info!("init system ready");
app.run(|_app_handle, e| {
match e {
tauri::RunEvent::Ready => {
log::info!("Application is ready.");
},
_ => {}
}
});
}
```
dialog
```
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use tauri::{AppHandle, Manager, WindowEvent};
use tauri_plugin_dialog::{DialogExt, MessageDialogButtons, MessageDialogKind};
pub fn dialog(app: &AppHandle) {
let main_window = Arc::new(app.get_webview_window("main").unwrap());
let dialog_shown = Arc::new(AtomicBool::new(false));
let dialog_shown_clone = dialog_shown.clone();
let app = Arc::new(app.clone());
let main_window_clone = main_window.clone();
main_window.on_window_event(move |event| {
if let WindowEvent::CloseRequested { api, .. } = event {
if !dialog_shown_clone.load(Ordering::Relaxed) {
log::info!("Confirm to exit the current program?");
                // Mark the dialog as shown
dialog_shown_clone.store(true, Ordering::Relaxed);
                // Prevent the default close behavior
api.prevent_close();
let app = app.clone();
let main_window_clone = main_window_clone.clone();
let dialog_shown_clone = dialog_shown.clone();
                // Use tokio or async-std to handle the async task
tauri::async_runtime::block_on( async move {
handle_exit_dialog(app, main_window_clone, dialog_shown_clone).await
});
}
}
});
}
async fn handle_exit_dialog(
app: Arc<AppHandle>,
main_window: Arc<tauri::WebviewWindow>,
dialog_shown: Arc<AtomicBool>,
) {
log::info!("Wait for the user to choose whether to quit the program!");
app.dialog()
        .message("Are you sure you want to exit?")
.kind(MessageDialogKind::Info)
.title("Information")
.buttons(MessageDialogButtons::OkCancelCustom(
            format!("Exit"),
            format!("Cancel"),
))
.show(move |result| match result {
true => {
log::info!("Exit application");
main_window
.close()
.unwrap_or_else(|e| log::error!("Failed to close window: {}", e));
},
false => {
log::info!("Cancels exiting the current program");
dialog_shown.store(false, Ordering::Relaxed);
}
});
}
```
### Reproduction
_No response_
### Expected behavior
I want a dialog box to pop up when I close the program so the user can confirm.
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.5.0 x86_64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.84.0 (9fc6b4312 2025-01-07)
✔ cargo: 1.84.0 (66221abde 2024-11-19)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-apple-darwin (default)
- node: 22.11.0
- yarn: 1.22.22
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.2.0
- tauri-build 🦀: 2.0.4
- wry 🦀: 0.48.0
- tao 🦀: 0.31.1
- @tauri-apps/api : 2.2.0
- @tauri-apps/cli : 2.2.2
[-] Plugins
- tauri-plugin-shell 🦀: 2.2.0
- @tauri-apps/plugin-shell : 2.2.0
- tauri-plugin-fs 🦀: 2.2.0
- @tauri-apps/plugin-fs : not installed!
- tauri-plugin-dialog 🦀: 2.2.0
- @tauri-apps/plugin-dialog : 2.2.0
- tauri-plugin-http 🦀: 2.2.0
- @tauri-apps/plugin-http : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
```text
This log is printed: 'Wait for the user to choose whether to quit the program!'
```
### Additional context
```
[build-dependencies]
tauri-build = { version = "2", features = [] }
[dependencies]
tauri = { version = "2", features = [] }
tauri-plugin-shell = "2"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
chrono = "0.4"
quick-xml = "0.37.2"
tempfile = "3.15.0"
thiserror = "2.0"
zip = "2"
walkdir = "2.5.0"
# tokio = {version = "1.43.0", features = ["full"] }
tauri-plugin-fs = "2"
tauri-plugin-clipboard-manager = "2.2.0"
tauri-plugin-dialog = "2"
tauri-plugin-http = "2"
sqlx = { version = "0.8", default-features = false, features = ["runtime-tokio-native-tls", "sqlite", "macros"] }
# rusqlite = {version = "0.32.1", features = ["bundled-sqlcipher"] }
urlencoding = "2.1"
log = "0.4"
log4rs = {version = "1.3", features = ["gzip"] }
[dev-dependencies]
tokio = {version = "1.43.0", features = ["full"] }
``` | type: bug,status: needs triage | low | Critical |