id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,775,723,606 | transformers | Add cosmos from Nvidia | ### Model description
https://www.nvidia.com/en-us/ai/cosmos/ — the model is autoregressive and fully open source.
We might have a PR coming directly from NVIDIA!
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | New model | low | Minor |
2,775,773,741 | next.js | Next.js project doesn't work with Yarn 4.6.0 using Turbopack | ### Link to the code that reproduces this issue
https://github.com/GlittersIsGold/turboyarnext
### To Reproduce
1. install yarn 4.6.0 - `npm install -g yarn@latest`
2. check yarn version - `yarn -v` should output 4.6.0
3. init next-app with yarn - `yarn create next-app turboyarnext`
4. Follow the prompts, simply answer Yes to use Turbopack
```
✔ Would you like to use TypeScript? … No / Yes - Y
✔ Would you like to use ESLint? … No / Yes - Y
✔ Would you like to use Tailwind CSS? … No / Yes - N
✔ Would you like your code inside a `src/` directory? … No / Yes - N
✔ Would you like to use App Router? (recommended) … No / Yes - Y
✔ Would you like to use Turbopack for `next dev`? … No / Yes - Y
? Would you like to customize the import alias (`@/*` by default)? › No / Yes - N
```
```
Creating a new Next.js app in /Users/GlittersIsGold/Documents/Programms/turboyarnext.
Using yarn.
Initializing project with template: app
Installing dependencies:
- react
- react-dom
- next
Installing devDependencies:
- typescript
- @types/node
- @types/react
- @types/react-dom
- eslint
- eslint-config-next
- @eslint/eslintrc
! The local project doesn't define a 'packageManager' field. Corepack will now add one referencing [email protected]+sha512.a6b2f7906b721bba3d67d4aff083df04dad64c399707841b7acf00f6b133b7ac24255f2652fa22ae3534329dc6180534e98d17432037ff6fd140556e2bb3137e.
! For more details about this field, consult the documentation at https://nodejs.org/api/packages.html#packagemanager
➤ YN0000: · Yarn 4.6.0
➤ YN0000: ┌ Resolution step
➤ YN0085: │ + @eslint/eslintrc@npm:3.2.0, @types/node@npm:20.17.12, @types/react-dom@npm:19.0.2, and 319 more.
➤ YN0000: └ Completed in 3s 599ms
➤ YN0000: ┌ Fetch step
➤ YN0000: └ Completed
➤ YN0000: ┌ Link step
➤ YN0000: │ ESM support for PnP uses the experimental loader API and is therefore experimental
➤ YN0007: │ sharp@npm:0.33.5 must be built because it never has been before or the last one failed
➤ YN0000: └ Completed in 1s 361ms
➤ YN0000: · Done with warnings in 5s 185ms
Initialized a git repository.
Success! Created turboyarnext at /Users/GlittersIsGold/Documents/Programm/turboyarnext
```
5. proceed to the project dir, install deps, and try to start the app:
`cd turboyarnext && yarn && yarn dev`
```
➜ turboyarnext git:(main) yarn
➤ YN0000: · Yarn 4.6.0
➤ YN0000: ┌ Resolution step
➤ YN0000: └ Completed
➤ YN0000: ┌ Fetch step
➤ YN0000: └ Completed
➤ YN0000: ┌ Link step
➤ YN0000: │ ESM support for PnP uses the experimental loader API and is therefore experimental
➤ YN0000: └ Completed
➤ YN0000: · Done with warnings in 0s 272ms
➜ turboyarnext git:(main) yarn dev
▲ Next.js 15.1.4 (Turbopack)
- Local: http://localhost:3000
- Network: http://79.164.27.29:3000
✓ Starting...
FATAL: An unexpected Turbopack error occurred. Please report the content of /var/folders/st/_ln0wwtj7xl3pd3l2sv8xh700000gn/T/next-panic-fc976d666c1f252cd743c4aa6d34fee8.log, along with a description of what you were doing when the error occurred, to https://github.com/vercel/next.js/issues/new
[Error [TurbopackInternalError]: Next.js package not found
Debug info:
- Execution of get_entrypoints_with_issues failed
- Execution of Project::entrypoints failed
- Execution of AppProject::routes failed
- Execution of directory_tree_to_entrypoints_internal failed
- Execution of directory_tree_to_loader_tree failed
- Execution of *FileSystemPath::join failed
- Execution of get_next_package failed
- Next.js package not found]
```
6. Get an error
```
cat /var/folders/st/_ln0wwtj7xl3pd3l2sv8xh700000gn/T/next-panic-fc976d666c1f252cd743c4aa6d34fee8.log
---------------------------
Next.js package not found
Debug info:
- Execution of get_entrypoints_with_issues failed
- Execution of Project::entrypoints failed
- Execution of AppProject::routes failed
- Execution of directory_tree_to_entrypoints_internal failed
- Execution of directory_tree_to_loader_tree failed
- Execution of *FileSystemPath::join failed
- Execution of get_next_package failed
- Next.js package not found
```
### Current vs. Expected behavior
Expected the Next.js app to launch via `yarn dev` with Turbopack; instead, a fatal Turbopack error is thrown: "Next.js package not found".
The `next` package is already present in `package.json`:
```json
{
"name": "turboyarnext",
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "next dev --turbopack",
"build": "next build",
"start": "next start",
"lint": "next lint"
},
"dependencies": {
"next": "15.1.4",
"react": "^19.0.0",
"react-dom": "^19.0.0"
},
"devDependencies": {
"@eslint/eslintrc": "^3",
"@types/node": "^20",
"@types/react": "^19",
"@types/react-dom": "^19",
"eslint": "^9",
"eslint-config-next": "15.1.4",
"typescript": "^5"
},
"packageManager": "[email protected]+sha512.a6b2f7906b721bba3d67d4aff083df04dad64c399707841b7acf00f6b133b7ac24255f2652fa22ae3534329dc6180534e98d17432037ff6fd140556e2bb3137e"
}
```
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:02:41 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6030
Available memory (MB): 18432
Available CPU cores: 12
Binaries:
Node: 20.14.0
npm: 10.7.0
Yarn: 4.6.0
pnpm: N/A
Relevant Packages:
next: 15.1.4
eslint-config-next: 15.1.4
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | create-next-app,Turbopack | low | Critical |
2,775,791,527 | pytorch | Extended functionality for torch.quantization.fuse_modules | ### 🚀 The feature, motivation and pitch
The method `torch.quantization.fuse_modules` supports many of the common fusion strategies, e.g., conv+bn, conv+bn+relu, etc. However, there are additional fusion operations that are useful in practice and could be interesting to support. Specifically, cascades of bn+linear layers can be fused trivially using the following. The docstring contains the algebraic derivation of the fusion.
```python
import torch
from torch import nn


@torch.no_grad()
def fuse_batch_norm_1d_into_linear(norm: nn.BatchNorm1d, linear: nn.Linear,
                                   epsilon: float = 1e-12) -> None:
    r"""
    Fuse a batch norm module into the linear layer that follows it.

    Args:
        norm: The batch norm layer that occurs before the linear layer.
        linear: The linear layer to fuse the batch norm into.
        epsilon: A small value for numerical stability.

    Returns:
        None

    Details:
        This function decomposes the fusion into four simple steps. Assume that the
        cascade of a 1d batch normalization into a linear layer is formulated as
        follows, where \f$x\f$ is the input vector, \f$\mu, \sigma\f$ are the
        moving statistics of the batch norm, \f$\gamma, \beta\f$ are the learned
        affine parameters of the batch norm, and \f$W, b\f$ are the weights and
        biases of the linear layer:

        \f$y = \Big[ \frac{x - \mu}{\sigma} \odot \gamma + \beta \Big] \cdot W + b\f$

        1. Apply the distributive property to group \f$\beta\f$ with the bias \f$b\f$.
           This allows \f$\beta\f$ to be absorbed by the bias of the linear layer:

           \f$y = \Big[ \frac{x - \mu}{\sigma} \odot \gamma \Big] \cdot W + \beta \cdot W + b\f$

           Update: \f$b \gets \beta \cdot W + b\f$

        2. Apply the associative law for scalar and dot product to group \f$\gamma\f$
           with the weight \f$W\f$. This allows \f$\gamma\f$ to be absorbed by the weight:

           \f$y = \Big[ \frac{x - \mu}{\sigma} \Big] \cdot \big[ W \odot \gamma \big] + b\f$

           Update: \f$W \gets W \odot \gamma\f$

        3. Apply the associative law for scalar and dot product to group \f$\sigma\f$
           with the weight \f$W\f$. This allows \f$\sigma\f$ to be absorbed by the weight:

           \f$y = \big[ x - \mu \big] \cdot \Big[ W \odot \frac{1}{\sigma} \Big] + b\f$

           Update: \f$W \gets W \odot \frac{1}{\sigma}\f$

        4. Apply the distributive property to group \f$\mu\f$ with the bias \f$b\f$.
           This allows \f$\mu\f$ to be absorbed by the bias:

           \f$y = x \cdot W - \mu \cdot W + b\f$

           Update: \f$b \gets b - \mu \cdot W\f$

        This leaves the final simplified linear form with the batch norm analytically
        integrated into the calculation. The batch norm can now be replaced by the
        fused linear layer:

        \f$y = x \cdot W + b\f$
    """
    # 1. Apply the distributive property to group β with the bias.
    offset = norm.bias @ linear.weight.T
    if linear.bias is None:
        linear.bias = nn.Parameter(offset)
    else:
        linear.bias[:] = linear.bias + offset
    norm.bias.fill_(0.0)  # Reset β to identity.
    # 2. Apply the associative law for scalar and dot product to group γ with the weight.
    linear.weight[:] = linear.weight * norm.weight
    norm.weight.fill_(1.0)  # Reset γ to identity.
    # 3. Apply the associative law for scalar and dot product to group Var[x] with the weight.
    linear.weight[:] = linear.weight / norm.running_var.add(epsilon).sqrt()
    norm.running_var[:] = 1.0  # Reset Var[x] to identity.
    # 4. Apply the distributive property to group E[x] with the bias.
    offset = norm.running_mean @ linear.weight.T
    linear.bias[:] = linear.bias - offset
    norm.running_mean[:] = 0.0  # Reset E[x] to identity.
```
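As a sanity check of the derivation, the four update steps can be replayed in plain NumPy (independent of PyTorch) and compared against the unfused batch-norm + linear computation. This is an illustrative sketch with made-up shapes and random values, not part of the proposed API:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3
x = rng.normal(size=(5, n_in))

# Inference-mode batch-norm statistics/parameters and linear parameters.
mu = rng.normal(size=n_in)          # running mean
var = rng.uniform(0.5, 2.0, n_in)   # running variance
gamma = rng.normal(size=n_in)
beta = rng.normal(size=n_in)
W = rng.normal(size=(n_out, n_in))  # linear weight, shape (out_features, in_features)
b = rng.normal(size=n_out)
sigma = np.sqrt(var + 1e-12)

# Unfused reference: y = [(x - mu)/sigma * gamma + beta] @ W.T + b
y_ref = ((x - mu) / sigma * gamma + beta) @ W.T + b

# Fused, following the four steps of the derivation.
b_f = beta @ W.T + b     # step 1: absorb beta into the bias
W_f = W * gamma          # step 2: absorb gamma into the weight
W_f = W_f / sigma        # step 3: absorb sigma into the weight
b_f = b_f - mu @ W_f.T   # step 4: absorb mu into the bias
y_fused = x @ W_f.T + b_f

assert np.allclose(y_ref, y_fused)
```

The assertion passes for arbitrary random parameters, which is exactly what the algebra predicts: the fused `(W_f, b_f)` reproduce the batch-norm-then-linear output to numerical precision.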
This same concept can be applied to bn+conv, though the derivation is less straightforward when supporting strided convolution, group convolution, etc. Happy to provide the derivation and code for that if the PyTorch community would be interested in adding these features to the library directly.
### Alternatives
I'm aware that `torch.quantization.fuse_modules` can be augmented using `fuse_custom_config_dict`, but perhaps directly integrating these fusion policies into PyTorch could be helpful. I certainly find them useful in practice!
### Additional context
_No response_
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | oncall: quantization,triaged | low | Minor |
2,775,799,945 | rust | `const`s that depend on other constants which fail to evaluate create difficult-to-read error messages due to `note:` output | ### Code
```Rust
macro_rules! blarg {
    () => {
        const A: u32 = const {
            panic!()
        };

        const B: u32 = A + 1;
        const C: u32 = B + 1;
        const D: u32 = C + 1;
    }
}

blarg!();
```
### Current output
```Shell
Compiling playground v0.0.1 (/playground)
error[E0080]: evaluation of `A::{constant#0}` failed
--> src/lib.rs:13:1
|
13 | blarg!();
| ^^^^^^^^ the evaluated program panicked at 'explicit panic', src/lib.rs:13:1
|
= note: this error originates in the macro `$crate::panic::panic_2021` which comes from the expansion of the macro `blarg` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/lib.rs:3:24
|
3 | const A: u32 = const {
| ________________________^
4 | | panic!()
5 | | };
| |_________^
...
13 | blarg!();
| -------- in this macro invocation
|
= note: this note originates in the macro `blarg` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/lib.rs:7:24
|
7 | const B: u32 = A + 1;
| ^
...
13 | blarg!();
| -------- in this macro invocation
|
= note: this note originates in the macro `blarg` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/lib.rs:8:24
|
8 | const C: u32 = B + 1;
| ^
...
13 | blarg!();
| -------- in this macro invocation
|
= note: this note originates in the macro `blarg` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/lib.rs:9:24
|
9 | const D: u32 = C + 1;
| ^
...
13 | blarg!();
| -------- in this macro invocation
|
= note: this note originates in the macro `blarg` (in Nightly builds, run with -Z macro-backtrace for more info)
For more information about this error, try `rustc --explain E0080`.
error: could not compile `playground` (lib) due to 1 previous error
```
### Desired output
```Shell
Compiling playground v0.0.1 (/playground)
error[E0080]: evaluation of `A::{constant#0}` failed
--> src/lib.rs:13:1
|
13 | blarg!();
| ^^^^^^^^ the evaluated program panicked at 'explicit panic', src/lib.rs:13:1
|
= note: this error originates in the macro `$crate::panic::panic_2021` which comes from the expansion of the macro `blarg` (in Nightly builds, run with -Z macro-backtrace for more info)
For more information about this error, try `rustc --explain E0080`.
error: could not compile `playground` (lib) due to 1 previous error
```
### Rationale and extra context
playground link to repro: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=f8bd0c35877ea97f896174c4543618cc
seems like a regression of this previous issue & fix: https://github.com/rust-lang/rust/issues/110891
While working on a macro for `rasn`, I tested the panic conditions for failure to parse the input literal and noticed that they produce a wall of `note:`s that are utterly useless. The macro is not expanded in the doctest, so the error spans are just the same macro invocation over and over (plus a couple of others), which makes finding the error very difficult:
```rust
error[E0080]: evaluation of `main::_doctest_main_src_macros_rs_193_0::SYS_DESCR::OID::{constant#0}::COMPONENTS::{constant#0}` failed
--> src/macros.rs:195:46
|
5 | const SYS_DESCR: &'static rasn::types::Oid = rasn::oid!(".3.3.6.1.2.1.2.2.1.3");
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the evaluated program panicked at 'the first OID arc must be <= 2', src/macros.rs:5:46
|
= note: this error originates in the macro `$crate::panic::panic_2021` which comes from the expansion of the macro `rasn::oid` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/macros.rs:195:46
|
5 | const SYS_DESCR: &'static rasn::types::Oid = rasn::oid!(".3.3.6.1.2.1.2.2.1.3");
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this note originates in the macro `rasn::oid` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/macros.rs:195:46
|
5 | const SYS_DESCR: &'static rasn::types::Oid = rasn::oid!(".3.3.6.1.2.1.2.2.1.3");
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this note originates in the macro `rasn::oid` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/macros.rs:195:46
|
5 | const SYS_DESCR: &'static rasn::types::Oid = rasn::oid!(".3.3.6.1.2.1.2.2.1.3");
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this note originates in the macro `rasn::oid` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/macros.rs:195:46
|
5 | const SYS_DESCR: &'static rasn::types::Oid = rasn::oid!(".3.3.6.1.2.1.2.2.1.3");
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this note originates in the macro `rasn::oid` (in Nightly builds, run with -Z macro-backtrace for more info)
note: erroneous constant encountered
--> src/macros.rs:196:12
|
6 | assert_eq!(SYS_DESCR, rasn::types::Oid::new(&[1, 3, 6, 1, 2, 1, 2, 2, 1, 3]).unwrap());
| ^^^^^^^^^
note: erroneous constant encountered
--> src/macros.rs:196:1
|
6 | assert_eq!(SYS_DESCR, rasn::types::Oid::new(&[1, 3, 6, 1, 2, 1, 2, 2, 1, 3]).unwrap());
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: this note originates in the macro `assert_eq` (in Nightly builds, run with -Z macro-backtrace for more info)
```
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.85.0-beta.1 (e30eefff4 2025-01-08)
binary: rustc
commit-hash: e30eefff41038ceea427009023627d6d66b36715
commit-date: 2025-01-08
host: x86_64-unknown-linux-gnu
release: 1.85.0-beta.1
LLVM version: 19.1.6
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,775,812,933 | vscode | Allow default folder setting for "load workspace from file" | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
For those like me who have many workspace settings files, allow a default folder setting that contains all the `.code-workspace` files. When File → Open Workspace from File is selected, open this folder as the default. If the setting has no value, then use the default as currently implemented. | feature-request,workbench-workspace | low | Minor |
2,775,828,249 | neovim | ci: run benchmark test suite in CI | ### Problem
The `test/benchmark` test suite is not run in CI, which means the code there could break or "bit rot".
### Expected behavior
Add a benchmark job in CI. It's only intended to keep the benchmarks in a "runnable state", so I suggest:
- 1 job only (linux).
- only runs on master, not PRs.
- only checks that the suite didn't exit non-zero. doesn't assert any sort of performance goals. | enhancement,performance,ci | low | Major |
2,775,842,644 | angular | feat(API): Make ɵSharedStylesHost a Public API in Angular for customisable style overrides | ### Feature Description
This proposal suggests making the internal API `ɵSharedStylesHost` public, allowing developers to extend it. By exposing the extended version of `ɵSharedStylesHost` through Angular's dependency injection system, developers can easily access it to modify or override styles dynamically for use cases such as Web Components or cross-framework environments.
### Use Case
### Proposal: Make `ɵSharedStylesHost` a Public API in Angular for Customisable Style Overrides in Web Component-based Environments
#### **Context**
As web development progresses towards using **Web Components**, developers often need greater flexibility to **dynamically modify styles** of components, especially when they are used across multiple frameworks or environments. In Angular, the `ɵSharedStylesHost` class plays a central role in managing styles across components, particularly those using **ViewEncapsulation.Emulated**. However, `ɵSharedStylesHost` is an internal class, and developers currently have no public API to interact with it for dynamic style modification.
Given the increasing adoption of **Web Component** architecture, especially in projects that need to integrate Angular components as **custom elements**, there is a need for a solution that allows overriding styles at runtime. This is particularly useful for use cases like dynamic theming or external style injection, such as when Angular Material components are used in a Web Component-based environment.
This proposal suggests **extending `ɵSharedStylesHost`** to allow developers to interact with it via **Angular’s dependency injection system**. By exposing the extended version of `ɵSharedStylesHost` through a provider, developers can easily access it to modify or override styles dynamically for use cases like **Web Components** or **cross-framework environments**.
#### **Problem Statement**
- **Web Component Environments**: Angular components, especially those encapsulated using **Shadow DOM** or **ViewEncapsulation.Emulated**, often need to have their styles dynamically overridden in web component-based environments.
- **Material CSS**: When using **Angular Material** components or third-party libraries inside Angular custom elements, there may be a need to override specific styles like themes, colors, or layout properties dynamically.
- **No Public API for Dynamic Style Overrides**: Currently, Angular does not provide an easy, public API to interact with or override styles after the component is initialized, especially for components that use **ViewEncapsulation.Emulated** or **Shadow DOM**.
#### **Objective**
This proposal aims to extend the functionality of the internal `ɵSharedStylesHost` class and expose it via Angular’s **dependency injection system**. By doing so, we provide a public mechanism for dynamically modifying or overriding styles in Angular components, particularly those used as **Web Components** or **custom elements**.
#### **Proposed Solution**
1. **Extend `ɵSharedStylesHost` and Expose It Through a Provider**:
- Extend the `ɵSharedStylesHost` class to allow dynamic interaction with the styles it manages.
- Provide this extended class as a service through Angular's dependency injection system so that developers can use it in their components, directives, or other services.
Example Extended `SharedStylesHost`:
```typescript
import { Injectable } from '@angular/core';
import { ɵSharedStylesHost } from '@angular/core';

@Injectable({
  providedIn: 'root', // This allows the service to be injected globally
})
export class ExtendedSharedStylesHost extends ɵSharedStylesHost {
  constructor() {
    super();
  }

  /**
   * Dynamically overrides a style in the component's view.
   * @param selector CSS selector of the target element.
   * @param property CSS property to override.
   * @param value New value for the property.
   */
  overrideStyle(selector: string, property: string, value: string): void {
    const element = document.querySelector<HTMLElement>(selector);
    if (element) {
      element.style.setProperty(property, value);
    } else {
      console.error(`Element with selector "${selector}" not found.`);
    }
  }

  /**
   * Injects a new stylesheet into the component's view.
   * @param styleUrl URL of the stylesheet.
   */
  injectStylesheet(styleUrl: string): void {
    // Check if the stylesheet already exists to avoid duplication
    const existingLink = document.querySelector(`link[href="${styleUrl}"]`);
    if (!existingLink) {
      const link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = styleUrl;
      document.head.appendChild(link);
    } else {
      console.log(`Stylesheet with URL "${styleUrl}" already injected.`);
    }
  }

  /**
   * Removes an injected stylesheet.
   * @param styleUrl URL of the stylesheet to remove.
   */
  removeStylesheet(styleUrl: string): void {
    const link = document.querySelector(`link[href="${styleUrl}"]`);
    if (link) {
      link.remove();
    } else {
      console.error(`Stylesheet with URL "${styleUrl}" not found.`);
    }
  }

  /**
   * Override and manipulate styles before injecting them.
   * @param styles Array of CSS styles as strings.
   */
  override addStyles(styles: string[]): void {
    // Manipulate styles before calling the super method
    const modifiedStyles = styles.map(style => {
      // Example: Wrap all styles with a custom class to scope them
      return `.custom-class ${style}`;
    });

    // Call the original addStyles method with modified styles
    super.addStyles(modifiedStyles);
  }
}
```
2. **Register the Extended Service in Angular’s Providers**:
- Angular’s **dependency injection** system can be used to inject this extended service into any component or service that needs to override styles dynamically.
- By registering `ExtendedSharedStylesHost` as a provider in the application, we ensure that it is available throughout the application, especially for components using **ViewEncapsulation.Emulated** or **Shadow DOM**.
Example of providing the service globally:
```typescript
@NgModule({
  providers: [
    ExtendedSharedStylesHost // Register the extended service
  ]
})
export class AppModule {}
```
3. **Using the Extended `SharedStylesHost` in Components**:
- In any component or service, developers can now inject the `ExtendedSharedStylesHost` and use its methods to interact with and override styles of components dynamically.
Example usage in a component:
```typescript
@Component({
  selector: 'app-dynamic-style',
  templateUrl: './dynamic-style.component.html',
  styleUrls: ['./dynamic-style.component.css']
})
export class DynamicStyleComponent {
  constructor(private stylesHost: ExtendedSharedStylesHost) {}

  changeTheme() {
    // Example: Dynamically change background color using overrideStyle method
    this.stylesHost.overrideStyle('.theme-background', 'background-color', 'lightblue');
  }

  addCustomStyles() {
    // Dynamically add a new stylesheet
    this.stylesHost.injectStylesheet('assets/custom-styles.css');
  }
}
```
4. **Cross-framework Compatibility and Web Component Customization**:
- Since the `ExtendedSharedStylesHost` service can be injected globally, it allows Angular components embedded as Web Components to interact with their styles dynamically.
- Developers can use this service to ensure that styles (such as CSS variables) are injected or overridden according to external needs, such as theming across multiple frameworks.
5. **Dynamic Theming Support**:
- This solution would support the ability to **dynamically inject CSS** variables or stylesheets at runtime, enabling applications to easily swap between different themes or branding options.
- This is particularly useful in environments where Angular Material is used as a Web Component and developers need to dynamically switch themes.
#### **Benefits**
1. **Improved Flexibility**: By exposing `ɵSharedStylesHost` as a public API, developers can programmatically manage styles for Angular components, particularly those used in Web Component-based environments.
2. **Dynamic Theming**: The ability to dynamically change themes, override default styles, and apply custom CSS would enhance user experience and make Angular components more adaptable.
3. **Cross-framework Integration**: Developers can now modify Angular component styles when using them as custom elements or web components within other frameworks (like React or Vue).
4. **Web Component Customization**: This solution enables deep customization of Angular components used as web components, offering a powerful mechanism for styling and theming.
#### **Considerations**
1. **Performance**: Dynamically injecting styles or modifying component styles could have a performance impact if done too frequently. Care should be taken to optimize for large applications.
2. **Backward Compatibility**: This solution should ensure that existing applications using Angular’s current style encapsulation mechanisms are not broken.
3. **Security**: The extension of `ɵSharedStylesHost` should ensure that it does not expose sensitive internal implementation details or break Angular's encapsulation model.
#### **Conclusion**
By extending the `ɵSharedStylesHost` class and exposing it as a public service through Angular's **dependency injection** system, we can provide developers with the tools to dynamically manage and override styles in Angular components. This would be particularly beneficial for **Web Component-based environments**, **cross-framework styling**, and **dynamic theming**, helping Angular remain flexible and adaptable for modern, large-scale applications. | feature,area: core,needs triage | low | Critical |
2,775,903,037 | flutter | [monorepo] Google test doesn't pick up engine change. | ### Type of Request
bug
### Infrastructure Environment
google test
### What is happening?
Take for example https://github.com/flutter/flutter/pull/161265, which adds a new API in dart:ui and calls it on the framework side. The Google test however failed because it didn't pick up the dart:ui change. It seems to me the Google test is not using the engine from the source code.
### Steps to reproduce
A PR that changes dart:ui and calls the new API on the framework side.
### Expected results
The Google test should use the engine built from the source code. | team-infra,monorepo | low | Critical |
2,775,920,758 | next.js | dev --turbo in monorepo can't resolve alias import | ### Link to the code that reproduces this issue
https://github.com/adr-x/next-turbo
### To Reproduce
```
pnpm dev -F web
```
vs
```
pnpm dev-turbo -F web
```
### Current vs. Expected behavior
Expected behavior is for `dev --turbopack` to work like production and resolve the aliased import.
`dev --turbopack`

`dev` and `start`

### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Home
Available memory (MB): 65444
Available CPU cores: 12
Binaries:
Node: 18.20.5
npm: 10.8.2
Yarn: N/A
pnpm: 9.1.2
Relevant Packages:
next: 15.2.0-canary.1 // Latest available version is detected (15.2.0-canary.1).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | Turbopack | low | Minor |
2,775,999,050 | neovim | Weird `scrolloff` behavior when virtual lines are present before line 1 | ### Problem
When there are virtual lines (extmarks) before line 1 of a buffer, and `scrolloff` is nonzero, scrolling line 1 into view via `scrolloff` will cause a large and unexpected scroll jump.
### Steps to reproduce
1. Open a clean neovim `nvim -u NONE`
2. Insert some lines `ihello<ESC>yy9p`
3. Insert an extmark line above line 1 `:lua vim.api.nvim_buf_set_extmark(0, vim.api.nvim_create_namespace("vline"), 0, 0, { virt_lines={{{"Virt Line", "Comment"}}}, virt_lines_above=true })`
4. Add some more virtual lines `9@:`
5. Set scrolloff `:set scrolloff=4`
6. Set line numbering to make it a bit easer to see `:set nu`
7. Scroll down a little `<C-e><C-e>`
8. Now is when you can witness the bug. Press `k` a couple times and notice that when line 1 gets into the scrolloff range, the buffer jumps all the way up.
### Expected behavior
I would expect pressing `k` with `scrolloff=4` set to still only scroll the buffer up by 1 line when the cursor is already positioned at row 4 of the window.
### Nvim version (nvim -v)
NVIM v0.11.0-dev
### Vim (not Nvim) behaves the same?
yes
### Operating system/version
WSL
### Terminal name/version
WSLTTY
### $TERM environment variable
xterm-256color
### Installation
Neovim PPA | bug,marks | low | Critical |
2,776,026,273 | kubernetes | [FG:InPlacePodVerticalScaling] avoid checking the configuration of resource managers to learn their expected behavior | ### What happened?
The VPA logic needs to know whether the resource managers (CPU manager, memory manager) can allocate resources exclusively.
To do so, it peeks into their configuration and second-guesses their expected behavior.
This is unfortunate, and due to the lack of a resource-manager API that can report the same information.
As a result of this approach, there is an actual bug in the current VPA logic: the check at https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kubelet.go#L2856 is wrong, because the static policy name for the CPU manager is `static` (lowercase "s") while for the memory manager it is `Static` (uppercase "s").
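The case mismatch is easy to demonstrate in isolation. Below is a minimal sketch (in Python for brevity; the actual kubelet code is Go, and the helper names here are made up) of why a single case-sensitive comparison cannot cover both managers:

```python
# Policy names as reported in this issue: the two managers spell "static" differently.
CPU_MANAGER_STATIC = "static"
MEMORY_MANAGER_STATIC = "Static"

def is_static_buggy(policy: str) -> bool:
    # A single case-sensitive check matches only one of the two spellings.
    return policy == "static"

def is_static_normalized(policy: str) -> bool:
    # Normalizing case (or comparing against each manager's own constant)
    # covers both.
    return policy.lower() == "static"

assert is_static_buggy(CPU_MANAGER_STATIC)
assert not is_static_buggy(MEMORY_MANAGER_STATIC)   # the reported misbehavior
assert is_static_normalized(MEMORY_MANAGER_STATIC)
```

Normalizing is only a band-aid, though; the proper fix is the resource-manager API mentioned below, so callers never have to compare policy-name strings at all.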
### What did you expect to happen?
VPA should be able to learn about resource manager behavior in a safe and supported way
### How can we reproduce it (as minimally and precisely as possible)?
Configure the memory manager with the `Static` policy and enable VPA; the `canResizePod` logic will behave incorrectly.
### Anything else we need to know?
behavior introduced in 2d8939c4aef8f060d413bb27272ba38cd7171fbe
a new API to expose this information is proposed in https://github.com/kubernetes/kubernetes/pull/128728
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,priority/backlog,sig/node,triage/accepted | low | Critical |
2,776,042,919 | godot | Unable to engage debugger during execution - `Parameter "script_lang" is null.` | ### Tested versions
- Reproducible in 4.3.stable and later, not reproducible in 4.2.2
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 32.0.15.6094) - AMD Ryzen 7 2700X Eight-Core Processor (16 threads)
### Issue description
It appears that when you run a scene in the editor and hit a blocking operation (a large while loop seems to recreate this issue nicely), the editor becomes unable to pause execution, instead returning (on my example script, for instance):
```
E 0:00:01:0489 node_2d.gd:7 @ _ready(): Parameter "script_lang" is null.
<C++ Source> core/debugger/remote_debugger.cpp:420 @ debug()
<Stack Trace> node_2d.gd:7 @ _ready()
```
I'm not certain if this is something that changed on my end recently, as I only noticed it with dev6, but testing reveals that I can go back as early as 4.3 to recreate the issue in the same way, while 4.2.2 properly pauses execution.
### Steps to reproduce
- Add the code below to a Node2D, run the program.
- Attempt to pause the code before the output prints, in either the scene run buttons or the debugger buttons
### Minimal reproduction project (MRP)
The below script is enough to recreate the issue:
```
extends Node2D
func _ready() -> void:
var i : int = 0
while i < 100000000:
i += 1
print("loop complete")
``` | bug,topic:editor,confirmed,regression | low | Critical |
2,776,044,297 | godot | [Windows] Godot console wrapper fails to launch in certain paths | ### Tested versions
4.3.stable
4.4.dev [2582793d408ade0b6ed42f913ae33e7da5fb9184]
### System information
Windows 11
### Issue description
Godot fails to launch the console wrapper from certain folders. It seems specific to `C:\\Users\\` and doesn't happen in some other paths, but it consistently happens with paths that contain spaces.
### Steps to reproduce
Try to launch Godot console in a path under `C:\\Users\\` with an username with a space in it, like `C:\\Users\\Foo Bar\\`
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:editor | low | Major |
2,776,086,670 | PowerToys | 'PowerToys.Awake.exe --use-pt-config' opens a debug message window | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Awake
### Steps to reproduce
Run from a cmd shell:
"\Program Files\PowerToys\PowerToys.Awake.exe" --use-pt-config
This opens a black window with awake debug messages.
This does not happen when running awake with other options. For example, there's no such problem with:
"\Program Files\PowerToys\PowerToys.Awake.exe" --use-parent-pid
### ✔️ Expected Behavior
No debug window with the --use-pt-config, like for the --use-parent-pid option, or with no option at all.
Maybe add a new option enabling debugging messages, if actual debug messages ARE desired?
### ❌ Actual Behavior
_No response_
### Other Software
Windows 11 2024H2 with all the latest updates. [Version 10.0.26100.2605]
| Issue-Bug,Needs-Triage | low | Critical |
2,776,090,666 | yt-dlp | [core] Default --trim-filenames values | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Currently --trim-filenames is disabled by default, which can lead to "file name too long" errors with the default output template, as shown below. I believe the option should be enabled by default (once the linked PR is merged, as the current --trim-filenames is pretty broken) to prevent errors such as this. As for what the default should be, I will copy what I wrote in my initial [PR](https://github.com/yt-dlp/yt-dlp/pull/12023) for this issue:
> Furthermore, I have added (what I believe to be) reasonable default values for `--trim-filenames` based on the platform of the user. Based on this [table](https://en.m.wikipedia.org/wiki/Comparison_of_file_systems#Limits), it seems that for Windows and Mac filesystems (NTFS, FAT32, exFAT, HFS+, APFS) the maximum filename length is 255 characters, while the majority of other filesystems have a maximum filename length of 255 bytes. I also subtracted the length of what I think is the longest possible extension (`.annotations.xml`). I think these defaults will be fine for the majority of users and will prevent filename too long errors.
See also the brief discussion here: https://github.com/yt-dlp/yt-dlp/pull/12023#discussion_r1906957589
> @pukkandan: Changing default needs some further discussion (such as not truncating `id`, adding `compat-option` etc). It'd be best to split it off into a separate issue.
> @7x11x13: IMO failing to download is a lot worse than truncating the videoId in the filename. Perhaps the default output template should have the videoId at the beginning of the name, rather than the end? Anyways, I will split this into another PR.
Related:
- https://github.com/yt-dlp/yt-dlp/issues/1136
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp "https://7x11x13-testing.bandcamp.com/track/-"
[Bandcamp] Extracting URL: https://7x11x13-testing.bandcamp.com/track/-
[Bandcamp] -: Downloading webpage
[Bandcamp] 953563459: Downloading free downloads page
[Bandcamp] 953563459: Downloading mp3-v0 JSON
[Bandcamp] 953563459: Downloading mp3-320 JSON
[Bandcamp] 953563459: Downloading flac JSON
[Bandcamp] 953563459: Downloading aac-hi JSON
[Bandcamp] 953563459: Downloading vorbis JSON
[Bandcamp] 953563459: Downloading alac JSON
[Bandcamp] 953563459: Downloading wav JSON
[Bandcamp] 953563459: Downloading aiff-lossless JSON
[info] 953563459: Downloading 1 format(s): falac
ERROR: unable to open for writing: [Errno 63] File name too long: '7x11x13-testing - ああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああああ [953563459].m4a.part'
```
| enhancement,core-triage | low | Critical |
2,776,109,054 | PowerToys | Theme reversion (Left shift + Alt + PrtScrn) error message | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
Theme reversion (Left shift + Alt + PrtScrn)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
[2025-01-08.txt](https://github.com/user-attachments/files/18351802/2025-01-08.txt)
| Issue-Bug,Needs-Triage | low | Critical |
2,776,127,180 | vscode | Incorrect TerminalShellExecution exit code | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.94.0
- OS Version: macOS Sonoma 14.4
Steps to Reproduce:
1. Listen for `onDidEndTerminalShellExecution` events
2. Run a command that returns some exit code
3. Type the same command and interrupt with ^C (i.e. interrupt without actually running the command)
4. The exit code reported by the `TerminalShellExecution` event for the second execution (expected to be `undefined` since the command didn't actually run) will be the same as the previous command's exit code
For example:
```bash
$ echo test^C
-> exit code undefined
$ echo test
-> exit code 0
$ echo test^C
-> exit code 0
$ echo test^C
-> exit code 0
$ echo not test^C
-> exit code undefined
$ echo test^C
-> exit code undefined
```
From a quick look, this seems to be caused by this code (marked as a hack): https://github.com/microsoft/vscode/blob/3b30f94f4f129e67be9b6c677aac582923da1fab/src/vs/platform/terminal/common/capabilities/commandDetectionCapability.ts#L360-L370 | bug,help wanted,terminal-shell-bash | low | Critical |
2,776,142,000 | go | runtime:cpu1: TestPreemptionAfterSyscall/1ms failures | ```
#!watchflakes
default <- pkg == "runtime:cpu1" && test == "TestPreemptionAfterSyscall/1ms"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8726340078513899489)):
=== RUN TestPreemptionAfterSyscall/1ms
proc_test.go:1082: timeout exceeded: 5.044309084s (5s)
--- FAIL: TestPreemptionAfterSyscall/1ms (5.10s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| help wanted,NeedsInvestigation,compiler/runtime | low | Critical |
2,776,156,210 | go | proposal: spec: permit conversion of []A to []B if A and B have same underlying type modulo struct tags | **Background:** It is not uncommon to need to convert a value of slice type []A to type []B, where A and B have the same representation (e.g. string). Unfortunately, one must allocate a copy.
**Proposal:** We propose to relax the restrictions so that, given `var ( a A; aa []A; b B; bb []B)`, the conversion `([]B)(aa)` is legal so long as ~~both `B(a)` and `A(b)` are legal~~ A and B have the same underlying type, ignoring struct tags. The result would be a slice of the same len, cap, and pointer, but a different type. In other words, the operation creates an alias, it does not allocate an array.
The requirement for the ~~mutual assignability~~ same underlying type is that if aa aliases bb, then assigning to aa[0] and reading from bb[0] effects a conversion from A to B, and vice versa. Therefore both conversions had better be legal. Consequently, the two types must have the same memory layout (among other requirements).
[Edited: I mistakenly started with "mutually convertible" but this is not sufficient for e.g. A=float32 B=float64.]
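To illustrate the semantics: under the proposal, `([]B)(aa)` would alias the same backing array. Today the same effect requires either an allocating copy or `unsafe` (a sketch using two illustrative named types that share the underlying type `float64`; `unsafe.Slice` only approximates the proposal, e.g. it sets cap equal to len):

```go
package main

import (
	"fmt"
	"unsafe"
)

type Celsius float64
type Fahrenheit float64 // same underlying type as Celsius: float64

// copyConvert is what the language forces today: O(n) allocation + copy.
func copyConvert(aa []Celsius) []Fahrenheit {
	bb := make([]Fahrenheit, len(aa))
	for i, v := range aa {
		bb[i] = Fahrenheit(v)
	}
	return bb
}

// aliasConvert approximates the proposed ([]Fahrenheit)(aa): same length,
// same pointer, no allocation. The proposal would make this a plain
// conversion, legal because both element types share the underlying
// type float64 and therefore have identical memory layout.
func aliasConvert(aa []Celsius) []Fahrenheit {
	if len(aa) == 0 {
		return nil
	}
	return unsafe.Slice((*Fahrenheit)(unsafe.Pointer(&aa[0])), len(aa))
}

func main() {
	aa := []Celsius{1, 2, 3}
	bb := aliasConvert(aa)
	bb[0] = 100                     // writes through the shared backing array
	fmt.Println(aa[0])              // 100: bb aliases aa
	fmt.Println(copyConvert(aa)[1]) // 2: the copy does not alias
}
```

The aliasing behavior is exactly why the proposal needs the same-underlying-type restriction: a write through one slice is read back through the other with no representation change.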
Prior art:
- https://github.com/golang/go/issues/3756 - closed because it allowed []X->[]Y if X assignable to Y (which may change repr + allocate)
- https://github.com/golang/go/issues/19778 - similar but requires "same memory layout", which is not a spec concept
- https://github.com/golang/go/issues/29864 - closed by fiat
Loosely related:
- https://github.com/golang/go/issues/64742: generic slice conversion, requires a func(X)Y operator though.
- https://github.com/golang/go/issues/15209: generalize append/copy to permit assignment conversions
@ianlancetaylor @griesemer | LanguageChange,Proposal,LanguageChangeReview,LanguageProposal | medium | Critical |
2,776,179,374 | godot | GDScript completion eats the word the caret is on | ### Tested versions
v4.4.dev7.official [46c8f8c5c]
### System information
windows 11
### Issue description
I can't tell if this was introduced intentionally, but I find it annoying, maybe because of habit.
In recent dev builds, autocompletion now erases the word the caret is on:
|4.4dev7|4.3|
|-|-|
|<video src="https://github.com/user-attachments/assets/50941d50-923c-4bc3-a076-8fadf9d16ada"></video>|<video src="https://github.com/user-attachments/assets/af9d47b7-b65a-4618-8879-fcccbc183e67"></video>|
Is that a bug?
### Steps to reproduce
- place the caret before a word
- begin typing
- select an autocompletion
### Minimal reproduction project (MRP)
any | discussion,topic:editor,usability | low | Critical |
2,776,182,781 | kubernetes | [FG:InPlacePodVerticalScaling] Inconsistency between scheduler & kubelet admission logic | The scheduler uses `max(spec...resources, status...resources)` to determine the resources requested by a pod, but when the Kubelet is making internal fit decisions it just uses the allocated resources.
In most cases, disagreement between these two approaches would be due to a pending upsize, where the spec...resources are higher than the allocated resources (desired > allocated == actual). This case is working as intended: the scheduler should not schedule over a pending resize, but the Kubelet needs to be able to admit resizes when there are 2 competing resizes.
This is only problematic if there is a case where the scheduler thinks there are _more_ available resources than the Kubelet does, which could lead to a pod being rejected with `OutOf{CPU,Memory}`. In other words, if `max(spec...resources, status...resources) < allocated resources`. This can only happen if both of the following are true:
1. There is a pending (or deferred) downsize (spec...resources < allocated)
2. There is an in-progress upsize (actual resources < allocated)
**Repro steps:**
This repro hits the above conditions for CPU requests, and manipulates memory requests & limits to make the condition persistent.
1. Create a pod with memory & cpu requests & limits
2. In a single resize request, shrink the memory limit below usage, and increase the CPU request
- Setting the memory limit below the usage will cause the resize to be stuck in progress, without the memory or CPU changes having been actuated.
- Allocated CPU requests are now higher than the actual CPU requests (shares)
3. In a single resize request, shrink the CPU request back to the original amount, and increase memory requests & limits to more than the available memory (but below the allocatable capacity).
- The resize will be deferred since there is insufficient available memory for the request
- The desired CPU is now below the allocated amount
- For CPU requests, desired == actual < allocated
4. Create a burstable pod with a CPU request, such that the CPU request is `allocatable cpu - sum(desired cpu)`
- The scheduler will see this pod as fitting, but the kubelet will reject it with OutOfCPU
Moving the memory usage > limit check to allocation time (option 3 in [Handling Memory Limit Decreases for In-Place Pod Resize](https://docs.google.com/document/d/1cEFLXKwNOSNLAkzyhoJUgkBW0OiX-9bXB_aJV7OAypw/edit?tab=t.0#heading=h.y6xkh6jc1iat)) will mitigate this by making a persistent in-progress resize (step 2 above) less likely, but it doesn't entirely eliminate the issue.
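A sketch of the two accounting rules described above (hypothetical helpers, not the actual scheduler or kubelet code), showing how they disagree under the conditions from the repro:

```go
package main

import "fmt"

// Simplified per-pod CPU accounting, in millicores.
type pod struct {
	spec      int64 // desired requests (spec...resources)
	actual    int64 // actuated requests (status...resources)
	allocated int64 // kubelet-allocated requests
}

func max64(a, b int64) int64 {
	if a > b {
		return a
	}
	return b
}

// schedulerView mirrors the scheduler: max(spec, status) per pod.
func schedulerView(p pod) int64 { return max64(p.spec, p.actual) }

// kubeletView mirrors the kubelet's internal fit check: allocated only.
func kubeletView(p pod) int64 { return p.allocated }

func main() {
	// The problematic combination: a pending downsize (spec < allocated)
	// plus an in-progress upsize (actual < allocated).
	p := pod{spec: 500, actual: 500, allocated: 1000}
	fmt.Println(schedulerView(p)) // 500: the scheduler thinks 500m are free
	fmt.Println(kubeletView(p))   // 1000: the kubelet still holds 1000m
}
```

When a new pod is sized to fit the scheduler's view (500m free), the kubelet's view (0m free) rejects it with `OutOfCPU`.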
/sig node
/priority important-soon | priority/important-soon,sig/node,triage/accepted | low | Minor |
2,776,193,502 | go | cmd/compile: ICE on slice of array pointer returned by a generic function | ### Go version
go version go1.23.0 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/mcyoung/Library/Caches/go-build'
GOENV='/Users/mcyoung/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/mcyoung/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/mcyoung/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.0/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.0/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/mcyoung/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/Users/mcyoung/code/protocompile/go.mod'
GOWORK='/Users/mcyoung/code/protocompile/go.work'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/5l/2_ns6hbx1gj92vv9pkrl_jdc0000gn/T/go-build656695524=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
The following program crashes the compiler:
```go
package x
func F[T int32]() {
_ = G[*[0]T]()[:]
}
func G[T any]() (v T) {
return
}
var _ = F[int32]
```
I have not been able to reduce this program further: everything, except the body of `G`, is essential to triggering the bug. `F` must be stenciled, `G` must return a generic type, not `*T` or `*[0]T`, and the type constraint on `F[T]` seems to be irrelevant.
### What did you see happen?
ICE.
```
./prog.go:3:6: internal compiler error: 'F[int32]': bad ptr to array in slice go.shape.*uint8
Please file a bug report including a short program that triggers the error.
https://go.dev/issue/new
```
### What did you expect to see?
Not an ICE. | NeedsInvestigation,compiler/runtime,BugReport | low | Critical |
2,776,261,470 | godot | Embedded Game can freeze Editor | ### Tested versions
- Reproducible in Godot v4.4.dev (a65954894) master, since https://github.com/godotengine/godot/pull/99010
### System information
Windows 11 (build 22631) - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 (NVIDIA; 32.0.15.6614) - Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz (12 threads)
### Issue description
When the game is embedded, the editor process is tied to the game process, so whenever the game is frozen the editor is as well.
If the game calls `OS.delay_msec(2000)` the editor cannot be interacted with during that time.
If the game gets stuck in an infinite loop the editor will be completely frozen.
It was known that this happens on startup, but it couldn't really be fixed, see https://github.com/godotengine/godot/pull/99010#issuecomment-2507979072
### Steps to reproduce
1. Make sure "Embed Game on Next Play" is checked in the game editor.
2. Add the following code, or any code that will take a while to process:
```gdscript
func _ready() -> void:
while true:
pass
```
3. Run the game.
4. The Editor is unresponsive.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Major |
2,776,291,817 | next.js | `cacheLife` with `revalidate` < 1 causes error in production only | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/wonderful-heyrovsky-r4d52k
### To Reproduce
1. Visit the reproduction app.
2. In the app preview (dev mode), notice everything is working (`/` page content is "OK").
3. Build the app (e.g. `pnpm run build`), and notice there are no errors.
4. Run the app in production mode (e.g. `pnpm run start`), visit `/`, and see a 500 error:
> Error: Invalid revalidate configuration provided: 0.999 < 1
### Current vs. Expected behavior
Current behavior: an error is thrown in production only.
Expected behavior: in my opinion, the error check should be changed to `< 0`, and should be enforced at least during build time, if not also in dev mode.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.2.0-canary.1 // Latest available version is detected (15.2.0-canary.1).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
dynamicIO
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
_No response_ | dynamicIO | low | Critical |
2,776,311,932 | godot | PointLight2D shadow_filter = SHADOW_FILTER_NONE still applies a filter in Forward+ and Mobile rendering when it shouldn't | ### Tested versions
Doesn't work in 4.3-stable (Didn't work in previous versions of v4 at least as early as 4.2-stable either)
Works as intended in 3.6-stable
### System information
Godot v4.3.stable - openSUSE Tumbleweed 20241216 - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 XT (RADV NAVI23) - AMD Ryzen 7 5700X 8-Core Processor (16 Threads)
### Issue description
In v3.6, 2D lights don't have any shadow acne. In v4.3, there are unwanted artefacts when a light casts a shadow, which, as @arkology has pointed out, is caused by `shadow_filter = SHADOW_FILTER_NONE` still applying a filter.
The issue is present with Forward+ and Mobile renderer, but mostly cleared up in Compatibility.
It shows up more obviously when the game is running.
Here is a picture from the most recent MRP I've uploaded

The shadows are cast from Light 2

The image above is taken from v3.6 ^
This shows the pixel perfect shadow casting with no acne from a Light2D
This is the effect I would like to achieve again

This image is from v4.3 ^
This shows the acne forming a gradient on the edge of the shadows cast from a PointLight2D

Another image from v4.3 ^
This shows the same artefacts (the gradient is a different colour as I've been fiddling to try lower it as much as possible)
Increasing the Shadow Atlas Size to maximum helps a little bit, but does not fix the problem.
I've included images from both MRPs showing the issue in a comment below
### Steps to reproduce
Works in both v4.3-stable and v3.6-stable
1. Set up the project by setting:
> Viewport width/height = 480/270 (Low res is important)
> Window override width/height = 1920/1080
> Stretch Mode = Viewport
> Stretch Aspect = Keep
3. Create a 2D scene
4. Add a ColorRect and set it to a colour darker than white (eg grey, 707070)
5. Add a PointLight2D in v4.x, or Light2D in v3.6, enable shadows
6. Set Texture to this [Image](https://github.com/user-attachments/assets/27dd7763-5318-4233-8278-107a8227f08c)
Or a similarly sized sprite, where white is light and transparent is no light.
7. Add a LightOccluder2D and an occluder polygon of whatever shape.
8. Make sure the Light is overlapping the occluder so there are shadows being cast,
9. Launch the game
Additional steps to get a better showing of the issue:
10. Add an extra PointLight2D, with a script to follow the mouse
11. Add a CanvasModulate with a dark colour
12. Launch the game and you'll see artefacts like the first image in this post
### Minimal reproduction project (MRP)
NEW MRP version for 4.x is here -
[NEWShadowAcneGD4-3MRP.zip](https://github.com/user-attachments/files/18362758/NEWShadowAcneGD4-3MRP.zip)
OLD version:
This contains both the 4.3 and 3.6 versions. The 3.6 version needs to be opened with 3.6-stable and not a 4.x version
[ShadowAcneGitUpload.zip](https://github.com/user-attachments/files/18352893/ShadowAcneGitUpload.zip)
| bug,topic:rendering,needs testing | low | Major |
2,776,319,222 | go | runtime: GODEBUG=profstackdepth=N not usable with //go:debug | Go 1.23 increased the default profiling stack depth from 32 to 128 (https://go.dev/cl/572396). It also introduced `GODEBUG=profstackdepth=32` as a way to revert back to the old default (https://go.dev/cl/581917).
However, that new GODEBUG doesn't appear in the `internal/godebugs` [table](https://cs.opensource.google/go/go/+/master:src/internal/godebugs/table.go;bpv=1), which means it is not eligible for use in `go.mod` `godebug` blocks or `//go:debug` directives (https://go.dev/doc/godebug).
Given that the main purpose of this GODEBUG is for compatibility, it seems like it should be eligible for use there, and perhaps this should be backported to 1.23.
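For reference, the documented opt-out mechanisms (https://go.dev/doc/godebug) that currently reject this setting because `profstackdepth` is absent from the `internal/godebugs` table — shown here only as a sketch of what users would want to write:

```go
// In a source file of the main package:
//go:debug profstackdepth=32
package main
```

or equivalently, a `godebug profstackdepth=32` line in `go.mod`.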
This came up in https://github.com/golang/go/issues/69590#issuecomment-2578362721, where a user would like the old behavior.
One thing that is a bit unclear to me: I think if we add this as an "official" compatibility GODEBUG, then we will tie it to the version in `go.mod`. i.e., modules with `go 1.22.0` will use `profstackdepth=32`, even if built with 1.23. They must set `go 1.23.0` to get the new behavior. That is probably the right thing to do, but might be a bit odd to change in a minor release.
cc @felixge @golang/runtime | NeedsDecision,compiler/runtime,BugReport | low | Critical |
2,776,330,732 | flutter | Do not schedule resources for PRs that haven't sign the CLA | We should be more conservative with our resources and have some minimum bar met before scheduling infra resources. | team-infra | low | Minor |
2,776,343,843 | godot | When editing custom resources in the Inspector Tab, Dictionaries do not save other custom resources | ### Tested versions
Godot v4.3.stable
### System information
Godot v4.3.stable - macOS 14.6.1 - GLES3 (Compatibility) - Apple M1 Pro - Apple M1 Pro (10 Threads)
### Issue description
Pretty much what the title says. I've created a custom resource called World that has a dictionary of World nodes [String, WorldNode]. World node is also a resource. When I'm in the inspector trying to create new world nodes and add them to the dictionary, they disappear as soon as I close the inspector window. I tried this with an array instead, and the new WorldNodes are correctly saved to the array. I'd like to use a dictionary so I don't have to search for the ID of the world node.
I also tested this with other data types and it seems like only dictionaries of custom resources have this problem.
### Steps to reproduce
Create a resource, give it a dictionary data type.
Create a second custom resource. Give it some data fields.
Create an instance of your first custom resource in the hierarchy, and in the inspector, stuff some instances of your second resource in the dictionary.
Open another node or resource in the inspector. Then go back to the custom resource.
Notice that the dictionary didn't save any of the data you entered.
### Minimal reproduction project (MRP)
[dictionarytroubleshootproject.zip](https://github.com/user-attachments/files/18353075/dictionarytroubleshootproject.zip)
| bug,topic:editor,needs testing | low | Major |
2,776,349,074 | vscode | Next Reference and Next Search Result Conflict |
Type: <b>Bug</b>
1. Open any folder where the programming language supports 'find all references'. I have tried python and c++ projects
2. Use the search feature (Ctrl + Shift + F)
3. Observe that pressing F4 cycles through the search results
4. Use the 'find all references' feature on a source code symbol
5. Observe that pressing F4 cycles through the reference results
6. Use the search feature again
7. Observe that pressing F4 reopens the references panel and cycles references instead of search results (the currently active panel)
** Expected Result **
F4 key goes to the next item on the current panel (whether it be references or search results).
** Other Details **
I have tried this on insiders edition with only the minimum extensions required to use the 'find all references' feature.
** Hotkey Mapping **
137: { "key": "f4", "command": "references-view.next",
"when": "reference-list.hasResult && references-view.canNavigate" },
...
1141: { "key": "f4", "command": "search.action.focusNextSearchResult",
"when": "hasSearchResult || inSearchEditor" },
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i9-13900K (32 x 2995)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|95.75GB (65.36GB free)|
|Process Argv|--crash-reporter-id 28a27f36-60c5-4553-bd3f-bc1be676f0c5|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (30)</summary>
Extension|Author (truncated)|Version
---|---|---
ruff|cha|2024.56.0
continue|Con|0.8.66
dart-code|Dar|3.102.0
flutter|Dar|3.102.0
vscode-eslint|dba|3.0.10
githistory|don|0.6.20
gitlab-workflow|Git|5.25.1
vscode-clangd|llv|0.1.33
git-graph|mhu|1.30.0
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
hexeditor|ms-|1.11.1
powershell|ms-|2024.4.0
vs-keybindings|ms-|0.2.1
vsliveshare|ms-|1.0.5948
vscode-yaml|red|1.15.0
rust-analyzer|rus|0.3.2257
swift-lang|ssw|1.11.4
svelte-vscode|sve|109.5.1
even-better-toml|tam|0.21.2
cmake|twx|0.0.17
vscode-lldb|vad|1.11.1
vscode-svg-previewer|vit|0.7.0
gitblame|wad|11.1.1
change-case|wma|1.0.0
clang-format|xav|1.9.0
(5 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | references-viewlet | low | Critical |
2,776,356,847 | react | Bug: state update from a rAF in useLayoutEffect not batched when using createRoot | _Recreating #28457, which was closed due to staleness._
Maybe a weird edge case, but this caused an unexpected visual change in our app when migrating a root from `ReactDOM.render`.
React version: 18.2.0
## Steps To Reproduce
1. Define a component that has some state, and a `useLayoutEffect` that modifies the state in a `requestAnimationFrame` callback. like:
```jsx
function App() {
const [message, setMessage] = React.useState("Loading...");
React.useLayoutEffect(() => {
requestAnimationFrame(() => {
setMessage("Ready");
});
});
return <p>{message}</p>;
}
```
2. Mount the component using `ReactDOM.render`, and you will never observe the component with its initial state ("Loading..."). Only the updated state ("Ready").
3. Mount the component with `createRoot` and `root.render`, and you will observe the component render twice. Once with the initial state ("Loading..."), then with the updated state ("Ready").
Link to code example: https://codepen.io/mxmul/pen/abMexLe
[react-batching.webm](https://github.com/facebook/react/assets/3022244/5a1e3238-5b30-4b74-b796-01024dae07b5)
## The current behavior
Two renders: one with the initial state, and a second with the updated state.
## The expected behavior
A single render with the updated state. This seems to be the behavior before React 18.
| Status: Unconfirmed | high | Critical |
2,776,405,854 | transformers | Transformers can create unconventional python module names when loading certain repositories | ### System Info
- `transformers` version: 4.41.1
- Platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
- Python version: 3.10.15
- Huggingface_hub version: 0.23.5
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@Rocketknight1 (maybe?)
### Information
Python module names conventionally cannot:
* Start with anything but a letter or underscore
* Contain hyphens
Transformers can create and load Python modules that break both of these conventions. This can cause unexpected behavior in code that uses the modules transformers creates, such as when creating, saving, and loading PyTorch traces from disk.
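For illustration, Python's own `str.isidentifier` check rejects every component of the dotted module path that transformers builds for a repo like this one (component values taken from the class name shown later in the reproduction):

```python
# Components of the dotted module path created for the remote-code repo:
# an org name with a hyphen, a repo name with hyphens and digits, and a
# commit hash that starts with a digit.
parts = [
    "nomic-ai",
    "nomic-bert-2048",
    "40b98394640e630d5276807046089b233113aa87",
]

for part in parts:
    # None of these are valid Python identifiers, so a dotted module path
    # containing them cannot be parsed back by tools that expect one.
    print(part, part.isidentifier())
```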
### Tasks
Load a model from huggingface and trace it.
### Reproduction
I try to load, trace, save to disk, and reload the model from this repo: https://huggingface.co/nomic-ai/nomic-bert-2048
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Define mean pooling function
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Create a wrapper class for tracing
class TransformerWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
        pooled = mean_pooling(outputs, attention_mask)
        return F.normalize(pooled, p=2, dim=1)

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('nomic-ai/nomic-embed-text-v1')
tokenizer.model_max_length = 128
base_model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True)
base_model.eval()

# Create wrapped model
wrapped_model = TransformerWrapper(base_model)

# Prepare example input for tracing
example_sentences = ['example sentence']
encoded_input = tokenizer(
    example_sentences,
    padding="max_length",
    truncation=True,
    return_tensors='pt'
)

with torch.no_grad():
    output = wrapped_model(encoded_input["input_ids"], encoded_input["attention_mask"])

# Trace the model
with torch.no_grad():
    traced_model = torch.jit.trace(
        wrapped_model,
        (
            encoded_input['input_ids'],
            encoded_input['attention_mask']
        )
    )

print(type(base_model))
torch.jit.save(traced_model, "my_model.pt")
torch.jit.load("my_model.pt")  # this will fail
```
The model is loaded into an unconventionally named Python module:
```
>>> print(type(base_model))
<class 'transformers_modules.nomic-ai.nomic-bert-2048.40b98394640e630d5276807046089b233113aa87.modeling_hf_nomic_bert.NomicBertModel'>
```
The module name is serialized inside the torch trace. When the trace is loaded again, it fails to parse because the module name of the class does not follow Python conventions:
```
return torch.jit.load(model_path)
cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files, _restore_shapes) # type: ignore[call-arg]
RuntimeError: expected newline but found 'ident' here:
Serialized File "code/__torch__.py", line 6
training : bool
_is_full_backward_hook : Optional[bool]
model : __torch__.transformers_modules.nomic-ai.nomic-bert-2048.40b98394640e630d5276807046089b233113aa87.modeling_hf_nomic_bert.NomicBertModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
def forward(self: __torch__.TransformerWrapper,
input_ids: Tensor,
```
### Expected behavior
The module names created by transformers should be sanitized to follow Python conventions. I was able to solve this problem with a simple modification:
https://github.com/kory/transformers/commit/b3fde4fff92f83fc3322c05cada94dae90842df8
I am unsure whether this is the best fix, or whether it would be considered safe for the package as a whole, but it does fix the tracing issue I'm hitting:
```
print(type(base_model))
<class 'transformers_modules.nomic_ai.nomic_bert_2048._40b98394640e630d5276807046089b233113aa87.modeling_hf_nomic_bert.NomicBertModel'>
```
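A sanitizer consistent with the module name shown above (hyphens replaced by underscores, and a leading underscore added when a component starts with a digit) might look like the following sketch; `sanitize_module_component` is a hypothetical name, and this mirrors the observed output rather than the exact linked commit:

```python
import re

def sanitize_module_component(name: str) -> str:
    """Make one dotted-path component a valid Python identifier."""
    # Replace every character that is invalid in an identifier with "_".
    name = re.sub(r"\W", "_", name)
    # Identifiers cannot start with a digit; prefix with "_" as in the
    # fixed output above.
    if name and name[0].isdigit():
        name = "_" + name
    return name

print(sanitize_module_component("nomic-ai"))         # nomic_ai
print(sanitize_module_component("nomic-bert-2048"))  # nomic_bert_2048
```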
| bug | low | Critical |
2,776,413,339 | react-native | TextInput crashes when any text is entered while running as iOS app on apple silicon mac ("My Mac (Designed for iPhone)" target) | ### Description
When running iOS apps on apple silicon macs using the "My Mac (Designed for iPhone)" target, apps crash once you enter any text into a `TextInput` component.
Running the same code on an actual iOS devices works but not when running the app on Mac.
### Steps to reproduce
```bash
npm i
npx pod-install
```
Open the project in Xcode and select "My Mac (Designed for iPhone)" as the destination. You also need to select a signing profile to make it run of course.
Start metro:
```bash
npm start
```
Once the app launches, click on the `TextInput` and press any key. You will see that the app crashes.
First, when you only focus the input without entering any text, the following will log many times:
```
Failed to get or decode unavailable reasons
Can't find or decode reasons
```
After you actually enter text it will crash.
### React Native Version
0.76.5
### Affected Platforms
Runtime - iOS, Runtime - Desktop
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
CPU: (10) arm64 Apple M1 Pro
Memory: 1.77 GB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 21.1.0
path: /usr/local/bin/node
Yarn:
version: 1.22.22
path: /usr/local/bin/yarn
npm:
version: 10.9.0
path: /usr/local/bin/npm
Watchman:
version: 2024.11.11.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /Users/jobpaardekooper/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2023.2 AI-232.10300.40.2321.11567975
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.1
path: /usr/bin/javac
Ruby:
version: 3.2.2
path: /Users/jobpaardekooper/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
An uncaught exception was raised
*** -[NSXPCEncoder _checkObject:]: This coder only encodes objects that adopt NSSecureCoding (object is of class 'RCTWeakEventEmitterWrapper').
(
0 CoreFoundation 0x00000001961bc300 __exceptionPreprocess + 176
1 libobjc.A.dylib 0x0000000195ca2cd8 objc_exception_throw + 88
2 Foundation 0x00000001972c9404 -[NSXPCEncoder _checkObject:] + 308
3 Foundation 0x00000001972c90d8 -[NSXPCEncoder _encodeUnkeyedObject:] + 40
4 Foundation 0x00000001972da128 -[NSXPCEncoder _encodeArrayOfObjects:forKey:] + 184
5 Foundation 0x00000001972f46fc -[NSDictionary(NSDictionary) encodeWithCoder:] + 568
6 Foundation 0x00000001972c9710 -[NSXPCEncoder _encodeObject:] + 488
7 Foundation 0x0000000197396210 -[NSAttributedString encodeWithCoder:] + 224
8 Foundation 0x00000001972c9710 -[NSXPCEncoder _encodeObject:] + 488
9 RemoteTextInput 0x00000001b5f0eda0 -[RTIDocumentState encodeWithCoder:] + 592
10 Foundation 0x00000001972c9710 -[NSXPCEncoder _encodeObject:] + 488
11 Foundation 0x00000001972ce400 _NSXPCSerializationAddInvocationWithOnlyObjectArgumentsArray + 116
12 Foundation 0x00000001972ce2b8 -[NSXPCEncoder _encodeInvocationObjectArgumentsOnly:count:typeString:selector:isReply:into:] + 212
13 Foundation 0x00000001972c7bd4 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] + 1208
14 Foundation 0x00000001972ce1a8 -[NSXPCConnection _sendSelector:withProxy:arg1:arg2:] + 128
15 Foundation 0x00000001972ce0d0 _NSXPCDistantObjectSimpleMessageSend2 + 68
16 RemoteTextInput 0x00000001b5f13854 __52-[RTIInputSystemClient _updateTextForSessionWithID:]_block_invoke + 72
17 RemoteTextInput 0x00000001b5f131b8 __58-[RTIInputSystemClient enumerateServices:force:withBlock:]_block_invoke.139 + 60
18 CoreFoundation 0x0000000196179aa4 __NSSET_IS_CALLING_OUT_TO_A_BLOCK__ + 24
19 CoreFoundation 0x000000019618c990 -[__NSSetM enumerateObjectsWithOptions:usingBlock:] + 276
20 RemoteTextInput 0x00000001b5f13088 -[RTIInputSystemClient enumerateServices:force:withBlock:] + 288
21 RemoteTextInput 0x00000001b5f137d8 -[RTIInputSystemClient _updateTextForSessionWithID:] + 244
22 RemoteTextInput 0x00000001b5f14f5c -[RTIInputSystemClient remoteTextInputSessionWithID:documentDidChange:] + 156
23 UIKitCore 0x00000001c884f99c __39-[UIKBRTIPartner documentStateChanged:]_block_invoke + 136
24 UIKitCore 0x00000001c88543d0 __48-[UIKBRTIPartner _updateRTIStateWithCompletion:]_block_invoke + 288
25 UIKitCore 0x00000001c885c5ec -[UIKBRTIPartner _queryUIKitDocumentRequest:completion:] + 4336
26 UIKitCore 0x00000001c885a208 -[UIKBRTIPartner _queryDocumentRequest:completion:] + 136
27 UIKitCore 0x00000001c885427c -[UIKBRTIPartner _updateRTIStateWithCompletion:] + 312
28 UIKitCore 0x00000001c8852ee4 -[UIKBRTIPartner updateStateWithCompletion:updateTraits:] + 144
29 UIKitCore 0x00000001c884f900 -[UIKBRTIPartner documentStateChanged:] + 176
30 UIKitCore 0x00000001c8143d44 -[_UIKeyboardStateManager setDocumentState:] + 208
31 UIKitCore 0x00000001c815a5d8 -[_UIKeyboardStateManager updateKeyboardStateForInsertion:] + 104
32 UIKitCore 0x00000001c850df94 -[UIKBInputDelegateManager insertText:updateInputSource:] + 464
33 UIKit 0x0000000272d743c4 -[UIKBInputDelegateManagerAccessibility insertText:updateInputSource:] + 112
34 UIKitCore 0x00000001c8508288 -[UIKBInputDelegateManager insertText:] + 92
35 UIKitCore 0x00000001c815150c -[_UIKeyboardStateManager performKeyboardOutput:checkingDelegate:forwardToRemoteInputSource:] + 3732
36 UIKit 0x0000000272d657c0 -[_UIKeyboardStateManagerAccessibility performKeyboardOutput:checkingDelegate:forwardToRemoteInputSource:] + 140
37 UIKitCore 0x00000001c8859e80 -[UIKBRTIPartner _performKeyboardOutputOperations:] + 608
38 UIKit 0x0000000272d68e6c -[UIKBRTIPartnerAccessibility _performKeyboardOutputOperations:] + 100
39 UIKitCore 0x00000001c8858430 -[UIKBRTIPartner _queued_performTextOperations:resultHandler:] + 780
40 libdispatch.dylib 0x0000000103258b64 _dispatch_call_block_and_release + 32
41 libdispatch.dylib 0x000000010325a8a4 _dispatch_client_callout + 20
42 libdispatch.dylib 0x000000010326dd38 _dispatch_main_queue_drain + 1092
43 libdispatch.dylib 0x000000010326d8e4 _dispatch_main_queue_callback_4CF + 44
44 CoreFoundation 0x0000000196188e60 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
45 CoreFoundation 0x0000000196148a4c __CFRunLoopRun + 1996
46 CoreFoundation 0x0000000196147bc4 CFRunLoopRunSpecific + 588
47 HIToolbox 0x00000001a15b7f64 RunCurrentEventLoopInMode + 292
48 HIToolbox 0x00000001a15bdd54 ReceiveNextEventCommon + 636
49 HIToolbox 0x00000001a15bdeb8 _BlockUntilNextEventMatchingListInModeWithFilter + 76
50 AppKit 0x0000000199c73a08 _DPSNextEvent + 660
51 AppKit 0x000000019a5b3e0c -[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 688
52 AppKit 0x0000000199c66ae0 -[NSApplication run] + 480
53 AppKit 0x0000000199c3d364 NSApplicationMain + 888
54 AppKit 0x0000000199e8b870 +[NSWindow _savedFrameFromString:] + 0
55 UIKitMacHelper 0x00000001b097ab38 UINSApplicationMain + 972
56 UIKitCore 0x00000001c7a473a8 UIApplicationMain + 148
57 ReproducerApp.debug.dylib 0x00000001062fc288 __debug_main_executable_dylib_entry_point + 96
58 dyld 0x0000000195ce0274 start + 2840
)
```
### Reproducer
https://github.com/jobpaardekooper/rn-textinput-crash
### Screenshots and Videos
_No response_ | Platform: iOS,Component: TextInput,Needs: Triage :mag: | low | Critical |
2,776,417,536 | ui | [bug]: when sidebar is collapsible icon view, SidebarGroupLabel flickers | ### Describe the bug
When the Sidebar component is set to `collapsible="icon"`, the group label flickers.
```tsx
<Sidebar collapsible="icon">
  <SidebarContent>
    <SidebarGroup>
      <SidebarGroupLabel>On Point Mobile Bartending</SidebarGroupLabel>
      <SidebarGroupContent></SidebarGroupContent>
    </SidebarGroup>
  </SidebarContent>
  <SidebarFooter>
    <UserButton showName />
  </SidebarFooter>
</Sidebar>
```
### Affected component/components
Sidebar
### How to reproduce
1. add a sidebar
2. set collapsible to icon
3. add a SidebarGroup with a SidebarGroupLabel
Notice the label flickers
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Nextjs 15
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,776,439,355 | vscode | editor tab decoration/hover info is not discoverable for screen reader users | On hover of an editor tab, one can learn information about the editor, for example: `~/Repos/vscode/src/vs/workbench/contrib/chat/browser/chatEditing/chatEditingActions.ts • 3 problems in this file • Modified`
It appears this info is included on the tab's aria-label but is never read by the screen reader as the editor gets focus.
<img width="911" alt="Image" src="https://github.com/user-attachments/assets/cd333c0e-1ce8-45b9-bf98-6a72a1916a4b" />
I believe including this in the editor's aria label would be annoying and too verbose. We should make this information readable/triggerable. I'm wondering about a new `window.title` variable as I know screen reader users focus the window title for other info.
Thoughts @jooyoungseo and @rperez030?
I realized this because for files that have copilot edits, we include the following: `~/Repos/vscode-extension-samples/terminal-sample/src/extension.ts • Pending changes from chat • 6 problems in this file` but it's not screen reader discoverable at the moment.
cc @joyceerhl, @jrieken, @isidorn
| feature-request,accessibility | low | Minor |
2,776,449,265 | godot | Race Condition in NavigationServer3D | ### Tested versions
- Reproducible in: 4.3-stable, 4.4-dev1.
- Not reproducible in: 4.2-stable, 4.4-dev2 and higher.
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Ti (NVIDIA; 32.0.15.6636) - AMD Ryzen 7 7800X3D 8-Core Processor (16 Threads)
### Issue description
`NavigationServer3D` contains a data race in all methods that query the navigation map. The result is a complete freeze of the **main** thread. Might be related to [this](https://github.com/godotengine/godot/pull/90804).
Seems to be fixed in 4.4-dev2. Maybe we want to port it to 4.3?
`NavigationServer3D.set_active(false)` seems to fix the issue as well.
### Steps to reproduce
Just run the minimal project attached, it will freeze instantly.
But the overall idea is to run, for example, `NavigationServer3D.map_get_closest_point` with any params enough times to trigger the race condition **on a separate thread**.
### Minimal reproduction project (MRP)
Here is the main script. You can attach it to a Node3D scene with some dummy navigation region. Minimal project [attached](https://github.com/user-attachments/files/18353618/test.zip).
```gdscript
extends Node3D

var _task_id: int
var _map: RID
var _thread := Thread.new()


func _ready() -> void:
    _map = await NavigationServer3D.map_changed
    _thread.start(_test)


func _test() -> void:
    while true:
        NavigationServer3D.map_get_closest_point(_map, Vector3.ZERO)


# Join the threads just because it's correct, but it doesn't make any difference.
func _exit_tree() -> void:
    _thread.wait_to_finish()
``` | bug,topic:core,topic:navigation | low | Major |
2,776,450,733 | rust | compiletest: error emitter pretty svg tests don't normalize color differences between Windows and Unixes | 

This is the svg snapshots of the same test `test/ui/error-emitter/multiline-removal-suggestion.rs` (the former is the reference Unix-blessed snapshot, the latter is generated on Windows with `//@ only-linux` removed), but I believe this fails on Windows because the exact colors chosen are different. I believe this is because there are certain cmd.exe default bright colors which are very hard to read, so we actually pick different colors on purpose.
cc @estebank for FYI: I reasoned about the failure in #134664 incorrectly, I found out today after revisiting #132752, which is a *different* bug. | A-testsuite,O-windows,A-diagnostics,T-compiler,T-bootstrap,C-bug,D-diagnostic-infra,A-compiletest | low | Critical |
2,776,454,857 | go | internal/trace: TestTraceCPUProfile/Default failures | ```
#!watchflakes
default <- pkg == "internal/trace" && test == "TestTraceCPUProfile/Default"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8726805723477811937)):
=== RUN TestTraceCPUProfile/Default
reader_test.go:112: unexpected error while reading the trace: inconsistent status for proc 1: old Syscall vs. new Running
trace_test.go:627: found bad trace; dumping to test log...
trace_test.go:638: Trace Go1.23
EventBatch gen=1 m=28747 time=2687627497990 size=65459
ProcStart dt=365 p=2 p_seq=1
GoStart dt=223 g=1 g_seq=1
HeapAlloc dt=839 heapalloc_value=4194304
GoStop dt=301 reason_string=16 stack=8
ProcStop dt=54
...
String id=138
data="runtime.traceStartReadCPU.func1"
String id=139
data="/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/tracecpu.go"
String id=140
data="runtime.traceLocker.ProcStart"
String id=141
data="runtime.acquirep"
--- FAIL: TestTraceCPUProfile/Default (18.13s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,compiler/runtime | low | Critical |
2,776,457,381 | pytorch | inductor slow kernel choice for max(x) if x is not contiguous | ### 🐛 Describe the bug
I noticed that if `x` is not contiguous, torchinductor generates unexpectedly slow cuda kernels for `torch.max(x)`. Small repro:
```
import torch
import fire
from torch._inductor.utils import do_bench_using_profiling

torch.manual_seed(0)

def get_max(x):
    x = torch.max(x)
    return x

def run(is_contiguous: bool = True):
    x = torch.randn(4096, 8192, dtype=torch.bfloat16, device="cuda")
    if not is_contiguous:
        x = x.t().contiguous().t()

    get_max_c = torch.compile(get_max)

    # warmup
    y = get_max_c(x)

    # perf
    duration_microseconds = do_bench_using_profiling(lambda: get_max_c(x))
    print('duration in microseconds', duration_microseconds)

if __name__ == '__main__':
    fire.Fire(run)
```
Running this script with `is_contiguous=True` results in the expected (to me) pattern of a two-stage reduction. Running this script with `is_contiguous=False` results in a three-stage reduction, which seems to be significantly slower than the two-stage variant: ~8x slower on my machine for this example input.
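For clarity on what "not contiguous" means in this repro: after `x.t().contiguous().t()` the tensor keeps shape `(4096, 8192)` but its strides become `(1, 4096)`, which no longer describe a dense row-major layout. A minimal pure-Python sketch of that check (simplified relative to PyTorch's actual rule, which special-cases more layouts):

```python
def is_row_major_contiguous(shape, strides):
    """Return True if (shape, strides) describe a dense row-major layout.

    Simplified sketch, not torch's exact implementation.
    """
    expected = 1
    for size, stride in zip(reversed(shape), reversed(strides)):
        if size == 1:
            continue  # size-1 dims place no constraint on the stride
        if stride != expected:
            return False
        expected *= size
    return True

# torch.randn(4096, 8192) has row-major strides (8192, 1)
print(is_row_major_contiguous((4096, 8192), (8192, 1)))  # True
# x.t().contiguous().t() keeps shape (4096, 8192) but strides (1, 4096)
print(is_row_major_contiguous((4096, 8192), (1, 4096)))  # False
```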
Example script calls with logs:
```
> TORCH_LOGS_FORMAT="short" TORCH_LOGS="output_code" python ~/local/tmp/20250108_max_repro.py --is_contiguous True 2>&1 | with-proxy gh gist create
- Creating gist...
✓ Created secret gist
https://gist.github.com/vkuzo/4798af9dbd1a13ff66d9586312c04d03
> TORCH_LOGS_FORMAT="short" TORCH_LOGS="output_code" python ~/local/tmp/20250108_max_repro.py --is_contiguous False 2>&1 | with-proxy gh gist create
- Creating gist...
✓ Created secret gist
https://gist.github.com/vkuzo/6b9d3e1397ff808b4897d75a59b7eaab
```
For context, I noticed this during a refactor of torchao float8 code (https://github.com/pytorch/ao/pull/1481). I can work around it by manually passing contiguous tensors to `max(tensor)` in the modeling code, but I'm sharing this as it was unexpected.
### Versions
PyTorch commit: 768d73f6929be2a6eb81fe7424416dceb4a4aca9 (main branch)
hardware: NVIDIA H100
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng | triaged,oncall: pt2,module: inductor | low | Critical |
2,776,471,223 | storybook | [Bug]: Matching characters in search results disappear when using High Contrast Mode (hcm) | ### Describe the bug
When in Windows High Contrast Mode, matching characters in the search results disappear into the background. 
### Reproduction link
https://github.com/TGiles/storybook-test
### Reproduction steps
1. Given a new Storybook instance that was created via `npx storybook@next init`, [see my storybook-test repo for this new instance](https://github.com/TGiles/storybook-test) and running either Windows 10 or Windows 11
2. Enable High Contrast Mode
3. Serve new Storybook instance locally
4. Use the search bar and type in the character "d"
Expected Results:
- The "d" in Docs is highlighted or marked in some way that indicates a match to the search query
Actual Results:
- The "d" in Docs is set to the background/canvas color and becomes invisible.
### System
```bash
Storybook Environment Info:
System:
OS: Windows 10 10.0.19045
CPU: (24) x64 AMD Ryzen 9 9900X 12-Core Processor
Binaries:
Node: 22.13.0 - C:\nvm4w\nodejs\node.EXE
Yarn: 1.22.22 - C:\nvm4w\nodejs\yarn.CMD
npm: 10.9.2 - C:\nvm4w\nodejs\npm.CMD <----- active
Browsers:
Chrome: 131.0.6778.206
Edge: Chromium (127.0.2651.98)
npmPackages:
@storybook/addon-essentials: ^8.5.0-beta.8 => 8.5.0-beta.8
@storybook/blocks: ^8.5.0-beta.8 => 8.5.0-beta.8
@storybook/test: ^8.5.0-beta.8 => 8.5.0-beta.8
@storybook/web-components: ^8.5.0-beta.8 => 8.5.0-beta.8
@storybook/web-components-vite: ^8.5.0-beta.8 => 8.5.0-beta.8
chromatic: ^11.22.1 => 11.22.1
storybook: ^8.5.0-beta.8 => 8.5.0-beta.8
```
### Additional context
_No response_ | bug,ui,accessibility | low | Critical |
2,776,478,935 | rust | "error occurred: unknown target" when trying to add a new target | When forking the rust-lang/rust repo and attempting to add support for a new target triple, `x` fails to recognise the new target when trying to build the cross-compiling rustc.
This happens even when exactly following the directions laid out by "Adding a new target" in the rustc dev guide, so either `x` has an issue building the new rustc, or the documentation needs updating!
I've added the target triple to the supported-targets macro, included it in the stage0 missing-targets array, and (not entirely sure if this is necessary, but I did it anyway) supplied a JSON file with a rough specification of the target platform.
The exact error given by `x` (presumably coming from the rustc used for building) is: `error occurred: unknown target 'my-target-triple'` | T-bootstrap,C-bug,A-contributor-roadblock | low | Critical |
2,776,483,870 | storybook | [Bug]: Difficult to determine if text is a link or not in High Contrast Mode (hcm) | ### Describe the bug
When in High Contrast Mode on Windows 10 or 11, it is difficult to determine if certain text is an interactive link or not. For example, hovering over the settings cog will change the cursor to a pointer and change the background color, while focusing the settings cog will display an outline around the button.
When hovering or keyboard-navigating through the docs, components, and stories in the sidebar, the only indication that these items are interactive is the cursor changing to a pointer.

### Reproduction link
https://github.com/TGiles/storybook-test
### Reproduction steps
1. Given a newly created Storybook instance and a machine running Windows 10 or Windows 11
2. Enable High Contrast Mode
3. Navigate through the sidebar (either hovering or keyboard navigation)
Expected result:
- Links have an indicator, other than the cursor turning into a pointer, that distinguishes the text as a link.
Actual result:
- Links only have the cursor changing to a pointer, making it difficult to determine whether the item is a button or a link that can be interacted with.
### System
```bash
Storybook Environment Info:
System:
OS: Windows 10 10.0.19045
CPU: (24) x64 AMD Ryzen 9 9900X 12-Core Processor
Binaries:
Node: 22.13.0 - C:\nvm4w\nodejs\node.EXE
Yarn: 1.22.22 - C:\nvm4w\nodejs\yarn.CMD
npm: 10.9.2 - C:\nvm4w\nodejs\npm.CMD <----- active
Browsers:
Chrome: 131.0.6778.206
Edge: Chromium (127.0.2651.98)
npmPackages:
@storybook/addon-essentials: ^8.5.0-beta.8 => 8.5.0-beta.8
@storybook/blocks: ^8.5.0-beta.8 => 8.5.0-beta.8
@storybook/test: ^8.5.0-beta.8 => 8.5.0-beta.8
@storybook/web-components: ^8.5.0-beta.8 => 8.5.0-beta.8
@storybook/web-components-vite: ^8.5.0-beta.8 => 8.5.0-beta.8
chromatic: ^11.22.1 => 11.22.1
storybook: ^8.5.0-beta.8 => 8.5.0-beta.8
```
### Additional context
Having the cursor: pointer is helpful, but having some form of underline on links is extremely beneficial in HCM. There are a few ways to handle this:
- Make underlines appear on hover and focus. This strategy is what [Wikipedia uses for their links](https://github.com/TGiles/storybook-test/blob/main/links_underline_on_hover_wikipedia.png). 
- Always show underlines but make them disappear on hover and focus. MDN uses this for links in the [main content of articles](https://github.com/TGiles/storybook-test/blob/main/links_underline_disappears_on_hover_mdn.png). 
- Always show underlines regardless of hover and focus. This is the default user agent styling in Firefox, seen in this [demo on MDN](https://github.com/TGiles/storybook-test/blob/main/links_underline_always_present_mdn.png) 
| bug,ui,accessibility | low | Critical |
2,776,487,331 | godot | Spurious Warning on Menu Accelerators With Mask Modifiers | ### Tested versions
Reproducible in 4.3dev
### System information
N/A
### Issue description
Adding accelerators the documented way creates spurious warnings when a modifier mask is used.
NOTE: I already checked; it's not the 2nd parameter that's the problem, since the 1st line here does not produce a warning.
```gdscript
popup.add_item("Rail Line", -1, KEY_T)
popup.add_item("Train Station", -1, KEY_MASK_CTRL | KEY_MASK_SHIFT | KEY_T)
popup.add_item("Freight Depot", -1, KEY_MASK_ALT | KEY_T)
```
The errors for the 2nd and 3rd items are:
```
W 0:00:02:0329 Integer used when an enum value is expected. If this is intended cast the integer to the enum type.
<C++ Error> INT_AS_ENUM_WITHOUT_CAST
<C++ Source> new_tools_container.tscn::GDScript_erqco:10
W 0:00:02:0329 Cannot pass 301989972 as Enum "Key": no enum member has matching value.
<C++ Error> INT_AS_ENUM_WITHOUT_MATCH
<C++ Source> new_tools_container.tscn::GDScript_erqco:10
W 0:00:02:0329 Integer used when an enum value is expected. If this is intended cast the integer to the enum type.
<C++ Error> INT_AS_ENUM_WITHOUT_CAST
<C++ Source> new_tools_container.tscn::GDScript_erqco:11
W 0:00:02:0329 Cannot pass 67108948 as Enum "Key": no enum member has matching value.
<C++ Error> INT_AS_ENUM_WITHOUT_MATCH
<C++ Source> new_tools_container.tscn::GDScript_erqco:11
```
I don't expect this since this is what the docs say to do:

### Steps to reproduce
Given above small script ^^^
### Minimal reproduction project (MRP)
N/A its in the docs | bug,topic:gdscript | low | Critical |
2,776,492,280 | flutter | webview_flutter recognizes gestures on embedded maps through overlaying widgets | ### What package does this bug report belong to?
webview_flutter
### What target platforms are you seeing this bug on?
iOS
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
collection:
dependency: transitive
description:
name: collection
sha256: a1ace0a119f20aabc852d165077c036cd864315bd99b7eaa10a60100341941bf
url: "https://pub.dev"
source: hosted
version: "1.19.0"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_inappwebview:
dependency: "direct main"
description:
name: flutter_inappwebview
sha256: "80092d13d3e29b6227e25b67973c67c7210bd5e35c4b747ca908e31eb71a46d5"
url: "https://pub.dev"
source: hosted
version: "6.1.5"
flutter_inappwebview_android:
dependency: transitive
description:
name: flutter_inappwebview_android
sha256: "62557c15a5c2db5d195cb3892aab74fcaec266d7b86d59a6f0027abd672cddba"
url: "https://pub.dev"
source: hosted
version: "1.1.3"
flutter_inappwebview_internal_annotations:
dependency: transitive
description:
name: flutter_inappwebview_internal_annotations
sha256: "787171d43f8af67864740b6f04166c13190aa74a1468a1f1f1e9ee5b90c359cd"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
flutter_inappwebview_ios:
dependency: transitive
description:
name: flutter_inappwebview_ios
sha256: "5818cf9b26cf0cbb0f62ff50772217d41ea8d3d9cc00279c45f8aabaa1b4025d"
url: "https://pub.dev"
source: hosted
version: "1.1.2"
flutter_inappwebview_macos:
dependency: transitive
description:
name: flutter_inappwebview_macos
sha256: c1fbb86af1a3738e3541364d7d1866315ffb0468a1a77e34198c9be571287da1
url: "https://pub.dev"
source: hosted
version: "1.1.2"
flutter_inappwebview_platform_interface:
dependency: transitive
description:
name: flutter_inappwebview_platform_interface
sha256: cf5323e194096b6ede7a1ca808c3e0a078e4b33cc3f6338977d75b4024ba2500
url: "https://pub.dev"
source: hosted
version: "1.3.0+1"
flutter_inappwebview_web:
dependency: transitive
description:
name: flutter_inappwebview_web
sha256: "55f89c83b0a0d3b7893306b3bb545ba4770a4df018204917148ebb42dc14a598"
url: "https://pub.dev"
source: hosted
version: "1.1.2"
flutter_inappwebview_windows:
dependency: transitive
description:
name: flutter_inappwebview_windows
sha256: "8b4d3a46078a2cdc636c4a3d10d10f2a16882f6be607962dbfff8874d1642055"
url: "https://pub.dev"
source: hosted
version: "0.6.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "5398f14efa795ffb7a33e9b6a08798b26a180edac4ad7db3f231e40f82ce11e1"
url: "https://pub.dev"
source: hosted
version: "5.0.0"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "7bb2830ebd849694d1ec25bf1f44582d6ac531a57a365a803a6034ff751d2d06"
url: "https://pub.dev"
source: hosted
version: "10.0.7"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "9491a714cca3667b60b5c420da8217e6de0d1ba7a5ec322fab01758f6998f379"
url: "https://pub.dev"
source: hosted
version: "3.0.8"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: c35bb79562d980e9a453fc715854e1ed39e24e7d0297a880ef54e17f9874a9d7
url: "https://pub.dev"
source: hosted
version: "5.1.1"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "9f47fd3630d76be3ab26f0ee06d213679aa425996925ff3feffdec504931c377"
url: "https://pub.dev"
source: hosted
version: "1.12.0"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "688af5ed3402a4bde5b3a6c15fd768dbf2621a614950b17f04626c431ab3c4c3"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "664d3a9a64782fcdeb83ce9c6b39e78fd2971d4e37827b9b06c3aa1edc5e760c"
url: "https://pub.dev"
source: hosted
version: "0.7.3"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: f6be3ed8bd01289b34d679c2b62226f63c0e69f9fd2e50a6b3c1c729a961041b
url: "https://pub.dev"
source: hosted
version: "14.3.0"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
webview_flutter:
dependency: "direct main"
description:
name: webview_flutter
sha256: "889a0a678e7c793c308c68739996227c9661590605e70b1f6cf6b9a6634f7aec"
url: "https://pub.dev"
source: hosted
version: "4.10.0"
webview_flutter_android:
dependency: transitive
description:
name: webview_flutter_android
sha256: "3d535126f7244871542b2f0b0fcf94629c9a14883250461f9abe1a6644c1c379"
url: "https://pub.dev"
source: hosted
version: "4.2.0"
webview_flutter_platform_interface:
dependency: transitive
description:
name: webview_flutter_platform_interface
sha256: d937581d6e558908d7ae3dc1989c4f87b786891ab47bb9df7de548a151779d8d
url: "https://pub.dev"
source: hosted
version: "2.10.0"
webview_flutter_wkwebview:
dependency: transitive
description:
name: webview_flutter_wkwebview
sha256: b7e92f129482460951d96ef9a46b49db34bd2e1621685de26e9eaafd9674e7eb
url: "https://pub.dev"
source: hosted
version: "3.16.3"
sdks:
dart: ">=3.6.0 <4.0.0"
flutter: ">=3.27.1"
```
</details>
### Steps to reproduce
1. Create a WebViewWidget that loads a web page containing an embedded map (ex. https://flutter.dev/community)
2. Put WebViewWidget in a stack such that another widget will load on top of it and at least partially cover the WebViewWidget
### Expected results
Any interactions to the top level widget should be ignored by the WebViewWidget underneath.
### Actual results
Interactions that would be recognized by the embedded map work as though the top-level widget weren't covering it.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:async';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:webview_flutter/webview_flutter.dart';
Future main() async {
WidgetsFlutterBinding.ensureInitialized();
runApp(const MaterialApp(home: MyApp()));
}
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
@override
void initState() {
super.initState();
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text("Example")),
body: SafeArea(
child: Column(children: <Widget>[
Expanded(
child: Stack(
children: [
WebViewWidget(controller: WebViewController()
..setJavaScriptMode(JavaScriptMode.unrestricted)
..setNavigationDelegate(
NavigationDelegate(
onProgress: (int progress) {
// Update loading bar.
},
onPageStarted: (String url) {},
onPageFinished: (String url) {},
onHttpError: (HttpResponseError error) {},
onNavigationRequest: (NavigationRequest request) {
if (request.url.startsWith('https://www.youtube.com/')) {
return NavigationDecision.prevent;
}
return NavigationDecision.navigate;
},
),
)
..loadRequest(Uri.parse('https://flutter.dev/community'))),
Container(width: 500, height: 2000, color: const Color.fromARGB(127, 255, 193, 7),),
],
),
),
])));
}
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
This is me zooming in and out through a widget overlaying part of the web page.
https://github.com/user-attachments/assets/f87f7b6e-b6e4-45d5-b05e-95c430395dd3
</details>
### Logs
<details open><summary>Logs</summary>
```console
No logs
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 14.3.1 23D60 darwin-x64, locale en-US)
• Flutter version 3.27.1 on channel stable at /Users/ahanke/development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (3 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[!] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
• Android SDK at /Users/ahanke/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15E204a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/macos-android-setup for detailed instructions).
[✓] VS Code (version 1.75.0)
• VS Code at /Users/ahanke/Downloads/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] VS Code (version 1.74.3)
• VS Code at /Users/ahanke/Desktop/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (2 available)
• macOS (desktop) • macos • darwin-x64 • macOS 14.3.1 23D60 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.264
! Error: Browsing on the local area network for Ryan’s iPad. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
| waiting for customer response,in triage | low | Critical |
2,776,494,222 | next.js | [turbopack] dynamic imports fallback to single existing file when target file does not exist | ### Link to the code that reproduces this issue
https://github.com/qnten/reproduction-dynamic-import
### To Reproduce
1. Start the application in dev with turbo enabled `npm run dev`
2. The page `/` should display the following:
```
1. first file exists so "first name" should be displayed: first file
2. second file does not exist so "no name found" should be displayed: first file
3. third file does not exist so "no name found" should be displayed: first file
```
### Current vs. Expected behavior
**Current Behavior**
When using dynamic import to load files from a directory that contains only one file, the import incorrectly falls back to that existing file even if the requested file does not match the import path. Specifically, in the provided example:
• First File (first.ts): Exists and correctly returns "first name".
• Second File (second.ts): Does not exist, but getExportFromFile("second") still returns "first name" instead of "no name found".
• Third File (third.ts): Does not exist, but getExportFromFile("third") also returns "first name" instead of "no name found".
_This behavior occurs only when the directory contains a single file. If a second file is added (`second.ts`) everything works as expected._
**Expected Behavior**
The dynamic import should strictly attempt to load the specified file based on the provided path. If the target file does not exist within the directory, the import should throw.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.2.0: Fri Dec 6 19:01:59 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 22.4.1
npm: 10.8.1
Yarn: 1.22.22
pnpm: 9.5.0
Relevant Packages:
next: 15.2.0-canary.1 // Latest available version is detected (15.2.0-canary.1).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | Turbopack | low | Minor |
2,776,509,888 | flutter | x86 Mac try bots fail to build infra-dialog | ### Type of Request
bug
### Infrastructure Environment
LUCI / Devicelab
### What is happening?
infra-dialog is failing to build. Totally guessing, but maybe Xcode 16 got put on the bot at some point and automatically upgraded `MACOSX_DEPLOYMENT_TARGET` for the infra-dialog project to 14, since Xcode 16 requires 14. `MACOSX_DEPLOYMENT_TARGET` is not explicitly set in the project.
https://chromium-swarm.appspot.com/botlist?c=id&c=task&c=cpu&c=device_os&c=os&c=status&d=asc&f=device_os%3AiOS&f=pool%3Aluci.flutter.try&f=os%3AMac&f=cpu%3Ax86&k=cpu&s=id
https://chromium-swarm.appspot.com/bot?id=flutter-devicelab-mac-1
https://luci-milo.appspot.com/ui/p/flutter/builders/try/Mac_ios%20wide_gamut_ios/47/overview
```
Command line invocation:
/opt/flutter/xcode/15a240d/XCode.app/Contents/Developer/usr/bin/xcodebuild -project infra-dialog.xcodeproj -scheme infra-dialog -destination id=00008030-0003184C36DB402E test CODE_SIGN_STYLE=Manual DEVELOPMENT_TEAM=S8QB4VV633 "PROVISIONING_PROFILE_SPECIFIER=match Development *"
User defaults from command line:
IDEPackageSupportUseBuiltinSCM = YES
Build settings from command line:
CODE_SIGN_STYLE = Manual
DEVELOPMENT_TEAM = S8QB4VV633
PROVISIONING_PROFILE_SPECIFIER = match Development *
--- xcodebuild: WARNING: Using the first of multiple matching destinations:
{ platform:macOS, arch:x86_64, variant:Mac Catalyst, id:A5531980-9CA4-532B-B7B6-903EF7DB8041 }
{ platform:iOS, id:dvtdevice-DVTiPhonePlaceholder-iphoneos:placeholder, name:Any iOS Device }
{ platform:iOS Simulator, id:dvtdevice-DVTiOSDeviceSimulatorPlaceholder-iphonesimulator:placeholder, name:Any iOS Simulator Device }
{ platform:macOS, variant:Mac Catalyst, name:Any Mac }
{ platform:iOS Simulator, id:BE579EAB-45FA-414E-9B27-51D38C03398A, OS:17.0, name:iPad (10th generation) }
{ platform:iOS Simulator, id:90277A7F-F4E8-47E8-8B61-38B3C2D42C00, OS:17.0, name:iPad Air (5th generation) }
{ platform:iOS Simulator, id:88B3E1FF-16F0-456A-A432-392130981F97, OS:17.0, name:iPad Pro (11-inch) (4th generation) }
{ platform:iOS Simulator, id:FAFE3710-2752-4BFB-AF62-2A7C9D3F1911, OS:17.0, name:iPad Pro (12.9-inch) (6th generation) }
{ platform:iOS Simulator, id:B99DFBF3-7D12-4261-A45D-C0278FCB7BC1, OS:17.0, name:iPad mini (6th generation) }
{ platform:iOS Simulator, id:280F2910-B8BC-4AB2-9FC2-76BF5A048F43, OS:17.0, name:iPhone 14 }
{ platform:iOS Simulator, id:35252CEE-0A79-4FAD-AAB1-D68CA95EB6C0, OS:17.0, name:iPhone 14 Plus }
{ platform:iOS Simulator, id:9186F136-BC97-4242-9A78-2DE1AF82AFA9, OS:17.0, name:iPhone 14 Pro }
{ platform:iOS Simulator, id:69C87291-02BB-480C-BCFB-B7CAFFF7EBE1, OS:17.0, name:iPhone 14 Pro Max }
{ platform:iOS Simulator, id:61BAF5CA-32B3-48DA-9287-346902A72605, OS:17.0, name:iPhone 15 }
{ platform:iOS Simulator, id:9F08931C-A7D6-40D2-8496-D0C5B543FD2A, OS:17.0, name:iPhone 15 Plus }
{ platform:iOS Simulator, id:0F00F48B-01D4-45AD-AEAA-295B86A1E9F5, OS:17.0, name:iPhone 15 Pro }
{ platform:iOS Simulator, id:0F4CAD5F-1EFE-4EB6-AD73-B319FCA6C589, OS:17.0, name:iPhone 15 Pro Max }
{ platform:iOS Simulator, id:C8D9FB0E-6AAF-4FDD-8A13-77A263B3F4E4, OS:17.0, name:iPhone SE (3rd generation) }
2024-12-17 09:58:30.309 xcodebuild[1852:12828] Writing error result bundle to /var/folders/7n/mhts4gd500g0p9c3kkbxfshr0000gp/T/ResultBundle_2024-17-12_09-58-0030.xcresult
Prepare packages
xcodebuild: error: Failed to build project infra-dialog with scheme infra-dialog.: Cannot test target “infra-dialogUITests” on “My Mac”: My Mac’s macOS 13.6.1 doesn’t match infra-dialogUITests’s macOS 14.0 deployment target.
```
https://github.com/flutter/cocoon/tree/3a4c89e4eed19c16f075aa4bb5c1291e9d72c450/cipd_packages/device_doctor/tool/infra-dialog
### Steps to reproduce
Run a `Mac_ios` test on https://chromium-swarm.appspot.com/bot?id=flutter-devicelab-mac-1
### Expected results
I expect tests to pass | team-infra,P2,triaged-infra | low | Critical |
2,776,510,495 | langchain | langchain-cli for MacOS unable to load required files | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When calling langchain-cli migrate <path> I get the error below.
### Error Message and Stack Trace (if applicable)
CLIError: Failed to download Grit CLI from
https://github.com/getgrit/gritql/releases/latest/download/marzano-aarch64-apple-darwin.tar.gz
### Description
Migrate versions according to the documentation
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:46 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6031
> Python Version: 3.12.5 (main, Aug 6 2024, 19:08:49) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.1.148rc1
> langchain_anthropic: 0.2.4
> langchain_cli: 0.0.35
> langchain_experimental: 0.3.4
> langchain_google_genai: 2.0.7
> langchain_openai: 0.2.14
> langchain_pinecone: 0.2.0
> langchain_text_splitters: 0.3.4
> langgraph_sdk: 0.1.48
> langserve: 0.3.1
Other Dependencies
------------------
> aiohttp: 3.9.5
> anthropic: 0.42.0
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: 0.115.6
> filetype: 1.2.0
> gitpython: 3.1.44
> google-generativeai: 0.8.3
> gritql: 0.1.5
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langserve[all]: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.3
> orjson: 3.10.13
> packaging: 24.2
> pinecone-client: 5.0.1
> pydantic: 2.10.4
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> sse-starlette: 1.8.2
> tenacity: 9.0.0
> tiktoken: 0.8.0
> tomlkit: 0.13.2
> typer[all]: Installed. No version info available.
> typing-extensions: 4.12.2
> uvicorn: 0.34.0 | 🤖:bug | low | Critical |
2,776,522,917 | pytorch | [dynamo][graph break] omegaconfig ListConfig `__contains__` | ### 🐛 Describe the bug
Fixing this graph break should help with the performance regression in https://github.com/pytorch/pytorch/issues/132872
~~~
+ @unittest.skipIf(not HAS_OMEGACONF, "missing omegaconf package")
+ def test_omegaconf_listconfig_contains(self):
+ def fn(cfg, x):
+ if 1 in cfg:
+ return torch.sin(x)
+ return torch.cos(x)
+
+ config = list_config = OmegaConf.create([1, 2, 3, {"key": "value"}])
+
+ x = torch.randn(4)
+ opt_fn = torch.compile(fn, backend="eager", fullgraph=True)
+ self.assertEqual(fn(config, x), opt_fn(config, x))
+
+
~~~
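The frame Dynamo trips on is `ListConfig.__contains__`, which walks the list and compares each element with `x == item`. A pure-Python stand-in (a simplified sketch, not omegaconf's actual implementation) shows the control flow Dynamo has to trace:

```python
class MiniListConfig:
    """Hypothetical stand-in for omegaconf.ListConfig's containment check."""

    def __init__(self, items):
        self._content = list(items)

    def __contains__(self, item):
        # omegaconf resolves/unwraps each node before comparing; here we
        # only mirror the `x == item` loop from listconfig.py line 606.
        for x in self._content:
            if x == item:
                return True
        return False

cfg = MiniListConfig([1, 2, 3, {"key": "value"}])
print(1 in cfg)   # True
print(9 in cfg)   # False
```

In the real class, each `x` is a wrapped node object, which is why the `eq` between a `UserDefinedObjectVariable` and a `ConstantVariable` shows up in the graph break above.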
### Error logs
```
File "/home/anijain/local/pytorch/torch/_dynamo/symbolic_convert.py", line 1076, in run
while self.step():
^^^^^^^^^^^
File "/home/anijain/local/pytorch/torch/_dynamo/symbolic_convert.py", line 986, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/anijain/local/pytorch/torch/_dynamo/symbolic_convert.py", line 1676, in COMPARE_OP
self.push(compare_op_handlers[inst.argval](self, self.popn(2), {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anijain/local/pytorch/torch/_dynamo/variables/builtin.py", line 1008, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/anijain/local/pytorch/torch/_dynamo/variables/builtin.py", line 859, in builtin_dispatch
unimplemented(error_msg)
File "/home/anijain/local/pytorch/torch/_dynamo/exc.py", line 356, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: builtin: eq [<class 'torch._dynamo.variables.user_defined.UserDefinedObjectVariable'>, <class 'torch._dynamo.variables.constant.ConstantVariable'>] False
from user code:
File "/home/anijain/local/pytorch/test/dynamo/test_repros.py", line 6137, in fn
if 1 in cfg:
File "/home/anijain/.conda/envs/pytorch-3.11/lib/python3.11/site-packages/omegaconf/listconfig.py", line 606, in __contains__
if x == item:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
To execute this test, run the following from the base repo dir:
python test/dynamo/test_repros.py ReproTests.test_omegaconf_listconfig_contains
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
````
### Versions
NA
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,776,539,305 | flutter | Decorators experiment | ### Background
Flutter apps are composed from a tree of widgets.
For example:
```dart
Center(
child: ColoredBox(
color: Colors.amber,
child: Padding(
padding: EdgeInsets.all(8.0),
child: Text('Hello world'),
),
),
),
```
This results in the following widget tree:
```
Center
└── ColoredBox
└── Padding
└── Text
```
This coding style, where the widget tree is created by nesting widgets, is generally viewed as a strength of Flutter; however, we've heard that in some cases the source code can be a bit verbose due to heavy nesting. Decorators are an experiment in how we might address this.
### Experiment
The "decorators" experiment adds an alternative syntax focused on basic styling widgets, such as adding padding or centering a widget. With the decorators syntax you can wrap a widget with another styling widget.
For example:
```dart
Text('Hello world')
.padding(EdgeInsets.all(8.0))
.coloredBox(Colors.amber)
.center()
```
These decorators _do not_ modify the `Text` widget. Instead, they wrap the `Text` widget in a `Padding` widget, then wrap that in a `ColoredBox` widget, and finally wrap that in a `Center` widget.
This creates the same widget tree as before:
```
Center
└── ColoredBox
└── Padding
└── Text
```
### Frequently asked questions
#### Will decorators replace existing widgets?
No! We're only adding a new tool to your toolbox, you can stick to nested widgets if that's what you prefer. We won't deprecate or remove existing widgets. The decorators will likely be placed in a separate library with its own import, and will thus be opt-in.
#### Does the order of decorators matter?
Just like with nested widgets, the order of decorators matters. However, the order of decorators is _inverted_ from the order of nested widgets. We like to say that nested widgets are "inside out", whereas decorators are "outside in".
#### Will I be able to create my own decorators?
Yup!
#### Do decorators use new Dart language features?
No, decorators are plain Dart code that use the existing [extension methods](https://dart.dev/language/extension-methods) language feature.
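For illustration, such a decorator could be written as an extension method that returns a new wrapping widget (a hypothetical sketch with made-up names; the experiment's actual API may differ):

```dart
import 'package:flutter/widgets.dart';

/// Hypothetical sketch of decorator-style extension methods: each call
/// returns a new widget wrapping the receiver; nothing is mutated.
extension WidgetDecorators on Widget {
  Widget padding(EdgeInsetsGeometry padding) =>
      Padding(padding: padding, child: this);

  Widget center() => Center(child: this);
}
```

Because each method just returns a wrapper, chaining them builds the same tree as nesting the widgets by hand.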
#### How will decorators impact my app's performance?
We expect the performance impact of decorators to be negligible.
#### Where does the 'decorators' name come from?
Decorators are used to, well, decorate your app's UI using the [decorator pattern](https://en.wikipedia.org/wiki/Decorator_pattern).
#### How can I get updates on the decorators experiment?
We'll post updates to this issue. Stay tuned!
#### How can I give feedback on this proposal?
You can go ahead and comment on this issue.
### Open questions
We'd like to evaluate the following:
1. Do decorators improve the overall development experience of Flutter?
2. Do decorators improve code readability?
3. Do decorators improve code writability?
4. Do decorators make it harder to learn Flutter?
We'd love to hear your feedback on this experiment. Feel free to share your thoughts, suggestions, or questions on this thread!
| c: new feature,P3,team-framework,triaged-framework | medium | Major |
2,776,546,047 | flutter | Enforce semantics role checks on required parameters | ### Use case
After this PR https://github.com/flutter/flutter/pull/161260 lands,
we will be backfilling all the missing roles in http://flutter.dev/go/semantics-roles. This will add a number of new ways people can get it wrong when assigning a new role to their widget. To help reduce the confusion, we should also implement a way to enforce semantics role requirements. See [this doc](https://docs.google.com/document/d/1WOEraK9VIh9dzpJ3mrXgenzfbpRxeISNAdgH8Sh80a4/edit?tab=t.0#heading=h.j9ifh1mhb7b0).
For example, if a user assigns a tab role, the widget needs to have an onTap handler when it is enabled, and so on.
### Proposal
implement the checks | a: accessibility,c: proposal,P1,team-accessibility,triaged-accessibility | medium | Minor |
2,776,562,139 | flutter | Remove `dev_dependency`s from generated plugin registrants on all platforms | https://github.com/flutter/flutter/pull/161343 will remove dev dependencies from the generated plugin registrant for Android as part of a fix that enables dev dependencies to be excluded from release builds (https://github.com/flutter/flutter/issues/160407).
We should follow up that PR by doing the same for all platforms, not just Android. This is what the change for Android looked like, for context:
```dart
Future<void> _writeAndroidPluginRegistrant(FlutterProject project, List<Plugin> plugins, {required bool releaseMode}) async {
Iterable<Plugin> methodChannelPlugins = _filterMethodChannelPlugins(plugins, AndroidPlugin.kConfigKey);
if (releaseMode) {
methodChannelPlugins = methodChannelPlugins.where((Plugin p) => !p.isDevDependency);
}
``` | tool,P1,team-android,triaged-android | medium | Minor |
2,776,569,152 | tensorflow | Unable to connect to TPU through Cloud VM (metadata issue?) | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
v2.18.0-rc2-4-g6550e4bd802 2.18.0
### Custom code
Yes
### OS platform and distribution
tpu-ubuntu2204-base
### Mobile device
_No response_
### Python version
3.11.2
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I am on a VM instance trying to connect to a TPU v4-32 using a test script. I installed tensorflow-tpu on both the VM (in a venv) and the TPU (globally), following the instructions on the Google website.
It seems there is an issue with fetching TPU metadata.
The metadata server is reachable when I query it manually from the VM:
```
$ curl http://169.254.169.254/computeMetadata/v1/ -H "Metadata-Flavor: Google"
instance/
oslogin/
project/
```
Any help would be appreciated!
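For scripting the same check, here is a minimal Python sketch of the metadata probe (the attribute path and the use of `urllib` are my assumptions; the actual network call is left commented so the snippet runs anywhere):

```python
import urllib.request

METADATA_ROOT = "http://169.254.169.254/computeMetadata/v1/"

def build_metadata_request(path: str) -> urllib.request.Request:
    """Build a GCE metadata request; the Metadata-Flavor header is required."""
    return urllib.request.Request(
        METADATA_ROOT + path,
        headers={"Metadata-Flavor": "Google"},
    )

# Hypothetical attribute path, mirroring the failing `tpu-env` lookup above.
req = build_metadata_request("instance/attributes/tpu-env")
print(req.full_url)
# To actually query (only works on a GCE/TPU VM):
# body = urllib.request.urlopen(req, timeout=5).read().decode()
```

If this probe also returns 404 for the `tpu-env` attribute, it would match the failures in the log below, suggesting the TPU-specific metadata entries are missing rather than the server being unreachable.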
### Standalone code to reproduce the issue
```shell
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_name)
tf.config.experimental_connect_to_cluster(resolver)
try:
tf.tpu.experimental.initialize_tpu_system(resolver)
print("TPU initialized:", resolver.master())
except Exception as e:
print("Failed to initialize TPU:", e)
```
### Relevant log output
```shell
$ python hello.py
2025-01-08 23:49:33.189260: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-01-08 23:49:33.221197: I tensorflow/core/tpu/tpu_api_dlsym_initializer.cc:95] Opening library: /home/ucsdwanglab/test_tpu/.venv/lib/python3.11/site-packages/tensorflow/python/platform/../../libtensorflow_cc.so.2
2025-01-08 23:49:33.221290: I tensorflow/core/tpu/tpu_api_dlsym_initializer.cc:121] Libtpu path is: /home/ucsdwanglab/test_tpu/.venv/lib/python3.11/site-packages/libtpu/libtpu.so
Failed to get TPU metadata (tpu-env) from instance metadata for variable CHIPS_PER_HOST_BOUNDS: INTERNAL: Failed to fetch URL after 30 tries (http status: 404); curl status: No error
=== Source Location Trace: ===
learning/45eac/tfrc/runtime/gcp_metadata_utils.cc:93
learning/45eac/tfrc/runtime/env_var_utils.cc:50
Failed to get TPU metadata (tpu-env) from instance metadata for variable HOST_BOUNDS: INTERNAL: Failed to fetch URL after 30 tries (http status: 404); curl status: No error
=== Source Location Trace: ===
learning/45eac/tfrc/runtime/gcp_metadata_utils.cc:93
learning/45eac/tfrc/runtime/env_var_utils.cc:50
Failed to get TPU metadata (tpu-env) from instance metadata for variable ALT: INTERNAL: Failed to fetch URL after 30 tries (http status: 404); curl status: No error
=== Source Location Trace: ===
learning/45eac/tfrc/runtime/gcp_metadata_utils.cc:93
learning/45eac/tfrc/runtime/env_var_utils.cc:50
Failed to get TPU metadata (tpu-env) from instance metadata for variable WRAP: INTERNAL: Failed to fetch URL after 30 tries (http status: 404); curl status: No error
=== Source Location Trace: ===
learning/45eac/tfrc/runtime/gcp_metadata_utils.cc:93
learning/45eac/tfrc/runtime/env_var_utils.cc:50
Failed to get TPU metadata (accelerator-type) from instance metadata for variable TPU_ACCELERATOR_TYPE: INTERNAL: Failed to fetch URL after 30 tries (http status: 404); curl status: No error
=== Source Location Trace: ===
learning/45eac/tfrc/runtime/gcp_metadata_utils.cc:93
Failed to find host bounds for accelerator type: WARNING: could not determine TPU accelerator type, please set env var `TPU_ACCELERATOR_TYPE` manually, otherwise libtpu.so may not properly initialize.
Failed to get TPU metadata (agent-worker-number) from instance metadata for variable TPU_WORKER_ID: INTERNAL: Failed to fetch URL after 30 tries (http status: 404); curl status: No error
=== Source Location Trace: ===
learning/45eac/tfrc/runtime/gcp_metadata_utils.cc:93
Failed to get TPU metadata (worker-network-endpoints) from instance metadata for variable TPU_WORKER_HOSTNAMES: INTERNAL: Failed to fetch URL after 30 tries (http status: 404); curl status: No error
=== Source Location Trace: ===
learning/45eac/tfrc/runtime/gcp_metadata_utils.cc:93
WARNING: Logging before InitGoogle() is written to STDERR
E0000 00:00:1736380405.363400 3192 common_lib.cc:511] INVALID_ARGUMENT: Error: unexpected worker hostname 'WARNING: could not determine TPU worker hostnames or IP addresses' from env var TPU_WORKER_HOSTNAMES. Expecting a valid hostname or IP address without port number. (Full TPU workers' addr string: WARNING: could not determine TPU worker hostnames or IP addresses, please set env var `TPU_WORKER_HOSTNAMES` manually, otherwise libtpu.so may not properly initialize.)
=== Source Location Trace: ===
learning/45eac/tfrc/runtime/libtpu_init_utils.cc:173
2025-01-08 23:56:48.526584: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1736380609.730442 3192 context_distributed_manager.cc:762] unknown service tensorflow.WorkerService
Additional GRPC error information from remote target /job:worker/replica:0/task:0 while calling /tensorflow.WorkerService/GetStatus:
:{"created":"@1736380609.730372913","description":"Error received from peer ipv4:10.130.0.3:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"unknown service tensorflow.WorkerService","grpc_status":12}
E0108 23:56:49.730822322 3192 completion_queue.cc:244] assertion failed: queue.num_items() == 0
https://symbolize.stripped_domain/r/?trace=7f1ccaf5cebc,7f1ccaf0e04f&map=
*** SIGABRT received by PID 3192 (TID 3192) on cpu 4 from PID 3192; stack trace: ***
PC: @ 0x7f1ccaf5cebc (unknown) (unknown)
@ 0x7f1caa302841 1888 (unknown)
@ 0x7f1ccaf0e050 18460496 (unknown)
@ 0x7f1ccaed1c60 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7f1ccaf5cebc,7f1caa302840,7f1ccaf0e04f,7f1ccaed1c5f&map=
E0108 23:56:49.732558 3192 coredump_hook.cc:316] RAW: Remote crash data gathering hook invoked.
E0108 23:56:49.732569 3192 coredump_hook.cc:355] RAW: Skipping coredump since rlimit was 0 at process start.
E0108 23:56:49.732575 3192 client.cc:269] RAW: Coroner client retries enabled, will retry for up to 30 sec.
E0108 23:56:49.732580 3192 coredump_hook.cc:411] RAW: Sending fingerprint to remote end.
E0108 23:56:49.732595 3192 coredump_hook.cc:420] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] stat failed on crash reporting socket /var/google/services/logmanagerd/remote_coredump.socket (Is the listener running?): No such file or directory
E0108 23:56:49.732601 3192 coredump_hook.cc:472] RAW: Dumping core locally.
E0108 23:56:49.745981 3192 process_state.cc:805] RAW: Raising signal 6 with default behavior
Aborted
```
| type:bug,comp:tpus,TF 2.18 | medium | Critical |
2,776,579,025 | puppeteer | [Bug]: Crash when Using getDisplayMedia API | #### Description
The issue occurs when calling `await navigator.mediaDevices.getDisplayMedia(displayMediaOptions)` within a browser instance controlled by Puppeteer, regardless of the `displayMediaOptions` provided. If a media stream is initiated, the Puppeteer process crashes when trying to close the page.
Notably, this issue does not occur when running the same code directly in Chromium or any other browser outside of Puppeteer, where the functionality works as expected. However, when executed via Puppeteer—either within `page.evaluate()` or directly in the DevTools console—the browser crashes.
### Minimal, reproducible example
##### Automatic Reproduction
```TypeScript
const puppeteer = require('puppeteer');
(async () => {
  const browser = await puppeteer.launch({
    args: [
      '--auto-accept-this-tab-capture',
      '--autoplay-policy=no-user-gesture-required',
    ],
    headless: false,
  });

  let page = await browser.newPage();

  page.once('load', async () => {
    const result = await page.evaluate(async () => {
      const displayMediaOptions = {
        video: { displaySurface: "window" },
        audio: false,
        preferCurrentTab: true,
      };
      await navigator.mediaDevices.getDisplayMedia(displayMediaOptions); // Causes crash
      return 1 + 1;
    });

    console.log('Result:', result);

    await new Promise(resolve => setTimeout(resolve, 5000)); // 5-second delay
    await page.close(); // Crash occurs here if tab is capturing media
    await new Promise(resolve => setTimeout(resolve, 5000)); // 5-second delay
    page = await browser.newPage(); // Error occurs here
  });

  await page.goto('https://pptr.dev/');
})();
```
##### Manual Reproduction
```TypeScript
const puppeteer = require('puppeteer');
(async () => {
  const browser = await puppeteer.launch({
    args: [
      '--auto-accept-this-tab-capture',
      '--autoplay-policy=no-user-gesture-required',
    ],
    headless: false,
  });

  const page = await browser.newPage();
  await page.goto('https://pptr.dev/');

  // Go to the DevTools console and execute the following script:
  /*
   * const displayMediaOptions = {
   *   video: { displaySurface: "window" },
   *   audio: false,
   *   preferCurrentTab: true,
   * };
   * await navigator.mediaDevices.getDisplayMedia(displayMediaOptions);
   * // Then try to close the page.
   */
})();
```
#### Justification for Arguments
The arguments provided in the Puppeteer launch configuration are not the cause of the issue, as the problem persists even without them. Here's a breakdown of their purpose:
1. **`--auto-accept-this-tab-capture`**:
   - This flag enables automatic tab capture without requiring additional scripts or manual permissions from the user. It simplifies the workflow for scenarios involving screen sharing or tab streaming, ensuring smooth initialization without unnecessary prompts or setup.
   - Removing this argument does not resolve the crash, confirming it is not contributing to the problem.
2. **`--autoplay-policy=no-user-gesture-required`**:
   - This flag allows media playback (e.g., starting a video on YouTube) without user interaction. It is essential for automated testing or workflows where user gestures cannot be simulated in real-time.
   - The issue occurs even if this argument is omitted, further ruling out its involvement in the crash.
These arguments are practical enhancements for automation and do not interfere with the underlying browser operations.
### Background
I am using the browser to perform server-side rendering of operations executed within the browser environment. This setup allows me to use `navigator.mediaDevices.getDisplayMedia` to capture and stream data via P2P connections to my clients. In addition to P2P, I also use WebSocket connections to transmit the data efficiently.
The issue arises when a client disconnects, and I close the tab to conserve resources, as it is no longer needed. At this point, the crash occurs.
Using `navigator.mediaDevices.getDisplayMedia` to capture and send streaming data is currently one of the most efficient methods available for minimizing both bandwidth usage and processing overhead, making it essential to my workflow.
### Expectation
- Closing a page with `page.close()` or manually while streaming media should not affect the browser connection.
- A new page should be openable with `browser.newPage()` without encountering errors.
### Reality
The process crashes, resulting in the following error:
```
Error: Protocol error: Connection closed.
at Connection._rawSend (/home/vfx/projects/test-bug/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Connection.js:91:35)
at Connection.send (/home/vfx/projects/test-bug/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Connection.js:84:21)
at CdpBrowser._createPageInContext (/home/vfx/projects/test-bug/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Browser.js:186:53)
at CdpBrowserContext.newPage (/home/vfx/projects/test-bug/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/BrowserContext.js:123:40)
at async CdpBrowser.newPage (/home/vfx/projects/test-bug/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Browser.js:183:16)
at async /home/vfx/projects/test-bug/sandbox.js:32:12
```
### Puppeteer configuration file (if used)
_No response_
### Puppeteer version
23.11.1
### Node version
22.3.0
### Package manager
npm
### Package manager version
10.8.1
### Operating system
Linux | bug,upstream,confirmed,P3 | low | Critical |
2,776,586,954 | go | build: build failure on go1.23-linux-arm64_c4ah72-perf_vs_release | ```
#!watchflakes
default <- builder == "go1.23-linux-arm64_c4ah72-perf_vs_release" && repo == "go" && mode == "build"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8726290771046120449)):
go: downloading github.com/BurntSushi/toml v1.0.0
go: downloading github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
2025/01/08 17:35:29 Load average: 7.18 2.02 0.69 1/1008 35022
2025/01/08 17:35:29 Waiting for load average to drop below 0.20...
2025/01/08 17:35:59 Load average: 4.35 1.83 0.67 1/1006 35022
2025/01/08 17:35:59 Waiting for load average to drop below 0.20...
2025/01/08 17:36:29 Load average: 2.63 1.65 0.64 1/1006 35022
2025/01/08 17:36:29 Waiting for load average to drop below 0.20...
2025/01/08 17:36:59 Load average: 1.59 1.49 0.62 1/1006 35022
2025/01/08 17:36:59 Waiting for load average to drop below 0.20...
...
[sweet] Running benchmark tile38 for experiment: run 8
[sweet] Running benchmark tile38 for baseline: run 8
[sweet] Running benchmark tile38 for experiment: run 9
[sweet] Running benchmark tile38 for baseline: run 9
[sweet] Running benchmark tile38 for experiment: run 10
[sweet] Running benchmark tile38 for baseline: run 10
[sweet] error: failed to execute benchmarks: cockroachdb
2025/01/08 20:39:08 Error running sweet: error running sweet run: exit status 1
2025/01/08 20:39:08 FAIL
exit status 1
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,776,644,436 | storybook | [Bug]: ENAMETOOLONG with deeply nested folder structure in storybook/next | ### Describe the bug
If you have a deeply nested folder structure with long names, it's possible to exceed the OS's maximum file-name length, as there is no built-in truncation.
You should then see an issue like
```
=> Failed to build the preview
SB_BUILDER-WEBPACK5_0002 (WebpackInvocationError): ENAMETOOLONG: name too long, open './storybook-static/app-some-folder2-some-sub-folder-some-sub-sub-folder-some-sub-sub-sub-folder-some-sub-sub-sub-sub-folder-some-sub-sub-sub-sub-sub-folder-some-sub-sub-sub-sub-sub-sub-folder-something_is_going_to_get_cha-SomeComponent-AComponent-stories.a38fd5fb.iframe.bundle.js'
```
This error is because the file name being written to the disk is too long and the OS rejects it. It appears as though the config is using a concatenation of the entire chunk file name and path to create the output file name. This error doesn't happen in standard webpack or in Next.js, which enforce a maximum length for the file name and truncate it.
webpack for example
```
dist/src_some-folder_some-sub-folder_some-sub-sub-folder_some-sub-sub-sub-folder_some-sub-sub-sub--921ea3.js
```
next.js for example
```
.next/static/chunks/9e8ad_something_is_going_to_get_cha_SomeComponent_AComponent_tsx_2394de._.js
```
It is possible to work around this issue by manually setting the webpack config's `output.filename` to not include the `[name]` param.
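A minimal sketch of that workaround (the exact filename pattern below is an assumption — the important part is dropping `[name]`), written as a helper that could be wired through `webpackFinal` in `.storybook/main.js`:

```javascript
// Hypothetical helper for .storybook/main.js: override output.filename so
// chunk names use the module id + content hash instead of the full
// concatenated story path (which is what overflows the OS filename limit).
const withShortChunkNames = (config) => ({
  ...config,
  output: {
    ...(config.output || {}),
    // '[name]' would expand to the long kebab-cased path; '[id]' plus a
    // content hash stays short while remaining unique per chunk.
    filename: '[id].[contenthash:8].iframe.bundle.js',
  },
});

module.exports = { withShortChunkNames };
```

In the Storybook config this could be used as `webpackFinal: async (config) => withShortChunkNames(config)`.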
### Reproduction link
https://github.com/diffidentDude/next-test
### Reproduction steps
I have created a repo to repro the issue https://github.com/diffidentDude/next-test
### System
```bash
% npx storybook@latest info
npm warn deprecated [email protected]: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm warn deprecated [email protected]: Glob versions prior to v9 are no longer supported
npm warn deprecated [email protected]: Rimraf versions prior to v4 are no longer supported
Storybook Environment Info:
System:
OS: macOS 14.7
CPU: (10) arm64 Apple M1 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.14.0 - ~/Code/next-test/.devbox/nix/profile/default/bin/node
npm: 10.7.0 - ~/Code/next-test/.devbox/nix/profile/default/bin/npm <----- active
pnpm: 9.11.0 - ~/Code/next-test/.devbox/virtenv/nodejs/corepack-bin/pnpm
Browsers:
Chrome: 131.0.6778.205
Safari: 18.0
npmPackages:
@storybook/addon-essentials: ^8.4.7 => 8.4.7
@storybook/addon-interactions: ^8.4.7 => 8.4.7
@storybook/addon-onboarding: ^8.4.7 => 8.4.7
@storybook/blocks: ^8.4.7 => 8.4.7
@storybook/nextjs: ^8.4.7 => 8.4.7
@storybook/react: ^8.4.7 => 8.4.7
@storybook/test: ^8.4.7 => 8.4.7
eslint-plugin-storybook: ^0.11.2 => 0.11.2
storybook: ^8.4.7 => 8.4.7
```
### Additional context
_No response_ | bug,help wanted,nextjs,builder-webpack5 | low | Critical |
2,776,647,942 | pytorch | NotImplementedError: Output channels > 65536 not supported at the MPS device. | ### 🐛 Describe the bug
I used the newest edited version of this fork and torch==2.7.0.dev, but it just said "NotImplementedError: Output channels > 65536 not supported at the MPS device" instead of giving me a correct output.
1. Run the following command:
wav = tts.tts(text="Hello,world!",speaker_wav="project/input/001.wav",language="en")
2. See the error:
Traceback (most recent call last):
File "", line 1, in
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/api.py", line 312, in tts
wav = self.synthesizer.tts(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/utils/synthesizer.py", line 408, in tts
outputs = self.tts_model.synthesize(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/tts/models/xtts.py", line 410, in synthesize
return self.full_inference(text, speaker_wav, language, **settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/tts/models/xtts.py", line 471, in full_inference
(gpt_cond_latent, speaker_embedding) = self.get_conditioning_latents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/tts/models/xtts.py", line 354, in get_conditioning_latents
speaker_embedding = self.get_speaker_embedding(audio, load_sr)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/tts/models/xtts.py", line 309, in get_speaker_embedding
self.hifigan_decoder.speaker_encoder.forward(audio_16k.to(self.device), l2_norm=True)
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/encoder/models/resnet.py", line 167, in forward
x = self.torch_spec(x)
^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/rbhan/Data/StellAIHub/voice/coqui-ai-TTS/TTS/encoder/models/base_encoder.py", line 26, in forward
return torch.nn.functional.conv1d(x, self.filter).squeeze(1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Output channels > 65536 not supported at the MPS device.
### Versions
coqui-tts 0.25.1 ....../coqui-ai-TTS
coqui-tts-trainer 0.2.2
torch 2.7.0.dev20250108
torchaudio 2.6.0.dev20250108
torchvision 0.22.0.dev20250108
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: convolution,triaged,module: mps | low | Critical |
2,776,664,188 | godot | push_input() has weird/inconsistent behavior with mouse hover & mouse entered/exited signals | ### Tested versions
- Reproducible on Godot 4.3 stable
- Reproducible on Godot 4.2.2 stable
### System information
Godot v4.3.stable - Pop!_OS 22.04 LTS - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 7800 XT (RADV NAVI32) - AMD Ryzen 7 2700X Eight-Core Processor (16 Threads)
### Issue description
Adding a SubViewportContainer -> SubViewport to an existing scene with various Control nodes has issues with mouse hover / mouse signals when the `mouse_filter` on the SubViewportContainer is set to `ignore`. That is, it does not detect the mouse hovering over a button, and mouse-entered/exited signals do not fire. Focus signals on the Control nodes inside the VP do, however, work.
### Steps to reproduce
Create a 2d/3d scene and add a Button control to it. Add a SubViewportContainer -> SubViewport and set the Container's `mouse_filter` property to ignore. Inside the SubViewport set the background to transparent (set the resolution to cover the scene), and inside of that VP add another scene with buttons.
Create a script for the main scene:
```
extends Node3D
@onready var _vp = %SubViewport
func _unhandled_input(event: InputEvent) -> void:
	_vp.push_input(event)
```
Such that any unhandled input (any input not attached to the Button on the main scene) will be pushed to the viewport. This allows the user to press buttons on both the main scene and the VP scene; however, hover and mouse-entered signals will not work for the VP scene items.
### Minimal reproduction project (MRP)
[test-game-project.zip](https://github.com/user-attachments/files/18354790/test-game-project.zip)
| discussion,topic:input | low | Major |
2,776,669,119 | godot | ResourceLoader.load_threaded_get_status progress stuck at 0 for Firefox when Thread Support On | ### Tested versions
- Reproducible in 4.3 stable
- Not reproducible in 4.2.stable
### System information
Godot v4.3.stable unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Thu, 02 Jan 2025 22:52:26 +0000 - X11 - GLES3 (Compatibility) - AMD Radeon Graphics (radeonsi, renoir, LLVM 18.1.8, DRM 3.59, 6.12.8-arch1-1) - AMD Ryzen 9 5900HX with Radeon Graphics (16 Threads)
### Issue description
When exporting a project to HTML5 with Thread Support: ON
Expected behavior:
- `ResourceLoader.load_threaded_get_status` return value changes as the scene is loaded
Actual behavior:
- `ResourceLoader.load_threaded_get_status` always returns `IN PROGRESS` with progress set to `0`
- Broken in Firefox but not Chromium, only when Thread Support is ON
### Steps to reproduce
1. Open the project and click "Run in browser" for the remote debug option
2. Open the URL in Firefox
3. Check the console
### Minimal reproduction project (MRP)
[firefox-thread-mrp.zip](https://github.com/user-attachments/files/18355038/firefox-thread-mrp.zip)
| bug,platform:web,regression | low | Critical |
2,776,684,042 | react | Bug: React 19 cannot run apps in both dev & production mode simultaneously | <!--
Please provide a clear and concise description of what the bug is. Include
screenshots if needed. Please test using the latest version of the relevant
React packages to make sure your issue has not already been fixed.
-->
Unable to run multiple React micro frontends (MFEs) when they don't all use the same `NODE_ENV`. When developing large systems using micro frontends (e.g. via module federation), it is not feasible for developers to have every MFE running locally. Instead, a proxy config can be used to pull built artifacts of existing MFEs and only run relevant apps locally. This worked well in React 18, but does not in React 19.
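For illustration, the proxy setup mentioned above might look roughly like this (a sketch with placeholder names and URL, not taken from the reproduction repo):

```javascript
// Hypothetical dev-server proxy rules: only the host app runs locally in dev
// mode, while requests for the other MFEs' built bundles are forwarded to a
// deployed environment that serves production artifacts.
const REMOTE_ARTIFACTS_ORIGIN = 'https://mfe-builds.example.com'; // placeholder

const buildProxyRules = (remoteNames) =>
  remoteNames.map((name) => ({
    context: [`/${name}`],           // e.g. /remote-app/remoteEntry.js
    target: REMOTE_ARTIFACTS_ORIGIN, // served from prod-built artifacts
    changeOrigin: true,
  }));

module.exports = { buildProxyRules };
```

Under this setup the host runs with `NODE_ENV=development` while every proxied remote was built with `NODE_ENV=production` — exactly the mix that fails under React 19.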
React version: 19
## Steps To Reproduce
1. Using some MFE solution (in my example, module federation), run a host app in dev mode (NODE_ENV === 'development') and a remote app in prod mode (NODE_ENV === 'production')
2. Note that the runtimes are not compatible.
<!--
Your bug will get fixed much faster if we can run your code and it doesn't
have dependencies other than React. Issues without reproduction steps or
code examples may be immediately closed as not actionable.
-->
Link to code example: https://github.com/rdenman/react-19-mixed-env-mf
<!--
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a
repository on GitHub, or provide a minimal code example that reproduces the
problem. You may provide a screenshot of the application if you think it is
relevant to your bug report. Here are some tips for providing a minimal
example: https://stackoverflow.com/help/mcve.
-->
## The current behavior
Unable to run mixed builds (dev + prod)
## The expected behavior
Both dev and prod builds should be compatible.
This is the error I see when running the attached example. Note that downgrading the example to React 18 fixes this issue, and running both apps in either dev or prod mode also resolves the issue.
<img width="705" alt="Screenshot 2025-01-08 at 7 59 06 PM" src="https://github.com/user-attachments/assets/7b66a8b5-8704-4950-a706-2f3501a0e39c" />
| Status: Unconfirmed | medium | Critical |
2,776,703,481 | ollama | we need Ollama Video-LLaVA | I want to use the Video-LLaVA model with Ollama, but this model is not available in Ollama. Could someone add it, please?
I tried [anas/video-llava](https://ollama.com/anas/video-llava) and [ManishThota/llava_next_video](https://ollama.com/ManishThota/llava_next_video), but they do not work; the bug is described in [issues.Add Video-LLaVA](https://github.com/ollama/ollama/issues/3184) and [medium.manish-VideoLLaVA](https://medium.com/@manish.thota1999/an-experiment-to-unlock-ollamas-potential-video-question-answering-e2b4d1bfb5ba)

| feature request | low | Critical |
2,776,709,173 | react-native | [0.76] updating appProperties does not work on iOS | ### Description
Hi, I'm using react native v0.76.5 (new architecture is enabled) and react-native-firebase/messaging v21.6.2.
According to the [documentation](https://rnfirebase.io/messaging/usage#background-application-state), I have the following code in index.js
```
messaging().setBackgroundMessageHandler(async remoteMessage => {
  //
});

const HeadlessCheck = ({ isHeadless }) => {
  if (isHeadless) return null;
  return <App />;
};
AppRegistry.registerComponent(appName, () => HeadlessCheck);
```
I have the following code in AppDelegate.mm:
```
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
  self.moduleName = @"gk_research_app_staging";
  // You can add your custom initial props in the dictionary below.
  // They will be passed down to the ViewController used by React Native.
  // self.initialProps = @{};
  // return [super application:application didFinishLaunchingWithOptions:launchOptions];

  self.initialProps = [RNFBMessagingModule addCustomPropsToUserProps:nil withLaunchOptions:launchOptions];
  [FIRApp configure];
  [application registerForRemoteNotifications];
  return [super application:application didFinishLaunchingWithOptions:launchOptions];
}
```
I have a problem on an iOS device: when the app is in the quit state and receives a notification, and I tap the notification a few seconds later, the app opens but nothing is shown. I know it's because of "if (isHeadless) return null;".
The following logs are in my console:
When the app is opened(foreground/background):
Running "test_app" with {"rootTag":1,"initialProps":{"concurrentRoot":true,**"isHeadless":false**},"fabric":true}
When the app is quit and receives a notification, the bundle is re-generated:
Running "test_app" with {"rootTag":1,"initialProps":{"concurrentRoot":true,**"isHeadless":true**},"fabric":true}
When the app is re-opened, either directly or via the notification, the bundle is not regenerated and keeps the last state:
Running "test_app" with {"rootTag":1,"initialProps":{"concurrentRoot":true,**"isHeadless":true**},"fabric":true}
This issue does not happen when the app is already open, no matter if it is in foreground or background state, because the bundle will not be re-generated.
To make the app UI display, I need to exit the app and open it again, which causes the bundle to be regenerated:
Running "test_app" with {"rootTag":1,"initialProps":{"concurrentRoot":true,**"isHeadless":false**},"fabric":true}
How can I solve this issue?
I need your help!!!
Related:
https://github.com/facebook/react-native/issues/20115
https://github.com/invertase/react-native-firebase/issues/5388
### Steps to reproduce
1. App is in quit state and receives a notification
2. Open the app
3. App shows blank screen
### React Native Version
0.76.5
### Affected Platforms
Runtime - iOS
### Areas
Bridgeless - The New Initialization Flow
### Output of `npx react-native info`
```text
System:
OS: macOS 14.5
CPU: (8) arm64 Apple M2
Memory: 2.56 GB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.18.0
path: ~/.nvm/versions/node/v20.18.0/bin/node
Yarn:
version: 1.22.19
path: /opt/homebrew/bin/yarn
npm:
version: 11.0.0
path: ~/.nvm/versions/node/v20.18.0/bin/npm
Watchman:
version: 2024.10.28.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/williamchan/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK:
API Levels:
- "34"
- "35"
Build Tools:
- 30.0.3
- 33.0.0
- 34.0.0
- 35.0.0
System Images:
- android-23 | Google APIs ARM 64 v8a
- android-33 | Google Play ARM 64 v8a
- android-34 | Google Play ARM 64 v8a
- android-35 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12483815
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 3.2.2
path: /Users/williamchan/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
The following logs are in my console:
When the app is opened(foreground/background):
Running "test_app" with {"rootTag":1,"initialProps":{"concurrentRoot":true,"isHeadless":false},"fabric":true}
When the app is quit and receives a notification, the bundle is re-generated:
Running "test_app" with {"rootTag":1,"initialProps":{"concurrentRoot":true,"isHeadless":true},"fabric":true}
When re-open the app or open the app by the notification, the bundle is not regenerated and keep the last state:
Running "test_app" with {"rootTag":1,"initialProps":{"concurrentRoot":true,"isHeadless":true},"fabric":true}
```
### Reproducer
/
### Screenshots and Videos
_No response_ | Platform: iOS,Needs: Repro,Needs: Attention,Type: New Architecture | low | Major |
2,776,717,949 | react-native | [0.76][New Arch] Pressable loses background when applying borderless ripple effect | ### Description
The `Pressable` component loses its backgroundColor and ripple effect when `android_ripple` is set with `borderless: true`.
Notes:
• If `Pressable` is the only component in the app or if all parent components have transparent backgrounds, the ripple effect works as expected. However, this scenario is not realistic.
Edit 1:
Crazy thing: if you add `overflow: hidden` to the parent, the Pressable background and ripple work again for some reason. But again, only if the parent and the Pressable are in the root.
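A minimal sketch of that workaround as plain style objects (names and values are illustrative, not taken from the snack):

```javascript
// Hypothetical styles for the workaround described above: adding
// overflow: 'hidden' to the parent restores the Pressable's background and
// borderless ripple under the new architecture.
const styles = {
  parent: { padding: 24, backgroundColor: '#eeeeee', overflow: 'hidden' },
  pressable: { padding: 12, backgroundColor: '#cddcf5' },
};

// android_ripple config that triggers the bug without the workaround.
const androidRipple = { color: 'rgba(0, 0, 0, 0.2)', borderless: true };

module.exports = { styles, androidRipple };
```

These would be applied as `<View style={styles.parent}><Pressable style={styles.pressable} android_ripple={androidRipple} /></View>`.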
I have two snacks:
1. Using SDK 51 (old arch), this is working fine:
https://snack.expo.dev/@wfern/android-pressable-borderless-sdk-51
2. Using SDK 52 (new arch), where the bug happens:
https://snack.expo.dev/@wfern/android-pressable-borderless-sdk-52
### Steps to reproduce
In the Snack with SDK 52 set `Pressable` android_ripple borderless to `true` and see it's background and ripple effect disappear.
### React Native Version
0.76.5
### Affected Platforms
Runtime - Android
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: macOS 15.2
CPU: (8) arm64 Apple M1
Memory: 126.64 MB / 8.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.17.0
path: ~/.nvm/versions/node/v20.17.0/bin/node
Yarn:
version: 1.22.22
path: /opt/homebrew/bin/yarn
npm:
version: 11.0.0
path: ~/.nvm/versions/node/v20.17.0/bin/npm
Watchman:
version: 2024.12.02.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /opt/homebrew/lib/ruby/gems/3.4.0/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 3.4.1
path: /opt/homebrew/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.3
wanted: ^15.1.3
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: 0.76.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
No logs.
```
### Reproducer
https://snack.expo.dev/@wfern/android-pressable-borderless-sdk-52
In this snack set the Pressable ripple borderless to true.
### Screenshots and Videos
_No response_ | Issue: Author Provided Repro,Type: New Architecture | low | Critical |
2,776,738,884 | flutter | obj messy code | ### Steps to reproduce
What causes this "obj" garbled text when TextSpan widgets are nested?
<img width="203" alt="infoflow_2025-1-9_11-0-7" src="https://github.com/user-attachments/assets/7a1d4444-3bc2-4b15-95d5-eefdf7442f2b" />
### Expected results
Expected normal display
### Actual results
Garbled text ("obj" characters) is displayed instead of the content.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/gestures.dart';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'dart:ui' as ui show Locale, LocaleStringAttribute, ParagraphBuilder, SpellOutStringAttribute, StringAttribute;
import 'package:logic_module/utils/log_util.dart';
class TextSpanWidget extends StatelessWidget {
  String? text;
  TextStyle? style;
  GestureRecognizer? recognizer;
  MouseCursor mouseCursor;
  PointerEnterEventListener? onEnter;
  PointerExitEventListener? onExit;
  String? semanticsLabel;
  ui.Locale? locale;
  bool? spellOut;
  Map<GlobalKey, Rect> spanRects;

  TextSpanWidget({
    Key? key,
    required this.text,
    required this.style,
    required this.recognizer,
    required this.mouseCursor,
    required this.onEnter,
    required this.onExit,
    required this.semanticsLabel,
    required this.locale,
    required this.spellOut,
    required this.spanRects,
  }) : super(key: key);

  String _tag = "TextSpanWidget";

  @override
  Widget build(BuildContext context) {
    WidgetsBinding.instance.addPostFrameCallback((call) {
      WidgetsBinding.instance.addPersistentFrameCallback((call) {
        if (!context.mounted) return;
        try {
          RenderBox renderBox = context.findRenderObject() as RenderBox;
          Offset offset = Offset.zero;
          offset = renderBox.localToGlobal(Offset.zero);
          if (offset.dx.isNaN || offset.dy.isNaN) {
            return;
          }
          spanRects[key as GlobalKey] = Rect.fromLTWH(offset.dx, offset.dy, renderBox.size.width, renderBox.size.height);
        } catch (e, stackTrace) {
          LogUtil.error(_tag, 'Error calculating offset: $e\n$stackTrace');
        }
      });
    });

    return Text.rich(TextSpan(
      text: text,
      style: style,
      recognizer: recognizer,
      mouseCursor: mouseCursor,
      onEnter: onEnter,
      onExit: onExit,
      semanticsLabel: semanticsLabel,
      locale: locale,
      spellOut: spellOut,
    ));
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| waiting for customer response,in triage | low | Critical |
2,776,765,876 | pytorch | [Profiler]Close CUPTI by default after pytorch profiling ends | ### 🚀 The feature, motivation and pitch
Recently, I observed that the torch profiler continues to affect program performance even after profiling has completed. Upon investigation, I found that this issue occurs because Kineto does not close CUPTI (i.e., it does not call `cuptiFinalize`) by default when profiling ends, unless the environment variable `TEARDOWN_CUPTI` is explicitly set to `"1"`.
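For reference, a minimal sketch of the current opt-in (assuming only what is described above: Kineto reads `TEARDOWN_CUPTI` from the process environment, so it must be set before profiling starts; the wrapper function itself is hypothetical):

```python
import os

# Hypothetical wrapper: opt in to CUPTI teardown before any profiling runs.
# "1" asks Kineto to call cuptiFinalize when profiling ends; today this is
# off unless the user sets the variable explicitly.
def enable_cupti_teardown(env=os.environ):
    env.setdefault("TEARDOWN_CUPTI", "1")  # keep an explicit user choice
    return env["TEARDOWN_CUPTI"]
```

Called at program start (before `torch.profiler` is used), this would avoid the lingering post-profiling overhead, subject to the CUDA Graphs caveats in the code comment quoted from torch.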
I believe most users do not know this behavior. Is it possible to change the behavior so that CUPTI is closed by default at the end of profiling if the env is not set? Are there risks associated with this change?
I noticed the following comment in the torch code:
```python
if self.config.get("triton.cudagraphs", False):
    os.environ["DISABLE_CUPTI_LAZY_REINIT"] = "1"
    # FIXME: CUDA Graph does not work well with CUPTI teardown.
    #   1) crashes on 1st lazy CUPTI re-init after teardown (CUDA 11)
    #   2) crashes on 2nd non-lazy CUPTI re-init after teardown (CUDA 12)
    # Workaround: turn off CUPTI teardown when using CUDA Graphs.
    os.environ["TEARDOWN_CUPTI"] = "0"
```
Is the current behavior of not closing CUPTI primarily due to concerns about crashes caused by CUDA Graphs?
Thank you for your attention to this matter.
### Alternatives
_No response_
### Additional context
_No response_
cc @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise | oncall: profiler | low | Critical |
2,776,766,589 | godot | MSBuild ProjectRootElement.Create() Method Missing After Exporting Godot Project | ### Tested versions
Godot-v4.3.stable.mono.official
### System information
Win11 - Godot-v4.3.stable.mono.official - .NET SDK: 6.0.415
### Issue description
Description
I'm experiencing an issue when using Microsoft.Build and Microsoft.Build.Locator in a Godot project to dynamically load MSBuild. Everything works as expected when running the project directly from the Godot editor. However, after exporting the project, the application throws a runtime error stating that the Create method does not exist.
Code Example
Here is a simplified version of the code I'm using:
```csharp
using Microsoft.Build.Locator;
using Microsoft.Build.Construction;

public void InitializeMsBuild()
{
    MSBuildLocator.RegisterDefaults();
    var projectRoot = ProjectRootElement.Create();
}
```
Error Message
After exporting the project and running the executable, I receive the following error:
```
Method not found: 'Microsoft.Build.Construction.ProjectRootElement.Create()'.
```
Investigation
I used reflection to inspect the methods available in ProjectRootElement both in the Godot editor runtime and in the exported project. The Create method is present in both cases:
```
Method: Microsoft.Build.Construction.ProjectRootElement Create(System.Xml.XmlReader, Microsoft.Build.Evaluation.ProjectCollection)
```
This confirms that the method is indeed part of the loaded assembly.
Expected Behavior
The exported project should behave consistently with the behavior observed in the Godot editor. The Create method should be available and callable.
Actual Behavior
The method is reported as missing in the exported project runtime, even though it is confirmed to be present through reflection.
### Steps to reproduce
1. Create a Godot project.
2. Add a C# script that uses `Microsoft.Build.Locator` and calls `ProjectRootElement.Create()`.
3. Run the project from the Godot editor – it works fine.
4. Export the project.
5. Run the exported executable – it throws a runtime error.
### Minimal reproduction project (MRP)
Additional Information
I suspect that the issue might be related to how assemblies are bundled during the export process. Perhaps some dependencies of Microsoft.Build are not being correctly included in the exported project.
I would appreciate any guidance on how to ensure that all required assemblies are properly bundled in the exported project, or if there are any additional steps needed to make the exported executable recognize the Create method. | bug,needs testing,topic:dotnet,topic:export | low | Critical |
2,776,782,893 | vscode | Hide and Lock Main Editor Area | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Currently I am working on full screen graphical stuff.
I have multiple monitors and like to have the debug panels on a secondary monitor while the app I am working on is open on the primary (where I also edit code).
Because the debug pane and console can't be split out of the main window, I split the editor out into a new window on the primary monitor, keep the 'main' window with the debug panel on the secondary monitor, and lock the editor group so that any files I open go to the primary monitor.
This is fine however the main editor section takes up valuable space (The secondary screen is smaller and vertical)
<img width="540" alt="Image" src="https://github.com/user-attachments/assets/e07a0049-ff3e-4c51-b6f1-42aabf5e0f5a" />
It would be awesome to have button somewhere to do the following all in one go:
- Split the editor to a new window
- Hide the editor section on the main VS Code window
- Lock the editor group on the main VS Code window
| feature-request,workbench-editor-grid,workbench-auxwindow | low | Critical |
2,776,826,915 | flutter | Error after including Shared Preference dependancy | When i include Shared Preference dependency i got an error as
```
What went wrong:
Could not determine the dependencies of task ':shared_preferences_android:compileDebugJavaWithJavac'.
> Could not determine the dependencies of null.
> Cannot query the value of this provider because it has no value available.
```
My `pubspec.yaml` contains:
```yaml
shared_preferences: ^2.3.5
```
The SDK version is 35. | waiting for customer response,in triage | low | Critical |
2,776,830,834 | ui | [bug]: Toasts stop being dismissed correctly if one of them is dismissed with an action | ### Describe the bug
Hello! I found out something weird. If you dismiss a toast with an action, the others (if you have a limit bigger than 1) are no longer dismissed.
Expected behavior:
After I interact with a Toast/Notification, the dismissal timeout should be resumed. That never happens.
Also, after a mouse interaction (`mouseenter`) the dismissal timer should be cleared.
I have fixed my issue by adding a `dismissQueue` (alongside the `removeQueue` that `useToast` already has).
```ts
// Holds the pending dismissal timeout per toast id.
const dismissQueue = new Map<string, ReturnType<typeof setTimeout>>();

const addToDismissQueue = (toastId: string) => {
if (dismissQueue.has(toastId)) return;
const timeout = setTimeout(() => {
dismissQueue.delete(toastId);
dispatch({ type: 'DISMISS_TOAST', toastId });
}, TOAST_GLOBALS.duration);
dismissQueue.set(toastId, timeout);
};
const removeFromDismissQueue = (toastId: string) => {
clearTimeout(dismissQueue.get(toastId));
dismissQueue.delete(toastId);
};
const clearDismissQueue = () =>{
for(const timeout of dismissQueue.values()){
clearTimeout(timeout);
}
dismissQueue.clear();
}
```
And when I dispatch `ADD_TOAST`, I pass the props:
```ts
dispatch(
{
type: 'ADD_TOAST',
toast: {
...props,
onMouseLeave: ()=> addToDismissQueue(id),
onMouseEnter: ()=> removeFromDismissQueue(id),
duration,
id,
open: true,
onOpenChange: open => {
if (!open) dismiss();
},
},
}
);
```
Don't forget to clear the dismissQueue when you clear all notifications:
```
export const reducer = (state: State, action: Action): State => {
switch (action.type) {
case 'ADD_TOAST':
// addToDismissQueue(action.toast.id, duration, limit);
return {
...state,
toasts: [action.toast, ...state.toasts].slice(0, TOAST_GLOBALS.limit),
};
case 'UPDATE_TOAST':
return {
...state,
toasts: state.toasts.map(t => (t.id === action.toast.id ? { ...t, ...action.toast } : t)),
};
case 'DISMISS_TOAST': {
const { toastId } = action;
// ! Side effects ! - This could be extracted into a dismissToast() action,
// but I'll keep it here for simplicity
if (toastId) {
addToRemoveQueue(toastId);
} else {
state.toasts.forEach(toast => {
addToRemoveQueue(toast.id);
});
}
return {
...state,
toasts: state.toasts.map(t =>
t.id === toastId || toastId === undefined
? {
...t,
open: false,
}
: t,
),
};
}
case 'REMOVE_TOAST':
if (action.toastId === undefined) {
clearDismissQueue();
return {
...state,
toasts: [],
};
}
return {
...state,
toasts: state.toasts.filter(t => t.id !== action.toastId),
};
}
};
```
This way we achieve the desired outcome.
Note:
The `TOAST_GLOBALS` constant is something I came up with in order to set the parameters from a context right inside the `useToast` hook:
```
function useToast() {
const context = React.useContext(ToasterContext);
if (!context) throw new Error('useToast must be used inside the Toaster context');
const [state, setState] = React.useState<State>(memoryState);
React.useEffect(() => {
listeners.push(setState);
return () => {
const index = listeners.indexOf(setState);
if (index > -1) {
listeners.splice(index, 1);
}
};
}, [state]);
Object.assign(TOAST_GLOBALS, context);
return {
...state,
toast,
unmount: () => {
dispatch({ type: 'REMOVE_TOAST' });
setState(memoryState);
},
dismiss: (toastId?: string) => {
dispatch({ type: 'DISMISS_TOAST', toastId });
if (!toastId) return;
for (const toast of memoryState.toasts.filter(toast => toast.id !== toastId)) {
addToDismissQueue(toast.id);
}
},
context,
};
}
```
I don't think this approach is optimal, so I am posting it here to temporarily solve this issue (at least for me it was an issue), and I am hoping for a fix in the near future. Thanks!
### Affected component/components
Toast
### How to reproduce
1. Add basic toast to project
2. Update the delay and the limit (limit = 3 and delay = 3000)
3. Add the Toaster to your project
4. Add a button that creates a toast
5. Add an action inside the toast that dismisses the toast(or use the dismiss function that is returned from the useToast hook)
6. Press the `insert-toast` button twice and dismiss one of the toasts.
7. The second toast is never dismissed
Extra: You can add other toasts and they won't be dismissed either.
### Codesandbox/StackBlitz link
https://codesandbox.io/p/devbox/4sknl5
### Logs
_No response_
### System Info
```bash
Systems: Linux / Mac / Windows
Browsers: Chrome / Arc / Firefox / Zen
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,776,980,536 | transformers | Error occurs when using model.generate with Gemma2 in ZeRO3 environment | ### System Info
> pip install git+https://github.com/huggingface/transformers.git@a6256ec0982fee2c57cc41237bff7e64ed4dcda9
> pip install deepspeed==0.15.4
> pip install torch==2.5.1+cu121
<details>
<summary>huggingface-env</summary>
transformers: 4.48.0.dev0
huggingface_hub version: 0.27.0
Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Python version: 3.10.12
Running in iPython ?: No
Running in notebook ?: No
Running in Google Colab ?: No
Running in Google Colab Enterprise ?: No
Token path ?: /root/.cache/huggingface/token
Has saved token ?: True
Who am I ?: jp1924
Configured git credential helpers:
FastAI: N/A
Tensorflow: N/A
Torch: 2.5.1+cu121
Jinja2: 3.1.5
Graphviz: N/A
keras: N/A
Pydot: N/A
Pillow: 10.4.0
hf_transfer: N/A
gradio: N/A
tensorboard: N/A
numpy: 1.26.4
pydantic: 2.10.4
aiohttp: 3.11.11
ENDPOINT: https://huggingface.co
HF_HUB_CACHE: /root/.cache/huggingface/hub
HF_ASSETS_CACHE: /root/.cache/huggingface/assets
HF_TOKEN_PATH: /root/.cache/huggingface/token
HF_STORED_TOKENS_PATH: /root/.cache/huggingface/stored_tokens
HF_HUB_OFFLINE: False
HF_HUB_DISABLE_TELEMETRY: False
HF_HUB_DISABLE_PROGRESS_BARS: None
HF_HUB_DISABLE_SYMLINKS_WARNING: False
HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
HF_HUB_DISABLE_IMPLICIT_TOKEN: False
HF_HUB_ENABLE_HF_TRANSFER: False
HF_HUB_ETAG_TIMEOUT: 10
HF_HUB_DOWNLOAD_TIMEOUT: 10
</details>
<details>
<summary>deepspeed-env</summary>
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] FP Quantizer is using an untested triton version (3.1.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
[WARNING] gds requires the dev libaio .so object and headers but these were not found.
[WARNING] gds: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.5
[WARNING] using untested triton version (3.1.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
DeepSpeed general environment info:
torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch']
torch version .................... 2.5.1+cu121
deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed']
deepspeed info ................... 0.15.4, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 126.00 GB
</details>
<details>
<summary>torch-env</summary>
PyTorch version: 2.5.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4310 CPU @ 2.10GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
Stepping: 6
CPU max MHz: 3300.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.1 MiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 30 MiB (24 instances)
L3 cache: 36 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.5.1+cu121
[pip3] torchvision==0.20.1+cu121
[pip3] triton==3.1.0
[conda] Could not collect
</details>
### Who can help?
@muellerzr
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import os
from accelerate import Accelerator
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
Gemma2ForCausalLM,
GenerationConfig,
TextGenerationPipeline,
TrainingArguments,
)
from transformers.integrations import is_deepspeed_zero3_enabled
CHAT_TEMPLATE = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{{ bos_token }}{% for message in messages %}{{ '<start_of_turn>' }}{% if message.role == 'user' %}{{ '### User:\\n' }}{% if message.content is not string %}{% for content in message.content %}{% if content.type == 'image' %}{{ '<img>' }}{% elif content.type == 'text' %}{{ content.text }}{% else %}{# Do nothing #}{% endif %}{% endfor %}{% else %}{{ message.content }}{% endif %}{{ '\\n\\n' }}{% elif message.role == 'system' %}{{ '### System:\\n' }}{% if message.content is not string %}{% for content in message.content %}{% if content.type == 'image' %}{{ '<img>' }}{% elif content.type == 'text' %}{{ content.text }}{% else %}{# Do nothing #}{% endif %}{% endfor %}{% else %}{{ message.content }}{% endif %}{{ '\\n\\n' }}{% elif message.role == 'assistant' %}{{ '### Assistant:\\n' }}{% if message.content is not string %}{% for content in message.content %}{% if content.type == 'text' %}{{ content.text }}{% else %}{# Do nothing #}{% endif %}{% endfor %}{% else %}{{ message.content }}{% endif %}{% else %}{{ '' }}{% endif %}{{ '<end_of_turn>' }}{% endfor %}{% if not add_generation_prompt %}{{ eos_token }}{% elif add_generation_prompt %}{{ '<start_of_turn>' }}{{ '### Assistant:\\n' }}{% else %}{# Do nothing #}{% endif %}"
def main(args: TrainingArguments) -> None:
if not is_deepspeed_zero3_enabled():
raise ValueError("This script is only compatible with DeepSpeed Zero3.")
accelerator = Accelerator(deepspeed_plugin=args.deepspeed_plugin)
model_name = "google/gemma-2-9b"
config = AutoConfig.from_pretrained(model_name, attn_implementation="eager")
pipeline = TextGenerationPipeline(
model=AutoModelForCausalLM.from_pretrained(model_name, config=config),
tokenizer=AutoTokenizer.from_pretrained(model_name, chat_template=CHAT_TEMPLATE),
device=args.device,
batch_size=args.per_device_eval_batch_size,
)
pipeline.model = accelerator.prepare(pipeline.model)
match args.local_rank:
case 0:
max_new_tokens = 10
case 1:
max_new_tokens = 2
generate_kwargs = {
"generation_config": GenerationConfig(
max_new_tokens=max_new_tokens,
cache_implementation="hybrid",
use_cache=True,
),
"synced_gpus": True,
}
conversations = [{"role": "user", "content": "hi"}]
pipeline(conversations, return_full_text=True, **generate_kwargs)
if "__main__" in __name__:
args = TrainingArguments(
output_dir=".",
per_device_eval_batch_size=1,
deepspeed={
"bf16": {"enabled": "auto"},
"zero_optimization": {
"stage": 3,
"allgather_partitions": True,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 0,
"stage3_max_reuse_distance": 0,
"allgather_bucket_size": 0,
"overlap_comm": False,
"reduce_scatter": True,
"contiguous_gradients": True,
"stage3_gather_16bit_weights_on_model_save": True,
"offload_optimizer": {"device": "cpu", "pin_memory": True},
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": False,
},
bf16=True,
local_rank=os.getenv("LOCAL_RANK", -1),
)
main(args)
# pip install git+https://github.com/huggingface/transformers.git@a6256ec0982fee2c57cc41237bff7e64ed4dcda9
```
`deepspeed --num_gpus=2 main.py`
### Expected behavior
I'm getting the following error when using model.generate with Gemma2 in ZeRO3 environment with hybrid cache:
```python
[rank1]: File "/root/main.py", line 86, in <module>
[rank1]: main(args)
[rank1]: File "/root/main.py", line 52, in main
[rank1]: pipeline(conversations, return_full_text=True, **generate_kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py", line 278, in __call__
[rank1]: return super().__call__(Chat(text_inputs), **kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1362, in __call__
[rank1]: return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1369, in run_single
[rank1]: model_outputs = self.forward(model_inputs, **forward_params)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 1269, in forward
[rank1]: model_outputs = self._forward(model_inputs, **forward_params)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py", line 383, in _forward
[rank1]: generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
[rank1]: File "/home/jp/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank1]: return func(*args, **kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 2254, in generate
[rank1]: result = self._sample(
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py", line 3246, in _sample
[rank1]: model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/models/gemma2/modeling_gemma2.py", line 904, in prepare_inputs_for_generation
[rank1]: input_ids = input_ids[:, cache_position]
[rank1]: RuntimeError: CUDA error: device-side assert triggered
[rank1]: Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
I've only debugged this with the Gemma2 model so far. I don't know how it behaves with other models.
The issue occurs due to shape mismatch between input_ids and cache_position or attention_mask.
When `sync_gpus` and `this_peer_finished` are true, before skipping the addition of generated tokens to input_ids,
it always goes through `self._update_model_kwargs_for_generation`, which causes an error by adding 1 to cache_position or attention_mask.
Here's my proposed solution:
```diff
if "attention_mask" in model_kwargs:
attention_mask = model_kwargs["attention_mask"]
model_kwargs["attention_mask"] = torch.cat(
- [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
+ [attention_mask, attention_mask.new_ones((attention_mask.shape[0], num_new_tokens))], dim=-1
)
```
```diff
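The effect of this change can be sketched in plain Python, with nested lists standing in for the tensor (a sketch, not the real implementation): passing `num_new_tokens=0` for a finished peer leaves its mask aligned with its unchanged `input_ids`.

```python
def grow_attention_mask(mask, num_new_tokens=1):
    # Proposed behavior: append num_new_tokens ones per row instead of
    # a hard-coded single 1.
    return [row + [1] * num_new_tokens for row in mask]

mask = [[1, 1, 1]]
mask = grow_attention_mask(mask, num_new_tokens=1)  # a token was appended
mask = grow_attention_mask(mask, num_new_tokens=0)  # this peer finished
assert len(mask[0]) == 4  # still matches a length-4 input_ids
```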
if model_kwargs.get("use_cache", True):
model_kwargs["cache_position"] = model_kwargs["cache_position"][-1:] + num_new_tokens
else:
past_positions = model_kwargs.pop("cache_position")
new_positions = torch.arange(
- past_positions[-1] + 1, past_positions[-1] + num_new_tokens + 1, dtype=past_positions.dtype
+ past_positions[-1] + num_new_tokens, past_positions[-1] + num_new_tokens + 1, dtype=past_positions.dtype
).to(past_positions.device)
model_kwargs["cache_position"] = torch.cat((past_positions, new_positions))
return model_kwargs
```
And then:
```diff
model_kwargs = self._update_model_kwargs_for_generation(
outputs,
model_kwargs,
is_encoder_decoder=self.config.is_encoder_decoder,
+ num_new_tokens = 0 if this_peer_finished else 1,
)
```
I think this needs to be added. This way, when `this_peer_finished` is True,
it won't touch irrelevant things like attention_mask or cache_position. | bug | low | Critical |
2,777,000,456 | vscode | Error in all extensions (Unable to resolve nonexistent file) |
Type: <b>Bug</b>
1. Updated VS Code
2. I see the "⚠️" icon in the extension bar
3. All extensions have the following error:
```
Unable to read file 'c:\Users\yrabo.vscode-insiders\extensions\1natsu.insert-br-tag-1.0.0\package.json' (Error: Unable to resolve nonexistent file 'c:\Users\yrabo.vscode-insiders\extensions\1natsu.insert-br-tag-1.0.0\package.json')
```
I suspect that your engineers forgot to put a backslash before `.vscode-insiders` - `c:\Users\yrabo.vscode-insiders` ...
VS Code version: Code - Insiders 1.97.0-insider (2569d71b0491afddb23e173ee6cc2eb284f1b0b9, 2025-01-08T13:30:54.238Z)
OS version: Windows_NT x64 10.0.26100
Modes:
Remote OS version: Linux x64 5.15.167.4-microsoft-standard-WSL2
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i9-11900H @ 2.50GHz (16 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.70GB (5.58GB free)|
|Process Argv|--crash-reporter-id be554c07-aa1b-477b-a3ce-992f4e6ed782|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|WSL: Ubuntu-24.04|
|OS|Linux x64 5.15.167.4-microsoft-standard-WSL2|
|CPUs|11th Gen Intel(R) Core(TM) i9-11900H @ 2.50GHz (16 x 0)|
|Memory (System)|7.61GB (5.79GB free)|
|VM|0%|
</details><details><summary>Extensions (21)</summary>
Extension|Author (truncated)|Version
---|---|---
feather-vscode|mel|1.0.1
remote-containers|ms-|0.394.0
remote-wsl|ms-|0.88.5
material-icon-theme|PKi|5.17.0
biome|bio|2024.12.22126
gitignore|cod|0.9.0
vscode-markdownlint|Dav|0.58.0
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
copilot|Git|1.256.1304
copilot-chat|Git|0.24.2025010901
vscode-github-actions|git|0.27.0
todo-tree|Gru|0.0.226
auto-markdown-toc|hun|3.0.13
svg|joc|1.5.4
vscode-publint|kra|0.1.0
vscode-copilot-vision|ms-|0.2.2024111316
bun-vscode|ove|0.0.26
trailing-spaces|sha|0.4.1
even-better-toml|tam|0.21.2
volar|Vue|2.2.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
a9j8j154:30646983
962ge761:30841072
pythonnoceb:30776497
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
cppperfnew:30980852
pythonait:30973460
dvdeprecation:31040973
dwnewjupyter:31046869
newcmakeconfigv2:31071590
nativerepl1:31134653
pythonrstrctxt:31093868
nativeloc1:31118317
cf971741:31144450
e80f6927:31120813
iacca1:31150324
notype1:31143044
dwcopilot:31158714
h409b430:31177054
2j25a237:31183119
c3hdf307:31184662
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | bug,debt,extensions | low | Critical |
2,777,027,183 | pytorch | In AOTI Runtime, acquiring a write lock on model_exec_mutex_ may cause starvation. | ### 🐛 Describe the bug
**Describe**
https://github.com/pytorch/pytorch/blob/6f28e466f3d7b396dfe9cea87f4377be77fa7ddf/torch/csrc/inductor/aoti_runtime/model_container.h#L87
In `AOTInductorModelContainer::run`, if `constant_folded_` is false, the thread attempts to acquire a write lock on `model_exec_mutex_`. However, if multiple threads enter this branch simultaneously, some of them block waiting for the write lock. Meanwhile, other threads keep calling `AOTInductorModelContainer::run` while holding the read lock on `model_exec_mutex_`, so the threads waiting for the write lock can remain blocked indefinitely (writer starvation).
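One common way to avoid this kind of writer starvation is to guard the one-time constant folding with its own mutex and a double-checked flag instead of upgrading the shared exec lock. A Python sketch of the pattern (class and method names are invented; in C++ the same shape is usually written with `std::call_once` or an atomic flag):

```python
import threading

class Container:
    """One-time initialization that never blocks steady-state readers."""

    def __init__(self):
        self._folded = False
        self._fold_lock = threading.Lock()  # guards only the one-time work

    def run(self):
        if not self._folded:             # cheap fast-path check
            with self._fold_lock:
                if not self._folded:     # re-check: exactly one thread folds
                    self._fold_constants()
                    self._folded = True
        return "inference"               # steady state never takes the lock

    def _fold_constants(self):
        pass  # placeholder for the expensive constant folding
```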
**reproduction**
Use any model, such as the one in the [tutorial](https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html), to export `model.so`. Complex models may be more likely to reproduce the issue.
When inference is executed in multiple threads, there is a probability that some threads will be blocked.
```cpp
#include <iostream>
#include <vector>
#include <thread>
#include <torch/torch.h>
#include <torch/csrc/inductor/aoti_runner/model_container_runner_cpu.h>
void test_inference(torch::inductor::AOTIModelContainerRunnerCpu& runner) {
std::vector<torch::Tensor> inputs = {torch::randn({8, 10}, at::kCPU)};
for (int i = 0; i < 1000000; i++) {
std::vector<torch::Tensor> outputs = runner.run(inputs);
}
}
int main() {
c10::InferenceMode mode;
torch::inductor::AOTIModelContainerRunnerCpu runner("./model.so", 512);
torch::set_num_interop_threads(1);
torch::set_num_threads(1);
std::vector<std::thread> threads;
for (int i = 0; i < 16; i++) {
threads.push_back(std::thread(test_inference, std::ref(runner)));
}
for (int i = 0; i < 16; i++) {
threads[i].join();
}
return 0;
}
```

### Versions
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.5.1+cpu
[pip3] torchaudio==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: export,module: aotinductor | low | Critical |
2,777,039,373 | PowerToys | MWB disconnects when switching source on monitor | ### Microsoft PowerToys version
0.87.1
### Installation method
WinGet, PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
1. MWB works correctly
2. Switch the video source on the KVM monitor
### ✔️ Expected Behavior
MWB works without disruption
### ❌ Actual Behavior
MWB stops working. It shows as green for a couple of seconds and then goes red. Refreshing the connection on both laptops restores connectivity in a matter of seconds.
### Other Software
My hardware setup:
- Samsung LS34C652VAUXEN monitor with the following connected:
- mouse
- keyboard
- LAN
- HP EliteBook connected to the monitor with USB-C (data, video, power)
- Dell Latitude connected to the monitor with HDMI (video only) | Issue-Bug,Needs-Triage | low | Minor |
2,777,064,354 | flutter | GoogleMap renders over all steps when compiled to WASM inside a Step widget | ### Steps to reproduce
I have a GoogleMap widget in Flutter web inside a Step (4 of 5), but if I compile to WASM the map is always visible over all steps, blocking the use of other components.
Run normally (not WASM):
<img width="1220" alt="NOT wasm" src="https://github.com/user-attachments/assets/5692c732-abfe-4f4c-8d69-9b8ad5b9ad1f" />
Run in WASM (the map is visible over all steps):
<img width="1220" alt="WITH_WASM" src="https://github.com/user-attachments/assets/4db26531-d54f-4147-9365-509374df9606" />
<img width="1220" alt="WithWASM_2" src="https://github.com/user-attachments/assets/90a7cca6-f83f-4167-b885-aedcde8e837d" />
### Expected results
The map only renders inside its step
### Actual results
<img width="1220" alt="WITH_WASM" src="https://github.com/user-attachments/assets/4db26531-d54f-4147-9365-509374df9606" />
<img width="1220" alt="WithWASM_2" src="https://github.com/user-attachments/assets/90a7cca6-f83f-4167-b885-aedcde8e837d" />
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| waiting for customer response,in triage | low | Critical |
2,777,084,447 | transformers | Tokenizer outputs the same offsets for different tokens. | ### System Info
- `transformers` version: 4.30.2
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.0
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
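Before the full model-specific reproduction below, here is a minimal, self-contained sketch of how the reported symptom (two different tokens mapped to the same character span) can be detected. It is pure Python: the helper name `find_duplicate_offsets` and the `offsets` list are hypothetical illustration data, not output from this model; real offsets come from a fast tokenizer called with `return_offsets_mapping=True`.

```python
def find_duplicate_offsets(offsets):
    """Return (span, first_index, repeat_index) for spans shared by several tokens.

    `offsets` has the shape of the `offset_mapping` a fast tokenizer returns
    when called with return_offsets_mapping=True. Special tokens are
    conventionally mapped to (0, 0), so that span is skipped here.
    """
    seen = {}
    duplicates = []
    for idx, span in enumerate(offsets):
        if span == (0, 0):  # skip special tokens such as <s> / </s>
            continue
        if span in seen:
            duplicates.append((span, seen[span], idx))
        else:
            seen[span] = idx
    return duplicates

# Hypothetical offsets where tokens 2 and 3 wrongly share the span (4, 8):
offsets = [(0, 0), (0, 4), (4, 8), (4, 8), (8, 12), (0, 0)]
print(find_duplicate_offsets(offsets))  # [((4, 8), 2, 3)]
```

An empty result means every non-special token maps to a distinct span; any entry pinpoints the token indices that collide.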
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Gladiator/roberta-large_ner_conll2003")
text = """"One Ring to rule them all, One Ring to find them, One Ring to bring them all and in the darkness bind them." —The Ring's inscription, translated
The One Ring, also known as the Ruling Ring, Master Ring, Ring of Power, and Isildur's Bane, was among the most powerful artifacts ever created in Middle-earth. It was crafted by the Dark Lord Sauron in the fire of Orodruin, also known as Mount Doom, in the Second Age. Sauron's intent was to enhance his own power and exercise control over the other Rings of Power, which had been made by Celebrimbor and the Gwaith-i-Mírdain with Sauron's assistance. In this way, Sauron hoped to gain lordship over the Elves and all other peoples of Middle-earth.
History[]
Origin and War[]
Sauron forges the One Ring, by Ted Nasmith The Ring saw its origin at the core of a plot by Sauron to bring the peoples of Middle-earth under his dominion. To do this, Sauron conceived of a set of magic Rings that he would aid the Elves in forging, and then by way of a singular, more powerful Ring, he would be able to bend the wills of those who wore the Rings to his own. Once the Elves were under his sway, he assumed that the other peoples of Middle-earth could be either persuaded to join him or be conquered with relative ease. As such, he spent a significant amount of time amongst the Elves, taking on the fair guise of Annatar, Lord of Gifts and instructing them on how to create the Rings. After the sixteen planned upon Rings had been forged, Sauron returned to Mordor around the year SA 1600, and within the Sammath Naur upon Mount Doom, fashioned the One Ring.
Sauron had known since the beginning of his endeavor that, in order to control the other Rings, the One would have to be an object of surpassing potency. To give the One the power necessary to fulfill its function, he concentrated within it a great part of his own fëa (soul). In this way, Sauron's fate became bound to that of the Ring. If it were damaged or destroyed, so too would be Sauron's strength and power.
Soon afterwards, Sauron attempted to use it to subjugate the Elven wielders of the other Rings. However, when Sauron placed the One Ring on his finger, the Elves who bore the other Rings became immediately aware of him and, guessing his ill intent, removed their Rings. Correctly assuming that his attempt to gain lordship over the Elves had been thwarted, Sauron marshaled his armies to seize the Rings of Power by force. The conflict, which became known as the War of the Elves and Sauron, began in SA 1693. Initially, the war went very well for Sauron. He captured Eregion in short order and took back the Nine Rings that were kept there, and also Celebrimbor, the maker of the Elven Rings of Power. He tortured Celebrimbor until he divulged the location of the Seven Rings, and discovered too that Celebrimbor had made an additional Three Rings without him, each of significantly greater power than any of the Seven or the Nine. Celebrimbor died under torment by Sauron, refusing to reveal what he had done with the Three Rings, which he valued most. After the destruction of Eregion, Sauron was able to conquer most of western Middle-earth fairly quickly, driving the Ñoldor under Gil-galad to the Grey Havens and besieging Imladris. But in SA 1700, as the Elves were nearing defeat, Tar-Minastir of Númenor sent a great armada to Middle-earth and, together with Gil-galad, completely destroyed Sauron's armies, forcing Sauron to return to Mordor to regroup.
In SA 3261, Ar-Pharazôn, the last and most powerful of the Kings of Númenor, landed at Umbar at the head of an even more gigantic army to do battle with Sauron, in contention of Sauron's self-proclaimed title as Overlord of Middle-earth and King of Men. The sheer size and might of the Númenórean army was enough to cause Sauron's own forces to flee. Understanding that he could not overcome the Númenóreans through military power, Sauron surrendered to Ar-Pharazôn and was taken back to Númenor as a prisoner. However, Sauron's surrender was both "voluntary and cunning",[1] and allowed him to gain access to the people of Númenor. The Elves had not revealed to the Númenóreans the existence of the Rings of Power, and so Ar-Pharazôn was unaware of the One Ring's existence and capabilities. Ascending rapidly to become the King's most trusted counsellor, Sauron was able to use the Númenóreans' fear of death as a way to turn them against the Valar, and toward worship of Melkor.
The Ring on Sauron's finger, in the films Although Sauron's body was destroyed in the Fall of Númenor, his spirit was able to bear the Ring back to Middle-earth and he wielded it in his renewed war against the Last Alliance of Elves and Men between SA 3429 and 3441.[2]
The Ring was cut from Sauron's hand by Isildur at the end of the Siege of Barad-dûr in SA 3441, and he in turn lost it in the River Anduin (at the Gladden Fields) just before he was killed in an Orc ambush (TA 2). Since it indirectly caused Isildur's death by slipping from his finger and revealing him to the Orcs, it was known in Gondor's lore as Isildur's Bane.
The Third Age[]
Discovery by Sméagol and Déagol[]
Déagol observing the Ring for the first time Déagol/Gollum meets Bilbo The Ring remained hidden in the riverbed for almost two and a half millennia until a Stoor named Déagol discovered it on a fishing trip in TA 2463. His friend and cousin Sméagol murdered Déagol and stole the Ring. Sméagol was changed by the Ring’s influence over four and a half centuries into the creature known as Gollum. Gollum, after being exiled from his home, sought shelter deep beneath the Misty Mountains. There, he and it remained for nearly five hundred years, its power corrupting his mind and soul and deforming him into a pathetic physical form until the Ring abandoned him and fell off his finger, landing on the floor of one of the tunnels. This is thought to be an example of one of the more noticeable powers of the Ring; the ability to change size at will. It also displayed that the Ring was sentient, having part of the spirit of Sauron inside it.
Possession by Bilbo Baggins[]
Bilbo finding the Ring in The Hobbit: An Unexpected Journey As is told in The Hobbit, Bilbo Baggins found the Ring a few hours after Gollum lost it, while he was alone in the caverns of the Misty Mountains. Shortly after discovering it, Bilbo came upon Gollum himself, who had intended to eat the lost Hobbit. Bilbo managed to get Gollum to agree to a riddle-game to determine his own fate; if he lost, Gollum could eat him, and if he won, Gollum would have to show him an exit from the caves. Gollum lost the game but had no intention of letting Bilbo leave. He went to retrieve the Ring in order to use its powers of invisibility to help him kill Bilbo, but flew into a rage when he found it missing. Deducing that Bilbo had it from his last question — "What have I got in my pocket?" — Gollum chased him through the caves, not knowing that Bilbo had discovered the Ring's powers of invisibility and was following him to the cave's exit. At one point as he neared the exit, Bilbo was presented with an opportunity to easily kill Gollum, but relented out of pity for Gollum's wretched condition.
Bilbo escaped from Gollum and the Orcs that inhabited the Misty Mountains by remaining invisible, and managed to regroup with Thorin's company and Gandalf. Upon meeting them, he claimed to have escaped the goblins simply by being very agile in the dark of the caverns. Of the Ring, he claimed to Gandalf to have received it from Gollum as a prize for winning the aforementioned riddle-game. Gandalf quickly became suspicious both of Bilbo's story and of the Ring itself, which he immediately recognized as one of the Rings of Power due to the retarding effects it had had on Gollum's aging process. However, he appeared to have believed it to have been one of the lesser Rings. He knew that the smiths of the Gwaith-i-Mírdain had created such Rings as "essays in the craft", and knew nothing of the specific appearance of the One. At the time however, he was quite focused on finding a way to eliminate Smaug before Sauron grew powerful enough to influence him and as such, he did not concern himself overmuch with the Ring.
A few years after Bilbo's return to the Shire, Gandalf decided to address the issue of Bilbo's story with him once again. He managed to coerce from Bilbo the true account of how the Ring had come into his possession but somewhat counter-intuitively, the truth turned out to have been quite innocent. It was also so similar to Bilbo's initial fabrication that Gandalf saw no real reason why Bilbo would have lied about it in the first place, save perhaps to put his claim to the Ring beyond any possible doubt. Gandalf concluded that the Ring had an "unwholesome" effect on its owner that set to work almost immediately, as it was not in Bilbo's nature to lie, particularly about something so apparently trivial. However, he saw no real danger in letting Bilbo keep the Ring despite the Hobbit's strangely possessive attitude towards it. In addition, he would later tell Frodo that he believed that he had had no real right to take the Ring from Bilbo in any event, and so let the matter lie.
In the sixty years that Bilbo owned the Ring he seldom used it, although he kept it on his person at nearly all times. This lack of use meant its malign effects were slow to take hold, the most noticeable being that Bilbo retained a relatively young appearance even past 100 years of age. Gandalf later theorized that Bilbo's virtuous mindset upon having acquired the Ring, specifically his pity for Gollum, had also played a significant role in slowing the Ring's malignancy. Additionally, the few times he did use the Ring appeared to have lacked any malicious intent, such as an occasion where he used it to avoid notice by the Sackville-Bagginses, or when he theatrically vanished at the conclusion of his Farewell Birthday Party speech. This stood in stark contract to Sméagol's early uses of the Ring, which were primarily to commit theft or spy upon people.
Bilbo considers giving up the Ring prior to leaving Bag End, making him the first wearer to abandon it of own free will In TA 3001, Bilbo concocted a plan to leave the Shire for Rivendell, and both he and Gandalf had initially intended for Bilbo's nephew and adopted heir Frodo to inherit both Bilbo's estate and the Ring. As the time came for Bilbo to give it up however, he became extremely reluctant to pass the Ring to his nephew, and his obstinacy over the issue led Gandalf to confront him directly about the Ring. At this point, though Gandalf did not yet know exactly what the Ring was, he could tell that it was both evil and gaining a great deal of influence over his old friend. As such, he advised Bilbo in the strongest terms to give the Ring to Frodo. However, Bilbo became uncharacteristically angry with Gandalf, and argued intensely with him over the matter. After a very stern warning from Gandalf, Bilbo realized that the wizard was right about the Ring's negative impact on his well-being. With Gandalf's aid and advice, Bilbo managed to give up the Ring willingly. He then departed from the Shire, and Frodo came into possession of the Ring.
The truth of the Ring[]
Frodo and the One Ring Bilbo's strangely hostile reaction to giving up the Ring greatly disturbed Gandalf. Troubled by both his encounter with Bilbo and recent events abroad, Gandalf began to consider the possibility that the Ring might be more dangerous than he had suspected. He initially considered revealing his concerns to Saruman, head of the White Council and the Istari. Saruman had devoted many years of his life to the study of the Rings of Power, ostensibly for the purpose of countering Sauron's use of them should he resurface. As such, he was supremely knowledgeable with regard to the properties, powers and natures of the Rings. However, Saruman's pride had increased in tandem with his knowledge over the years, and this combined with a somewhat nebulous feeling of wariness on Gandalf's part led him to keep his own counsel.
Instead, he considered finding and interrogating Gollum in order to help him further understand the nature of the Ring. He searched for news of Gollum, and managed to determine that Gollum had indeed left the Misty Mountains to locate the Ring. However, lulled by the fact that Saruman had told the White Council on at least one occasion that the Ring was beyond finding, he decided to let Gollum be. This proved to be a nearly catastrophic oversight on Gandalf's part, because Gollum's long possession of the Ring had incidentally left him open to an otherworldly summons from Sauron. For at the time, the Dark Lord had been putting forth his power to draw as many evil beings as possible to Mordor to rebuild his forces. As such, Gollum was pulled away from his search for the Shire by Sauron's will and came at last to Mordor.
Gollum was eventually captured while skulking around the borders of the Black Land, and was taken to the rebuilt Barad-dûr. Here, Sauron too recognized the effects of a Ring of Power on Gollum. As the other Rings were all accounted for; being in his possession, destroyed, or in the hands of the Elves, he knew that Gollum must have at some point possessed the One. Under torture, Gollum revealed the existence of Bilbo and the Shire.
Around this time, Gandalf requested of his close friend Aragorn and his fellow Dúnedain Rangers to keep an extremely close watch on the Shire. They soon reported that an inordinately large number of creatures not native to the Shire were being used to keep watch on it at the behest of an unknown party. It would turn out to be Saruman who was responsible for the spying, but at the time, he was trusted by all those who opposed Sauron. Extremely worried at this discovery, Gandalf decided at last to locate Gollum. He requested Aragorn's aid in hunting Gollum, as Aragorn's woodcraft was prodigious even amongst the Rangers. However, Gollum was at the time in Sauron's custody, and so the search was in vain. After months of fruitless wandering, Gandalf gave up on finding Gollum, and left the seemingly hopeless search to Aragorn.
Gandalf discovers Isildur's account of the Ring Desperate for information, Gandalf realized while thinking of Saruman's Ring-lore that the only source from which he could have obtained knowledge of the One was some sort of account left by Isildur, as he was the only person known to have possessed the Ring besides Sauron. Gandalf then traveled to Minas Tirith in search of any records Isildur might have left behind concerning the Ring. After a thorough search, he finally came across a short manuscript by Isildur concerning the Ring's properties, including an important note that the Ring, when heated, seemed to manifest fiery writing on its outer band. Armed with this knowledge, Gandalf began a return trek to the Shire when he learned that, against all odds, Aragorn had somehow managed to find and capture Gollum. Gollum then revealed that he had been to Mordor, but would not tell Gandalf anything about what he had endured there. Gandalf was able to deduce that there was some terror upon Gollum greater than any he himself could induce, and inferred from that that Sauron must have interrogated Gollum personally. As such, Sauron almost certainly knew nearly everything Gandalf knew about the Ring's whereabouts.
Gandalf then hastened to the Shire and confirmed his suspicions; the Ring was indeed the One. Aware that Sauron would use every means at his disposal to get it back, Gandalf instructed Frodo to flee to Rivendell with it, as it was the closest safe haven. Gandalf had intended to accompany them, but was lured to Isengard and imprisoned by Saruman, who wanted the location of the Ring so that he could take it for himself. Before he had departed however, he had given a letter to Barliman Butterbur, the innkeeper of The Prancing Pony in Bree, with instructions that it was to be delivered to Frodo immediately. The letter contained a warning to Frodo that he needed to leave the Shire at once, and had also contained a bit of information about Aragorn, whom Gandalf had instructed to watch for the hobbits and aid them if he could. However, the letter was never delivered, and as such, Frodo delayed his departure in the hope that Gandalf was simply late. Eventually however, Frodo decided that he could no longer wait, and with his companions, Samwise Gamgee, Peregrin Took, and Meriadoc Brandybuck set out without him for Rivendell.
The Hobbits' delay however, had allowed Sauron's servants, a number of strange black-clad horsemen, to journey to the Shire and begin searching for the Ring. The hobbits had a number of close encounters with these black riders, but managed to stay out of their grasp. Within a few days, they reached the village of Bree and encountered Aragorn, who revealed to them that he knew of their quest and was a friend of Gandalf's. He offered to guide them to Rivendell, but the hobbits were wary of his intentions. Fortunately however, Butterbur revealed Gandalf's letter to the hobbit party and they accepted his offer. By then, Gandalf had managed to escape from Isengard, and had begun desperately seeking for Frodo. He quickly reached the Shire, and discovered to his dismay that Frodo had only left a few days before. Having intended for Frodo to have left weeks prior, he realized that Butterbur had almost certainly not delivered his letter.
Furious, he arrived at Bree a mere day after the hobbits had left it, and learned from Butterbur that Aragorn had found them and was guiding them to Rivendell. Greatly comforted by this knowledge, Gandalf set out for Rivendell the next morning. With virtually no hope of finding Aragorn and the hobbits in the wilderness, Gandalf set out for Weathertop, hoping that Aragorn would make for it as well. However, he was attacked at Weathertop by all nine of the Nazgûl, and though he managed to fend them off, he was ultimately forced to flee. In fleeing, he managed to draw four of the Nine into pursuing him, hoping that it would give the hobbits a greater chance of reaching Rivendell. His ploy worked to a degree, as the Nazgûl would later attack Aragorn and the hobbits at Weathertop, but with only five of their number. The Witch-king managed to stab Frodo in the shoulder with a Morgul-knife, but Aragorn managed to drive them off. Weeks later Aragorn and the Hobbits, with Glorfindel's aid, reached Rivendell. There, Frodo was saved by Elrond's healing arts from a fragment of the knife which had remained lodged in his shoulder, and the Ring was made temporarily secure.
Quest of the Ring[]
The Council of Elrond, as depicted in The Lord of the Rings: The Fellowship of the Ring (2001) At Rivendell, a Council overseen by Elrond was convened, which incidentally included members of every free race in Middle-earth, to decide what to do with the Ring. Numerous members of the Council advocated for hiding the Ring, sending it to Aman, or using it against Sauron, but all were ultimately rejected as possibilities. It was also debated whether or not it could be given to a safe guardian, such as Tom Bombadil, or left in Rivendell. However, these options too were rejected, as no place or guardian could be entrusted to withstand Sauron's full power when he came at last to take the Ring. As long as the Ring existed, Sauron's power would continue to grow, and the strength of the remaining Elves and Edain of Middle-earth was far less than it once had been. As things stood, Sauron would not need to regain the Ring to achieve eventual victory. Nor could it be used against Sauron, for the Ring was of Sauron and thus wholly evil. As such, any with the strength to effectively wield the One against Sauron would ultimately become a Dark Lord in turn, eliminating any point in having used it in the first place.
It was eventually decided that the Ring needed to be taken to Mordor and cast into Mount Doom, where it could be destroyed. This action would cripple Sauron's powers, and would end his ability to influence the world forever. Frodo volunteered to carry the Ring there, and a company of eight companions were chosen to accompany him. The Fellowship of the Ring, as the group came to be known, began the long trek to Mordor.
Following their adventure through Moria, during which Gandalf fell, and their time in Lothlórien, the Fellowship was scattered when Frodo and Sam split off from the rest of the group after an Uruk-hai attack. They continued on from Nen Hithoel to Mordor alone, without any clear sense of how to get there. Frodo and Sam soon became lost in the Emyn Muil, where they encountered Gollum, who had been shadowing them ever since Moria. Frodo and Sam managed to capture him, and after swearing an oath to serve the Ring's owner (in the immediate case Frodo), Gollum was ordered to lead them to Mordor, as he had been there before and knew the way.
Passing through the Dead Marshes, the hobbits came to the Black Gate and prepared to enter Mordor. However Gollum, learning only upon their arrival of Frodo's intent to actually enter Mordor, revealed that there was another way in; the pass of Cirith Ungol. On their way to the pass, the hobbits encountered Faramir and a group of Ithilien Rangers. Learning of their goal, Faramir aided them, providing supplies of food and water, instead of taking the Ring as his father would have wished.
Reaching Minas Morgul, the hobbits began their climb up the winding Stairs of Cirith Ungol to a long tunnel. Here, Gollum betrayed them, for inside the tunnel dwelt the monstrous spider Shelob, who sought to consume the two hobbits. Sam and Frodo nearly escaped, but Frodo was stung by Shelob and captured by her. Sam managed to drive the gigantic spider off, but believing that Frodo was dead, he took the Ring and resolved to finish the quest himself. However, Frodo had simply been paralyzed by Shelob's venom, making him appear dead. He was captured by a group of Orcs while he was unable to move but Sam, who had overheard the Orcs discussing Frodo's condition, followed them to the Tower of Cirith Ungol. Shortly thereafter, Sam rescued Frodo in the aftermath of a mutinous battle among the Orcs who had captured him, which had resulted in nearly the tower's entire garrison killing each other. Returning the Ring to Frodo, the two began the arduous trek towards Mount Doom.
Destruction[]
After a few days, Frodo and Sam reached the volcano, using Sauron's Road to climb it, but were ambushed by Gollum. Fending him off, Frodo continued to the Crack of Doom. But throughout his quest, the Ring had continued to tighten its hold on Frodo's mind, growing more and more powerful the closer it came to the place of its making. Entering and coming to the Cracks of Doom, Frodo claimed the Ring for his own and put it on. Sauron immediately spotted him and, realizing the magnitude of his folly, sent the Nazgûl on winged mounts to retrieve it. As fortune would have it however, Gollum, who had been spared moments before by Sam, attacked Frodo and bit the Ring and most of the finger it had been on off of Frodo's hand. Then, dancing with joy over retrieving the Ring, Gollum took one misstep and toppled over the side of a cliff into the Crack of Doom (this was one of Eru's few interventions in the events of Middle-earth).[3] There, the Ring was swiftly melted, undoing Sauron's power and finally vanquishing him.
Properties[]
Effects[]
The One Ring as portrayed for the films When a typical person put on the Ring, they would be partly "shifted" out of the physical realm into an unseen realm, walking its threshold. A side-effect (though usually the first effect noticed) was invisibility to physical beings like living Men, but this brought high visibility to other unseen beings, such as Ringwraiths.
For mortals, the Ring had several side-effects, most of them negative. Perhaps the first was that the bearer soon developed a strong attachment to it, becoming increasingly reluctant to relinquish it. Like the other great Rings of Power, it would extend the lifespan of a mortal bearer indefinitely, but the bearer would not grow or obtain more vitality. Instead the bearer would merely "continue" until existence became unbearably wearisome. Gandalf also held that if the Ring was worn too frequently, the wearer would become wraith-like over time and become entirely subsumed into the spirit world. The rate at which this occurred depended largely on the wearer's inner nature. Race also seemed to perhaps play some part in it, as Hobbits in particular seemed to "fade" rather reluctantly. Merely keeping the Ring without using it greatly slowed most of the negative effects of the Ring, but it could not do so indefinitely.
Additionally, prolonged life came to its bearer independently of their use of the Ring. Bilbo and Gollum hardly ever used the Ring, yet aged remarkably well (even in Gollum's case, though his lifestyle and extreme age warped his body). The circumstances of initially finding the Ring also played a profound role in how quickly the Ring's negative effects began to take hold of the bearer, as did the bearer's own disposition. Gollum, for instance, obtained the Ring by murdering his close friend, and as such the negative effects of the Ring on his mind in particular manifested quickly. Bilbo, on the other hand, not only found and obtained the Ring by chance, but when confronted with an opportunity to slay Gollum minutes after finding it, chose to spare him out of pity for his wretchedness. Due to this, Bilbo was ultimately able to give it up (sixty years later) of his own free will, which Gollum would have been wholly unable to do.
One potentially positive effect was that the Ring may have granted the wearer some understanding of the speech of evil creatures: while wearing it, Samwise Gamgee was able to understand the Black Speech of the Orcs in Mordor and Bilbo was able to understand the Mirkwood Spiders. It also sharpened one's senses.
Frodo holding the One Ring The Ring, being essentially an extension of Sauron himself, was evil in nature. It seemed to possess at least a limited will of its own, and could "call out" subliminally to other people, in an attempt to get them to pick it up or possibly kill the current holder. After being cut from Sauron's hand, the Ring was, as Gandalf put it, "trying to get back to its master" and used all of its power and influence to find its way back to the Dark Lord. It was also capable of changing sizes and could easily slip off of a finger where once it had been tight, with no obvious explanation as to why. Frodo Baggins was warned by Bilbo of this oddity, and kept it on a chain to make sure it never got lost.
The Ring's power grew throughout Frodo's journey, particularly on the last stage of the quest when Sam and Frodo entered Mordor and approached Mount Doom. Inside the Sammath Naur, the Ring was so powerful that not even a bearer that intended it harm would be able to do so, as was the case with Frodo Baggins.
Although the Ring was crafted by Sauron and served as a remote vessel of his essence and power, its loyalty for its maker was not absolute. Gandalf implies that if he or any other exceptionally powerful being (such as Saruman or Galadriel) were to take and use the Ring to enhance their power enough to destroy Sauron, the Ring would not lose its own power and instead consider its bearer its new master. However, the Ring would retain its corrupting nature and twist whomever had destroyed Sauron to evil.
Powers[]
"War is upon us and all our friends, a war in which only the use of the Ring could give us surety of victory." —Gandalf[4]
The Ring's primary power was control over the other Rings of Power, including "mastery over [their] powers" and domination of the wills of their users. However, its effectiveness in this manner proved limited, as the wielders of the Three never used them while Sauron held the One, and the Dwarves to whom the Seven had been given proved too tough for Sauron's mental influence to take hold. By extension, the Ring also conferred the power to dominate the wills of other beings whether they were wearing Rings or not. However, this was its least accessible power, since it granted this ability in proportion to the user's natural capacity.
Perhaps most usefully, the Ring was capable of augmenting the abilities and powers of whatever being held it. Also, the wielder of the One Ring might not have possessed most if not all the powers of the Ring. Only Sauron or someone similar in power and experience with the Ring would be able to wield its full power. Varying powers or effects of the Ring are seen in the wielder depending on their power, skill, abilities, natural capacity, and situation. When Samwise Gamgee held the Ring he could hear far better and with more acute accuracy while trying to sneak around Cirith Ungol to evade the Orcs and save Frodo. Frodo was able to see for miles when wearing it.
In all cases, a mortal wearing the Ring became invisible, except to anyone able to perceive the non-physical world, with only a thin, shaky shadow discernible in sunlight. Whether immortals would be made invisible by it is unknown, as the only immortal being who ever wore the Ring was Tom Bombadil, over whom the Ring had absolutely no power whatsoever. However, Bombadil appeared to have been unique in that regard, as both Gandalf and Saruman were susceptible to the Ring's influence, and Bombadil was anomalous in many other ways.
Another power of the Ring was the ability to project a false vision of its wearer to observers. When Sam encountered an Orc in the Tower of Cirith Ungol while holding the Ring, he appeared to the Orc as a powerful warrior cloaked in shadow "[holding] some nameless menace of power and doom." The Orc was so terrified of this illusion that it fled from the otherwise unintimidating Sam. Similarly at Mount Doom, when Frodo and Sam were attacked by Gollum, Frodo grabbed the Ring and appeared as "a figure robed in white... [and] it held a wheel of fire." In this scene, Frodo (or perhaps the Ring itself) spoke "with a commanding voice" foretelling the destruction of Gollum.
However, the Ring did not offer the wielder protection from physical harm. While wearing the Ring, Frodo was still seriously injured by the Witch-king and his Morgul-knife, and lost a finger when Gollum bit it off. Sauron himself suffered the destruction of his physical body at the hands of Gil-galad and Elendil while wearing the Ring.
As it contained the better part of Sauron's native power, it seemed to exhibit a malevolent, but limited, form of sentience. While separated from Sauron, the Ring would strive to return to him, both by impelling its bearer to yield to Sauron or his servants, or by abandoning its possessor at key moments. The Ring had the ability to change size. As well as adapting to fingers of varying size, from Sauron's to Frodo's, it sometimes suddenly expanded to escape from its wearer. For example, it slipped off of Gollum's finger when the time was right for it to be brought back into the world at large. Sauron was also capable of sensing the location of the Ring if someone put it on for any extended period of time, even if that person was hundreds of miles away from him.
To fully master all of the Ring's abilities, a wielder of the Ring would need an extremely disciplined and well-trained mind, a strong will, and a high degree of spiritual development. Even for those with the necessary prerequisites it would have taken time to master the Ring's powers to the point at which they would be strong enough to overthrow Sauron, and, hypothetically, bring peace. While this is a tantalizing prospect for some, in the end, the Ring's inherent corruption would twist its bearer into another Dark Lord as evil as Sauron was, or worse, regardless of their intentions at the outset. This result was apparently inevitable no matter how well-intentioned the bearer, as even fellow Maiar like Gandalf feared to so much as possess the Ring lest its power begin to take hold.
Despite its powerful qualities, neither the Ring's innate power nor its power over others was absolute. Three times Sauron suffered military defeat with it in his possession, first by Tar-Minastir in the SA 1700, and again by Ar-Pharazôn in SA 3262 when the Númenóreans' might so overawed his armies that they deserted him. He was defeated militarily once more at the end of the Second Age by the Last Alliance of Elves and Men, which culminated in his personal defeat at the hands of Gil-galad, Elendil and Isildur.
Ring-bearers[]
In total, the One Ring existed for c. 4867 years and was held by nine people, five of whom were hobbits. Sauron was by far the one to carry it for the most time (c. 1850 years), followed by Gollum (478 years), Bilbo (60 years), Frodo (17 years) and Isildur (2 years). Tom Bombadil wore the Ring temporarily on September 26, 3018 but was apparently unaffected by it. Of those who ever wore it, only Samwise Gamgee, Bilbo Baggins, Frodo Baggins and Tom Bombadil gave it up willingly. "Bearing" the Ring does not seem to be synonymous with merely touching or carrying it, since Gandalf refused to bear the Ring but was willing to handle it for a few seconds in Bag End.
Order
Name of Holder
From
Until
Duration
1st
Sauron
About SA 1600, when it was forged
SA 3441
about 1850 years
2nd
Isildur
SA 3441, after the defeat of Sauron; equivalent to TA 1
October 5, TA 2
about 2 years
---
the Ring was lost in the Anduin River
October 5, TA 2
TA 2463
2461 years
3rd
Déagol
Unknown date, TA 2463
The same day, TA 2463
A few minutes
4th
Gollum (Sméagol)
TA 2463
July, TA 2941
478 years
5th
Bilbo Baggins
July, TA 2941
September 22, TA 3001
60 years
6th
Frodo Baggins
September 22, TA 3001
March 14, TA 3019
About 17 years, 6 months
7th
Gandalf
TA 3018
TA 3018
A few seconds
8th
Tom Bombadil
September 26, TA 3018
September 26, TA 3018
A few minutes
9th
Samwise Gamgee
March 14, TA 3019
March 15, TA 3019
1 day
---
Frodo Baggins
March 15, TA 3019
March 25, TA 3019
10 days
---
Gollum (Sméagol)
March 25, TA 3019
March 25, TA 3019
A few seconds
Fate[]
Of the several bearers of the One Ring, three were still alive following the One Ring's destruction: Bilbo Baggins, Frodo Baggins, and Samwise Gamgee. Bilbo, having borne the Ring longest of the three, had reached a very advanced age for a Hobbit. Frodo suffered both physical and psychological scars from his strenuous quest to destroy the Ring. Samwise, having only briefly kept the Ring was affected the least and appeared to carry on a normal life following the Ring's destruction.
In consideration of the trials the Ring-bearers had endured, special dispensation was granted them by the Valar to travel to Tol Eressëa, the Elvenhome; though not the Undying Lands themselves, where it was hoped they could find rest and healing. At the close of The Return of the King, Bilbo and Frodo embark for the voyage to the West along with Galadriel, Elrond, and many of their folk, as well as Gandalf. Near the end of his life, Samwise is also said to have sailed to Eressëa. Tolkien in one of his letters described the process as a period of extended life and healing, after which, their spiritual scars cured, they would die in peace — if they so wished. Otherwise no mortal could set foot in the Undying Lands.
Description[]
Ash nazg durbatulûk, ash nazg gimbatul, ash nazg thrakatulûk, agh burzum-ishi krimpatul.
The Ring appeared to be made of real gold, but was impervious to damage. Even dragon-fire was said to be inadequate to harm the One Ring. It could only be destroyed by someone whose smithcraft was as great as Sauron's, or, by the fires within Mount Doom, where it had been created. Like the lesser rings forged by the Elves as "essays in the craft" before the Great Rings, it bore no gem, but when heated, it displayed a fiery Tengwar inscription in Elvish script. The lines were later taken up into a rhyme of lore describing the Rings, but they were evidently part of the spell that imbued the One Ring with power, since the Elves heard Sauron utter the same words during the Ring's creation, whereupon they took off their own Rings and foiled his plan.
Normally, the One Ring appeared perfectly plain and featureless, but when cast into fire the inscription appeared in fiery letters inside and outside the Ring.[5] A transliteration was seen when Gandalf read the ring-inscription during the Council of Elrond.[6]
Ash nazg durbatulûk, ash nazg gimbatul, ash nazg thrakatulûk, agh burzum-ishi krimpatul.—The inscription on the One Ring Roughly translated, the words mean:
One Ring to rule them all, One Ring to find them, One Ring to bring them all and in the darkness bind them.
When the Ring was first forged, Sauron spoke these words aloud, and Celebrimbor, maker of the Three Rings of the Elves, heard him from afar and was aware of his now-revealed purposes. The inscription uses Elvish lettering because all forms of writing J.R.R. Tolkien describes at that time were invented by the Elves.
Some recent editions of The Fellowship of the Ring accidentally omit the first two clauses of this phrase from Chapter 2, an error that was corrected by the time of the 50th Anniversary editions. The first four lines of the verse introduce three of the races inhabiting Middle-earth, as well as the eponymous title character, the Lord of the Rings:
Three Rings for the Elven-kings under the sky, Seven for the Dwarf-lords in their halls of stone, Nine for Mortal Men doomed to die, One for the Dark Lord on his dark throne In the Land of Mordor where the Shadows lie. One Ring to rule them all, One Ring to find them, One Ring to bring them all and in the darkness bind them In the Land of Mordor where the Shadows lie.
Gandalf first learned of the Ring-inscription when he read the account that Isildur had written before marching north to his death and the loss of the Ring. When Isildur had cut the Ring from Sauron's hand, it was burning hot, and so Isildur was able to transcribe the inscription before it faded. When Gandalf subsequently heated the Ring that Bilbo Baggins had found and passed on to Frodo the inscription appeared, the wizard had no doubt that it was the One Ring.
Symbolism and meaning[]
"The ring that Frodo carries is so real that it can't be tossed away and forgotten about. The consequences of doing so would be so disastrous that it would be an unthinkable act......The ring is not a fantasy, a figment of the mind, an urge, a desire, or an opinion which can be changed, it is a fact which must be faced, borne and finally overcome." —Joseph Pearce on the objectivity of the One Ring[7]
Sauron's Ring represents the addiction to power, the idolization of material things, and the contravening of human nature (in giving its bearer a wholly unnatural lifespan).
J.R.R. Tolkien held, however, that his works should not be seen as strict symbolism (he in fact believed that an author telling a reader how to think or feel about a story acts as a kind of tyrant, as seen in The Letters of J. R. R. Tolkien). The Andvarinaut in the Völsunga saga is considered to be one of the inspirations for the Ring, though Tolkien himself credited many other myths that revolve around the separation of a part of oneself to grant immortality.
In The Philosophy of Tolkien, Peter Kreeft explains how the One Ring, in meaning and utility, is the opposite of the Christian cross.
Earlier in the 20th century, many readers thought the Ring was an allegorical symbol of the Atomic bomb used in World War II.
In other versions[]
When The Hobbit was written, Tolkien had not yet conceived the Ring's sinister back-story. Thus, in the first edition of The Hobbit, Gollum surrenders the Ring to Bilbo as a reward for winning the Riddle Game. However, as Tolkien changed the nature of the Ring to fit into the legendarium of Middle-earth, he realized that the Ring's grip on Gollum would never permit him to give it up willingly. Therefore, Tolkien revised this chapter in the second edition of The Hobbit, having Gollum offer to show Bilbo the way out instead of offering to give up the Ring. Tolkien then decided that the first edition's version of events was how Bilbo had originally told the story to Gandalf and the Dwarves of Thorin's company, rather than what had actually occurred.
Trivia[]
The 13th Century Welsh king of Gwynedd, Llywelyn the Great, had a ring made for his wife-to-be Joan, Lady of Wales, the illegitimate daughter of King John of England. The ring had an inscription which read: Un fodrwy i ddangos ein cariad; Un fodrwy i'n clymu.
This inscription translates as:
One ring to show our love; One ring to bind us.
Llywelyn and Joan married in 1206. The inscription would have been known to Tolkien who had ready access to the Red Book of Hergest and the White Book of Rhydderch, the two main sources of the Welsh legends of the Mabinogion, in which Llywelyn the Great features and is known to have been part of Tolkien's library.
The sequence in Magyk by Angie Sage in which the main character Septimus finds the Dragon Ring is oddly similar to when Bilbo finds the Ruling Ring. An imitation of the One Ring makes an appearance in the animated movie Shrek 2; this version is a new wedding ring that Shrek has forged for his new wife Fiona, which displays 'I love you' in fiery writing once on her finger. The One Ring bears many similarities to the magic ring in Richard Wagner's Der Ring des Nibelungen cycle. Both rings' powers were bestowed upon them by dark magic, both are desired by all who do not possess them yet bring no contentment to their owners, and both have caused an individual to kill someone close to them in order to gain possession of it, an act for which they are cursed and eventually turned into a grotesque creature. Fans of both Lord of the Rings and Harry Potter have compared the One Ring to the Horcruxes used by Lord Voldemort in Harry Potter, as both the Ring and Horcruxes contain the essence of their creators and serve to protect the one who created them from permanent death by preserving a part of the creator's soul in another object. So powerful was the One Ring that even looking at it could inspire "Dragon Fever" Boromir attacked Frodo because of it although he redeemed himself by dying in defense of the hobbits from the Orcs; Saruman and Gollum were another victims of "Dragon Fever" because they both desired the One Ring. Saruman because he desired power for himself; Gollum because he killed his cousin Deagol for possession of it; at the end Frodo Baggins despite knowing the consequences of not destroying the One Ring fell under its power and proclaimed it was his; Gollum fought Frodo and biting off his finger regained possession of the One Ring until he falls into the Cracks of Doom; the ring survives him by only a few seconds before it is destroyed. 
Of all those who had possession of the One Ring only three were not affected very much by "Dragon Fever"; Bilbo Baggins because he showed mercy to the wretched Gollum; Deagol had the ring for a few minutes; Samwie Gamgee had it but never let it dominate his mind-in fact he took it from Frodo because he did not want the Orcs to have possession of it after Frodo was spider bitten and captured by Goblins; Sam in fact gives the ring back to Frodo and reminds Frodo not to forget their quest at the Crack of Doom. A variation on "Dragon-Sickness" is greed: In Tolkien poem "The Hoard" an old elven treasure hoard is claimed by a dwarf; a dragon and a King-each in turn becomes consumed with the possession of the treasure until each in turn is destroyed by someone even more greedier than the present owner.[8]
See also[]
Rings of Power
Translations[]
Foreign Language
Translated name
Arabic
الخاتم الأوحد
Afrikaans
Een Ring
Albanian
Një Unazë
Amharic
ነጠላ ቀለበት
Aragonese
Aniello Unico
Armenian
Իշխանության մատանի
Assamese
একক আঙঠি
Basque
Eraztun Bakarra
Belarusian Cyrillic
Адзіны Пярсцёнак
Bengali
একক রিং
Bulgarian Cyrillic
Един пръстен
Cambodian
ចិញ្ចៀនតែមួយ
Catalan
Anell Únic
Chinese (Hong Kong/Taiwan)
至尊魔戒
Chinese (Mainland)
指环王
Cornish
Huni Besow
Croatian
Jedinstveni Prsten
Czech
Jeden Prsten
Danish
Herskerringen
Dutch
De Ene Ring
Estonian
Üks Sõrmus
Esperanto
Unu Ringo
Faroese
Einkringur
Filipino
Ang Iisang Singsing
Finnish
Sormusten sormus
French
Anneau Unique
Galician
Anel Único
Georgian
ერთი ბეჭედი
German
Der eine Ring
Greek
Το ένα δαχτυλίδι
Gujarati
એક વીંટી
Hebrew
הטבעת האחת
Hindi
एक रिंग
Hmong
Ib lub nplhaib
Hungarian
Egy Gyűrű
Icelandic
Hringurinn eini
Irish Gaelic
Fáinne Amháin
Italian
Unico Anello
Interlingua
Anello Unic
Japanese
一つの指輪
Kannada
ಒಂದೇ ಉಂಗುರ
Kazakh
Бір сақина (Cyrillic) Bir saqïna (Latin)
Konkani
एकूच रिंग
Korean
절대반지
Kurdish
تاکە ئەڵقە (Sorani) Yek Qulp (Kurmanji)
Kyrgyz Cyrillic
Бир шакек
Laotian
ແຫວນຫນຶ່ງ
Latin
Unus Anulus
Latvian
Viens Gredzens
Lithuanian
Vienas žiedas
Luxembourgish
Een Ring
Macedonian Cyrillic
Оне Ринг
Maithili
एकल अंगूठी
Malayalam
ഒരു മോതിരം
Malaysian
Satu Cincin
Maltese
Ċirku Wieħed
Manx
Un Fainey
Maori
Kotahi te mowhiti
Marathi
एक अंगठी
Mongolian Cyrillic
Нэг бөгж
Nepalese
एकल औंठी
Norwegian
Ene Ringen
Pashto
واحد گوته
Persian
حلقه یگانه
Polish
Jedyny Pierścień
Portuguese (Portugal and Latin America)
Um Anel
Querétaro Otomi
'Nar Anillo
Romanian
Inelul Suprem , Unul
Russian
Кольцо Всевластья
Samoan
Tasi Mama
Sanskrit
एकवलयम्
Scots
Ane Ring
Scottish Gaelic
Aon Fàinne
Serbian
Један Прстен (Cyrillic) Jedan Prsten (Latin)
Shona
Mumwe Mhete
Sicilian
Unu Aneddu
Sinhalese
එක් මුදුවක්
Slovak
Jeden Prsteň
Slovenian
Prstan Mogote
Spanish (Spain and Latin America)
Anillo Único
Sundanese
Hiji Ngirining
Swahili
Pete Moja
Swedish
Härskarringen
Tamil
ஒன்று மோதிரம்
Tatar
Бер боҗра
Telugu
ఒక ఉంగరం
Thai
เอกธำมรงค์
Turkish
Tek Yüzük
Ukrainian Cyrillic
Єдиний Перстень
Uzbek
Битта узук (Cyrillic) Bitta uzuk (Latin)
Welsh
Un Fodrwy
Xhosa
Nye Indandatho
Yiddish
איין רינג
Yoruba
Oruka Kan
Zazaki
Engıştaneyo Gırd
References[]
↑ The Letters of J. R. R. Tolkien, Letter 211
↑ The Letters of J.R.R. Tolkien, Letter 211
↑ The Letters of J. R. R. Tolkien, Letter 192
↑ The Lord of the Rings, The Two Towers, Book Three, Chapter V: "The White Rider"
↑ The Lord of the Rings, The Fellowship of the Ring, Book One, Chapter II: "The Shadow of the Past"
↑ The Lord of the Rings, The Fellowship of the Ring, Book Two, Chapter II: "The Council of Elrond"
↑ Tolkien: A Celebration, 9: Joseph Pearce, "Tolkien and the Catholic Literary Revival"
↑ Tolkien Gateway - The Hoard
Categories
Categories:
Rings and jewels
Magical objects
Languages
Deutsch Español Suomi Français Italiano Nederlands Polski Português do Brasil Русский Slovenčina Українська
Community content is available under CC-BY-SA unless otherwise noted.
More Fandoms
Fantasy Lord of the Rings
Advertisement
Fan Feed
More The One Wiki to Rule Them All
1 Sauron
2 Morgoth
3 Adar
Explore properties
Fandom
Muthead
Fanatical
Follow Us
Overview
What is Fandom?
About
Careers
Press
Contact
Terms of Use
Privacy Policy
Digital Services Act
Global Sitemap
Local Sitemap
Cookie Preferences
Community
Community Central
Support
Help
Do Not Sell or Share My Personal Information
Advertise
Media Kit
Contact
Fandom Apps
Take your favorite fandoms with you and never miss a beat.
The One Wiki to Rule Them All is a FANDOM Movies Community.
View Mobile Site
Follow on IG
TikTok
Join Fan Lab"""
input_ids = tokenizer.encode_plus(
text,
return_offsets_mapping=True,
return_token_type_ids=False,
return_attention_mask=False,
add_special_tokens=False
)
print(input_ids.data["offset_mapping"][10882:10885])
### Expected behavior
Three separate tokens have the same start and end offset: (44923, 44924), (44923, 44924), (44923, 44924).
Expected behavior: each token has a different start or end offset.
The problem might be that the word is non-English. | bug | low | Critical |
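As a quick triage aid, an offset mapping like the one above can be scanned mechanically for runs of tokens that report the identical character span. A minimal, self-contained sketch (the `offsets` list below is a hypothetical stand-in for the real tokenizer output; no tokenizer is required):

```python
def find_duplicate_offsets(offset_mapping):
    """Return indices of tokens whose (start, end) span equals the previous token's."""
    duplicates = []
    for i in range(1, len(offset_mapping)):
        if offset_mapping[i] == offset_mapping[i - 1]:
            duplicates.append(i)
    return duplicates

# Hypothetical slice mirroring the report around index 10882:
offsets = [(44920, 44923), (44923, 44924), (44923, 44924), (44923, 44924)]
print(find_duplicate_offsets(offsets))  # [2, 3]
```

Run against the real `input_ids.data["offset_mapping"]`, this would flag every index whose span merely repeats its neighbour's.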
2,777,111,392 | flutter | Clipping performance regression after update from flutter 3.24.5 to 3.27.1 (both on Skia and Impeller) | In my original issue (https://github.com/flutter/flutter/issues/160207#issuecomment-2579218268) the code sample was too big and it was unclear what caused the performance regression; I have now found the cause and am creating a new issue, since the old one contains too many comments.
Devices: Samsung A12 and Pixel 7A.
Inside the AppearanceSettingsPart, when I use
```dart
clipBehavior: Clip.hardEdge
/// instead of
clipBehavior: Clip.antiAliasWithSaveLayer,
```
the lag becomes much less severe, so this is likely related to clipping. When I fully remove clipping, the app does not lag at all, just as on 3.24.5.
I did read that clipping (especially with a save layer) is expensive (https://docs.flutter.dev/perf/ui-performance), but why did it become an even more performance-heavy operation after the update?
### 3.27.1:
<img width="1727" alt="image" src="https://github.com/user-attachments/assets/bd1d33cc-c83f-4c9a-bdd0-4abe72012e6c" />
[dart_devtools_2025-01-07_11_14_13.403.json](https://github.com/user-attachments/files/18330922/dart_devtools_2025-01-07_11_14_13.403.json)
### 3.24.5
<img width="1722" alt="image" src="https://github.com/user-attachments/assets/cc76df3e-5f48-46bc-8fc3-e774eeeb7d07" />
[dart_devtools_2025-01-07_11_18_16.457.json](https://github.com/user-attachments/files/18330997/dart_devtools_2025-01-07_11_18_16.457.json)
```dart
import 'dart:math';
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
class AppRoutes {
static const String settings = '/';
static const String settingsAppearance = '$settings?t=1';
}
void main() {
runApp(
MaterialApp.router(
debugShowCheckedModeBanner: false,
routerConfig: appRouter,
),
);
}
final GoRouter appRouter = GoRouter(
routes: <RouteBase>[
GoRoute(
path: AppRoutes.settings,
pageBuilder: (context, state) {
return CupertinoPage(
key: state.pageKey,
maintainState: false,
allowSnapshotting: false,
child: TabControlledScreen(
tabIndex: int.tryParse('${state.uri.queryParameters['t']}') ?? 0,
totalTabsAmount: 7,
routerPath: AppRoutes.settings,
children: [
Center(
child: ButtonWidget(
callback: () => appRouter.push(AppRoutes.settingsAppearance),
borderRadiusOverride: BorderRadius.zero,
iconOrChild: const Text('Open screen'),
),
),
const AppearanceSettingsPart(),
],
),
);
},
),
],
);
class TabControlledScreen extends StatefulWidget {
const TabControlledScreen({
super.key,
required this.tabIndex,
required this.totalTabsAmount,
required this.routerPath,
required this.children,
});
final int tabIndex;
final int totalTabsAmount;
final String routerPath;
final List<Widget> children;
@override
State<TabControlledScreen> createState() => _TabControlledScreenState();
}
class _TabControlledScreenState extends State<TabControlledScreen>
with TickerProviderStateMixin {
late final TabController _tabController = TabController(
length: widget.totalTabsAmount,
vsync: this,
initialIndex: widget.tabIndex.clamp(0, widget.totalTabsAmount - 1),
)..addListener(_handleTabChanged);
void _handleTabChanged() {
setState(() {});
}
@override
void didUpdateWidget(covariant TabControlledScreen oldWidget) {
if (oldWidget.tabIndex != widget.tabIndex) {
_tabController.index = widget.tabIndex.clamp(
0,
widget.totalTabsAmount - 1,
);
}
super.didUpdateWidget(oldWidget);
}
@override
Widget build(BuildContext context) {
return Scaffold(
resizeToAvoidBottomInset: false,
body: Padding(
padding: EdgeInsets.only(
top: max(MediaQuery.viewPaddingOf(context).top, 16.0),
),
child: IndexedStack(
index: _tabController.index,
children: widget.children,
),
),
);
}
}
class AppearanceSettingsPart extends StatelessWidget {
const AppearanceSettingsPart({super.key});
@override
Widget build(BuildContext context) {
return Container(
decoration: BoxDecoration(
borderRadius: BorderRadius.circular(12),
color: Colors.white12,
),
clipBehavior: Clip.antiAliasWithSaveLayer,
padding: const EdgeInsets.all(24),
child: Column(
children: <Widget>[
const Dropdown(title: 'Theme', initialValue: "PlatformBrightness"),
const Dropdown(title: 'GrainEffect', initialValue: 'GreatVisible'),
const Dropdown(
key: ValueKey(Colors.purple),
title: 'MainColor',
initialValue: '0x84329490238',
),
const Dropdown(title: 'FontSize', initialValue: "Standard"),
const Dropdown(title: 'WeatherSync', initialValue: "Always"),
const Divider(),
Switcher(
initialValue: true,
onChange: (bool value) async {},
title: 'BottomMenuBlur',
description: 'BottomMenuBlurDescription',
),
Switcher(
initialValue: false,
onChange: (bool value) async {},
title: 'DisableAnimations',
description: 'DisableAnimationsDescription',
),
],
),
);
}
}
class ButtonWidget extends StatefulWidget {
const ButtonWidget({
super.key,
this.callback,
this.iconOrChild,
this.hitTestBehavior,
this.borderRadiusOverride,
});
final Function? callback;
final Widget? iconOrChild;
final HitTestBehavior? hitTestBehavior;
final BorderRadius? borderRadiusOverride;
@override
State<ButtonWidget> createState() => _ButtonWidgetState();
}
class _ButtonWidgetState extends State<ButtonWidget>
with SingleTickerProviderStateMixin {
late final AnimationController _controller = AnimationController(vsync: this);
@override
void dispose() {
_controller.dispose();
super.dispose();
}
Future<void> _onTapUp(dynamic _) async {
await Future.delayed(const Duration(milliseconds: 50));
widget.callback?.call();
}
@override
Widget build(BuildContext context) {
final Widget childWidget = IgnorePointer(
ignoring: widget.callback == null,
child: Padding(
padding: const EdgeInsets.all(48.0),
child: widget.iconOrChild,
),
);
final Widget animationWrapper = childWidget;
return GestureDetector(
onTapDown: (_) {
},
onTapUp: _onTapUp,
onTapCancel: () {
},
child: animationWrapper,
);
}
}
class Dropdown extends StatefulWidget {
const Dropdown({
super.key,
required this.title,
required this.initialValue,
});
final String title;
final String initialValue;
@override
State<Dropdown> createState() => _DropdownState();
}
class _DropdownState extends State<Dropdown> {
late String selectedValue = widget.initialValue;
@override
Widget build(BuildContext context) {
return Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: <Widget>[
Padding(
padding: const EdgeInsets.only(bottom: 4),
child: Text(widget.title),
),
ButtonWidget(
callback: () {},
iconOrChild: Container(
decoration: BoxDecoration(
color: Colors.purple,
border: Border.all(
width: 1.0,
color: Colors.black,
),
borderRadius: BorderRadius.circular(12),
),
padding: const EdgeInsets.only(
right: 16,
left: 16,
),
child: Row(
children: <Widget>[
Expanded(
child: Text(
selectedValue,
maxLines: 1,
overflow: TextOverflow.ellipsis,
),
),
],
),
),
),
],
);
}
}
class Switcher extends StatefulWidget {
const Switcher({
super.key,
required this.initialValue,
required this.onChange,
required this.title,
this.description,
});
final bool initialValue;
final Function onChange;
final String title;
final String? description;
@override
State<Switcher> createState() => _SwitcherState();
}
class _SwitcherState extends State<Switcher> {
late bool isSelected = widget.initialValue;
@override
Widget build(BuildContext context) {
final Widget container = Container(
width: 20.0,
height: 20.0,
decoration: BoxDecoration(
borderRadius: BorderRadius.circular(100.0),
color: Colors.black,
),
);
return ButtonWidget(
callback: () {
setState(() {
isSelected = !isSelected;
widget.onChange(isSelected);
});
},
iconOrChild: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Row(
children: <Widget>[
Expanded(
child: Text(
widget.title,
textAlign: TextAlign.left,
maxLines: 3,
overflow: TextOverflow.ellipsis,
),
),
Padding(
padding: const EdgeInsets.only(left: 12),
child: _SwitcherAnimatedPart(
isSelected: isSelected,
child: container,
),
),
],
),
if (widget.description != null)
Padding(
padding: const EdgeInsets.only(top: 4),
child: Text(
widget.description!,
textAlign: TextAlign.left,
maxLines: 3,
overflow: TextOverflow.ellipsis,
),
),
],
),
);
}
}
class _SwitcherAnimatedPart extends StatelessWidget {
const _SwitcherAnimatedPart({
required this.child,
required this.isSelected,
});
final Widget child;
final bool isSelected;
@override
Widget build(BuildContext context) {
return AnimatedContainer(
duration: const Duration(milliseconds: 200),
curve: Curves.easeInOutCubic,
padding: EdgeInsets.only(
left: isSelected ? 32 : 4,
right: !isSelected ? 32 : 4,
top: 4,
bottom: 4,
),
decoration: BoxDecoration(
borderRadius: BorderRadius.circular(100.0),
color: isSelected ? Colors.white : Colors.grey,
),
child: child,
);
}
}
```
Visual demo from the previous issue comparing the two versions:
https://github.com/user-attachments/assets/a4eb5fad-4366-4bae-bd69-65bbe02e8f84
_Originally posted by @krll-kov in https://github.com/flutter/flutter/issues/160207#issuecomment-2574887152_
| platform-android,engine,c: rendering,P3,e: impeller,team-engine,triaged-engine | low | Critical |
2,777,122,556 | PowerToys | Keyboard Manager bugs | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I'm using a Japanese keyboard.
With Shift, Ctrl and Alt assigned to the Caps Lock key in Keyboard Manager, pressing Caps Lock leaves Shift, Ctrl and Alt permanently held down. It seems that the original Caps Lock function is not fully suppressed and still fires.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,777,148,830 | flutter | CarouselView: add animateToItem | ### Use case
When using a carousel, the primary use case for animating the scroll offset is to animate to a specific item. Currently there is only an `animateTo(offset)` method on the `CarouselController`.
### Proposal
Add `animateToItem(int index)` | c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Major |
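Since `animateTo(offset)` already exists on the controller, `animateToItem(int index)` presumably reduces to computing a target offset from the index and delegating to it. A rough sketch of that offset arithmetic (in Python for illustration; hypothetical names, fixed-extent items and a clamped scroll range are assumptions, not the framework's actual layout logic):

```python
def offset_for_item(index, item_extent, max_scroll_extent):
    """Target scroll offset for item `index`, clamped to the scrollable range."""
    return max(0.0, min(index * item_extent, max_scroll_extent))

# animate_to(offset_for_item(...)) would then do the actual animation.
print(offset_for_item(3, 120.0, 1000.0))   # 360.0
print(offset_for_item(50, 120.0, 1000.0))  # 1000.0 (out-of-range index clamps to the end)
```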
2,777,160,622 | flutter | CarouselView infinite scroll | ### Use case
Currently there is a start and an end to the carousel. However, a common use case is a carousel where the items repeat themselves.
The idea is to not have a hole on the side when the scroll extent is at the edge, so instead of this:

The hole on the left side of the first item could be replaced by the last item.
### Proposal
```
CarouselView(
isInfinite: true,
// ...
)
``` | c: new feature,framework,f: material design,c: proposal,P3,team-design,triaged-design | low | Major |
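A common way to back such an `isInfinite` mode (a general sketch, not the framework's actual implementation) is to let the scroll position run over unbounded logical indices and fold each one onto the item list with a modulo; note that Python's `%` already keeps negative positions, i.e. scrolling left past the first item, in range:

```python
def wrapped_item_index(logical_index, item_count):
    """Map an unbounded logical scroll position onto a repeating item index."""
    return logical_index % item_count

# Scrolling left of item 0 wraps around to the end of the list:
print([wrapped_item_index(i, 3) for i in range(-2, 5)])  # [1, 2, 0, 1, 2, 0, 1]
```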
2,777,202,786 | ant-design | When many variables are passed to the Tour component's steps prop, the tour popover may fail to appear. | ### Reproduction link
[https://github.com/ffffhx/antd-tour-bug.git](https://github.com/ffffhx/antd-tour-bug.git)
### Steps to reproduce
This happens when many variables are passed to the Tour's steps prop. First click the blue button at the top center of the screen; three small buttons then appear. Click the stepGroup1 (or stepGroup2) button and walk through all 7 tour steps (the bug only appears with 6 or more tour steps; with 5 or fewer it never occurs, no matter what is passed in), then click the stepGroup3 button. At this point no tour appears, i.e. the stepGroup3 tour has no effect. If I maintain all of these variables in a single variable `step` and pass only `step` to the steps prop, everything works correctly (after clicking stepGroup1 and completing the 7 tour steps, clicking stepGroup3 shows its tour as usual).
### What is expected?
After clicking stepGroup1 and completing the 7 tour steps, clicking stepGroup3 should show its tour as usual.
### What is actually happening?
After clicking stepGroup1 and completing the 7 tour steps, clicking stepGroup3 shows no tour; the stepGroup3 tour fails.
| Environment | Info |
| --- | --- |
| antd | 5.23.0 |
| React | React-18 |
| System | windows11 |
| Browser | Google 131.0.6778.205 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Critical |
2,777,233,028 | godot | WebSocket Chat Demo - client not connecting on 4.4-dev7 (works on 4.3) | ### Tested versions
- Reproducible in Godot 4.4-dev7
- Not Reproducible in Godot 4.3.stable.official (77dcf97d8)
### System information
MacOS - M1 Max - Sequoia 15.2
### Issue description
WebSockets (via WebSocketPeer) are unable to connect to a server when exported to the web.
### Steps to reproduce
1. Download the [WebSocket Chat Demo](https://github.com/godotengine/godot-demo-projects/tree/master/networking/websocket_chat)
2. Open it in Godot v4.4-dev7
3. Run one instance in the usual debug mode (this will be your Server)
4. Run another as an HTML export or via Remote Debug > 'Run In Browser' (this will be the Client)
5. Start the Server in the debug instance
6. In your browser instance, try to connect one of the Clients; it fails without any console errors
If you repeat the same steps using Godot 4.3, it works without issue.
### Minimal reproduction project (MRP)
https://github.com/godotengine/godot-demo-projects/tree/master/networking/websocket_chat | bug,topic:network,regression | low | Critical |
2,777,280,312 | tensorflow | GPU Profiling: MemoryProfile do not contain memory events when profile remote worker. | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
nightly
### Custom code
No
### OS platform and distribution
Ubuntu 22.04
### Python version
Python 3.12
### CUDA/cuDNN version
CUDA 12.4
### GPU model and memory
A100 80GB
### Current behavior?
Start any simple collective training with a TensorFlow cluster config, then capture a profile with the RPC client (capture_profile in TensorBoard) or tf.profiler.experimental.client.trace.
The capture contains no memory profile events and no op profile; only the trace view is populated.
### Standalone code to reproduce the issue
**tf_allreduce.py**
```python
import tensorflow as tf
from tensorflow.python.ops.collective_ops import all_reduce, all_reduce_v2
from tensorflow.python.eager import context
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.distribute import cluster_resolver as cluster_resolver_lib

cluster_resolver = cluster_resolver_lib.TFConfigClusterResolver()
cluster = cluster_resolver.cluster_spec()
task_type = cluster_resolver.task_type
task_id = cluster_resolver.task_id

experimental_config = config_pb2.ConfigProto.Experimental(
    share_cluster_devices_in_session=False,
    share_session_state_in_clusterspec_propagation=False
)
config = config_pb2.ConfigProto(experimental=experimental_config)
config.experimental.collective_group_leader = '/job:worker/replica:0/task:0'

server = tf.distribute.Server(cluster,
                              job_name=task_type,
                              task_index=task_id,
                              protocol="grpc",  # "grpc+verbs"
                              config=config)

run_options = config_pb2.RunOptions()
with tf.compat.v1.Session(target=server.target, config=config) as sess:
    tensor = tf.Variable(tf.ones([2, 2]), dtype=tf.float32)
    init = tf.compat.v1.global_variables_initializer()
    sess.run(init)
    sess.run(tf.print(["tensor:", tensor]))
    reduced_tensor = all_reduce(tensor, group_size=2, group_key=4321,
                                instance_key=1234, merge_op='Add', final_op='Id',
                                communication_hint='auto')
    run_options.experimental.collective_graph_key = 6
    while True:
        sess.run(tf.print(["reduced_tensor:", reduced_tensor]), options=run_options)
```
Run this script to start the servers:
```bash
CUDA_VISIBLE_DEVICES=0 TF_CONFIG='{"cluster":{"worker":["localhost:2223","localhost:2224"]},"task":{"type":"worker","index":0}}' python tf_allreduce.py&
CUDA_VISIBLE_DEVICES=1 TF_CONFIG='{"cluster":{"worker":["localhost:2223","localhost:2224"]},"task":{"type":"worker","index":1}}' python tf_allreduce.py&
```
Then use capture_profile in TensorBoard or tf.profiler.experimental.client.trace:
```python
tf.profiler.experimental.client.trace(
'grpc://localhost:2223,grpc://localhost:2224',
'/tmp/my_tb_dir',
2000,
)
```
Trying to convert the xplane.pb to memory_profile shows nothing:
```python
from tensorflow.python.profiler.internal import _pywrap_profiler as profiler_wrapper
json = profiler_wrapper.xspace_to_tools_data(["xxx.xplane"], "memory_profile")
```
### Relevant log output
```
{"memoryProfilePerAllocator":{},"numHosts":1,"memoryIds":[]}
```
Relative issue: #48146 | type:bug,comp:gpu,TF 2.18 | medium | Critical |
2,777,284,226 | TypeScript | `String.prototype.split()` with a regex containing a capturing group can return an array of `string | undefined`. | ### ⚙ Compilation target
any
### ⚙ Library
esnext, dom
### Missing / Incorrect Definition
Using split with a regex that contains a capturing group (here inside a negative lookahead) can produce `undefined` elements, yet the declared return type is `string[]`. See the attached screenshot. When iterating over the array, we lose type safety and encounter runtime errors.

### Sample Code
```TypeScript
'a,b,c'.split(/,(?!(a))/) // Typed as string[], but actually evaluates to ['a', undefined, 'b', undefined, 'c']
```
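Running the sample outside the type checker confirms the behavior: per the ECMAScript spec, `split` splices each capturing group into the result, and a group that did not participate in the match contributes `undefined`:

```javascript
const parts = 'a,b,c'.split(/,(?!(a))/);
console.log(parts);        // [ 'a', undefined, 'b', undefined, 'c' ]
console.log(parts.length); // 5

// Iterating as string[] would pass undefined into string operations:
for (const p of parts) {
  if (p === undefined) console.log('not a string!');
}
```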
### Documentation Link
_No response_ | Suggestion,Awaiting More Feedback | low | Critical |
2,777,345,761 | tauri | [feat] Radio menu item and default menu item | ### Describe the problem
I am writing a tray application for Windows. The tray menu needs multiple radio items and a default item. I read the menu-related documentation and did not find a way to implement this.
It is easy to implement in traditional Win32 applications, and the effect looks like this:

### Describe the solution you'd like
**Radio item:** Checks a specified menu item and makes it a radio item. At the same time, the function clears all other menu items in the associated group and clears the radio-item type flag for those items.
**Default item:** Specifies that the menu item is the default. A menu can contain only one default menu item, which is displayed in bold.
*Win32 API*
```CheckMenuRadioItem()```: https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-checkmenuradioitem
```SetMenuDefaultItem()```: https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-setmenudefaultitem
```MENUITEMINFOA``` structure ```MFT_RADIOCHECK``` and ```MFS_DEFAULT```: https://learn.microsoft.com/en-us/windows/win32/api/winuser/ns-winuser-menuiteminfoa
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request,status: upstream | low | Minor |
2,777,384,883 | PowerToys | AOnTop only works once before taps other windows | ### Microsoft PowerToys version
0.85.7
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Always on Top
### Steps to reproduce
Open OneNote plus any other apps.
Press Win+Ctrl+T on OneNote to pin it (it is not forced on top).
Tap the note, then tap the desktop: nothing happens.
Tap the note, then tap some other window: the note gets covered by it.
Video: https://github.com/user-attachments/assets/018e7187-aafa-4e19-9085-d102a202b9bd
### ✔️ Expected Behavior
The pinned window should stay on top.
### ❌ Actual Behavior
The pinned window gets covered by other windows.
### Other Software
wps
onenote | Issue-Bug,Needs-Triage | low | Minor |
2,777,400,644 | ui | [bug]: Formlabel does not close Popover of Combobox | ### Describe the bug
I am using a Combobox in a Form. When the Popover is already open, clicking the Formlabel closes the Popover and immediately reopens it. I would expect that clicking the Formlabel while the Popover is already open simply closes the Popover.
https://github.com/user-attachments/assets/d3eb27c6-a885-44ac-9b5c-d4827705b277
### Affected component/components
Combobox, Form, Formlabel
### How to reproduce
Use the official example and click on the Formlabel two times. https://ui.shadcn.com/docs/components/combobox#form
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Google Chrome for MacOS
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,777,415,113 | deno | Import from package containing colon e.g. 'astro:content' | ### Discussed in https://github.com/denoland/deno/discussions/27598
<div type='discussions-op-text'>
<sup>Originally posted by **thibaultleouay** January 9, 2025</sup>
I'm trying to use Deno with Astro, but I have one issue when importing:
```
import { defineCollection } from 'astro:content';
```
The ide is crying with this error :
` Uncached or missing remote URL: astro:contentdeno(no-cache) `
How can I handle it?</div> | bug,node compat,node resolution | low | Critical |
2,777,448,800 | ollama | please add model:QVQ-Preview 72B! | please add model:QVQ-Preview 72B! | model request | low | Minor |
2,777,461,526 | flutter | crashes with EXC_BAD_ACCESS in SkIcuBreakIteratorCache::~SkIcuBreakIteratorCache() (in Flutter) (SkUnicode_icu.cpp:166) | ### Steps to reproduce
I am experiencing a crash in a Flutter application on iOS 18, and I currently only have the crash details. Could you assist me in determining the cause? Any help analyzing the crash information would be greatly appreciated.
### Expected results
The app runs well and does not crash.
### Actual results
On iOS the application crashes with EXC_BAD_ACCESS in SkIcuBreakIteratorCache::~SkIcuBreakIteratorCache() .
This can be reproduced reliably on both physical iOS devices and the iOS simulator.
On other platforms (tested on Windows, MacOS and Android) the crash does not occur.
### Code sample
None
### Screenshots or Video
None
### Logs
>1 | Flutter | skia_private::THashTable<skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair, SkUnicode::BreakType, skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair>::Slot::reset() (in Flutter) (SkTHash.h:399)
-- | -- | --
2 | Flutter | skia_private::THashTable<skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair, SkUnicode::BreakType, skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair>::Slot::reset() (in Flutter) (SkTHash.h:399)
3 | Flutter | std::_LIBCPP_ABI_NAMESPACE::default_delete<skia_private::THashTable<skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair, SkUnicode::BreakType, skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair>::Slot []>::_EnableIfConvertible<skia_private::THashTable<skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair, SkUnicode::BreakType, skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair>::Slot>::type std::_LIBCPP_ABI_NAMESPACE::default_delete<skia_private::THashTable<skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair, SkUnicode::BreakType, skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair>::Slot []>::operator()[abi:v15000]<skia_private::THashTable<skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair, SkUnicode::BreakType, skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void 
(UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair>::Slot>(skia_private::THashTable<skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair, SkUnicode::BreakType, skia_private::THashMap<SkUnicode::BreakType, std::_LIBCPP_ABI_NAMESPACE::unique_ptr<UBreakIterator, SkOverloadedFunctionObject<void (UBreakIterator*), &(ubrk_close_wrapper(UBreakIterator*))> >, SkGoodHash>::Pair>::Slot*) const (in Flutter) (unique_ptr.h:76)
4 | Flutter | SkIcuBreakIteratorCache::~SkIcuBreakIteratorCache() (in Flutter) (SkUnicode_icu.cpp:166)
### Flutter Doctor output
```console
[!] Flutter (Channel [user-branch], 3.13.9, on macOS 14.5 23F79 darwin-arm64,
locale zh-Hans-CN)
! Flutter version 3.13.9 on channel [user-branch] at
/Users/wucuiping7/Desktop/development/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an
official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions
at https://flutter.dev/docs/get-started/install.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss
this error.
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/docs/get-started/install/macos#android-setup for
more details.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.3)
[✓] VS Code (version 1.96.2)
[✓] Connected device (3 available)
! Error: Browsing on the local area network for iPhone 13 Pro. Ensure the
device is unlocked and attached with a cable or associated with the same
local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
! Error: Browsing on the local area network for “徐征磊”的 iPad. Ensure the
device is unlocked and attached with a cable or associated with the same
local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
! Error: Browsing on the local area network for wan的iPhone. Ensure the
device is unlocked and attached with a cable or associated with the same
local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
[✓] Network resources
! Doctor found issues in 2 categories.
``` | waiting for customer response,in triage | low | Critical |
2,777,503,267 | tensorflow | Build iOS tensorflowLite error with iOS library | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.8
### Custom code
Yes
### OS platform and distribution
iOS
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When I build TensorFlow Lite as an iOS library, I encounter the error below. The linker reports: Undefined symbols for architecture arm64:
"std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::find(char, unsigned long) const",
### Standalone code to reproduce the issue
```shell
Undefined symbols for architecture arm64:
"std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::find(char, unsigned long) const", referenced from:
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
"std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::rfind(char, unsigned long) const", referenced from:
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
"std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::compare(unsigned long, unsigned long, char const*) const", referenced from:
l001 in libNeuralnet_a.a[32](TensorFlowLiteC.a)
"std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::compare(unsigned long, unsigned long, char const*, unsigned long) const", referenced from:
```
### Relevant log output
_No response_ | stat:awaiting response,type:bug,stale,comp:lite,TF 2.8 | medium | Critical |
2,777,503,308 | rust | Tracking Issue for batching | <!--
NOTE: For library features, please use the "Library Tracking Issue" template instead.
Thank you for creating a tracking issue! 📜 Tracking issues are for tracking a
feature from implementation to stabilisation. Make sure to include the relevant
RFC for the feature if it has one. Otherwise provide a short summary of the
feature and link any relevant PRs or issues, and remove any sections that are
not relevant to the feature.
Remember to add team labels to the tracking issue.
For a language team feature, this would e.g., be `T-lang`.
Such a feature should also be labeled with e.g., `F-my_feature`.
This label is used to associate issues (e.g., bugs and design questions) to the feature.
-->
This is a tracking issue for the batching feature of llvm/enzyme.
The feature gate for the issue is `#![feature(batching)]`.
This is the second out of three features that were approved as experiments in [a project goal](https://rust-lang.github.io/rust-project-goals/2024h2/Rust-for-SciComp.html). It allows merging N function calls (e.g. from looping over a call) into a single call, with N being a compile-time constant. This allows better vectorization and other optimizations.
We reuse almost all of the infrastructure from the autodiff macro, with a slightly different syntax both in the user facing macro, as well as the llvm-ir we generate. We intend to later extend the autodiff macro to optionally include the batching feature, however some users might want to use batching without automatic differentiation, which is the motivation for this independent batch macro. It also seems wise to experiment with both independently before trying to merge them.
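A source-level sketch of the transformation's effect (purely illustrative; the real feature operates on LLVM IR through Enzyme, and the macro syntax is still being designed):

```javascript
// The function to batch: one scalar in, one scalar out.
function f(x) { return x * x + 1; }

// Unbatched: N = 4 separate calls, e.g. from a loop.
const inputs = [1, 2, 3, 4];
const unbatched = inputs.map(f);

// Batched: a single call that processes all N lanes together,
// which is what enables vectorization across the calls.
function fBatched(xs) { return xs.map(x => x * x + 1); }
const batched = fBatched(inputs);

console.log(batched); // [ 2, 5, 10, 17 ]
```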
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
<!--
Include each step required to complete the feature. Typically this is a PR
implementing a feature, followed by a PR that stabilises the feature. However
for larger features an implementation could be broken up into multiple PRs.
-->
- [x] Get compiler MCP approved.
- https://github.com/rust-lang/compiler-team/issues/611
- [x] Get lang experiment approved.
- We approved this in the lang triage meeting on 2024-05-01.
- [ ] Land the experimental implementation in nightly.
- Combined change for reference (wip link): https://github.com/EnzymeAD/rust/pull/192
- [ ] Accept an RFC.
- [ ] Add documentation to the [dev guide](https://github.com/rust-lang/rustc-dev-guide).
- See the [instructions](https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs).
- [ ] Add documentation to the [reference](https://github.com/rust-lang/reference).
- See the [instructions](https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md).
- [ ] Add formatting for new syntax to the [style guide](https://github.com/rust-lang/rust/tree/master/src/doc/style-guide).
- See the [nightly style procedure](https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md).
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[nightly-style-procedure]: https://github.com/rust-lang/style-team/blob/main/nightly-style-procedure.md
[Style Guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised.
-->
XXX --- list all the "unresolved questions" found in the RFC to ensure they are
not forgotten
### Implementation history
<!--
Include a list of all the PRs that were involved in implementing the feature.
-->
| C-tracking-issue,F-batching | low | Critical |
2,777,519,625 | storybook | [Documentation]: Specify expected preview types in TypeScript docs | ### Describe the problem
https://storybook.js.org/docs/configure/integration/typescript has
```
// Replace your-framework with the framework you are using (e.g., react-webpack5, vue3-vite)
import type { StorybookConfig } from '@storybook/your-framework';
const config: StorybookConfig = {
```
for `.storybook/main.ts`, but no equivalent for `preview.ts(x)`.
https://storybook.js.org/docs/configure#configure-story-rendering should probably include it as well.
I finally found it in https://storybook.js.org/docs/writing-stories/parameters#global-parameters.
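For reference, a `preview.ts` counterpart to the documented `main.ts` snippet might look like this (an assumption to verify against the installed Storybook version, which exports a `Preview` type from the framework package):

```typescript
// .storybook/preview.ts
// Replace your-framework with the framework you are using (e.g., react-vite, vue3-vite)
import type { Preview } from '@storybook/your-framework';

const preview: Preview = {
  parameters: {
    // global story-rendering configuration goes here
  },
};

export default preview;
```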
### Additional context
_No response_ | documentation | low | Minor |
2,777,539,532 | PowerToys | Fancyzones: No Layout/disable layout shortcut | ### Description of the new feature / enhancement
Hello,
I would like to request a new feature for FancyZones: the ability to assign a shortcut key to switch directly to the "No Layout" mode.
Currently, if I want to disable the active layout or temporarily stop FancyZones from managing windows, I have to manually switch to the "No Layout" mode through the settings or editor. Adding a dedicated shortcut for this would greatly improve the workflow and make it easier to toggle FancyZones on or off when needed.
### Scenario when this would be used?
- When I need to temporarily work without FancyZones managing my windows, such as when dragging and resizing windows freely.
- For switching between FancyZones layouts and the default Windows window management behavior during specific tasks or workflows.
- To provide a smoother and faster way to toggle FancyZones on and off.
### Supporting information
- This feature would improve the flexibility and usability of FancyZones, especially for users who frequently need to adjust their workflows.
- Similar shortcuts already exist for switching between layouts, so adding one for "No Layout" mode would be consistent with the current functionality.
| Needs-Triage | low | Minor |
2,777,572,443 | transformers | Unused kwargs: ['bnb_8bit_quant_type', 'bnb_8bit_use_double_quant', 'bnb_8bit_compute_dtype'] when using bnb quantization? These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'> | ### System Info
OS: ubuntu 24.10
python: 3.12
ctransformers: 0.2.27
P.s. All dependencies:
```
[tool.poetry.dependencies]
python = "^3.12"
autoawq = "^0.2.7.post3"
evaluate = "^0.4.3"
rouge-score = "^0.1.2"
ctransformers = "^0.2.27"
gguf = "^0.13.0"
llama-cpp-python = "^0.3.5"
ctranslate2 = "^4.5.0"
exllamav2 = "^0.2.7"
bitsandbytes = "^0.45.0"
```
Why do I get
```
Unused kwargs: ['bnb_8bit_quant_type', 'bnb_8bit_use_double_quant', 'bnb_8bit_compute_dtype'] when using bnb quantization? These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
```
although others use these parameters and everything is fine for them (see screenshot)? Are these outdated settings? If so, can you provide a link to a post with information about removing these parameters?

### Who can help?
@MekkCyber
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig


def quantize_bnb(model_id: str, quant_config: dict) -> None:
    quantization_config = BitsAndBytesConfig(**quant_config)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quantization_config,
        torch_dtype="auto",
    )
    model.push_to_hub(
        model_id.split('/')[1] +
        f"-BNB-{8 if ('load_in_8bit' in quant_config and quant_config['load_in_8bit'] == True) else 4}bit",
        parent_model=model_id, tags='bnb', model_card='quantization'
    )


if __name__ == "__main__":
    model_id = "bigscience/bloom-560m"
    bnb_config = {
        "load_in_8bit": True,  # 8-bit quantization
        "bnb_8bit_quant_type": 'nf8',  # Normalized float 8
        "bnb_8bit_use_double_quant": True,  # Second quantization after the first
        "bnb_8bit_compute_dtype": 'bfloat16',  # Computation type
    }
    quantize_bnb(model_id=model_id, quant_config=bnb_config)
```
### Expected behavior
I expect not to see this warning | bug | low | Minor |
2,777,584,986 | flutter | [Proposal] Provide a lite version of SDK for CI/CD | ### Use case
I'm a DevOps-Engineer and use the Flutter SDK inside of Docker images used in CI/CD pipelines.
The normal installation process involves downloading and unzipping https://storage.googleapis.com/flutter_infra_release/releases/stable/linux/flutter_linux_3.27.1-stable.tar.xz, which includes a 130 MB .git directory, examples, documentation, tests and a few other things.
The directories are probably there so you can update to the latest SDK version by running `flutter channel stable`, which just pulls the latest commits from the `stable` git branch but in my case, I don't need the update functionality.
To reduce my build image size, I currently clone a specific version tag and then remove files manually.
### Proposal
Offer an additional lightweight SDK zip for CI environments and either:
- Publish it alongside the current SDK zip
- Replace the current SDK zip with the lightweight one and add instructions on how to enable the update functionality
- Replace the current SDK zip with the lightweight one and change the update functionality so it doesn't rely on git | tool,c: proposal,P3,team-tool,triaged-tool | low | Minor |
2,777,620,788 | TypeScript | Log which project is being checked. | ### 🔍 Search Terms
[_log which program is being checked_](https://github.com/microsoft/TypeScript/issues?q=is%3Aissue%20state%3Aopen%20log%20which%20program%20is%20being%20checked)
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Log the project that is being checked. Also log something if all checks pass. E.g.:
```sh
$ tsc --build
Checking tsconfig.json
Checking packages/a/tsconfig.build.json
Checking packages/a/tsconfig.json
Checking packages/b/tsconfig.build.json
Checking packages/b/tsconfig.json
Success ✓
```
### 📃 Motivating Example
With this change, the `tsc` command has useful logging by default, even in case of success.
### 💻 Use Cases
Currently `tsc` doesn’t log anything about the project it’s checking. In case of success, it logs nothing. If a command run takes longer and emits nothing, I usually get suspicious something went wrong.
A more serious use case are project references. TypeScript checks referenced projects one by one. A program may fail type checking. If it does, TypeScript will log which files failed type checking. However, it is ambiguous of which program this file was a part. It may be part of multiple programs. It may succeed type checking for one program, but fail in another. I.e. maybe one program should have excluded it. Or maybe it should have added a missing `lib`, but which program?
All of this can be debugged using `--traceResolution`, but that output is _very_ verbose and requires a clean second run. | Needs More Info | low | Critical |
2,777,699,601 | react-native | Modal flickr in conditial displayed in react native 0.76.5 | ### Description
In React Native 0.76.4, with react-native-reanimated 3.16.6 and React Navigation 6, I have a problem when I display a React Native Modal conditionally.
I have 2 inputs, each containing a button that conditionally renders a modal (instead of switching the `visible` prop between true/false).
When closed via the condition, the modal should be removed from memory / the tree.
But if I try to open the modal of the second input, the first one flickers briefly before the second modal opens, even though I closed the first modal.
In previous versions of React Native this worked fine.
### Steps to reproduce
Same as the description above.
### React Native Version
0.76.5
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.5
CPU: (8) arm64 Apple M1
Memory: 101.41 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.12.0
path: ~/.nvm/versions/node/v22.12.0/bin/node
Yarn: Not Found
npm:
version: 10.9.2
path: ~/Documents/AccuV-Team/Versions/0.76.5/IDP.Mobile/node_modules/.bin/npm
Watchman:
version: 2024.01.22.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.4
- iOS 17.4
- macOS 14.4
- tvOS 17.4
- visionOS 1.1
- watchOS 10.4
Android SDK:
API Levels:
- "28"
- "29"
- "31"
- "33"
- "34"
- "35"
Build Tools:
- 29.0.2
- 30.0.2
- 30.0.3
- 31.0.0
- 33.0.0
- 33.0.2
- 33.0.3
- 34.0.0
- 35.0.0
System Images:
- android-31 | Google APIs ARM 64 v8a
- android-33 | Google APIs ARM 64 v8a
- android-34 | Google APIs ARM 64 v8a
- android-35 | Google APIs ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12700392
Xcode:
version: 15.3/15E204a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.11
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.5
wanted: ^0.76.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
...
```
### Reproducer
...
### Screenshots and Videos
_No response_ | Component: Modal,Needs: Repro,Needs: Attention | low | Minor |
2,777,702,104 | node | FATAL ERROR: v8::ToLocalChecked Empty MaybeLocal | ### Version
v23.6.0
### Platform
```text
Linux SMP Debian 5.10.103-1 (2022-03-07) x86_64 x86_64 x86_64 GNU/Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Hi,
I would like to report a bug, it can be reproduced by running the PoC below:
```javascript
const {exec} = require('child_process');
Object.defineProperty(Array.prototype, "2", {
set: function () {},
});
(async function () {
exec('pwd', (err, stdout, stderr) => {
console.log(stdout);
});
})();
```
Regards,
AH
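The likely mechanism (my assumption — I have not confirmed this against Node's internals) is that the inherited index setter intercepts element writes that Node performs while building an internal array for the callback arguments, so the write never creates an own property and downstream C++ code receives an empty `MaybeLocal`. A minimal sketch of the interception itself:

```javascript
// Installing an accessor at index "2" on Array.prototype means any plain
// array write to index 2 hits the inherited setter instead of creating
// an own element on the array.
Object.defineProperty(Array.prototype, "2", {
  set(value) { /* swallow the write */ },
});

const args = [];
args[0] = "err";
args[1] = "stdout";
args[2] = "stderr"; // intercepted by the prototype setter

console.log(args.length); // 2 — the third element was never stored
console.log(args[2]);     // undefined
```

Any native code that builds a three-element array this way and then asserts the third element exists would hit exactly this kind of empty-value failure.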
### How often does it reproduce? Is there a required condition?
It reproduces every time by simply running the given PoC on the given Node.js version.
### What is the expected behavior? Why is that the expected behavior?
The process should not crash: defining an accessor on `Array.prototype` is legal JavaScript and should at worst change script behavior, not take down the runtime with a fatal V8 error.
### What do you see instead?
FATAL ERROR: v8::ToLocalChecked Empty MaybeLocal
### Additional information
_No response_ | child_process,c++,good first issue | low | Critical |
2,777,704,925 | next.js | OpenGraph Metadata handling doesn't retain URL parameters for main page | ### Link to the code that reproduces this issue
https://github.com/CodingFabian/nextjs-og-param-issue
### To Reproduce
The issue is very straightforward. The reproduction is in the repository, but it boils down to this code block:
```
export const metadata: Metadata = {
metadataBase: new URL('https://nextjs.org'),
openGraph: {
type: "website",
url: "./?utm=tracking",
images: ["/images/og-image.png"],
},
}
```
The intended use case is to emit Open Graph metadata containing a tracking code, but it could be any URL parameter.
The above renders:
```
<meta property="og:url" content="https://nextjs.org"/>
```
If we use any other (non-empty, non-root) URL, like this:
```
export const metadata: Metadata = {
metadataBase: new URL('https://nextjs.org'),
openGraph: {
type: "website",
url: "./page?utm=tracking",
images: ["/images/og-image.png"],
},
}
```
it works as expected:
```
<meta property="og:url" content="https://nextjs.org/page?utm=tracking"/>
```
### Current vs. Expected behavior
I expect the URL parameters passed into the openGraph metadata to be preserved.
Alternatively, the behaviour should at least be consistent (and URL params dropped for other URLs as well), but I doubt that's desirable.
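For comparison (my own sketch, not Next.js internals): the standard WHATWG `URL` resolver preserves the query string even when the relative path resolves to the site root, so the dropping appears to happen in Next's metadata resolution rather than in ordinary URL resolution:

```javascript
const base = new URL("https://nextjs.org");

// Both relative URLs keep their query string under standard URL resolution.
const rootUrl = new URL("./?utm=tracking", base);
console.log(rootUrl.toString()); // https://nextjs.org/?utm=tracking

const pageUrl = new URL("./page?utm=tracking", base);
console.log(pageUrl.toString()); // https://nextjs.org/page?utm=tracking
```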
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:45 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8112
Available memory (MB): 8192
Available CPU cores: 8
Binaries:
Node: 22.9.0
npm: 10.8.3
Yarn: N/A
pnpm: 9.12.2
Relevant Packages:
next: 15.2.0-canary.2 // Latest available version is detected (15.2.0-canary.2).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Metadata
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed)
### Additional context
_No response_ | Metadata | low | Minor |
2,777,722,863 | vscode | cursor taking time to display |
Type: <b>Performance Issue</b>
As mentioned in the title, the cursor takes noticeably long to appear when I move from the file list to the editor; this causes a lot of trouble.
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i5-13420H (12 x 2611)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.65GB (6.33GB free)|
|Process Argv|--crash-reporter-id b1480f03-8b2b-4926-8719-25d941b75fb8|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
0 139 24076 code main
0 97 7876 fileWatcher [1]
0 132 8904 shared-process
0 34 12120 crashpad-handler
0 330 13496 window [1] (collegeprogram - Visual Studio Code)
0 555 13768 extensionHost [1]
0 44 3616 c:\Users\kssha\.vscode\extensions\ms-vscode.cpptools-1.22.11-win32-x64\bin\cpptools.exe
0 7 3460 C:\WINDOWS\system32\conhost.exe 0x4
0 8 28336 "c:\Users\kssha\.vscode\extensions\ms-vscode.cpptools-1.22.11-win32-x64\bin\cpptools.exe"
0 20 24148 c:\Users\kssha\.vscode\extensions\ms-vscode.cpptools-1.22.11-win32-x64/bin/cpptools-srv.exe 3616 {9FA908F4-8C7C-4FEC-9503-D5A7E1CDD456}
0 7 25744 C:\WINDOWS\system32\conhost.exe 0x4
0 149 13752 electron-nodejs (bundle.js )
0 97 18300 "C:\Users\kssha\AppData\Local\Programs\Microsoft VS Code\Code.exe" "c:\Users\kssha\AppData\Local\Programs\Microsoft VS Code\resources\app\extensions\markdown-language-features\dist\serverWorkerMain" --node-ipc --clientProcessId=13768
0 91 28248 "C:\Users\kssha\AppData\Local\Programs\Microsoft VS Code\Code.exe" "c:\Users\kssha\AppData\Local\Programs\Microsoft VS Code\resources\app\extensions\json-language-features\server\dist\node\jsonServerMain" --node-ipc --clientProcessId=13768
0 239 14816 gpu-process
0 50 15988 utility-network-service
0 124 25112 ptyHost
0 6 5944 "C:\Program Files\Git\bin\bash.exe" --init-file "c:\Users\kssha\AppData\Local\Programs\Microsoft VS Code\resources\app/out/vs/workbench/contrib/terminal/common/scripts/shellIntegration-bash.sh"
0 13 22120 "C:\Program Files\Git\bin\..\usr\bin\bash.exe" --init-file "c:\Users\kssha\AppData\Local\Programs\Microsoft VS Code\resources\app/out/vs/workbench/contrib/terminal/common/scripts/shellIntegration-bash.sh"
0 8 7324 conpty-agent
0 8 11048 conpty-agent
0 69 15808 C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command "try { . \"c:\Users\kssha\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1\" } catch {}"
0 69 18152 C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command "try { . \"c:\Users\kssha\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1\" } catch {}"
0 69 20060 C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command "try { . \"c:\Users\kssha\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1\" } catch {}"
0 8 24656 conpty-agent
0 8 25592 conpty-agent
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (collegeprogram - Visual Studio Code)
| Folder (collegeprogram): 57 files
| File types: c(40) exe(13) json(1) md(1)
| Conf files: settings.json(1);
```
</details>
<details><summary>Extensions (16)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-terminal-for-ubuntu|Doc|0.0.2
vscode-html-css|ecm|2.0.12
prettier-vscode|esb|11.0.0
code-runner|for|0.12.2
copilot|Git|1.256.0
copilot-chat|Git|0.23.2
vscode-pull-request-github|Git|0.103.2024121117
debugpy|ms-|2024.15.2024121701
python|ms-|2024.23.2025010901
vscode-pylance|ms-|2024.12.1
remote-wsl|ms-|0.88.5
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
pdf|tom|1.2.2
cmake|twx|0.0.17
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | triage-needed,stale | low | Critical |
2,777,753,083 | pytorch | [ONNX] Trying to export QAT model with torch.onnx.export, running into problems that workarounds can't fix. Is ONNX export planned for QAT? | ### 🐛 Describe the bug
The bug occurs when trying to export a fully converted QAT model to ONNX. I have tried the workarounds suggested in the quantization FAQ (https://pytorch.org/docs/main/quantization.html#frequently-asked-questions), but even after applying all of them and debugging further I end up with various torch.onnx.errors.SymbolicValueError exceptions that I cannot really explain, since the workarounds feel like patchwork.
Imports:
```
import torch.onnx
import torch
import torch.nn as nn
from torchinfo import summary
import numpy as np
import onnx
from torch.ao.quantization import fuse_modules, QuantStub, DeQuantStub
from torch.ao.quantization import (
get_default_qat_qconfig,
prepare_qat,
convert
)
```
My Code:
```
class Conv1d(nn.Conv1d):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.causal_padding = self.dilation[0] * (self.kernel_size[0] - 1)
self.conv1d = nn.Conv1d(
in_channels=self.in_channels,
out_channels=self.out_channels,
kernel_size=self.kernel_size[0],
stride=self.stride[0],
dilation=self.dilation[0],
padding= 0,
padding_mode='zeros'
)
def forward(self, x):
x = nn.functional.pad(x, (self.causal_padding//2, self.causal_padding-self.causal_padding//2), 'constant')
return self.conv1d(x)
class ResidualUnit(nn.Module):
def __init__(self, in_channels, out_channels, dilation):
super().__init__()
self.dilation = dilation
self.layers = nn.Sequential(
Conv1d(in_channels=in_channels, out_channels=out_channels,
kernel_size=5, dilation=dilation),
nn.LeakyReLU(),
nn.Conv1d(in_channels=in_channels, out_channels=out_channels,
kernel_size=1),
)
def forward(self, x):
return x + self.layers(x)
class EncoderBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride):
super().__init__()
self.layers = nn.Sequential(
ResidualUnit(in_channels=int(in_channels),
out_channels=int(in_channels), dilation=3),
nn.LeakyReLU(),
Conv1d(in_channels=int(in_channels), out_channels=int(out_channels),
kernel_size=2*stride, stride=stride),
)
def forward(self, x):
return self.layers(x)
class Encoder(nn.Module):
def __init__(self, C=36, D=9):
super().__init__()
self.layers = nn.Sequential(
Conv1d(in_channels=36, out_channels=C, kernel_size=7),
nn.LeakyReLU(),
EncoderBlock(in_channels=C ,out_channels=24, stride=1),
nn.LeakyReLU(),
EncoderBlock(in_channels=24 ,out_channels=20, stride=2),
nn.LeakyReLU(),
EncoderBlock(in_channels=20 ,out_channels=16, stride=2),
nn.LeakyReLU(),
EncoderBlock(in_channels=16 ,out_channels=12, stride=2),
nn.LeakyReLU(),
Conv1d(in_channels=12, out_channels=D, kernel_size=1),
)
def forward(self, x):
x = self.layers(x)
return x
if __name__ == "__main__":
model = Encoder()
model.train()
model.qconfig = get_default_qat_qconfig("x86")
qat_model = prepare_qat(model)
qat_model_done = convert(qat_model)
dummy_input = torch.rand(1, 36, 800)
onnx_file_path = "./qat_export_test.onnx"
onnx_model = torch.onnx.export(
qat_model_done,
dummy_input,
onnx_file_path,
export_params=True
)
```
Error:
```
Traceback (most recent call last):
File "/media/data/xxx/xxx/prev_code/xxx/data_compression/data_compression/examples/idk.py", line 115, in <module>
onnx_model = torch.onnx.export(
^^^^^^^^^^^^^^^^^^
File /media/data/xxx/xxx/prev_code/xxx//prev_code/moritz_martinolli_397/venv/lib/python3.11/site-packages/torch/onnx/__init__.py", line 375, in export
export(
File "/media/data/xxx/xxx/prev_code/xxx/prev_code/moritz_martinolli_397/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 502, in export
_export(
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1564, in _export
graph, params_dict, torch_out = _model_to_graph(
^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
graph = _optimize_graph(
^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 639, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/utils.py", line 1836, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/symbolic_opset10.py", line 747, in dequantize
return symbolic_helper.dequantize_helper(g, input)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/symbolic_helper.py", line 1525, in dequantize_helper
unpacked_qtensors = _unpack_quantized_tensor(qtensor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/data/xxx/xxx/prev_code/xxx/venv/lib/python3.11/site-packages/torch/onnx/symbolic_helper.py", line 189, in _unpack_quantized_tensor
raise errors.SymbolicValueError(
torch.onnx.errors.SymbolicValueError: ONNX symbolic expected the output of `%x.1 : Float(1, 36, 800, strides=[28800, 800, 1], requires_grad=0, device=cpu) = prim::Param()
` to be a quantized tensor. Is this likely due to missing support for quantized `prim::Param`. Please create an issue on https://github.com/pytorch/pytorch/issues [Caused by the value 'x.1 defined in (%x.1 : Float(1, 36, 800, strides=[28800, 800, 1], requires_grad=0, device=cpu) = prim::Param()
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'prim::Param'.]
Inputs:
Empty
Outputs:
#0: x.1 defined in (%x.1 : Float(1, 36, 800, strides=[28800, 800, 1], requires_grad=0, device=cpu) = prim::Param()
) (type 'Tensor')
```
This error arose while I was applying the QuantStub/DeQuantStub workarounds suggested in the QAT FAQ.
If no one has time to actually reproduce and debug this, please just tell me whether full QAT-to-ONNX support is planned (I've seen contributors say it is not planned, and I've seen posts saying it should be in torch 2.0, so a clear answer would be good), so I know whether I can disregard this until it eventually becomes an option.
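For reference, the QuantStub/DeQuantStub wrapper pattern from the quantization FAQ that I was applying looks roughly like this — a minimal sketch with a toy model; `QATWrapper` and the toy layers are my own illustration, not my actual `Encoder`:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub,
    DeQuantStub,
    get_default_qat_qconfig,
    prepare_qat,
)

class QATWrapper(nn.Module):
    """Wraps a float model so inputs are quantized and outputs dequantized."""
    def __init__(self, model: nn.Module):
        super().__init__()
        self.quant = QuantStub()      # marks where float input becomes quantized
        self.model = model
        self.dequant = DeQuantStub()  # marks where quantized output becomes float

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dequant(self.model(self.quant(x)))

wrapped = QATWrapper(nn.Sequential(nn.Conv1d(2, 2, kernel_size=3), nn.ReLU()))
wrapped.train()
wrapped.qconfig = get_default_qat_qconfig("x86")
prepared = prepare_qat(wrapped)

# Fake-quantized training-time forward still runs in float.
out = prepared(torch.rand(1, 2, 16))
print(out.shape)
```

Even with the stubs placed like this, `convert` followed by `torch.onnx.export` still fails for me with the `SymbolicValueError` above.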
### Versions
```
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] nvidia-cublas-cu11==11.10.3.66
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu11==11.7.101
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu11==11.7.99
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu11==11.7.99
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu11==8.5.0.96
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu11==10.9.0.58
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu11==10.2.10.91
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu11==11.4.0.1
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu11==11.7.4.91
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu11==2.14.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu11==11.7.91
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241217
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchinfo==1.8.0
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
``` | module: onnx,triaged | low | Critical |
2,777,788,902 | godot | Round region and margin size of `AtlasTexture` to integer | ### Tested versions
4.4dev7
### System information
Not relevant
### Issue description
The `margin` and `region` sizes of `AtlasTexture` can be set to float values, but they should always be integers.
The `margin` and `region` positions of `AtlasTexture` don't have that restriction, so `Rect2i` cannot be used.
See comments https://github.com/godotengine/godot/pull/94365#issuecomment-2573955678 and https://github.com/godotengine/godot/pull/94365#issuecomment-2580034730
### Steps to reproduce
- Create `AtlasTexture`.
- Set region size to `(0.2, 0.2)` for example.
- Get errors (size is rounded to `(0, 0)`):
```
ERROR: The Image width specified (0 pixels) must be greater than 0 pixels.
ERROR: core/io/image.cpp:2865 - Condition "dsize == 0" is true.
```
### Minimal reproduction project (MRP)
[atlas_texturesize.zip](https://github.com/user-attachments/files/18361808/atlas_texturesize.zip)
| bug,topic:core | low | Critical |