id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,720,035,769 | ui | [feat]: SelectTrigger clear option | ### Feature description
Describe your feature request...
```tsx
const SelectTrigger = React.forwardRef<
  React.ElementRef<typeof SelectPrimitive.Trigger>,
  React.ComponentPropsWithoutRef<typeof SelectPrimitive.Trigger> &
    CustomSelectTriggerProps
>(({ className, children, isClear, onClear, ...props }, ref) => (
  <SelectPrimitive.Trigger
    ref={ref}
    className={cn(
      "border-input bg-background ring-offset-background placeholder:text-muted-foreground focus:ring-ring flex h-10 w-full items-center justify-between rounded-md border px-3 py-2 text-sm focus:outline-none focus:ring-2 focus:ring-offset-2 disabled:cursor-not-allowed disabled:opacity-50",
      className,
    )}
    {...props}
  >
    {children}
    {isClear ? (
      <div className='flex items-center pl-2'>
        <CircleX
          onClick={(e) => {
            e.stopPropagation()
            onClear?.()
          }}
          className='h-4 w-4 cursor-pointer text-red-500'
        />
      </div>
    ) : (
      <SelectPrimitive.Icon asChild>
        <ChevronDown className='h-4 w-4 opacity-50' />
      </SelectPrimitive.Icon>
    )}
  </SelectPrimitive.Trigger>
))
SelectTrigger.displayName = SelectPrimitive.Trigger.displayName
```
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [ ] I've searched for existing issues and PRs | area: request | low | Minor |
2,720,086,048 | create-react-app | Why do we need to use npx instead of npm to create a React app? | If you have a general question about Create React App or about building an app with Create React App, we encourage you to post in GitHub Discussions instead of this issue tracker. The maintainers and other community members can provide help and answer your questions there: https://github.com/facebook/create-react-app/discussions
If you're looking for general information on using React, the React docs have a list of resources: https://reactjs.org/community/support.html
If you've discovered a bug or would like to propose a change please use one of the other issue templates.
Thanks!
| needs triage | low | Critical |
2,720,138,103 | ui | [feat]: Add more color options when adding shadcn to a project | ### Feature description
Currently at the question:
Which color would you like to use as the base color?
There are only about four gray colors, but the themes page shows a lot more: https://ui.shadcn.com/themes.
Please add those as well. I always have to modify this manually, which is quite frustrating.
### Affected component/components
ALL
### Additional Context
pnpm dlx shadcn@latest add dialog
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,720,144,033 | rust | Tracking Issue for Wasm floating point instructions not in `core` | Feature gate: `#![feature(wasm_numeric_instr)]`
This is a tracking issue for floating point instructions that aren't available in `core` but are available as simple instructions on Wasm directly.
### Public API
```rust
mod arch {
    mod wasm32 {
        pub fn f32_ceil(a: f32) -> f32;
        pub fn f32_floor(a: f32) -> f32;
        pub fn f32_trunc(a: f32) -> f32;
        pub fn f32_nearest(a: f32) -> f32;
        pub fn f32_sqrt(a: f32) -> f32;
        pub fn f64_ceil(a: f64) -> f64;
        pub fn f64_floor(a: f64) -> f64;
        pub fn f64_trunc(a: f64) -> f64;
        pub fn f64_nearest(a: f64) -> f64;
        pub fn f64_sqrt(a: f64) -> f64;
    }
}
```
### Steps / History
- [x] Implementation: https://github.com/rust-lang/stdarch/pull/1677
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
My impression from https://github.com/rust-lang/rust/issues/50145 was that these methods (except `sqrt()`) are not intended to ever be added to `core`. But looking at a [recent discussion in Zulip](https://rust-lang.zulipchat.com/#narrow/channel/219381-t-libs/topic/float.20methods.20in.20std.2C.20not.20core) it paints a different picture.
If indeed these methods are intended to come to `core`, then these additions will probably never be stabilized.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Major |
2,720,155,396 | react-native | UI issue on iOS with New Architecture in RN 0.75.3 - Unexpected padding behavior | ### Description
I am facing an issue with the UI on iOS with React Native 0.75.3. UI elements are going out of their boundaries, and this behaviour does not occur on Android.
### Steps to reproduce
RN 0.75.3 (with the New Architecture) on iOS.
### React Native Version
0.75.3
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.3.1
CPU: (8) arm64 Apple M1
Memory: 86.86 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.11.1
path: ~/.nvm/versions/node/v20.11.1/bin/node
Yarn:
version: 1.22.22
path: ~/.nvm/versions/node/v20.11.1/bin/yarn
npm:
version: 10.8.2
path: ~/.nvm/versions/node/v20.11.1/bin/npm
Watchman:
version: 2024.03.18.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.2
- iOS 17.2
- macOS 14.2
- tvOS 17.2
- visionOS 1.0
- watchOS 10.2
Android SDK: Not Found
IDEs:
Android Studio: 2023.2 AI-232.10227.8.2321.11479570
Xcode:
version: 15.2/15C500b
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 3.3.1
path: /opt/homebrew/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.3
wanted: 0.75.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
It is not a crash.
```
### Reproducer
https://stackoverflow.com/questions/79166621/padding-issues-and-layout-breaking-in-production-ios-with-expo-react-native
### Screenshots and Videos


| Platform: iOS,Needs: Author Feedback,Needs: Repro,Newer Patch Available | low | Critical |
2,720,197,402 | react-native | [0.76.3][NewArch][iOS + Android] Calling setValue on Animated.Value within useEffect breaks styling | ### Description
Hey!
While working on react-native-paper new-arch support I noticed a weird issue related to assigning styles dynamically. Basically, some styles are not applied if we use an Animated.Value. I checked, and it's not related to the library itself; the issue is reproducible with a bare RN app with the new architecture enabled.
It looks like calling `setValue` on an instance of `Animated.Value`, as in the code snippet below, causes the problem:
```tsx
function App(): React.JSX.Element {
  const [disabled, setDisabled] = useState(false);
  const {current: elevation} = useRef(useAnimatedValue(0));
  useEffect(() => {
    elevation.setValue(disabled ? 15 : 0);
  }, [elevation, disabled]);
  return (
    <SafeAreaView>
      <View style={styles.buttonContainer}>
        <MyButton
          elevation={elevation}
          label="Click to disable"
          onPress={() => setDisabled(prev => !prev)}
        />
        <MyButton
          label="Button without elevation"
          disabled={disabled}
          onPress={() => {}}
        />
        <MyButton
          elevation={elevation} // if we do not pass elevation, then background update works fine
          label="Button with elevation"
          disabled={disabled}
          onPress={() => {}}
        />
      </View>
    </SafeAreaView>
  );
}

const elevationLevel = [0, 3, 6, 9, 12, 15];
const inputRange = [0, 1, 2, 3, 4, 5];

const MyButton = ({elevation, label, disabled, onPress}: Props) => {
  const style = {
    ...(elevation &&
      Platform.OS === 'android' && {
        elevation: elevation.interpolate({
          inputRange,
          outputRange: elevationLevel,
        }),
      }),
    ...(elevation &&
      Platform.OS === 'ios' && {
        shadowColor: '#000',
        shadowOpacity: elevation.interpolate({
          inputRange: [0, 1],
          outputRange: [0, 0.3],
          extrapolate: 'clamp',
        }),
        shadowOffset: {
          width: 0,
          height: elevation.interpolate({
            inputRange,
            outputRange: [0, 1, 1, 1, 2, 4],
          }),
        },
        shadowRadius: elevation.interpolate({
          inputRange,
          outputRange: [0, 1, 2, 3, 3, 4],
        }),
      }),
  };
  return (
    <Pressable onPress={onPress} disabled={disabled}>
      <Animated.View
        style={[styles.button, disabled && styles.disabledButton, style]}>
        <Text>{label}</Text>
      </Animated.View>
    </Pressable>
  );
};

const styles = StyleSheet.create({
  buttonContainer: {
    padding: 30,
    backgroundColor: 'white',
    height: '100%',
  },
  disabledButton: {
    backgroundColor: 'gray',
  },
  button: {
    backgroundColor: 'yellow',
    margin: 16,
    padding: 8,
    borderWidth: 1,
    borderColor: 'blue',
    borderRadius: 10,
  },
});
```
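The `interpolate` calls in the snippet map an input range to an output range piecewise-linearly. As a language-neutral sketch of that mapping (Python, with a hypothetical `interpolate` helper that is not part of any React Native API):

```python
from bisect import bisect_right

def interpolate(value, input_range, output_range, clamp=False):
    """Piecewise-linear mapping, similar in spirit to Animated's interpolate."""
    if clamp:
        value = max(input_range[0], min(value, input_range[-1]))
    # Find the segment containing `value` (extrapolate on the outermost one).
    i = bisect_right(input_range, value) - 1
    i = max(0, min(i, len(input_range) - 2))
    x0, x1 = input_range[i], input_range[i + 1]
    y0, y1 = output_range[i], output_range[i + 1]
    return y0 + (value - x0) * (y1 - y0) / (x1 - x0)

# The elevation table from the snippet above.
input_range = [0, 1, 2, 3, 4, 5]
elevation_level = [0, 3, 6, 9, 12, 15]
print(interpolate(2, input_range, elevation_level))  # 6.0
```

With the elevation table above, an animated value of 2 maps to an elevation of 6; clamping is applied only when requested, which mirrors the `extrapolate: 'clamp'` option used for `shadowOpacity`.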
### Steps to reproduce
1. Fetch the provided repository
2. Install dependencies and run the app
3. Click on the "Click to disable" button
4. Observe that the new backgroundColor is applied only to the button whose elevation is not based on an Animated.Value
### React Native Version
0.76.3
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6.1
CPU: (10) arm64 Apple M1 Pro
Memory: 198.64 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.20.2
path: ~/.nvm/versions/node/v18.20.2/bin/node
Yarn:
version: 4.5.0
path: ~/.nvm/versions/node/v18.20.2/bin/yarn
npm:
version: 10.5.0
path: ~/.nvm/versions/node/v18.20.2/bin/npm
Watchman:
version: 2024.10.21.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/bogusz/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK:
API Levels:
- "28"
- "29"
- "30"
- "31"
- "33"
- "34"
- "34"
- "34"
- "35"
Build Tools:
- 29.0.2
- 29.0.3
- 30.0.2
- 30.0.3
- 31.0.0
- 33.0.0
- 33.0.1
- 34.0.0
- 35.0.0
- 35.0.0
System Images:
- android-29 | Google APIs ARM 64 v8a
- android-31 | Google APIs ARM 64 v8a
- android-33 | Google APIs ARM 64 v8a
- android-33 | Google Play ARM 64 v8a
- android-34 | Google Play ARM 64 v8a
- android-UpsideDownCakePrivacySandbox | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2023.3 AI-233.14808.21.2331.11842104
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 2.7.5
path: /Users/bogusz/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
-
```
### Reproducer
https://github.com/BogiKay/rn-animated-value-issue
### Screenshots and Videos
[animated-value-bug.webm](https://github.com/user-attachments/assets/4bdf03a0-8613-40d1-883a-a0f570934689)
| Platform: iOS,Platform: Android,API: Animated,Needs: Triage :mag: | low | Critical |
2,720,212,253 | godot | Physics: collision pass-through between `StaticBody3D` w/ `ConcavePolygonShape3D` and `RigidBody3D` w/ `ConvexPolygonShape3D` | ### Tested versions
Reproducible in 4.3 and 4.4dev5.
### System information
W11
### Issue description
As per the issue title, a rigid body with two `ConvexPolygonShape3D` shapes registers an initial collision with a static body using a `ConcavePolygonShape3D` (whose geometry is actually convex), but then fails to register the second collision when both "legs" are supposed to be on the ground.
https://github.com/user-attachments/assets/599e5e92-ef46-4bfe-bbfb-868f1bf66139
Unfortunately not fixed by #99895.
### Steps to reproduce
Download MRP, make sure to have visible collision shapes, press play and watch the collision fail to happen! Should happen consistently.
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/18022818/MRP.zip)
| bug,topic:physics,topic:3d | low | Minor |
2,720,242,450 | three.js | USDZ exporter premultiplies alpha in textures when Texture.premultiplyAlpha == false | ### Description
When exporting textures the Canvas 2D API is used to convert Three textures into PNGs. But while input textures usually do not use premultiplied alpha, the [output bitmap](https://html.spec.whatwg.org/multipage/canvas.html#premultiplied-alpha-and-the-2d-rendering-context) of a CanvasRenderingContext2D must always use premultiplied alpha for transparent colors. This implies a lossy conversion step when drawing a non-premultiplied texture to a Canvas 2D rendering context where invisible colors disappear.
I encountered this bug in the USDZ exporter but I suspect that other exporters (like the GLTF exporter) have the same bug.
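To make the lossy step concrete, here is a hedged sketch (plain Python, not exporter code) of why premultiplication destroys the color of fully transparent texels:

```python
def premultiply(rgba):
    """Scale RGB by alpha, as the Canvas 2D output bitmap requires."""
    r, g, b, a = rgba
    return (r * a, g * a, b * a, a)

def unpremultiply(rgba):
    """Attempt to invert premultiplication; impossible at alpha == 0."""
    r, g, b, a = rgba
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)  # original color is unrecoverable
    return (r / a, g / a, b / a, a)

opaque = (0.8, 0.2, 0.2, 1.0)
invisible_red = (1.0, 0.0, 0.0, 0.0)

print(unpremultiply(premultiply(opaque)))        # round-trips for opaque texels
print(unpremultiply(premultiply(invisible_red))) # red channel is lost
```

Those "invisible" colors matter for correct alpha blending at texture edges, and once the Canvas 2D context has premultiplied them there is nothing left for the exporter to recover.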
### Reproduction steps
1. Run the [live example](https://jsfiddle.net/ymtn56hv/)
2. Open the automatically downloaded USDZ in macOS Preview or iOS QuickLook
3. Check for [this defect](https://github.com/KhronosGroup/glTF-Sample-Assets/tree/main/Models/AlphaBlendModeTest#problem-premultiplied-alpha) on the checkmark.
### Code
```js
// code goes here
```
### Live example
https://jsfiddle.net/ymtn56hv/
### Screenshots
<img width="1021" alt="Bildschirmfoto 2024-12-05 um 13 09 33" src="https://github.com/user-attachments/assets/3c01f79e-428c-4ca7-b704-8c2facb3c75c">
### Version
r171
### Device
Desktop
### Browser
Chrome
### OS
MacOS | Addons | low | Critical |
2,720,272,567 | vscode | Git - Improve algorithm for picking the base branch it uses when populating Source Control Graph in Auto mode |
Type: <b>Feature Request</b>
I contribute to microsoft/vscode through my tracking repo gjsjohnmurray/vscode.
```
> git remote -v
origin https://github.com/gjsjohnmurray/vscode.git (fetch)
origin https://github.com/gjsjohnmurray/vscode.git (push)
upstream https://github.com/microsoft/vscode.git (fetch)
upstream https://github.com/microsoft/vscode.git (push)
```
When I want to start a branch for a PR here's what I do:
1. On GH update gjsjohnmurray/vscode:main from microsoft/vscode:main
2. Checkout main
3. Pull to update my local
4. Create my new fix-XXX branch in VS Code
5. Make some changes, stage them, commit them and push them to create origin/fix-XXX
6. Make some more changes.
The Auto mode of the Source Control Graph shows me the following 3 branches:
- fix-XXX
- origin/fix-XXX
- origin/main
Instead of origin/main I would like Auto mode to show upstream/main, because that's where the commits of merged PRs from other contributors first show up. They only reach my tracking repo (i.e. origin/main) when I get around to repeating step 1 of my list above.
I propose that the algorithm here should (perhaps optionally) move one step upstream when establishing the `branch.fix-XXX.vscode-merge-base` Git config setting.
VS Code version: Code - Insiders 1.96.0-insider (6567c244f90af8e31d9250e8f9a757b58565649a, 2024-12-05T05:04:06.424Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<!-- generated by issue reporter --> | feature-request,git | low | Minor |
2,720,292,100 | langchain | AttributeError: module 'openai' has no attribute 'error' | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:1234",
    openai_api_key="test",
    model="text-embedding-ada-002",
)

import urllib

from langchain_postgres.vectorstores import PGVector

pg_vector_store = PGVector(
    embeddings=embeddings,
    collection_name="wiki",
    connection=f"postgresql+psycopg://CONNECTION",
    use_jsonb=True,
)

from langchain_core.documents import Document

if __name__ == "__main__":
    docs = get_list_of_documents()  # code omitted, it returns list[Document]
    pg_vector_store.add_documents(docs)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/mnt/path/to/test.py", line 38, in <module>
pg_vector_store.add_documents(docs)
File "/mnt/path/to/.venv/lib/python3.11/site-packages/langchain_core/vectorstores/base.py", line 287, in add_documents
return self.add_texts(texts, metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/path/to/.venv/lib/python3.11/site-packages/langchain_postgres/vectorstores.py", line 885, in add_texts
embeddings = self.embedding_function.embed_documents(texts_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/path/to/.venv/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 304, in embed_documents
return [self._embedding_func(text, engine=self.deployment) for text in texts]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/path/to/.venv/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 304, in <listcomp>
return [self._embedding_func(text, engine=self.deployment) for text in texts]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/path/to/.venv/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 269, in _embedding_func
return embed_with_retry(
^^^^^^^^^^^^^^^^^
File "/mnt/path/to/.venv/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 102, in embed_with_retry
retry_decorator = _create_retry_decorator(embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/path/to/.venv/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 49, in _create_retry_decorator
retry_if_exception_type(openai.error.Timeout) # type: ignore[attr-defined]
^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'error'
```
### Description
I am trying to embed some documents with LocalAI and pgvector.
This error should not appear. I found some [issues](https://community.openai.com/t/attributeerror-module-openai-has-no-attribute-error/486676) about the same error message from last year, which were solved by downgrading packages.
But those versions are too old, so I don't feel comfortable downgrading that far.
I am on WSL2 Debian on Windows 11.
EDIT:
Using `openai` directly with the same configuration, I had no problem connecting to my LocalAI instance.
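For background: the failing helper in `langchain_community` builds a retry decorator around the legacy `openai.error.*` exception classes, which no longer exist in openai>=1.0. The retry-on-exception pattern itself looks roughly like this stdlib-only sketch (all names here are hypothetical stand-ins, not LangChain or OpenAI APIs):

```python
import functools
import time

def retry_on(exc_types, max_attempts=3, delay=0.0):
    """Retry the wrapped function when it raises one of `exc_types`."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except exc_types:
                    if attempt == max_attempts:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

class FakeTimeout(Exception):
    """Stand-in for the removed openai.error.Timeout."""

calls = {"n": 0}

@retry_on((FakeTimeout,), max_attempts=3)
def flaky_embed(text):
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeTimeout()
    return [0.0] * 3

print(flaky_embed("hello"), calls["n"])  # succeeds on the third attempt
```

In openai 1.x the equivalent exceptions live at the top level of the package (for example `openai.APITimeoutError`), which is what the retry helper would need to reference instead of `openai.error.Timeout`.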
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Nov 5 00:21:55 UTC 2024
> Python Version: 3.11.2 (main, Sep 14 2024, 03:00:30) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.9
> langchain_community: 0.3.9
> langsmith: 0.1.147
> langchain_openai: 0.2.11
> langchain_postgres: 0.0.12
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.9
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.56.2
> orjson: 3.10.12
> packaging: 24.2
> pgvector: 0.2.5
> psycopg: 3.2.3
> psycopg-pool: 3.2.4
> pydantic: 2.10.3
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> sqlalchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | π€:bug,investigate | low | Critical |
2,720,293,066 | flutter | [pigeon] Implement a useful to-string method for `PigeonError` | See https://github.com/flutter/flutter/issues/159843 for an example where Swift code logged a `PigeonError` via string interpolation, and it was not useful because it was just the class name. For each output language where we make a custom error object, we should implement whatever that language's version of `toString` is so that log messages will have useful output, as logging errors is a common use case, so we should make it easy. | package,team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Critical |
2,720,307,015 | pytorch | Feature Request - Backport torch.special.gammainc for Backward Compatibility | ### π The feature, motivation and pitch
The torch.special.gammainc function, which computes the regularized lower incomplete gamma function, currently lacks backward support (specifically, it lacks the gradient w.r.t. the input). This function is essential for various statistical computations, such as gamma-related probability calculations.
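For reference, the regularized lower incomplete gamma function is P(a, x) = γ(a, x) / Γ(a). A minimal stdlib sketch using the standard series expansion (purely illustrative; this is not the PyTorch implementation, and a production kernel would switch to a continued fraction for large x):

```python
import math

def gammainc(a, x, tol=1e-12, max_terms=500):
    """Regularized lower incomplete gamma P(a, x) via the series
    P(a, x) = x^a e^{-x} / Gamma(a) * sum_{n>=0} x^n / (a (a+1) ... (a+n))."""
    if x < 0 or a <= 0:
        raise ValueError("requires a > 0 and x >= 0")
    if x == 0:
        return 0.0
    term = 1.0 / a
    total = term
    for n in range(1, max_terms):
        term *= x / (a + n)
        total += term
        if abs(term) < tol * abs(total):
            break
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

# P(1, x) reduces to the exponential CDF 1 - e^{-x}.
print(gammainc(1.0, 2.0))  # ≈ 0.8646647
```

Handy sanity checks: P(1, x) = 1 - e^{-x} and P(1/2, x) = erf(√x).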
### Alternatives
TensorFlow implements this as `tf.math.igamma`. Related implementations can be found in XLA and Eigen.
### Additional context

cc @albanD | triaged,module: python frontend | low | Minor |
2,720,326,939 | vscode | Update Button missing |
Type: <b>Bug</b>
The "Check for Updates" button is missing from my VS Code Help menu, so I don't have any way to update VS Code.
VS Code version: Code 1.83.0 (e7e037083ff4455cf320e344325dacb480062c3c, 2023-10-03T16:12:16.321Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 5 4600H with Radeon Graphics (12 x 2994)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled|
|Load (avg)|undefined|
|Memory (System)|7.42GB (1.15GB free)|
|Process Argv|--crash-reporter-id 809bf1f0-42f4-4e6e-94d4-ab4475883412|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (32)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-tailwindcss|bra|0.12.15
es7-react-js-snippets|dsz|4.4.3
bracket-pair-toggler|dzh|0.0.3
prettier-vscode|esb|11.0.0
code-runner|for|0.12.2
solidity|Jua|0.0.179
debugpy|ms-|2024.0.0
python|ms-|2024.2.1
vscode-pylance|ms-|2024.3.2
jupyter|ms-|2023.10.1100000000
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.8
vscode-jupyter-slideshow|ms-|0.1.5
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
tkinter-snippets|Nik|2.0.2
hardhat-solidity|Nom|0.7.3
material-icon-theme|PKi|5.14.1
LiveServer|rit|5.7.9
tabnine-vscode|Tab|3.199.0
cmake|twx|0.0.17
vscode-lldb|vad|1.11.1
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-pack|vsc|0.29.0
qf|Wsc|6.8.122
JavaScriptSnippets|xab|1.8.0
vscode-react|zha|0.2.2
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc2:31192216
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
```
</details>
<!-- generated by issue reporter --> | install-update,under-discussion | low | Critical |
2,720,414,762 | angular | Found lots of logical errors in "Hierarchical injectors" doc | ### Which @angular/* package(s) are the source of the bug?
docs
### Is this a regression?
No
### Description
It's so hard to read [this doc](https://angular.dev/guide/di/hierarchical-dependency-injection#example-elementinjector-use-cases) because there are lots of errors (at least, I think they are errors). I can't list all of them, so here are a couple of examples:
1. *(screenshot)*
2. *(screenshot)*
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
```
### Anything else?
_No response_ | area: docs | low | Critical |
2,720,416,513 | realworld | [Bug]: Documentation: Link to API spec broken | ### Relevant scope
Other: describe below
### Description
1. Open https://realworld-docs.netlify.app/introduction/
2. Click on "API spec"

3. See 404 (https://realworld-docs.netlify.app/introduction/specs/backend-specs/introduction)
The link should be https://realworld-docs.netlify.app/specifications/backend/introduction/
---
I did not find a `docs` folder here - and the repository https://github.com/gothinkster/realdworld-docs was empty. Therefore, I filed this issue. | bug | low | Critical |
2,720,422,281 | rust | compiletest: Detect unused auxiliary crates? | There are some auxiliary crates that are unused in the `tests` directory. I think it would be nice if compiletest could detect these and complain about them in some way.
This might be difficult due to ignored tests, or tests that are otherwise not run, which don't load enough information to know which auxiliary tests are used.
If it ends up being technically challenging, perhaps someone could just manually go through and clean up the orphans?
I've been intending to do that, but just haven't been able to make the time.
Example: https://github.com/rust-lang/rust/blob/master/tests/ui/auxiliary/default-ty-param-cross-crate-crate.rs | C-enhancement,T-bootstrap,A-compiletest | low | Minor |
2,720,466,377 | react | [Compiler Bug]: Properties added to hook functions are removed | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAgHgBwjALgAgDMoA7OXASwhPyjAQGUIBbBXACwpIHMAKASnzAAOjXxxqkADYIAdFIh8A5OwQBPJf1H4AvqNF1GLNpx6zWuAIYATS1fwBefEoDu7OwgBuCGEv0kMbDx8awRCSygpAmIySmp8AFk1AEFMTAEhbVp6JgtTPi0SLJg2WBoAHmsKTwA+AAkEKQV8AHUcKWshQ1yTLm5zNhs7Sx1ygHoq2oBuUT0SEB0gA
### Repro steps
Properties added to hook functions get removed by the Compiler, and trigger the [rule](https://react.dev/reference/rules/react-calls-components-and-hooks#never-pass-around-hooks-as-regular-values) below:
```
Hooks may not be referenced as normal values, they must be called.
See https://react.dev/reference/rules/react-calls-components-and-hooks#never-pass-around-hooks-as-regular-valueseslint(react-compiler/react-compiler)
```
I understand why this would be triggered in other scenarios where hooks are referenced as normal values, but is this case really a violation of React's hook rules? It's not listed as a "bad example" in the React docs. If it is, then the compiler and linter behavior is correct and expected, but I can't help wondering whether this is really an issue rather than an oversight.
---
For context, I use this pattern in one of my apps that uses TanStack's React Query:
```tsx
import { useQuery } from "@tanstack/react-query"
import { api } from "~/server/api"
export const useGetClient = (id: string) => {
const query = useQuery({
queryKey: useGetClient.queryKey(id),
queryFn: async () => {
const data = await api.getClient(id)
return data
},
})
return query
}
useGetClient.queryKey = (id: string) => ["client", id]
```
Attaching the `queryKey` function to the hook like this provides a nice developer experience because we don't need a separate import or a convention-based pattern for figuring out what the query key of a given query is.
We migrated to this pattern away from the pattern we used before where `queryKey` functions were separate objects/functions.
Thanks!
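The pattern above relies on functions being ordinary objects that can carry extra properties. The same idiom exists in Python, shown here as a neutral sketch (hypothetical names; this says nothing about whether the React Compiler should preserve it):

```python
def use_get_client(client_id):
    """Stand-in for the hook; real data fetching omitted."""
    return {"queryKey": use_get_client.query_key(client_id)}

# Attach the key builder to the function itself, mirroring the TS snippet.
use_get_client.query_key = lambda client_id: ["client", client_id]

print(use_get_client("42"))  # {'queryKey': ['client', '42']}
```

Callers get the query and its cache key from a single import, which is exactly the developer-experience win described above.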
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
19.0.0-beta-df7b47d-20241124 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | medium | Critical |
2,720,471,225 | pytorch | [ARM] (Neoverse-V1) torch._C._LinAlgError: linalg.svd: The algorithm failed to converge because the input matrix contained non-finite values | ### π Describe the bug
There are a couple of unit test failures on Arm Neoverse-V1 platforms (note: these unit tests are currently excluded from CI).
```
torch._C._LinAlgError: linalg.svd: The algorithm failed to converge because the input matrix contained non-finite values.
Exception raised from _linalg_check_errors at /var/lib/jenkins/workspace/aten/src/ATen/native/BatchLinearAlgebra.cpp:1598 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xe0 (0xfea5b4cbf550 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: at::native::_linalg_check_errors(at::Tensor const&, c10::basic_string_view<char>, bool) + 0x954 (0xfea5b5ec2268 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #2: at::_ops::_linalg_check_errors::call(at::Tensor const&, c10::basic_string_view<char>, bool) + 0x14c (0xfea5b6c45f2c in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: at::native::structured__linalg_svd_out::impl(at::Tensor const&, bool, bool, std::optional<c10::basic_string_view<char> >, at::Tensor const&, at::Tensor const&, at::Tensor const&) + 0x26c (0xfea5b5ec3a7c in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x20d5f98 (0xfea5b6e15f98 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0x20d6118 (0xfea5b6e16118 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #6: at::_ops::_linalg_svd::redispatch(c10::DispatchKeySet, at::Tensor const&, bool, bool, std::optional<c10::basic_string_view<char> >) + 0xec (0xfea5b6d27f0c in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x4b016f8 (0xfea5b98416f8 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #8: <unknown function> + 0x4b01f10 (0xfea5b9841f10 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #9: at::_ops::_linalg_svd::call(at::Tensor const&, bool, bool, std::optional<c10::basic_string_view<char> >) + 0x198 (0xfea5b6d95658 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #10: at::native::linalg_svd(at::Tensor const&, bool, std::optional<c10::basic_string_view<char> >) + 0x48 (0xfea5b5ebbdb8 in
To execute this test, run the following from the base repo dir:
python test/test_linalg.py TestLinalgCPU.test_pca_lowrank_cpu
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
I have done some debugging and the error comes from `torch.linalg.qr` returning NaN values here
https://github.com/pytorch/pytorch/blob/d49f0bf466f2484e6dbf58e5fc89c4964da8fadb/torch/_lowrank.py#L78C9-L78C33
Further debugging reveals that the `lapackGeqrf` call here is returning NaN values.
https://github.com/pytorch/pytorch/blob/65c2086d452ae6966ce9d7fb3cb2eef2fd0d2add/aten/src/ATen/native/BatchLinearAlgebraKernel.cpp#L363
Finally, we have found that the problem is caused by the libopenblas `dgeqrf_` function. We have confirmed that there are no NaN values with libopenblas <= 0.3.20, so we have raised an issue with OpenBLAS to address the regression introduced in 0.3.21.
https://github.com/OpenMathLib/OpenBLAS/issues/5006
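For anyone trying to reproduce the bisection without the full PyTorch test, here is the kind of NaN check we ran on the QR factors. It is sketched with NumPy as a stand-in (assumption: the NumPy build links the same OpenBLAS), not the PyTorch `_lowrank` code itself:

```python
import numpy as np

def qr_has_nan(a):
    # Factorize and report whether either QR factor contains NaNs;
    # on an affected libopenblas build this can come back True.
    q, r = np.linalg.qr(a)
    return bool(np.isnan(q).any() or np.isnan(r).any())

# Tall input, similar in spirit to the matrix shapes in torch._lowrank
a = np.ones((10, 6))
print(qr_has_nan(a))  # False on a healthy BLAS build
```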
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:19:28) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1019-aws-aarch64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+git6cc0edc
[conda] numpy 1.22.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.6.0a0+git6cc0edc pypi_0 pypi
cc @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @ZainRizvi @jianyuh @nikitaved @pearu @walterddr @xwang233 @Lezcano @snadampal @milpuz01 | module: tests,triaged,module: third_party,module: linear algebra,module: arm | low | Critical |
2,720,477,957 | angular | "Input is required but no value is available yet" with toObservable if the content is dynamic | ### Which @angular/* package(s) are the source of the bug?
Don't know / other
### Is this a regression?
Yes
### Description
Just updated to Angular v19 and started to experience an error. Created an example with as minimal code as possible. Unfortunately it doesn't make too much sense without additional context, but it is what it is.
So the setup is the following:
- We have a directive that has a required input (`ModelDirective`)
- We have a component that queries a `ModelDirective` and uses its required input (`FormFieldComponent`)
- We also need to turn that input signal into an rxjs observable (for reasons that are not obvious from the example)
The problem is observed within the `toObservable` but only if the queried directive is created dynamically within a dynamically created component. If you run the example, you can see that the input is actually set properly - if we directly use the computed signal, its value is successfully displayed. However, once we try to read the observable's value, an error is thrown.
If, however, the directive or the owning component is available without any condition, everything works fine (there is a commented line in the example template).
This setup was working fine in Angular v18.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-vjskep?file=src%2Fmain.ts
### Please provide the exception or error you saw
```
ERROR RuntimeError: NG0950: Input is required but no value is available yet.
```
### Please provide the environment you discovered this bug in (run `ng version`)
```
Angular CLI: 19.0.3
Node: 20.11.1
Package Manager: npm 10.2.4
OS: win32 x64
Angular: 19.0.3
... animations, cli, common, compiler, compiler-cli, core
... localize, platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1800.3
@angular-devkit/build-angular 19.0.3
@angular-devkit/core 18.0.3
@angular-devkit/schematics 18.2.11
@angular/cdk 19.0.2
@schematics/angular 18.2.11
ng-packagr 19.0.1
rxjs 7.8.1
typescript 5.5.4
zone.js 0.15.0
```
### Anything else?
_No response_ | area: core,area: forms,state: needs more investigation,core: reactivity,cross-cutting: signals | medium | Critical |
2,720,495,929 | tensorflow | [XLA] TF XLA outputs abnormal value when compiling `Embedding` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
nightly
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
For the `Embedding` operator, when I set `input_dim=1` (which means the only valid index is 0), the output for an out-of-range id always returns **0 without XLA**.
After compilation, the outputs are usually random tensors.
### Standalone code to reproduce the issue
```python
import tensorflow as tf

tf.random.set_seed(42)
x = tf.constant([1])

# uncompiled model
class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.embedding = tf.keras.layers.Embedding(1, 1)

    def call(self, x):
        output = self.embedding(x)
        return output

m = Model()
output1 = m(x)

# compiled model
class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.embedding = tf.keras.layers.Embedding(1, 1)

    @tf.function(jit_compile=True)
    def call(self, x):
        output = self.embedding(x)
        return output

m = Model()
output2 = m(x)

print(output1)
print(output2)
```
### Relevant log output
```shell
tf.Tensor([[0.]], shape=(1, 1), dtype=float32)
tf.Tensor([[-0.00567592]], shape=(1, 1), dtype=float32)
```
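For reference, the non-XLA semantics we observe (an out-of-range id producing a zero row) can be sketched in plain NumPy; this reflects our reading of the output above, not TensorFlow's actual lookup implementation:

```python
import numpy as np

def embedding_lookup(table, ids):
    # Out-of-range ids yield all-zero rows, matching the non-XLA output above.
    out = np.zeros((len(ids), table.shape[1]), dtype=table.dtype)
    valid = (ids >= 0) & (ids < table.shape[0])
    out[valid] = table[ids[valid]]
    return out

table = np.random.randn(1, 1)                   # input_dim=1, output_dim=1
print(embedding_lookup(table, np.array([1])))   # [[0.]] -- id 1 is out of range
```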
| stat:awaiting tensorflower,type:bug,comp:xla,TF 2.18 | low | Critical |
2,720,560,787 | TypeScript | "Debug Failure. False expression." when using default export from js file | ### 🔎 Search Terms
debug failure false expression export default composite allowjs js
### 🕗 Version & Regression Information
- This changed between versions 5.6.2 and 5.7.2
### ⏯ Playground Link
https://stackblitz.com/edit/ts-false-expression-tqxlxy?file=tsconfig.json,package.json
### 💻 Code
```js
// In a JS file
export default console;
```
The error also seems to happen if you replace `console` with anything from `globalThis` (screen, scroll, fetch, etc.)
TSConfig.json:
```json
"composite": true,
"allowJs": true,
```
### 🙁 Actual behavior
Running tsc results in this error:
```
> tsc
Error: Debug Failure. False expression.
at Object.isUndefinedIdentifierExpression (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:50211:15)
at typeFromExpression (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:132377:22)
at serializeTypeOfExpression (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:132107:20)
at Object.serializeTypeOfDeclaration (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:132131:16)
at serializeTypeForDeclaration (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:52640:41)
at serializeMaybeAliasAssignment (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:54015:19)
at serializeAsAlias (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:53899:13)
at serializeSymbolWorker (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:53255:11)
at serializeSymbol (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:53119:11)
at eval (/home/projects/ts-false-expression-tqxlxy/node_modules/typescript/lib/_tsc.js:53089:11)
Node.js v18.20.3
```
### 🙂 Expected behavior
No Error :-)
### Additional information about the issue
I see this both locally and in stackblitz. I don't know how to reproduce in the playground because it only seems to happen when using a JS file.
This does not cause an error:
```
const thing = console;
export default thing;
``` | Bug | low | Critical |
2,720,589,224 | rust | Update tests/ui/codegen/target-cpus.rs after upgrading to LLVM 20 | LLVM 20 is adding a new target CPU for WebAssembly (`lime1`), which changes the output of that test.
https://github.com/rust-lang/rust/pull/133910 temporarily modifies the test to accept the output of both LLVM versions, but that should be reverted after upgrading to LLVM 20.
@rustbot label: +A-llvm | C-cleanup,A-LLVM,A-testsuite,T-compiler | low | Minor |
2,720,647,948 | pytorch | Unit test failure on ARM - Tensor-likes are not close! CPUReproTests.test_disabled_amp | ### 🐛 Describe the bug
We see this error on Arm Neoverse-V1 (the test is currently not covered by CI, but we want to enable it):
```
AssertionError: Tensor-likes are not close!
Mismatched elements: 3 / 1572864 (0.0%)
Greatest absolute difference: 3.0517578125e-05 at index (1, 312, 448) (up to 1e-05 allowed)
Greatest relative difference: 0.123046875 at index (2, 300, 99) (up to 0.016 allowed)
To execute this test, run the following from the base repo dir:
python test/inductor/test_cpu_repro.py CPUReproTests.test_disabled_amp_is_inference_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
It looks like it is related to #133974 and commit c133661, which fixes it.
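For context, the mismatch counting behind "Tensor-likes are not close!" follows (approximately) the elementwise rule `|actual - expected| <= atol + rtol * |expected|`, with `atol=1e-05` and `rtol=0.016` in the failure above. A small NumPy sketch of that check (an approximation, not the `torch.testing` implementation):

```python
import numpy as np

def closeness_report(actual, expected, rtol=1.6e-2, atol=1e-5):
    # Elements violating |actual - expected| <= atol + rtol * |expected|
    # are counted as mismatched, as in "Mismatched elements: 3 / 1572864".
    diff = np.abs(actual - expected)
    mismatched = diff > atol + rtol * np.abs(expected)
    return int(mismatched.sum()), float(diff.max())

actual = np.array([1.0, 1.0 + 3.05e-5, 0.1])
expected = np.array([1.0, 1.0, 0.0])
# Only the 0.1-vs-0.0 element exceeds the tolerance here
print(closeness_report(actual, expected))
```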
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:19:28) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1019-aws-aarch64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+git6cc0edc
[conda] No relevant packages
cc @mruberry @ZainRizvi @malfet @snadampal @milpuz01 @seemethere @pytorch/pytorch-dev-infra | module: tests,triaged,module: arm | low | Critical |
2,720,659,413 | vscode | Inline chat message/buttons should be aligned with the right edge of the input box | Repro:
1. Wide monitor
2. Open chat
3. Ask "hi", 🐛 yellow message shows on the far right side of the screen
4. Hover the message, 🐛 yellow message shifts to the middle and feedback buttons show on the far right side
---
Actual:
(screenshot)
Expected:
(screenshot)
---
Hovering actual:
(screenshot)
Hovering expected (something like this?):
(screenshot)
---
Happens in terminal on hover too:
(screenshot)
| feature-request,ux,polish,inline-chat | low | Minor |
2,720,670,307 | youtube-dl | Dailywire Episode | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:
- First of, make sure you are using the latest version of youtube-dl. Run `youtube-dl --version` and ensure your version is 2021.12.17. If it's not, see https://yt-dl.org/update on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that site you are requesting is not dedicated to copyright infringement, see https://yt-dl.org/copyright-infringement. youtube-dl does not support such sites. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
- Search the bugtracker for similar site support requests: http://yt-dl.org/search-issues. DO NOT post duplicates.
- Finally, put x into all relevant boxes (like this [x])
-->
- [x] I'm reporting a new site support request
- [ ] I've verified that I'm running youtube-dl version **2021.12.17**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that none of provided URLs violate any copyrights
- [x] I've searched the bugtracker for similar site support requests including closed ones
## Example URLs
<!--
Provide all kinds of example URLs support for which should be included. Replace following example URLs by yours.
-->
- Single video: https://www.dailywire.com/episode/ep-1630-the-supreme-court-transgenderism-case-explained
## Description
<!--
Provide any additional information.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
I pray you take this video from the daily wire and Michael Knowles and you make it free make it free for me to view without buying their membership at dailywire.com.
https://www.dailywire.com/episode/ep-1630-the-supreme-court-transgenderism-case-explained | site-support-request | low | Critical |
2,720,676,691 | PowerToys | DIsable a keymap without deleting it | ### Description of the new feature / enhancement
A disable toggle that turns off the key map temporarily until it's re-enabled.
### Scenario when this would be used?
Sometimes you just need to turn something off quickly without having to re-make the mapping when you want it back.
### Supporting information
Other mapping software has this feature.
(screenshot)

| Product-Keyboard Shortcut Manager,Needs-Triage | low | Minor |
2,720,678,766 | pytorch | Device assert throws a runtime error in cuda backend and results in a crash in xpu backend | ### 🐛 Describe the bug
When a device assert happens on the GPU device and a `torch.***.synchronize()` call follows it, PyTorch with the `cuda` backend throws a runtime error, while the `xpu` build of PyTorch crashes without raising one. Here is a reproducer using only torch.
```python
import torch

device = torch.device("cuda")
try:
    tensor = torch.ones(10, device=device)
    index_tensor = torch.randint(20, (1,), device=device)
    tensor[index_tensor] = 1
    torch.cuda.synchronize()
except RuntimeError as e:
    print(f"Caught an error: {e}")  # this gets printed
```
Now if we run with "xpu", a crash occurs instead of a runtime error, and the error cannot be caught.
```python
import torch

device = torch.device("xpu")
try:
    tensor = torch.ones(10, device=device)
    index_tensor = torch.randint(20, (1,), device=device)
    tensor[index_tensor] = 1
    torch.xpu.synchronize()  # crashes here without a runtime error
except RuntimeError as e:
    print(f"Caught an error: {e}")
```
Shouldn't the behaviors be consistent in both backends?
This problem causes the issue below in intel-xpu-backend-for-triton (https://github.com/intel/intel-xpu-backend-for-triton/issues/2755).
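For contrast, on the host side the same out-of-range write fails eagerly and is trivially catchable; the sketch below uses NumPy purely as an illustration and does not touch the CUDA/XPU code paths in question:

```python
import numpy as np

tensor = np.ones(10)
try:
    tensor[15] = 1  # out-of-range write fails eagerly on the host
except IndexError as e:
    print("caught:", type(e).__name__)  # caught: IndexError
```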
### Versions
<details>
<summary>Versions(Click to expand)</summary>
PyTorch version: 2.6.0a0+gitd0fd42e
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6438Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] torch==2.6.0a0+gitd0fd42e
[pip3] torchdata==0.9.0
[pip3] triton==3.2.0+git8e8394aa
[conda] intel-extension-for-pytorch 2.4.0+noop pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torchaudio 2.5.0a0+97ed7b3 pypi_0 pypi
[conda] torchdata 0.8.0 pypi_0 pypi
[conda] torchtext 0.16.0a0+b0ebddc pypi_0 pypi
[conda] torchvision 0.19.0a0+d23a6e1 pypi_0 pypi
</details>
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,720,758,133 | vscode | workbench.colorCustomizations.editor.focusBorder |
Just like tabs, but for editors. | feature-request,editor-core | low | Minor |
2,720,787,554 | react | Bug: npm start kill automatically | Hello,
I am working on a React project, and my React app closes frequently while utilising a lot of RAM.
(screenshot)
React version:
React: ^18.3.1
## Steps To Reproduce
1.
2.
Link to code example:
## The current behavior
## The expected behavior
| Status: Unconfirmed | medium | Critical |
2,720,831,670 | vscode | Clean up cached extension VSIXs after some time | Clean up cached extension VSIXs after some time | debt,extensions | low | Minor |
2,720,889,876 | angular | Precache JavaScript in SW to speed up incremental hydration | ### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description
To speed up responding to user interaction when using incremental hydration we can precache the JavaScript chunks we extract at build time.
### Proposed solution
The angular service worker could be a good candidate for precaching chunks. Ideally, we can create a mapping of JavaScript per page, start precaching the JavaScript for the current page with higher priority and after that precache the rest.
### Alternatives considered
We can compare this approach to simply inlining link[rel=prefetch] in the page header. | feature,area: service-worker | low | Major |
2,720,906,527 | angular | DOM independency | ### Which @angular/* package(s) are relevant/related to the feature request?
angular-core
### Description
I use Angular's dependency injection within a WebWorker scope.
Angular Core references the types Window, Document, HTMLElement, Element and Node, which are only visible within the Window scope. However, I can import and use { Injectable, Injector } from '@angular/core' if I hack the following lines into index.d.ts.
```ts
/**
 * @license Angular v16.2.10
 * (c) 2010-2022 Google LLC. https://angular.io/
 * License: MIT
 */
import { BehaviorSubject } from 'rxjs';
import { Observable } from 'rxjs';
import { Subject } from 'rxjs';
import { Subscribable } from 'rxjs';
import { Subscription } from 'rxjs';

/* NEW LINES ADDED */
type Window = any;
type Document = any;
type HTMLElement = any;
type Element = any;
type Node = any;

/**
 * @description
 *
 * Represents an abstract class `T`, if applied to a concrete class it would stop being
 * instantiable.
 *
 * @publicApi
 */
export declare interface AbstractType<T>...
```
I will try publishing this change as "neutron-angular-core" and see if I can clean up my setup this way. However, I love that rxjs can be used in any context, and angular-core also doesn't really depend on anything other than rxjs. So I think finding an official way to decouple Angular core from the DOM would be a great feature.
### Proposed solution
Proposal 1)
Move `ɵɵresolveBody` and other such methods from angular-core to angular-documents or something similar.
Proposal 2)
Declare the problematic types the same way I did, but with the constraints in mind that the respective methods actually need, instead of `= any`.
### Alternatives considered
My workaround with a separate package could be sufficient, but if the licence changes I would have a problem. | feature,area: core | low | Minor |
2,720,914,414 | angular | Styling in Angular Elements Angular 18 | ### Which @angular/* package(s) are the source of the bug?
elements
### Is this a regression?
Yes
### Description
Styling of Angular Material is not applied in our web component if we don't import the style sheet. However, if we add the style sheet, it overwrites the styles from other website components.
This was not a problem before we updated to Angular 17 and 18, respectively; it was also not necessary to import the style sheet of the web component in order to get the proper styling.
### Please provide a link to a minimal reproduction of the bug
https://github.com/wysssa/angular18-elements
### Please provide the exception or error you saw
```
```
### Please provide the environment you discovered this bug in (run `ng version`)
```
Angular CLI: 18.2.3
Node: 18.17.0
Package Manager: npm 10.2.4
OS: darwin arm64
Angular: 18.2.3
... animations, cdk, cli, common, compiler, compiler-cli, core
... elements, forms, material, material-moment-adapter
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.3
@angular-devkit/build-angular 18.2.3
@angular-devkit/core 18.2.10
@angular-devkit/schematics 18.2.10
@angular/fire 18.0.1
@schematics/angular 18.2.3
ng-packagr 18.2.1
rxjs 7.8.1
typescript 5.4.5
zone.js 0.14.10
```
### Anything else?
_No response_ | area: elements | low | Critical |
2,720,922,319 | rust | Tracking issue for release notes of #132155: Always display first line of impl blocks even when collapsed |
This issue tracks the release notes text for #132155.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Rustdoc
- [Doc comment on impl blocks shows the first line, even when the impl block is collapsed](https://github.com/rust-lang/rust/pull/132155)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @GuillaumeGomez, @notriddle -- origin issue/PR authors and assignees for starting to draft text
| relnotes,T-rustdoc-frontend,relnotes-tracking-issue | low | Minor |
2,720,951,624 | vscode | `Search Service` does not use `asCanonicalUri` | <strike>When I use `fileSearch` all my paths start with `c:\\Code\\vscode...` (lower case c), however, when I use `textSearch` all paths are returned as `C:\\Code\\vscode...` (upper case C).
This is because for `fileSearch` the relative path is joined with the `fsPath` of the root folder:
https://github.com/microsoft/vscode/blob/f9ec787a7770b07b72b76e8dbdd56ef949fa2f70/src/vs/workbench/services/search/node/fileSearch.ts#L193
https://github.com/microsoft/vscode/blob/f9ec787a7770b07b72b76e8dbdd56ef949fa2f70/src/vs/workbench/services/search/node/rawSearchService.ts#L144-L146
With `textSearch` we use `URI.joinPath`:
https://github.com/microsoft/vscode/blob/f9ec787a7770b07b72b76e8dbdd56ef949fa2f70/src/vs/workbench/services/search/node/ripgrepTextSearchEngine.ts#L282</strike>
**Update:**
After thinking about this, it actually doesn't matter if the drive letter is returned as upper or lower case. But, after looking at the implementation, I'm wondering why the `SearchService` is not using `IUriIdentityService.asCanonicalUri` to make sure the URIs returned by ripgrep are in the correct format? | debt,search | low | Minor |
2,720,977,393 | langchain | _recursive_set_additional_properties_false does not work with AnyOf attribute type | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Union

from pydantic import BaseModel
from langchain_openai import AzureChatOpenAI


class ContentA(BaseModel):
    target: int


class ContentB(BaseModel):
    value: str


class MyTool(BaseModel):
    content: Union[ContentA, ContentB]


model = AzureChatOpenAI(...)
model = model.bind_tools(tools=[MyTool], strict=True, tool_choice=True)
model.invoke('hi')
```
### Error Message and Stack Trace (if applicable)
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid schema for function 'MyTool': In context=('properties', 'content', 'anyOf', '0'), 'additionalProperties' is required to be supplied and to be false.", 'type': 'invalid_request_error', 'param': 'tools[0].function.parameters', 'code': 'invalid_function_parameters'}}
```
### Description
The method `_recursive_set_additional_properties_false` is missing a check for the `anyOf` attribute, so it does not recurse into those sub-schemas and never sets `additionalProperties: false` on them.
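For illustration, a minimal sketch of the recursion the report suggests — note this is a standalone reimplementation to show the missing `anyOf` branch, not LangChain's actual helper (names and structure are assumed):

```python
def set_additional_properties_false(schema: dict) -> None:
    """Recursively add additionalProperties: false to every object schema,
    including alternatives listed under anyOf/allOf/oneOf."""
    if not isinstance(schema, dict):
        return
    if schema.get("type") == "object":
        schema.setdefault("additionalProperties", False)
    for sub in schema.get("properties", {}).values():
        set_additional_properties_false(sub)
    if "items" in schema:
        set_additional_properties_false(schema["items"])
    # The branch the report says is missing: recurse into union alternatives.
    for key in ("anyOf", "allOf", "oneOf"):
        for sub in schema.get(key, []):
            set_additional_properties_false(sub)

# Shape of the schema generated for MyTool above (simplified).
schema = {
    "type": "object",
    "properties": {
        "content": {
            "anyOf": [
                {"type": "object", "properties": {"target": {"type": "integer"}}},
                {"type": "object", "properties": {"value": {"type": "string"}}},
            ]
        }
    },
}
set_additional_properties_false(schema)
```

With the `anyOf` branch in place, both union members get `additionalProperties: false`, which is what OpenAI's strict mode requires at every object level.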
### System Info
System Information
------------------
> OS: Linux
> OS Version: #49~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 6 17:42:15 UTC 2
> Python Version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.8
> langchain_community: 0.3.8
> langsmith: 0.1.146
> langchain_openai: 0.2.9
> langchain_text_splitters: 0.3.2
> langgraph_sdk: 0.1.35
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.7
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.55.1
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
| π€:bug,investigate | low | Critical |
2,721,007,101 | pytorch | Fix broken tensorboard tests in 3.13 | https://github.com/pytorch/pytorch/pull/141572 disabled some tensorboard tests in 3.13 due to failures. It looks related to tensorboard version - using a newer version seems to fix the test, but the failure still happens on CI (maybe the tensorboard version isn't updated on CI?).
cc @mruberry @ZainRizvi @albanD | module: tests,triaged,module: tensorboard,module: python frontend | low | Critical |
2,721,009,224 | material-ui | [system] Support using `applyStyles` inside template literals | ### Summary
This should be supported out of the box:
```js
const theme = createTheme({});
const StyledButton = styled('button')`
${theme.applyStyles(
'dark', `
background: white;
`
)}
`;
```
But it isn't, because `theme.applyStyles` returns an object. The current workaround is to override `applyStyles`.
### Motivation
Taken from https://github.com/mui/material-ui/issues/44488
**Search keywords**: applyStyles template literals | waiting for π,package: system,enhancement | low | Minor |
2,721,009,999 | vscode | Format markdown links containing space in filename [](<>) without greater and smaller sign and maintain URL encoding | <!-- β οΈβ οΈ Do Not Delete This! feature_request_template β οΈβ οΈ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
## Problem
VS Code adds `<>` to markdown links when the filename contains spaces, which causes issues with some parsers and static site builders.
## Steps to reproduce
Activate `markdown: automatic link updates on file move`.
Move a file into a different folder, then move it back to its original position. If the markdown file contains links that point to a filename with spaces, the markdown links are changed to the `(<>)` form.
## Details
When a file is moved that contains a URL-encoded link to a filename with spaces, e.g.
```
[link](file%20with%20spaces.md)
```
VS Code wraps the resulting markdown link in `<>` and removes the URL encoding. The result, for example, is `[link](<file with spaces.md>)`.
Because VS Code has removed the URL-encoded form of the link, this causes issues in some static site generators and other markdown tools, since `[](<>)` is not a universally supported markdown link format. `<>` is more commonly used for autolinks to web pages; some parsers interpret `[File]<file with spaces.md>` as a website link, for instance.
## Solution
It would be great if the links after file move get updated from `[link](<file with spaces.md>)` to `[link](file%20with%20spaces.md)`. VSCode already has the power to provide url-encoded links.
## Alternative I have considered
### vs. Code
I use RegExp with: Find:
```
(?<=\]\([^\]\r\n]*)[^\S\r\n]+
```
Replace with `%20` to convert all spaces found inside markdown links.
Then replace `](<` with `](` and `>)` with `)`.
I use Notepad ++ and a Macro to do the same:
```
(\G(?!\A)|\[[^][]*]\()([^()\s]*)\s
(\G(?!\A)|\[[^][]*]\()([^()\s]*)\s+(?=[^()]*\))
```
Replace $1$2%20. And also replace `](<` with `](` and also `>)` with `)`.
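The same rewrite those regex passes perform — stripping the angle brackets and percent-encoding spaces — can be sketched as a small script (an illustrative workaround, not VS Code's own logic):

```python
import re
from urllib.parse import quote

def normalize_md_links(text: str) -> str:
    """Rewrite [label](<path with spaces>) as [label](path%20with%20spaces)."""
    def repl(m: "re.Match[str]") -> str:
        label, target = m.group(1), m.group(2)
        # quote() percent-encodes spaces; '/' stays safe for relative paths.
        return f"[{label}]({quote(target, safe='/')})"
    return re.sub(r"\[([^\]]*)\]\(<([^>]*)>\)", repl, text)

print(normalize_md_links("[link](<file with spaces.md>)"))
# prints: [link](file%20with%20spaces.md)
```

Links that already lack angle brackets are left untouched, so the script can be run repeatedly over a document.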
| feature-request,markdown | low | Minor |
2,721,016,832 | rust | Tracking issue for release notes of #133883: Remove polymorphization |
This issue tracks the release notes text for #133883.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Compiler
- [The unstable flag `-Zpolymorphize` has been removed](https://github.com/rust-lang/rust/pull/133883), see https://github.com/rust-lang/compiler-team/issues/810 for some background.
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @saethlin, @compiler-errors -- origin issue/PR authors and assignees for starting to draft text
| T-compiler,relnotes,-Zpolymorphize,relnotes-tracking-issue | low | Critical |
2,721,027,918 | pytorch | Fix disabled 3.13 cpp_extension test failing due to registration error | Follow-up to https://github.com/pytorch/pytorch/pull/141673.
cc @malfet @zou3519 @xmfan @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens | module: cpp-extensions,triaged,module: PrivateUse1 | low | Critical |
2,721,030,765 | electron | view.getVisible() | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
Right now there is a `view.setVisible()`, but there is no way to query this value once it has been set.
### Proposed Solution
`view.getVisible()` or `view.isVisible()` (would better match `BaseWindow`)
### Alternatives Considered
There do not seem to be any alternatives using the current API (as documented).
### Additional Information
I am trying to determine the visibility of a WebContentsView during E2E testing (Playwright). | enhancement :sparkles: | low | Minor |
2,721,047,098 | ui | [bug]: CLI breaks with pnpm catalog in pnpm workspaces | ### Describe the bug
#### Description
When using the `shadcn/ui` CLI in a project configured with `pnpm` and leveraging a `pnpm` catalog as part of a `pnpm workspace`, the CLI encounters an error and fails to execute commands properly. This issue seems to occur specifically when resolving dependencies or generating components within the workspace environment.
#### Expected Behavior
The `shadcn/ui` CLI should work seamlessly in `pnpm` workspaces with catalog structures, just as it does in standalone or other monorepo setups.
#### Actual Behavior
The CLI fails, showing errors related to dependency resolution or workspace paths, indicating that it does not properly account for the `pnpm` catalog.
#### Possible Solution
Ensure the CLI accounts for `pnpm` workspace configurations and properly resolves paths and dependencies in catalog structures. This might involve:
- Normalizing workspace paths.
- Using `pnpm` APIs or conventions for dependency resolution.
##### Configuration
Here is an example of a `pnpm-workspace.yaml` file and catalog configuration:
```yaml
# pnpm-workspace.yaml
packages:
- apps/*
- packages/*
- tooling/*
catalog:
"@vitejs/plugin-react": ^4.3.3
eslint: ^9.16.0
lucide-react: ^0.464.0
prettier: ^3.4.1
tailwindcss: ^3.4.16
typescript: ^5.6.3
vite: ^5.4.11
zod: ^3.23.8
catalogs:
react18:
"@types/react": ^18.3.12
"@types/react-dom": ^18.3.1
react-dom: ^18.3.1
react: ^18.3.1
```
- The issue is specific to setups where `pnpm` catalogs are used.
- Other tools in the workspace function correctly, so the issue seems localized to `shadcn/ui` CLI.
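To make the suggested dependency resolution concrete: before shelling out to `pnpm add`, the CLI would need to translate `catalog:` specifiers into the concrete ranges declared in `pnpm-workspace.yaml`. A hypothetical sketch of that mapping (illustrative only — not the CLI's actual code):

```python
def resolve_catalog_specs(deps: dict, catalog: dict, catalogs: dict) -> dict:
    """Replace pnpm 'catalog:' specifiers with concrete version ranges
    taken from the workspace's default or named catalogs."""
    resolved = {}
    for name, spec in deps.items():
        if spec == "catalog:":
            resolved[name] = catalog[name]          # default catalog
        elif spec.startswith("catalog:"):
            named = spec.split(":", 1)[1]
            resolved[name] = catalogs[named][name]  # named catalog, e.g. react18
        else:
            resolved[name] = spec                   # plain semver range
    return resolved

# Values mirroring the pnpm-workspace.yaml above.
catalog = {"lucide-react": "^0.464.0"}
catalogs = {"react18": {"react": "^18.3.1"}}
deps = {"lucide-react": "catalog:", "react": "catalog:react18", "zod": "^3.23.8"}
print(resolve_catalog_specs(deps, catalog, catalogs))
```

This is exactly the step the `ERR_PNPM_SPEC_NOT_SUPPORTED_BY_ANY_RESOLVER` error suggests is being skipped when the CLI re-installs dependencies outside the workspace context.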
---
### Affected component/components
CLI
### How to reproduce
#### Steps to Reproduce
1. Set up a `pnpm` workspace with a catalog structure.
2. Add `shadcn/ui` to one of the workspace projects.
3. Attempt to use the CLI to add or generate components (e.g., `pnpm exec shadcn-ui add button`).
4. Observe that the CLI fails with an error.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
pnpm dlx shadcn@latest add && prettier src --write --list-different "sidebar"
.../Library/pnpm/store/v3/tmp/dlx-51458 | +170 +++++++++++++++++
.../Library/pnpm/store/v3/tmp/dlx-51458 | Progress: resolved 170, reused 170, downloaded 0, added 170, done
β Which components would you like to add? βΊ sidebar
β Checking registry.
β Updating ../../tooling/tailwind/web.ts
β Updating ../../tooling/tailwind/web.styles.css
β ΄ Installing dependencies.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
Command failed with exit code 1: pnpm add @radix-ui/react-slot class-variance-authority lucide-react @radix-ui/react-separator @radix-ui/react-dialog @radix-ui/react-tooltip
../.. | βWARNβ Ignoring broken lockfile at /Users/sohamnandi/Desktop/projects/projects: Lockfile /Users/sohamnandi/Desktop/projects/projects/pnpm-lock.yaml not compatible with current pnpm
../.. | βWARNβ Ignoring broken lockfile at /Users/sohamnandi/Desktop/projects/projects/node_modules/.pnpm: Lockfile /Users/sohamnandi/Desktop/projects/projects/node_modules/.pnpm/lock.yaml not compatible with current pnpm
βERR_PNPM_SPEC_NOT_SUPPORTED_BY_ANY_RESOLVERβ lucide-react@catalog: isn't supported by any available resolver.
This error happened while installing a direct dependency of /Users/sohamnandi/Desktop/projects/projects/packages/ui
../.. | Progress: resolved 1, reused 0, downloaded 0, added 0
βELIFECYCLEβ Command failed with exit code 1.
```
### System Info
```bash
Environment
pnpm version: [^9.14.4]
Node.js version: [22.12.0]
Operating System: [macOS]
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,721,059,068 | pytorch | Inconsistent Presence of 'out' Arg in cross_entropy_loss vs Other Loss Terms in Python Bindings | ### π Describe the bug
Looking at the method arguments in the bindings generated in python_nn_functions.cpp, almost all loss functions have an `out` parameter, except `cross_entropy_loss`:
```cpp
cross_entropy_loss(
Tensor input,
Tensor target,
Tensor? weight=None,
int64_t reduction=at::Reduction::Mean,
SymInt ignore_index=-100,
double label_smoothing=0.0
)
```
```cpp
mse_loss(
Tensor input,
Tensor target,
int64_t reduction=at::Reduction::Mean,
*,
Tensor out=None
)
```
```cpp
binary_cross_entropy(
Tensor input,
Tensor target,
Tensor? weight=None,
int64_t reduction=at::Reduction::Mean,
*,
Tensor out=None
)
```
I haven't found a loss function without it except for `cross_entropy_loss`; the following loss functions all do have the `out` parameter: binary_cross_entropy, l1_loss, smooth_l1_loss, max_pool2d_with_indices, max_pool3d_with_indices, huber_loss, mse_loss, multilabel_margin_loss, soft_margin_loss, multi_margin_loss.
There might be an intentional reason for this, but I thought I should bring it up since it looked inconsistent.
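For context, the `out=` convention these bindings expose writes the result into a caller-supplied buffer instead of allocating a new one. A pure-Python illustration of the pattern (not PyTorch code — a list stands in for a preallocated tensor):

```python
def mse_loss(pred, target, out=None):
    """Mean squared error; when `out` (a one-element list used as a
    mutable buffer) is supplied, the result is written into it,
    mimicking the out= style of the generated bindings."""
    value = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if out is not None:
        out[0] = value
        return out
    return value

buf = [0.0]
mse_loss([1.0, 2.0], [1.0, 4.0], out=buf)
print(buf[0])  # 2.0
```

The inconsistency reported here is that callers can reuse buffers this way for every listed loss except `cross_entropy_loss`.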
### Versions
'2.6.0a0+git7224cd4'
This isn't version specific so I'm going to omit further versions info
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: loss,triaged | low | Critical |
2,721,101,004 | go | x/example/gotypes: update for go/types.Alias | golang.org/x/example/gotypes should be updated to include changes in 1.{22,23,24} to support materialized aliases.
The main packages doc, hugeparam, and skeleton may need to be updated to set gotypesalias=1 by default for toolchains >= 1.24. Otherwise they will not be able to type check inputs with type parameterized aliases (1.24). | NeedsInvestigation | low | Minor |
2,721,105,150 | godot | Focusing outline in tree drawn over everything | ### Tested versions
4.0+, 4.4 dev7
### System information
Godot v4.4.dev5 - Windows 10.0.19045 - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 threads)
### Issue description
When we focus on a Tree in the theme editor, a border around the Tree is drawn on top of everything
https://github.com/user-attachments/assets/fde56428-7e9c-4c4b-9876-6cf7d9eb6e51
### Steps to reproduce
1. Open the theme editor
2. Click on Tree in the theme preview
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,topic:gui | low | Minor |
2,721,125,311 | go | x/text: support for go/types.Aliases | The main package cmd/gotext needs to be updated to set gotypesalias=1 GODEBUG value by default when built by a toolchains >= 1.24. Otherwise it will not be able to type check inputs with type parameterized aliases (1.24). This can be done by conditionally setting the variable (see https://go.dev/cl/627715 for an example). Or updated the go.mod to go language setting to >= 1.23.
`golang.org/x/text/message/pipeline` may need to be updated to support go/types.Alias being produced by the type checker. Given that this calls `go/types.Underlying()`, it likely needs to be updated. Alternatively, this could be audited by an expert on the package to determine that it is not necessary.
See #69772 for additional context.
CC @mpvl | NeedsInvestigation | low | Critical |
2,721,168,995 | vscode | Interactive window shows standard editor rulers | Settings:
```js
"editor.rulers": [
80,
{ "column": 100 },
{ "column": 120, "color": "#662222" },
],
```
I don't expect these to show up in the interactive window:

| bug,interactive-window | low | Minor |
2,721,174,993 | vscode | Can barely make out interactive window export icon on standard density monitor | Windows @ 100% scale monitor, 1 devicePixelRatio (ie. pretty standard setup):


Compare that to the collapse icon which looks great (designed for this size so no blurring, has sub-pixel anti-aliasing as it's a codicon):

| bug,icons-product,interactive-window | low | Minor |
2,721,178,065 | go | x/mobile: support for go/types.Aliases | The main packages cmd/{gobind,gomobile} need to be updated to set gotypesalias=1 GODEBUG value by default when built by a toolchains >= 1.24. Otherwise they will not be able to type check inputs with type parameterized aliases (1.24). This can be done by conditionally setting the variable (see https://go.dev/cl/627715 for an example). Or updated the go.mod to go language setting to >= 1.23.
`golang.org/x/mobile/bind` may need to be updated to support go/types.Alias being produced by the type checker. Given that this calls `go/types.Underlying()`, it likely needs to be updated. Alternatively, this could be audited by an expert on the package to determine that it is not necessary.
See #69772 for additional context.
CC @hyangah | NeedsInvestigation,mobile | low | Critical |
2,721,179,070 | vscode | Actives disappear when focusing between a notebook, interactive window and native REPL | 
| bug,notebook-layout,notebook-variables | low | Minor |
2,721,217,553 | flutter | 3.28 branch info | ### 3.28 beta branch info ###
| Repo | Branch | Commit | Internal CL |
| --- | --- | --- | --- |
| flutter/flutter | [flutter-3.28-candidate.0](https://github.com/flutter/flutter/tree/flutter-3.28-candidate.0) | https://github.com/flutter/flutter/commit/74669e4bf1352a5134ad68398a6bf7fac0a6473b | cl/702911218 |
| flutter/engine | [flutter-3.28-candidate.0](https://github.com/flutter/engine/tree/flutter-3.28-candidate.0) | https://github.com/flutter/engine/commit/9e8fcad4eaf6c266f032ea9ec304739fb316a7dc | N/A |
| dart-lang/sdk | [3.7.0-209.0.dev](https://github.com/dart-lang/sdk/tree/3.7.0-209.0.dev) | https://github.com/dart-lang/sdk/commit/61bfa9bbb91d50754f6b6a2bef5629c9fd074e6f | cl/702410045 |
| google/skia | as needed | https://github.com/google/skia/commit/c9647f13cdedac9871fd93a70e3fa27d8f8972b9 | N/A |
CC @christopherfujino @zanderso | team-release | low | Major |
2,721,227,637 | pytorch | torch.export.export fails to export a model with dynamic shapes for a custom type | ### π Describe the bug
torch.export.export fails to keep a dynamic dimension. The script below uses a custom class. Is this a bug, or did I forget something when I registered the custom class?
```python
from typing import Any, Dict, List, Tuple
import torch
import transformers
class ModelTakingDynamicCacheAsInput(torch.nn.Module):
def forward(self, x, dc):
kc = torch.cat(dc.key_cache, axis=1)
vc = torch.cat(dc.value_cache, axis=1)
y = (kc + vc).sum(axis=2, keepdim=True)
return x + y
###########################
# Let's check the model runs.
x = torch.randn(3, 8, 7, 1)
cache = transformers.cache_utils.DynamicCache(1)
cache.update(torch.ones((3, 8, 5, 6)), (torch.ones((3, 8, 5, 6)) * 2), 0)
model = ModelTakingDynamicCacheAsInput()
expected = model(x, cache)
print(expected.shape)
###########################
# Let's check it works with others shapes.
x = torch.randn(4, 8, 7, 1)
cache = transformers.cache_utils.DynamicCache(1)
cache.update(torch.ones((4, 8, 11, 6)), (torch.ones((4, 8, 11, 6)) * 2), 0)
model = ModelTakingDynamicCacheAsInput()
expected = model(x, cache)
print(expected.shape)
##########################
# Let's export.
try:
torch.export.export(model, (x, cache))
except Exception as e:
print("export failed with", e)
###########################
# Register serialization of DynamicCache
# ++++++++++++++++++++++++++++++++++++++
def flatten_dynamic_cache(
dynamic_cache: transformers.cache_utils.DynamicCache,
) -> Tuple[List[Any], torch.utils._pytree.Context]:
flat = [
(k, getattr(dynamic_cache, k))
for k in ["key_cache", "value_cache"]
if hasattr(dynamic_cache, k)
]
return [f[1] for f in flat], [f[0] for f in flat]
def unflatten_dynamic_cache(
values: List[Any],
context: torch.utils._pytree.Context,
output_type=None,
) -> transformers.cache_utils.DynamicCache:
cache = transformers.cache_utils.DynamicCache()
values = dict(zip(context, values))
for k, v in values.items():
setattr(cache, k, v)
return cache
def flatten_with_keys_dynamic_cache(d: Dict[Any, Any]) -> Tuple[
List[Tuple[torch.utils._pytree.KeyEntry, Any]],
torch.utils._pytree.Context,
]:
values, context = flatten_dynamic_cache(d)
return [(torch.utils._pytree.MappingKey(k), v) for k, v in zip(context, values)], context
torch.utils._pytree.register_pytree_node(
transformers.cache_utils.DynamicCache,
flatten_dynamic_cache,
unflatten_dynamic_cache,
serialized_type_name=f"{transformers.cache_utils.DynamicCache.__module__}.{transformers.cache_utils.DynamicCache.__name__}",
flatten_with_keys_fn=flatten_with_keys_dynamic_cache,
)
torch.fx._pytree.register_pytree_flatten_spec(
transformers.cache_utils.DynamicCache, lambda x, _: [x.key_cache, x.value_cache]
)
########################################
# Let's try to export again.
ep = torch.export.export(model, (x, cache))
print(ep.graph)
########################################
# With dynamic shapes now.
batch = torch.export.Dim("batch", min=1, max=1024)
clength = torch.export.Dim("clength", min=1, max=1024)
try:
ep = torch.export.export(
model,
(x, cache),
dynamic_shapes=({0: batch}, [[{0: batch, 2: clength}], [{0: batch, 2: clength}]]),
)
print(ep.graph)
failed = False
except Exception as e:
print("FAILS:", e)
# - Not all values of batch = L['x'].size()[0] in the specified range batch <= 1024 are valid because batch was inferred to be a constant (4).
failed = True
########################################
# If it failed, let's understand why.
if failed:
class Model(torch.nn.Module):
def forward(self, dc):
kc = dc.key_cache[0]
vc = dc.value_cache[0]
return kc + vc
ep = torch.export.export(
Model(),
(cache,),
dynamic_shapes={"dc": [[{0: batch, 2: clength}], [{0: batch, 2: clength}]]},
)
for node in ep.graph.nodes:
print(f"{node.name} -> {node.meta.get('val', '-')}")
# it prints out ``dc_key_cache_0 -> FakeTensor(..., size=(4, 8, 11, 6))``
# but it should be ``dc_key_cache_0 -> FakeTensor(..., size=(s0, 8, s1, 6))``
```
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241205+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] clip-anytorch==2.6.0
[pip3] CoCa-pytorch==0.1.0
[pip3] dalle2-pytorch==1.15.6
[pip3] ema-pytorch==0.7.0
[pip3] executorch==0.4.0
[pip3] flake8==7.1.1
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxconverter-common==1.14.0
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxruntime-training==1.21.0+cu121
[pip3] onnxscript==0.1.0.dev20240905
[pip3] open_clip_torch==2.26.1
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.8.4
[pip3] torch==2.6.0.dev20241205+cu124
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.5.0
[pip3] torchaudio==2.5.0.dev20241205+cu124
[pip3] torchmetrics==1.4.3
[pip3] torchvision==0.20.0.dev20241205+cu124
[pip3] triton==3.1.0
[pip3] vector-quantize-pytorch==1.18.1
[conda] Could not collect
```
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,721,256,064 | kubernetes | Namespace deletion does not trigger container lifecycle hooks | ### What happened?
I want to ensure that certain commands are run on graceful deletion of a pod, so I have set up a container lifecycle prestop hook to do so. Deleting the pod does trigger the prestop hook as expected; however, when deleting the namespace that the pod is in, I would expect this to also be a graceful shutdown event, which should trigger the prestop hook, but this does not happen. In addition, helm uninstalling our helm chart with the relevant statefulset has the same issue. See the test example below where we have setup RBAC for kubectl access within the container:
```
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: <>
namespace: <>
labels:
component: <>
app: <>
spec:
...
template:
metadata:
labels:
spec:
containers:
- name: testContainer
image: <>
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -c
- "kubectl label cm test-configmap -n test-prestop-hook testlabel=8 --overwrite"
```
### What did you expect to happen?
Prestop hook should be triggered by namespace deletion.
### How can we reproduce it (as minimally and precisely as possible)?
1. Bring up a pod with a container that has a prestop hook configured
2. Exec into the pod and label the ConfigMap to see that it works (no RBAC or image issues)
3. Delete the namespace that the pod is in (expected to trigger the prestop hook)
4. Wait 30 seconds and see that the prestop hook did not run, configmap label did not change to `testlabel: "8"`
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.10-eks-7f9249a
```
</details>
### Cloud provider
<details>
AWS
</details>
### OS version
<details>
NA
</details>
### Install tools
<details>
Using helm to install the helm chart with statefulset:
$ helm version
version.BuildInfo{Version:"v3.12.3", GitCommit:"3a31588ad33fe3b89af5a2a54ee1d25bfe6eaa5e", GitTreeState:"clean", GoVersion:"go1.20.7"}
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,kind/support,sig/node,triage/needs-information,needs-triage | low | Major |
2,721,285,587 | transformers | Training config that worked with transformers v4.4.6.3 results in OOM error with v4.47.0 (using SFTTrainer) | ### System Info
```
- `transformers` version: 4.47.0
- Platform: Linux-6.8.0-1015-aws-x86_64-with-glibc2.35
- Python version: 3.12.6
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Yes
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-40GB
```
### Who can help?
@ArthurZucker @SunMarc @muellerz
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Training with transformers==4.46.3 runs as expected. Upgrading to transformers==4.47.0 (without changing anything else) leads to an OOM error in the very first training step (see stack trace below).
Run command: `accelerate launch --config_file ./accelerate_config.yaml train.py training=path/to/training_config`
### Accelerate Config
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: false
activation_checkpointing: true
machine_rank: 0
main_training_function: main
mixed_precision: 'bf16'
num_machines: 1
num_processes: 8
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Training Config
```
{'accelerator_config': {'dispatch_batches': None,
'even_batches': True,
'gradient_accumulation_kwargs': None,
'non_blocking': False,
'split_batches': False,
'use_seedable_sampler': True},
'adafactor': False,
'adam_beta1': 0.9,
'adam_beta2': 0.999,
'adam_epsilon': 1e-08,
'attn_implementation': 'flash_attention_2',
'auto_find_batch_size': False,
'average_tokens_across_devices': False,
'batch_eval_metrics': False,
'bf16': 'auto',
'bf16_full_eval': False,
'chars_per_token': '<CHARS_PER_TOKEN>',
'data_seed': None,
'dataloader_drop_last': False,
'dataloader_num_workers': 0,
'dataloader_persistent_workers': False,
'dataloader_pin_memory': True,
'dataloader_prefetch_factor': None,
'dataset_batch_size': 1000,
'dataset_kwargs': {'skip_prepare_dataset': False},
'ddp_backend': None,
'ddp_broadcast_buffers': None,
'ddp_bucket_cap_mb': None,
'ddp_find_unused_parameters': None,
'ddp_timeout': 1800,
'debug': [],
'deepspeed': None,
'delete_ckpts': False,
'disable_tqdm': False,
'dispatch_batches': None,
'do_eval': True,
'do_predict': False,
'do_train': False,
'early_stopping_patience': 10,
'eval_accumulation_steps': None,
'eval_delay': 0,
'eval_do_concat_batches': True,
'eval_exampleset_info_path': '',
'eval_exampleset_path': '',
'eval_on_start': True,
'eval_packing': False,
'eval_steps': 10,
'eval_strategy': 'steps',
'eval_use_gather_object': False,
'evaluation_strategy': None,
'exampleset_info_path': '',
'exampleset_path': '',
'force_tokenize_data': False,
'fp16': False,
'fp16_backend': 'auto',
'fp16_full_eval': False,
'fp16_opt_level': 'O1',
'fsdp': [],
'fsdp_config': {'min_num_params': 0,
'xla': False,
'xla_fsdp_grad_ckpt': False,
'xla_fsdp_v2': False},
'fsdp_min_num_params': 0,
'fsdp_transformer_layer_cls_to_wrap': None,
'full_determinism': False,
'gradient_accumulation_steps': 4,
'gradient_checkpointing': False,
'gradient_checkpointing_kwargs': {'use_reentrant': False},
'greater_is_better': False,
'group_by_length': False,
'half_precision_backend': 'auto',
'hub_always_push': False,
'hub_model_id': None,
'hub_private_repo': None,
'hub_strategy': 'every_save',
'hub_token': '<HUB_TOKEN>',
'ignore_data_skip': False,
'include_for_metrics': [],
'include_inputs_for_metrics': False,
'include_num_input_tokens_seen': False,
'include_tokens_per_second': False,
'jit_mode_eval': False,
'label_names': ['labels'],
'label_smoothing_factor': 0.0,
'learning_rate': 0.0002,
'length_column_name': 'length',
'load_best_model_at_end': True,
'local_rank': 0,
'log_level': 'passive',
'log_level_replica': 'warning',
'log_on_each_node': True,
'logging_first_step': False,
'logging_nan_inf_filter': True,
'logging_steps': 1,
'logging_strategy': 'steps',
'lora_alpha': 32,
'lora_dropout': 0.05,
'lora_r': 16,
'lora_target_modules': ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'up_proj', 'down_proj', 'gate_proj'],
'lr_scheduler_kwargs': {},
'lr_scheduler_type': 'cosine',
'mask_instructions': True,
'max_grad_norm': 1.0,
'max_seq_length': 1024,
'max_steps': 100,
'meta_data': {},
'metric_for_best_model': 'loss',
'model_name_or_path': 'Qwen/Qwen2.5-7B-Instruct',
'mp_parameters': '',
'neftune_noise_alpha': None,
'no_cuda': False,
'num_of_sequences': 1024,
'num_train_epochs': 3,
'optim': 'adamw_torch',
'optim_args': None,
'optim_target_modules': None,
'overwrite_output_dir': False,
'packing': False,
'past_index': -1,
'per_device_eval_batch_size': 1,
'per_device_train_batch_size': 1,
'per_gpu_eval_batch_size': None,
'per_gpu_train_batch_size': None,
'prediction_loss_only': False,
'push_to_hub': False,
'push_to_hub_model_id': None,
'push_to_hub_organization': None,
'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>',
'ray_scope': 'last',
'remove_unused_columns': True,
'restore_callback_states_from_checkpoint': False,
'resume_from_checkpoint': None,
'save_on_each_node': False,
'save_only_model': False,
'save_safetensors': True,
'save_steps': 20,
'save_strategy': 'steps',
'save_total_limit': None,
'seed': 42,
'skip_memory_metrics': True,
'smoke_test': False,
'split_batches': None,
'tf32': None,
'torch_compile': False,
'torch_compile_backend': None,
'torch_compile_mode': None,
'torch_dtype': 'bfloat16',
'torch_empty_cache_steps': None,
'torchdynamo': None,
'tpu_metrics_debug': False,
'tpu_num_cores': None,
'use_cpu': False,
'use_ipex': False,
'use_legacy_prediction_loop': False,
'use_liger_kernel': False,
'use_mps_device': False,
'use_peft': False,
'val_set_size': 0.0,
'warmup_ratio': 0.1,
'warmup_steps': 0,
'weight_decay': 0.0}
```
### Training script
```
def main(cfg):
accelerator = Accelerator()
model_kwargs = dict(
attn_implementation=sft_config.attn_implementation,
torch_dtype=sft_config.torch_dtype,
use_cache=False,
)
model = AutoModelForCausalLM.from_pretrained(sft_config.model_name_or_path, **model_kwargs)
tokenizer = AutoTokenizer.from_pretrained(sft_config.model_name_or_path, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token
trainer = SFTTrainer(
model=model,
tokenizer=tokenizer,
args=sft_config,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
peft_config=None,
dataset_kwargs=sft_config.dataset_kwargs,
)
trainer.train()
trainer.save_model()
if __name__ == "__main__":
main()
```
### Stack trace
```
Traceback (most recent call last):
File "/home/ubuntu/***/train.py", line 233, in main
trainer.train()
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2164, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2522, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3653, in training_step
loss = self.compute_loss(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3709, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 864, in forward
output = self._fsdp_wrapped_module(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 823, in forward
return model_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 811, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 1184, in forward
loss = self.loss_function(logits, labels, self.vocab_size, **loss_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/***/.venv/lib/python3.12/site-packages/transformers/loss/loss_utils.py", line 36, in ForCausalLMLoss
logits = logits.float()
^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.97 GiB. GPU 5 has a total capacity of 39.38 GiB of which 1.53 GiB is free. Including non-PyTorch memory, this process has 37.84 GiB memory in use. Of the allocated memory 35.69 GiB is allocated by PyTorch, and 521.06 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
### Expected behavior
Training should complete without errors. | bug | low | Critical |
2,721,291,609 | ant-design | Dropdown "judders" if it can't figure out where to determine placement | ### Reproduction link
https://codesandbox.io/p/sandbox/8334zm
### Steps to reproduce
1. Open reproduction in new window (https://8334zm.csb.app/).
2. Open dropdown in Actions column.
### What is expected?
The action menu to correctly determine where to open based on available space.
### What is actually happening?
It endlessly "judders", causing a horrible user experience.
### Additional information
This reproduction uses a specific column width to easily replicate the issue, but I have seen this issue in numerous other locations outside of a table.
| Environment | Info |
| --- | --- |
| antd | 5.22.3 |
| React | Latest |
| System | Windows |
| Browser | All |
---
This seems to be caused by an rc-trigger change.
https://github.com/react-component/trigger/issues/496
https://github.com/react-component/trigger/pull/419
ref: UIEN-6867
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Major |
2,721,296,832 | TypeScript | No paste edits returned when copying between commonjs files | ### 🔎 Search Terms
5.8.0-dev.20241204
### 🕗 Version & Regression Information
- This changed between versions ______ and _______
- This changed in commit or PR _______
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about _________
- I was unable to test this on prior versions because _______
### ⏯ Playground Link
_No response_
### 💻 Code
From https://github.com/microsoft/vscode/issues/235105#issuecomment-2517293705
1. Create a file `main.js`:
```ts
// main.js
const fs = require('fs');
const dirs = [
'',
'build',
];
if (fs.existsSync(`${__dirname}/../../.build/distro/npm`)) {
dirs.push('.build/distro/npm/remote/web');
}
```
2. Copy the line ` dirs.push('.build/distro/npm/remote/web');`
3. Paste into another file called `other.js`
Full logs: [tsserver.log](https://github.com/user-attachments/files/18029033/tsserver.log)
### 🙁 Actual behavior
No paste edits returned
Specifically `preparePasteEdits` returns `no content available`:
```
Info 209 [12:07:33.903] request:
{
"seq": 10,
"type": "request",
"command": "preparePasteEdits",
"arguments": {
"file": "/Users/matb/projects/sandbox/main.js",
"copiedTextSpan": [
{
"start": {
"line": 8,
"offset": 1
},
"end": {
"line": 9,
"offset": 1
}
}
]
}
}
Perf 210 [12:07:33.904] 10::preparePasteEdits: elapsed time (in milliseconds) 0.4798
Info 211 [12:07:33.904] response:
{"seq":0,"type":"response","command":"preparePasteEdits","request_seq":10,"success":false,"message":"No content available."}
```
### 🙂 Expected behavior
We should generate an edit in this case that exports `dirs`. This is the behavior I see in other cases.
### Additional information about the issue
_No response_ | Needs Proposal | low | Minor |
2,721,317,504 | pytorch | Refactor generate_fallback_kernel_with_runtime_lookup_jit | generate_fallback_kernel_with_runtime_lookup_jit and some related functions do not use IndentedBuffer to handle output code indentation nicely. It would be great to fix that and make it consistent with other parts of WrapperCodegen.
cc @chauhang @penguinwu | triaged,oncall: pt2 | low | Minor |
2,721,335,341 | pytorch | Fix segfaulting profiler tests in 3.13 | https://github.com/pytorch/pytorch/pull/141674 and https://github.com/pytorch/pytorch/pull/141951 disabled some profiler tests in 3.13/dynamo-wrapped due to segfaults.
cc @seemethere @malfet @pytorch/pytorch-dev-infra @mruberry @ZainRizvi @robieta @chaekit @guotuofeng @guyang3532 @dzhulgakov @davidberard98 @briancoutinho @sraikund16 @sanrise @albanD | module: tests,oncall: profiler,module: python frontend | low | Minor |
2,721,342,570 | vscode | Debugging VS Code prints `console.log` statements late | Steps to Reproduce:
1. have an early `console.log` statement in the window on startup
2. debug vscode with F5
🐛 the log message appears quite late
| bug,debug,papercut :drop_of_blood: | low | Critical |
2,721,361,739 | vscode | GPU rendering: workbench.colorCustomizations for brackets only applies after window reload | Testing #234577, I only see the bracket color change from `workbench.colorCustomizations` applied after I reload the window. Non-GPU lines show up correctly without the reload. | bug,editor-gpu | low | Minor
2,721,367,886 | ollama | Low GPU usage on second GPU | ### What is the issue?
I am on the 0.5.0 release (which links to 0.4.8-rc0) and using Qwen 2.5 32b Q5 with 32k context and flash attention with q8_0 KV cache.
I have a 3090 and 2080ti.
Ollama is putting 22GB on the 3090 and 5.3GB on the 2080ti.
When running a prompt the 3090 is at 80%-90% GPU usage while the 2080ti is only at 10%.
When using llama.cpp directly with split row, the VRAM on the 2080ti is mostly maxed and the GPU usage on both GPU is in the 50%-65% range.
----
My question: Why is the 3090 doing most of the work on Ollama?
### OS
Linux
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.4.8-rc0 | bug | medium | Critical |
2,721,373,610 | godot | Can't return typed arrays from C# functions to GDScript | ### Tested versions
Reproducible in 4.4dev5
### System information
Windows 10, Godot v4.4dev5
### Issue description
If I declare a function in C# like this:
```csharp
public Godot.Collections.Array<Vector2> GetCoolPlaces() {
return new Godot.Collections.Array<Vector2>(... some values);
}
```
I would expect to be able to put the result into a typed array variable in GDScript:
```gdscript
var cool_places: Array[Vector2] = CSharpObject.GetCoolPlaces()
```
But the above code will result in this runtime error:
`Trying to assign an array of type "Array" to a variable of type "Array[Vector2i]".`
It is possible to do by creating a new typed array from the result of the function like this:
```gdscript
var cool_places: Array[Vector2] = Array(CSharpObject.GetCoolPlaces(), TYPE_VECTOR2, "", null)
```
But this is really clunky and annoying to do, especially when it's a function you use often.
Note: Packed arrays would not be a solution here, as this is also a problem for custom classes (e.g. Resources).
### Steps to reproduce
Create a function in C# that returns an array and try to put the result of that function into a typed array in GDScript. See MRP for example
### Minimal reproduction project (MRP)
Reproduce by running this project:
[CSharp Typed Arrays.zip](https://github.com/user-attachments/files/18029521/CSharp.Typed.Arrays.zip)
| bug,topic:dotnet | low | Critical |
2,721,375,893 | vscode | 1px gap to side of minimap lets code show through |
Type: <b>Bug</b>
With word wrap off, there's about a 1px gap to the right side of the minimap that shows overflowing text (see the couple of white bars on the right side of the minimap below)

VS Code version: Code - Insiders 1.96.0-insider (Universal) (6567c244f90af8e31d9250e8f9a757b58565649a, 2024-12-05T05:04:06.424Z)
OS version: Darwin arm64 24.1.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 Max (12 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|2, 3, 4|
|Memory (System)|64.00GB (1.43GB free)|
|Process Argv|--crash-reporter-id 0fffb5da-9cd7-46fd-9e7f-a1564e8c5fda|
|Screen Reader|no|
|VM|0%|
</details>
<!-- generated by issue reporter --> | bug,editor-minimap | low | Critical |
2,721,383,611 | vscode | Rules show over line numbers with gpu rendering |
Type: <b>Bug</b>
1. Enable gpu editor rendering
2. Disable word wrap
3. In a file with long lines, scroll horizontally until the ruler is on top of the line numbers

VS Code version: Code - Insiders 1.96.0-insider (Universal) (6567c244f90af8e31d9250e8f9a757b58565649a, 2024-12-05T05:04:06.424Z)
OS version: Darwin arm64 24.1.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 Max (12 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|3, 3, 3|
|Memory (System)|64.00GB (0.97GB free)|
|Process Argv|--crash-reporter-id 0fffb5da-9cd7-46fd-9e7f-a1564e8c5fda|
|Screen Reader|no|
|VM|0%|
</details>
<!-- generated by issue reporter --> | bug,editor-gpu | low | Critical |
2,721,391,448 | angular | ComponentFixture.detectChanges causes NG0100 in zoneless mode | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
In zoneless mode, calling `ComponentFixture.detectChanges()` throws an NG0100 error.
Let's take the following component as an example:
```ts
@Component({
selector: 'app-root',
template: '{{ value }}',
})
export class AppComponent {
value = 0;
}
```
I am well aware that in real life, since we use zoneless change detection, changing the value of the component should not trigger a change detection, because no signal is being modified.
Let's run the following (bad) test for it:
```ts
describe('AppComponent', () => {
beforeEach(async () => {
await TestBed.configureTestingModule({
imports: [AppComponent],
providers: [provideExperimentalZonelessChangeDetection()],
}).compileComponents();
});
it('should detect changes', () => {
const fixture = TestBed.createComponent(AppComponent);
fixture.detectChanges();
expect(fixture.nativeElement.textContent).toBe('0');
fixture.componentInstance.value = 1;
fixture.detectChanges();
expect(fixture.nativeElement.textContent).toBe('1');
});
});
```
When executed, I expect this test to pass, because we imperatively launch a change detection instead of letting Angular autodetect the changes.
But the test neither passes nor properly fails. Instead, it throws an NG0100 error.
The better test, letting Angular auto-detect the changes, fails correctly:
```ts
import { ComponentFixtureAutoDetect, TestBed } from '@angular/core/testing';
import { AppComponent } from './app.component';
import { provideExperimentalZonelessChangeDetection } from '@angular/core';
describe('AppComponent', () => {
beforeEach(async () => {
await TestBed.configureTestingModule({
imports: [AppComponent],
providers: [
provideExperimentalZonelessChangeDetection(),
{ provide: ComponentFixtureAutoDetect, useValue: true },
],
}).compileComponents();
});
it('should detect changes', async () => {
const fixture = TestBed.createComponent(AppComponent);
await fixture.whenStable();
expect(fixture.nativeElement.textContent).toBe('0');
fixture.componentInstance.value = 1;
await fixture.whenStable();
expect(fixture.nativeElement.textContent).toBe('1');
});
});
```
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-ah4pxu?file=demo%2Fsrc%2Fapp%2Fapp.component.spec.ts,demo%2Fsrc%2Fapp%2Fapp.component.better.spec.ts
### Please provide the exception or error you saw
```true
Error: NG0100: ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked. Previous value: '0'. Current value: '1'. Expression location: AppComponent component. Find more at https://angular.dev/errors/NG0100
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 19.0.4
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 19.0.3
... animations, common, compiler, compiler-cli, core, forms
... platform-browser, platform-browser-dynamic, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1900.4
@angular-devkit/build-angular 19.0.4
@angular-devkit/core 19.0.4
@angular-devkit/schematics 19.0.4
@angular/cli 19.0.4
@schematics/angular 19.0.4
rxjs 7.8.1
typescript 5.6.3
zone.js 0.15.0
```
### Anything else?
To run the tests in the stackblitz:
- `cd demo`
- `npm i`
- `ng test`
| area: testing,area: core,core: change detection,core: zoneless | medium | Critical |
2,721,424,909 | pytorch | [3.13] Fix failing torch.package dynamo-wrapped test | See https://github.com/pytorch/pytorch/pull/141886.
To repro: remove the skip and `PYTORCH_TEST_WITH_DYNAMO=1 pytest test/package/test_package_script.py -k test_save_shared_tensors`
cc @albanD | oncall: package/deploy,module: python frontend | low | Minor |
2,721,439,469 | pytorch | [3.13] Fix failing module tracker dynamo-wrapped test due to recursion error | See https://github.com/pytorch/pytorch/pull/141887.
Repro: remove skip and `PYTORCH_TEST_WITH_DYNAMO=1 pytest test/test_module_tracker.py -k test_confused_hierarchy`
Sample CI failure: https://hud.pytorch.org/pytorch/pytorch/pull/141264?sha=a486ca1f8486e1f092913e551e2e04ec0cb09956
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,721,460,358 | go | runtime: some possible SwissTable map benchmark improvements | The current runtime map benchmarks (imported from the CockroachDB SwissTable repo and used for recent runtime map work) have clearly been useful, especially for comparing the optimizations that have happened so far, but it might be worth tweaking those benchmarks.
For example, for the current benchmarks:
1. Most might be overly friendly to the branch predictor with repetitive key patterns, especially at lower elem counts.
2. Most of the elem sizes are powers of two, which translates to a relatively low load factor and might not make the tables "sweat" as much as they would in practice.
3. Most of the benchmarks use a mod operation in the benchmark code, which might be around 1/3 of the CPU time for the benchmarks that fit in cache.
* This is fine if someone has this in mind when comparing results, but might be cleaner without.
4. Could probably benefit from some cold map cases (e.g., so many small or medium maps that they become cold) to better assess cache miss impact.
* The original SwissTable folks and Folly F14 folks had collaborated on C++ benchmarks to help them compare approaches, with a sample of one of their cold map C++ benchmarks [here](https://github.com/google/hashtable-benchmarks/blob/e19019081ab6776de17f6ddff78c3cf628dc5c72/hashtable_benchmarks.cc#L379-L409). (A couple of years ago, I had a WIP set of cold map Go benchmarks [here](https://github.com/thepudds/swisstable/blob/9c77dc657777eb8554d0a3d75bbb731f3364fa97/map_test.go#L1031), partially based on that).
Part of my immediate interest is the current benchmarks might not capture some of the benefits / costs of some adjustments to the current implementation, like:
* Trying 16-element group sizes (instead of the current 8), or
* Alternative memory layouts (like putting all the control bytes for a single table together, rather than the current interspersing of control bytes with the slots)
In particular, I have a draft change to the runtime SwissTable for 16-element group sizes (currently SWAR-only, not SIMD), but I suspect some of the benefit would be masked by the current benchmarks (maybe -- I don't know without tweaking the benchmarks).
Separately, the benchmarks as they are might not be showing the advantages of the new implementation as clearly as they could compared to the old runtime map (including because "easier" benchmarks are probably making the prior map implementation look artificially better, vs. "easy" benchmarks not helping the SwissTable implementation as much).
----------
All that said, maybe some of those concerns don't matter in practice or maybe I'm mistaken.
Filing this issue to help with any discussion (including perhaps any feedback of "not worth it").
I plan on sending some benchmark CLs to attempt to improve or mitigate above issues.
CC @prattmic | NeedsInvestigation,compiler/runtime | low | Major |
2,721,469,424 | flutter | Fix for custom shaders inside an Opacity widget might be too aggressive in some situations | https://github.com/flutter/engine/pull/56936 fixed a bug where use of custom shaders might ignore the opacity modulation of an Opacity widget.
Theoretically it might also disable an optimization for group opacity in some rare circumstances in parts of scenes in close proximity to code that uses a custom shader. No functional bug will result but we will lose an opportunity to maintain the performance advantage of group opacity optimizations in these uncommon situations.
The problem is that we mark a DisplayList layer as incompatible with group opacity when it has any rendering operations that were drawn while a RuntimeEffect color source or image filter was in play even if the indicated rendering operation does not use those rendering attributes. The test for RuntimeEffects in the DisplayListBuilder needs to check the attribute usage of each rendering operation as it makes the determination of whether or not it will break the group opacity peephole optimization rules. | engine,P3,team-engine,triaged-engine | low | Critical |
2,721,477,749 | TypeScript | Mixin constructor not working with `args:unknown[]`, only with `args:any[]` | ### 🔎 Search Terms
[site:github.com/microsoft/typescript mixin class "unknown" args any args](https://www.google.com/search?num=10&sca_esv=34e05643fe6f2863&q=site:github.com/microsoft/typescript+mixin+class+%22unknown%22+args+any+args&sa=X&ved=2ahUKEwiRqLuRxpGKAxW_HzQIHV_0K0kQ5t4CegQIKxAB&biw=1911&bih=1094&dpr=1.75)
### 🕗 Version & Regression Information
- **This changed most likely after `unknown` was introduced and the bespoke/unique mixin type-checking logic wasn't updated**.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/C4TwDgpgBAwg9gOwM7AE4FcDGw6qgXigQgHcoAKAOmoENUBzJALinQQGsE4SEBtAXQCUBAHxQ4AIwBWEbACg5AMzbYAloigAxOHAA8AFSgQAHsAgIAJkliIUGbLhHkAQjSQQW+4QG85Uf1CoEMDoqAhQmAA2btbacEam5lZQru5QvgGZEbZoWDioVLQMzKwcXDwCPn5ZNUjokAXUlHSMgtU1AL4KNf6KOgRQAIwATADM3T19cACywQAWcBbkPoHBoeHAc6pIlFNQXZldXUA
### 💻 Code
```ts
type Constructor = new (...args: unknown[]) => object
function Foo<T extends Constructor>(Base: T) {
return class Foo extends Base { // <---------------- ERROR
constructor(...args: unknown[]) {
super(...args)
}
foo = 123
fooMethod() { return this.foo }
}
}
```
Change the `unknown` to `any` and the error goes away. You'd think they should behave the same: you don't care what the args are, you just need to pass them along.
Here's how to fix the error, changing `unknown`s to `any`s:
[playground](https://www.typescriptlang.org/play/?#code/C4TwDgpgBAwg9gOwM7AE4FcDGw6qgXigQgHcoAKAOmoENUBzJALihoRAG0BdASgID4ocAEYArCNgBQkgGboE2AJaIoAMThwAPABUoEAB7AICACZJYiFBmy5+5AEI0kEFtr4BvSVG9RUEYOioCFCYADZO5upweobGZlCOzlCePqkhlmhYOKhUtAzMrOzcHl5pZUjokDnUlHSMPKVlAL7SZd4yGgRQAIwATADMrW0dcACy-gAWcCbkHr7+gcHAE4pIlCNQLaktLUA)
Code:
```ts
type Constructor = new (...args: any[]) => object
function Foo<T extends Constructor>(Base: T) {
return class Foo extends Base { // <---------------- ERROR
constructor(...args: any[]) {
super(...args)
}
foo = 123
fooMethod() { return this.foo }
}
}
```
### 🙁 Actual behavior
Type error on `Foo`.
### 🙂 Expected behavior
I expected it to allow "any unknown args" to be passed along.
### Additional information about the issue
Plus, with an `unknown` type, the code in the mixin `constructor` can actually be more type safe, requiring type narrowing or type casting to use the args at all, whereas `any` allows anything to be done to the args which is not safe.
Maybe the mixin type checking algo could cast the base class constructor to `new (...args: any[])` and simply allow anything to be passed in, to make `unknown` work. Then the mixin type checking algo could still enforce `any[]` as well as (more preferably) `unknown[]`.
Also, maybe it would make sense to have an implicit `Constructor` type built into the mixin type checking algo so that mixins can just be really easy for plain JS users who are migrating to TypeScript.
I've met multiple people who are baffled by how difficult it is to write mixins in TypeScript when they come from plain JavaScript. | Suggestion,Awaiting More Feedback | low | Critical |
2,721,483,815 | TypeScript | Definitions have `.any()` instance and static methods on `AbortSignal` even though there is only a static method | ### 🔎 Search Terms
AbortSignal
### 🕗 Version & Regression Information
- This changed in commit or PR #58211
### ⏯ Playground Link
https://www.typescriptlang.org/play/?noImplicitReturns=false&noUncheckedIndexedAccess=true&allowUnreachableCode=true&allowUnusedLabels=true&noUnusedLocals=true&noUnusedParameters=true&declaration=false&target=99&jsx=0&module=0&strictBuiltinIteratorReturn=true&useUnknownInCatchVariables=true&exactOptionalPropertyTypes=true&noFallthroughCasesInSwitch=true&noImplicitOverride=true&noPropertyAccessFromIndexSignature=true&ts=5.7.2&filetype=ts#code/PTAEBcAsEsGdQLYFMoHsAmp2qfAdquKEgB5zgBQeSA7qAIIBGqATuAMrQDmeAhgDYAKAJQA6XngCeggNoBdYUA
### 💻 Code
```ts
// this method does not exist
new AbortSignal().any([])
```
### 🙁 Actual behavior
TypeScript doesn't give `Property 'any' does not exist on type 'AbortSignal'.(2339)`.
### 🙂 Expected behavior
TypeScript gives `Property 'any' does not exist on type 'AbortSignal'.(2339)`.
### Additional information about the issue
_No response_ | Bug,Help Wanted,Domain: lib.d.ts | low | Minor |
2,721,491,014 | pytorch | [3.13] Fix skipped numpy 2.0 dynamo-wrapped tests | Added skips in https://github.com/pytorch/pytorch/pull/141862 and https://github.com/pytorch/pytorch/pull/141950.
Logs for former failure: https://ossci-raw-job-status.s3.amazonaws.com/log/33781064111
Sample:
```
2024-12-02T11:02:07.7846581Z PRINTING LOG FILE of torch_np/numpy_tests/lib/test_function_base 1/1 (test/test-reports/torch_np.numpy_tests.lib.test_function_base_1.1_7c04d20bed56d027_.log)
2024-12-02T11:02:07.7847867Z Traceback (most recent call last):
2024-12-02T11:02:07.7848824Z File "/var/lib/jenkins/workspace/test/torch_np/numpy_tests/lib/test_function_base.py", line 41, in <module>
2024-12-02T11:02:07.7850113Z from numpy.lib import delete, extract, insert, msort, place, setxor1d, unwrap, vectorize
2024-12-02T11:02:07.7851588Z ImportError: cannot import name 'delete' from 'numpy.lib' (/opt/conda/envs/py_3.13/lib/python3.13/site-packages/numpy/lib/__init__.py)
2024-12-02T11:02:07.7852778Z Got exit code 1
2024-12-02T11:02:07.7853877Z No stepcurrent file found. Either pytest didn't get to run (e.g. import error) or file got deleted (contact dev infra)
2024-12-02T11:02:07.7854775Z
2024-12-02T11:02:07.7855958Z FINISHED PRINTING LOG FILE of torch_np/numpy_tests/lib/test_function_base 1/1 (test/test-reports/torch_np.numpy_tests.lib.test_function_base_1.1_7c04d20bed56d027_.log)
2024-12-02T11:02:07.7857361Z
2024-12-02T11:02:07.7857674Z torch_np/numpy_tests/lib/test_function_base 1/1 failed!
```
Logs for latter failure: https://hud.pytorch.org/pytorch/pytorch/pull/141264?sha=0b5977838f532164868d660cf11dbd762909d062
cc @mruberry @ZainRizvi @rgommers @albanD | module: tests,triaged,module: numpy,module: python frontend | low | Critical |
2,721,499,538 | go | runtime: TestSUID hangs with "Password" prompt on FreeBSD | @siebenmann [reports](https://mastodon.social/@cks/113601252814952553) that `all.bash` hangs with a "Password:" prompt on their FreeBSD machine due to `runtime.TestSUID`'s [`su` call](https://cs.opensource.google/go/go/+/master:src/runtime/security_test.go;l=33;drc=8b5ed3cdaeba621687c71c19174bb4db0f5713f0).
Our FreeBSD builders allow `su` with no password (I think because our user is in the `wheel` group?). Presumably real FreeBSD systems don't put the main user in `wheel`. Or maybe `su` is asking for the password of the primary user? I'm not sure how this configuration works.
On my Linux workstation, the test skips as `su` seems to automatically exit with error (detects a non-interactive stdin perhaps?). Perhaps FreeBSD's `su` doesn't do this?
cc @golang/freebsd @rolandshoemaker | OS-FreeBSD,NeedsFix,compiler/runtime | low | Critical |
2,721,504,451 | langchain | OpenAI with OpenRouter throws unexpected 'proxies' keyword argument error | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain.chat_models import ChatOpenAI
from langchain_core.messages import HumanMessage
llm = ChatOpenAI(
openai_api_key=os.getenv("OPENROUTER_API_KEY"),
openai_api_base="https://openrouter.ai/api/v1",
model_name="openai/gpt-4o-mini",
)
# example of using the chat model
messages = [
HumanMessage(
content="what model are you?"
)
]
llm.invoke(messages)
```
### Error Message and Stack Trace (if applicable)
{
"name": "TypeError",
"message": "__init__() got an unexpected keyword argument 'proxies'",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[14], line 7
3 from langchain_core.messages import HumanMessage
5 os.environ[\"OPENROUTER_API_KEY\"] = \"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\"
----> 7 llm = ChatOpenAI(
8 openai_api_key=os.getenv(\"OPENROUTER_API_KEY\"),
9 openai_api_base=\"https://openrouter.ai/api/v1\",
10 model_name=\"openai/gpt-4o-mini\",
11 )
13 # example of using the chat model
14 messages = [
15 HumanMessage(
16 content=\"what model are you?\"
17 )
18 ]
File c:\\Users\\hamdi\\anaconda3\\lib\\site-packages\\langchain_core\\_api\\deprecation.py:216, in deprecated.<locals>.deprecate.<locals>.finalize.<locals>.warn_if_direct_instance(self, *args, **kwargs)
214 warned = True
215 emit_warning()
--> 216 return wrapped(self, *args, **kwargs)
File c:\\Users\\hamdi\\anaconda3\\lib\\site-packages\\langchain_core\\load\\serializable.py:125, in Serializable.__init__(self, *args, **kwargs)
123 def __init__(self, *args: Any, **kwargs: Any) -> None:
124 \"\"\"\"\"\"
--> 125 super().__init__(*args, **kwargs)
[... skipping hidden 1 frame]
File c:\\Users\\hamdi\\anaconda3\\lib\\site-packages\\pydantic\\_internal\\_decorators_v1.py:148, in make_v1_generic_root_validator.<locals>._wrapper1(values, _)
147 def _wrapper1(values: RootValidatorValues, _: core_schema.ValidationInfo) -> RootValidatorValues:
--> 148 return validator(values)
File c:\\Users\\hamdi\\anaconda3\\lib\\site-packages\\langchain_core\\utils\\pydantic.py:219, in pre_init.<locals>.wrapper(cls, values)
216 values[name] = field_info.default
218 # Call the decorated function
--> 219 return func(cls, values)
File c:\\Users\\hamdi\\anaconda3\\lib\\site-packages\\langchain_community\\chat_models\\openai.py:355, in ChatOpenAI.validate_environment(cls, values)
343 client_params = {
344 \"api_key\": values[\"openai_api_key\"],
345 \"organization\": values[\"openai_organization\"],
(...)
351 \"http_client\": values[\"http_client\"],
352 }
354 if not values.get(\"client\"):
--> 355 values[\"client\"] = openai.OpenAI(**client_params).chat.completions
356 if not values.get(\"async_client\"):
357 values[\"async_client\"] = openai.AsyncOpenAI(
358 **client_params
359 ).chat.completions
File c:\\Users\\hamdi\\anaconda3\\lib\\site-packages\\openai\\_client.py:123, in OpenAI.__init__(self, api_key, organization, project, base_url, timeout, max_retries, default_headers, default_query, http_client, _strict_response_validation)
120 if base_url is None:
121 base_url = f\"https://api.openai.com/v1\"
--> 123 super().__init__(
124 version=__version__,
125 base_url=base_url,
126 max_retries=max_retries,
127 timeout=timeout,
128 http_client=http_client,
129 custom_headers=default_headers,
130 custom_query=default_query,
131 _strict_response_validation=_strict_response_validation,
132 )
134 self._default_stream_cls = Stream
136 self.completions = resources.Completions(self)
File c:\\Users\\hamdi\\anaconda3\\lib\\site-packages\\openai\\_base_client.py:857, in SyncAPIClient.__init__(self, version, base_url, max_retries, timeout, transport, proxies, limits, http_client, custom_headers, custom_query, _strict_response_validation)
840 raise TypeError(
841 f\"Invalid `http_client` argument; Expected an instance of `httpx.Client` but got {type(http_client)}\"
842 )
844 super().__init__(
845 version=version,
846 limits=limits,
(...)
855 _strict_response_validation=_strict_response_validation,
856 )
--> 857 self._client = http_client or SyncHttpxClientWrapper(
858 base_url=base_url,
859 # cast to a valid type because mypy doesn't understand our type narrowing
860 timeout=cast(Timeout, timeout),
861 proxies=proxies,
862 transport=transport,
863 limits=limits,
864 follow_redirects=True,
865 )
File c:\\Users\\hamdi\\anaconda3\\lib\\site-packages\\openai\\_base_client.py:755, in _DefaultHttpxClient.__init__(self, **kwargs)
753 kwargs.setdefault(\"limits\", DEFAULT_CONNECTION_LIMITS)
754 kwargs.setdefault(\"follow_redirects\", True)
--> 755 super().__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'proxies'"
}
### Description
## Description
When using OpenAI via OpenRouter, code that was working a few hours ago is now throwing an unexpected keyword argument error for 'proxies'.
## Error Message
TypeError: __init__() got an unexpected keyword argument 'proxies'
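A plausible workaround sketch (assumption: the failure stems from `httpx` 0.28 removing the deprecated `proxies` argument that older `openai` clients still pass internally — consistent with the `openai 1.55.0` / `httpx 0.28.0` versions listed below; verify against your environment before applying):

```shell
# Option A: pin httpx below 0.28, where the `proxies` argument was still accepted
pip install "httpx<0.28"

# Option B: upgrade the OpenAI SDK to a release that no longer passes `proxies`
pip install -U openai
```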
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.27744
> Python Version: 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:50:36) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.9
> langchain_community: 0.3.9
> langsmith: 0.1.147
> langchain_anthropic: 0.1.4
> langchain_huggingface: 0.1.2
> langchain_openai: 0.0.2.post1
> langchain_text_splitters: 0.3.2
> langgraph_sdk: 0.1.43
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.6
> anthropic: 0.20.0
> async-timeout: 4.0.2
> dataclasses-json: 0.6.4
> defusedxml: 0.8.0rc2
> httpx: 0.28.0
> httpx-sse: 0.4.0
> huggingface-hub: 0.26.2
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.55.0
> orjson: 3.10.12
> packaging: 23.2
> pydantic: 2.10.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.1
> requests: 2.27.1
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.3.1
> SQLAlchemy: 2.0.28
> tenacity: 8.2.3
> tiktoken: 0.5.2
> tokenizers: 0.20.3
> transformers: 4.46.3
> typing-extensions: 4.12.2
| π€:bug,investigate | low | Critical |
2,721,505,925 | ui | [bug]: inconsistency in distinguishing files and folders in file tree | ### Describe the bug
Hello and thanks for this great project!
Related to [A sidebar with a collapsible file tree.](https://ui.shadcn.com/blocks#sidebar-11)
I believe there is an issue with the file tree. Although the component renders correctly, it seems that it is doing so only because the sample data is incorrect. There seems to be a pattern where a string followed by a nested array represents a folder containing the array of items. So for example `["hello", ["route.ts"]]` represents a folder named `hello` which contains a file `route.ts`. This would also imply that `"components", ["ui", "button.tsx", "card.tsx"]` would represent a folder named `components` which contains three items `"ui", "button.tsx", "card.tsx"`. If the data were to properly represent `ui` as a folder containing `"button.tsx", "card.tsx"` it would be structured like this: `"components", ["ui", ["button.tsx", "card.tsx"]]`.
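The convention described above — a folder is a `[name, children]` pair, a file is a bare string — can be made explicit with a small type guard (a sketch; `TreeItem`, `isFolder`, and `countFiles` are illustrative names, not part of the component):

```typescript
// A tree item is either a file name, or a [folderName, children] pair.
type TreeItem = string | [string, TreeItem[]];

// Type guard: folders are exactly the [name, children[]] arrays.
function isFolder(item: TreeItem): item is [string, TreeItem[]] {
  return Array.isArray(item);
}

// Count leaf files, recursing into folders.
function countFiles(items: TreeItem[]): number {
  let n = 0;
  for (const item of items) {
    n += isFolder(item) ? countFiles(item[1]) : 1;
  }
  return n;
}
```

Under this shape, `ui` in the fixed sample data becomes a `[name, children]` pair rather than a bare string sibling of `button.tsx` and `card.tsx`.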
To fix the issue, I believe that we need to make the sample data consistent, and also fix the implementation. I will submit [a PR](https://github.com/shadcn-ui/ui/pull/6001) that proposes a solution. I am happy to provide further information or otherwise contribute in any way that would be useful.
# Correctly rendered component

# Current behavior
Sample data provided in the component:
```ts
// This is sample data.
const data = {
changes: [
{
file: "README.md",
state: "M",
},
{
file: "api/hello/route.ts",
state: "U",
},
{
file: "app/layout.tsx",
state: "M",
},
],
tree: [
[
"app",
[
"api",
["hello", ["route.ts"]],
"page.tsx",
"layout.tsx",
["blog", ["page.tsx"]],
],
],
[
"components",
["ui", "button.tsx", "card.tsx"],
"header.tsx",
"footer.tsx",
],
["lib", ["util.ts"]],
["public", "favicon.ico", "vercel.svg"],
".eslintrc.json",
".gitignore",
"next.config.js",
"tailwind.config.js",
"package.json",
"README.md",
],
}
```
# Expected behavior
The following sample data should render the component correctly:
```ts
const data = {
changes: [
{
file: "README.md",
state: "M",
},
{
file: "api/hello/route.ts",
state: "U",
},
{
file: "app/layout.tsx",
state: "M",
},
],
tree: [
[
"app",
[
"api",
["hello", ["route.ts"]],
"page.tsx",
"layout.tsx",
["blog", ["page.tsx"]],
],
],
[
"components",
[["ui", ["button.tsx", "card.tsx"]], "header.tsx", "footer.tsx"],
],
["lib", ["util.ts"]],
["public", ["favicon.ico", "vercel.svg"]],
".eslintrc.json",
".gitignore",
"next.config.js",
"tailwind.config.js",
"package.json",
"README.md",
],
}
```
### Affected component/components
sidebar-11
### How to reproduce
1. The issue is visible here: [A sidebar with a collapsible file tree.](https://ui.shadcn.com/blocks#sidebar-11)
1. Compare the Preview and Code.
Alternatively, you can see the problem in a different context:
Update the sample data to have a new file under the "api/hello" folder:
```ts
[
"app",
[
"api",
["hello", ["route.ts", "a-new-file.ts"]],
"page.tsx",
"layout.tsx",
["blog", ["page.tsx"]],
],
],
```
Notice that `route.ts` is now a folder containing `a-new-file.ts`.
### Codesandbox/StackBlitz link
https://stackblitz.com/edit/stackblitz-starters-eh5nkj?file=components%2Fapp-sidebar.tsx
### Logs
_No response_
### System Info
```bash
NA
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,721,526,399 | pytorch | torch.fx.GraphModule constructor doesn't copy all parameters/buffers from source graph module | ### π Describe the bug
As can be seen here: https://github.com/pytorch/pytorch/blob/main/torch/fx/graph_module.py#L478 we don't pass `remove_duplicate=False`, so as a result, duplicate parameters get ignored. Not sure if this is intended behaviour or not.
### Versions
Main
cc @chauhang @penguinwu @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4 | triaged,oncall: pt2,oncall: fx,export-triage-review,oncall: export | low | Critical |
2,721,546,379 | react | [React 19] codemod fails when ~/.codemod does not exist | ## Summary
<!--
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a
repository on GitHub, or provide a minimal code example that reproduces the
problem. You may provide a screenshot of the application if you think it is
relevant to your bug report. Here are some tips for providing a minimal
example: https://stackoverflow.com/help/mcve.
-->
When running `npx codemod@latest react/19/migration-recipe` I get an error like...
```
Error: ENOENT: no such file or directory, scandir '/home/xxxx/.codemod'
at async readdir (node:internal/fs/promises:948:18)
at async Object.findCredentials (/home/xxxx/.npm/_npx/bef22f83919465d9/node_modules/codemod/dist/index.cjs:32395:12105)
at async CYn (/home/xxxx/.npm/_npx/bef22f83919465d9/node_modules/codemod/dist/index.cjs:32395:12501)
at async DYn.get (/home/xxxx/.npm/_npx/bef22f83919465d9/node_modules/codemod/dist/index.cjs:32405:302)
at async Mne (/home/xxxx/.npm/_npx/bef22f83919465d9/node_modules/codemod/dist/index.cjs:32405:1787)
at async QCi (/home/xxxx/.npm/_npx/bef22f83919465d9/node_modules/codemod/dist/index.cjs:52988:1393)
at async R2e (/home/xxxx/.npm/_npx/bef22f83919465d9/node_modules/codemod/dist/index.cjs:52990:377)
at async Object.handler (/home/xxxx/.npm/_npx/bef22f83919465d9/node_modules/codemod/dist/index.cjs:52990:1075) {
errno: -2,
code: 'ENOENT',
syscall: 'scandir',
path: '/home/xxxx/.codemod'
}
```
If I run `mkdir ~/.codemod` and then run the command again, it works normally. | React 19 | low | Critical |
2,721,573,468 | rust | E0596 error should suggest adding mut rather than a type signature | ### Code
```Rust
struct Bar();
impl Bar {
fn mutate(&mut self) {}
}
struct Foo(Bar);
fn f() -> Option<Foo> {
None
}
fn main() {
while let Some(x) = f() {
let y = match x {
Foo(ref bar) => {
bar
}
};
y.mutate()
}
}
```
### Current output
```Shell
Compiling playground v0.0.1 (/playground)
error[E0596]: cannot borrow `*y` as mutable, as it is behind a `&` reference
--> src/main.rs:20:9
|
20 | y.mutate()
| ^ `y` is a `&` reference, so the data it refers to cannot be borrowed as mutable
|
help: consider annotating `Bar` with `#[derive(Clone)]`
|
1 + #[derive(Clone)]
2 | struct Bar();
|
help: consider specifying this binding's type
|
15 | let y: &mut Bar = match x {
| ++++++++++
For more information about this error, try `rustc --explain E0596`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
### Desired output
```Shell
help: consider changing this to be mutable
|
16 | Foo(ref mut bar) => {
| +++
```
### Rationale and extra context
If you make rustc's first suggested change, you still get the same error, and if you make the second, you just end up with this error after:
```
error[E0308]: mismatched types
--> src/main.rs:17:17
|
17 | bar
| ^^^ types differ in mutability
|
= note: expected mutable reference `&mut Bar`
found reference `&Bar`
```
Which wasn't really any help at all. With my alternative suggested change, you'll still get one more error, but that one will suggest changing `Some(x)` to `Some(mut x)`, and that will make it work.
### Other cases
```Rust
```
### Rust Version
Tested on the Rust Playground with both stable 1.83.0 and 1.85.0-nightly 2024-12-04 acabb5248231987ae1f0 (sorry for the lack of the exact command you wanted, but see rust-lang/rust-playground#1043)
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,721,593,551 | react | Portal can't be placed inside img tag | React version: 18.2
## Steps To Reproduce
1. `<img>{ReactDOM.createPortal(<div />, document.body)}</img>`
## The current behavior
react-dom.development.js:2942 Uncaught Error: img is a void element tag and must neither have `children` nor use `dangerouslySetInnerHTML`.
## The expected behavior
Since portal does not create actual DOM inside the image tag, it does not violate DOM rules, and it would be useful to allow it for event bubbling purposes. | Status: Unconfirmed | medium | Critical |
2,721,605,295 | pytorch | add `torch._foreach_clone` | ### π The feature, motivation and pitch
A "foreach" version of `Tensor.clone()` to clone a list of tensors. Can be nice in situations such as follows:
```python
# optionally transform some tensor_list_1, but tensor_list_1 should not be altered inplace
if do_transform:
tensor_list_2 = torch._foreach_some_op(tensor_list_1, ...)
else:
tensor_list_2 = torch._foreach_clone(tensor_list_1)
# do inplace foreach ops on tensor_list_2
...
```
### Alternatives
- "identity foreach ops" like `torch._foreach_add(tensor_list, 0)`, but it feels like a hack and might introduce unnecessary compute;
- `for` loop / list comprehension: `[t.clone() for t in tensor_list]`. But all other `_foreach_*` ops can also be substituted similarly and the hope is that `_foreach` will make things run faster AFAIK.
### Additional context
_No response_
cc @crcrpar @mcarilli @janeyx99 | feature,triaged,actionable,module: mta | low | Minor |
2,721,610,667 | ui | [bug]: SIderbar turn transparent on mobile | ### Describe the bug
When I open the sidebar on mobile I can see the calendar, but the background is transparent. I'm also getting a warning message: "@radix-ui_react-dialog.js?v=452130b3:336 Warning: Missing `Description` or `aria-describedby={undefined}` for {DialogContent}."
### Affected component/components
Sidebar
### How to reproduce
Open Sidebar on mobile
<img width="477" alt="Screen Shot 2024-12-05 at 11 53 05 PM" src="https://github.com/user-attachments/assets/1db323e3-2305-42b3-bbfb-df568b559a4a">
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
Warning: Missing `Description` or `aria-describedby={undefined}` for {DialogContent}.
```
### System Info
```bash
Chrome,Mac
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,721,654,403 | go | build: windows/arm (32-bit ARM) port is broken | The Go porting policy has a section on broken ports, https://go.dev/wiki/PortingPolicy#broken-ports. The GOOS=windows GOARCH=arm (not arm64) port was marked broken in August 2024 ([CL 601777](https://go.dev/cl/601777)) for the following reasons:
- The port stopped working, failing to build with make.bat, tracked in issue #68552.
- The builder stopped running, and no new running builder has been added yet, tracked in #67308. (A running builder is needed to know whether the port is working.)
This is an umbrella issue to track the status of the port.
CC @golang/windows, @golang/release. | OS-Windows,NeedsInvestigation,umbrella,arch-arm | low | Critical |
2,721,654,942 | rust | E0499 error shouldn't suggest adding a semicolon when there already is one | ### Code
```Rust
use std::marker::PhantomData;
struct Bar<'a>(PhantomData<&'a mut i32>);
impl<'a> Drop for Bar<'a> {
fn drop(&mut self) {}
}
struct Foo();
impl Foo {
fn f(&mut self) -> Option<Bar<'_>> {
None
}
fn g(&mut self) {}
}
fn main() {
let mut foo = Foo();
while let Some(_) = foo.f() {
foo.g();
};
}
```
### Current output
```Shell
Compiling playground v0.0.1 (/playground)
error[E0499]: cannot borrow `foo` as mutable more than once at a time
--> src/main.rs:22:9
|
21 | while let Some(_) = foo.f() {
| -------
| |
| first mutable borrow occurs here
| a temporary with access to the first borrow is created here ...
22 | foo.g();
| ^^^ second mutable borrow occurs here
23 | };
| - ... and the first borrow might be used here, when that temporary is dropped and runs the destructor for type `Option<Bar<'_>>`
|
help: consider adding semicolon after the expression so its temporaries are dropped sooner, before the local variables declared by the block are dropped
|
23 | };;
| +
For more information about this error, try `rustc --explain E0499`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
### Desired output
```Shell
Not have the help section since there already is a semicolon
```
### Rationale and extra context
_No response_
### Other cases
```Rust
```
### Rust Version
Tested on the Rust Playground with both stable 1.83.0 and 1.85.0-nightly 2024-12-04 acabb5248231987ae1f0 (sorry for the lack of the exact command you wanted, but see rust-lang/rust-playground#1043)
### Anything else?
_No response_ | A-diagnostics,T-compiler,D-incorrect | low | Critical |
2,721,672,652 | go | x/build/cmd/relnote: improve experience for documenting new standard library packages | The cmd/relnote test requires that all new APIs have a corresponding entry in doc/next/\*-stdlib/\*-minor. This works as intended for most APIs, but some new APIs are entirely new packages. Those are not considered "minor" changes, and get their own heading.
The current approach for new packages is to include a file in \*-minor/nnn.md anyway, but make its content just a comment like "\<!-- This is a new package; covered in 6-stdlib/5-sha3.md. --\>".
This causes some friction down the road, as `relnote generate` ends up creating empty headings like these:
https://cs.opensource.google/go/x/website/+/master:_content/doc/go1.24.md;l=271-277;drc=4ccfb99db16b1419ac640fb08dd654053bd819ee
Those need to eventually be removed.
This is a tracking issue to improve this edge case. A simple fix is to make `relnote generate` detect when a fragment contains nothing but a comment, and in such cases avoid creating a heading. A more involved change would be to make the cmd/relnote test recognize when a new package is documented in \*-stdlib and not require the file with a comment to be added in the first place, if that can be done without risking forgetting to document such changes.
CC @golang/release. | Documentation,Builders,NeedsInvestigation,Friction | low | Minor |
2,721,677,121 | godot | Error spam when re-importing a model with a saved `SkeletonProfile`, and model breakage when deleting a resource | ### Tested versions
4.4 dev6
### System information
Godot v4.4.dev6 - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 threads)
### Issue description
If we save the SkeletonProfileHumanoid resource in the model import window, then click the re-import button, a spam of errors occurs.
Also, if we delete a previously saved resource and try to open the model in the model import window, it will break and will not open.

### Steps to reproduce
1. Open the Shibahu_skechfab.fbx model import window
2. Create a new `BoneMap`
3. Create a new `SkeletonProfileHumanoid` and save it as a resource
4. Click the Retry Import button
5. A huge number of errors (You can also clear the `BoneMap` and see the errors again)
6. Delete your previously saved `SkeletonProfileHumanoid` resource and the model breaks and will not open again (Reload the project to make sure).
### Minimal reproduction project (MRP)
The manipulations were carried out with the model from this comment: https://github.com/godotengine/godot/pull/99999#pullrequestreview-2477151374
Model: https://sketchfab.com/3d-models/shibahu-2e9dffd948f747668609a5a477daad86
| bug,topic:import,topic:animation,topic:3d | low | Critical |
2,721,678,332 | godot | Applying DOF under Camera Attributes in World Environment causes SubViewports with TrasparentBG to not render VisualInstance3D objects | ### Tested versions
Tested and reproducible in: v4.4.dev5.mono, v4.4vdev6.mono
### System information
Godot v4.4.dev6.mono - Windows 10.0.26100 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Ti (NVIDIA; 32.0.15.6614) - 11th Gen Intel(R) Core(TM) i5-11400 @ 2.60GHz (12 threads)
### Issue description
I came across this bug. If you have a SubViewport with a camera as its child, tick TransparentBG, then go to your CameraAttributes and turn on DOF (either near or far): any VisualInstance3D objects won't show up in the SubViewport, whilst correctly showing up when previewing the camera. This happens regardless of the cull masks set.

Camera preview is correct

SubViewport is empty (here, as well as when running the project)
### Steps to reproduce
1. Add a SubViewport to your world, tick "TransparentBG"
2. Add a Camera3D as its child
3. Add a WorldEnvironment
4. Under CameraAttributes, add CameraAttributePractical and tick near or far blur OR just add CameraAttributePhysical
5. Add an object to check the bug
6. The camera should see the object, while the SubViewport should be empty
### Minimal reproduction project (MRP)
[viewport_bug_mrp.zip](https://github.com/user-attachments/files/18030919/viewport_bug_mrp.zip)
| bug,topic:rendering,topic:3d | low | Critical |
2,721,684,380 | PowerToys | Hitting F5 in File Explorer turns all preview thumbnails black | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
File Explorer: Thumbnail preview
### Steps to reproduce
Open File Explorer to a directory containing SVG files that have existing thumbnails.
Press F5.
### βοΈ Expected Behavior
I didn't expect anything to happen, maybe refresh the thumbnails?
### β Actual Behavior
All thumbnails were replaced with black squares. You have to change the thumbnail size and watch the thing laboriously generate thumbnails from scratch.

### Other Software
_No response_ | Issue-Bug,Product-File Explorer,Needs-Triage | low | Minor |
2,721,687,910 | material-ui | [TextField] Default Webkit autofill color does not use variables or palette | https://github.com/mui/material-ui/blob/63896426afaa1a3d9f817b551ca0268afe7d7287/packages/mui-material/src/OutlinedInput/OutlinedInput.js#L141
Looking at this, I was having an issue with the color of the TextField when autofill kicked in (Chrome). The color is hardcoded and not referenced to any CSS var or theme.palette. I can of course override this with sx props on slotProps.htmlInput, or globally in the theme.
It seems like a very odd choice to use a solid dark blue when someone could have a completely different color theme than the default blue.
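A global override along these lines seems workable (a sketch using the standard `createTheme` component-override mechanism; the inset-shadow trick is the usual `-webkit-autofill` workaround, and the CSS variables chosen here are illustrative):

```ts
import { createTheme } from "@mui/material/styles";

// Repaint WebKit's autofill background with a theme-driven color
// instead of the hardcoded default. The exact tokens are up to your palette.
const theme = createTheme({
  components: {
    MuiOutlinedInput: {
      styleOverrides: {
        input: {
          "&:-webkit-autofill": {
            WebkitBoxShadow:
              "0 0 0 100px var(--mui-palette-background-paper) inset",
            WebkitTextFillColor: "var(--mui-palette-text-primary)",
          },
        },
      },
    },
  },
});
```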
**Search keywords**: | waiting for π,component: text field,package: material-ui,ready to take | low | Major |
2,721,715,034 | tensorflow | TensorFlow source code compilation error | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.18.0
### Custom code
Yes
### OS platform and distribution
Ubuntu 24.04.1 LTS
### Mobile device
_No response_
### Python version
3.12.1
### Bazel version
7.3.1
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
My CPU supports AVX-512F; I want to build TensorFlow from source so it targets my CPU's instruction set.
### Standalone code to reproduce the issue
```shell
ERROR: Skipping '//tensorflow/tools/pip_package:build_pip_package': no such target '//tensorflow/tools/pip_package:build_pip_package': target 'build_pip_package' not declared in package 'tensorflow/tools/pip_package' defined by /usr/workerdir/workerspace/tensorflow/tensorflow/tools/pip_package/BUILD (did you mean 'build_pip_package.py'? Tip: use `query "//tensorflow/tools/pip_package:*"` to see all the targets in that package)
WARNING: Target pattern parsing failed.
```
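For context: recent TensorFlow releases (roughly 2.16 onward) replaced the old `build_pip_package` target with a direct Bazel wheel target. A hedged sketch of the newer invocation (target name per recent TF build docs; the `WHEEL_NAME` value and optimization flags are illustrative — check the current build guide):

```shell
# Newer TF source builds produce the wheel directly via Bazel:
bazel build //tensorflow/tools/pip_package:wheel \
  --repo_env=WHEEL_NAME=tensorflow_cpu \
  --config=opt --copt=-mavx512f
# The built wheel typically lands under
# bazel-bin/tensorflow/tools/pip_package/wheel_house/
```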
### Relevant log output
```shell
ERROR: Skipping '//tensorflow/tools/pip_package:build_pip_package': no such target '//tensorflow/tools/pip_package:build_pip_package': target 'build_pip_package' not declared in package 'tensorflow/tools/pip_package' defined by /usr/workerdir/workerspace/tensorflow/tensorflow/tools/pip_package/BUILD (did you mean 'build_pip_package.py'? Tip: use `query "//tensorflow/tools/pip_package:*"` to see all the targets in that package)
WARNING: Target pattern parsing failed.
```
| stat:awaiting tensorflower,type:build/install,subtype: ubuntu/linux,TF 2.18 | low | Critical |
2,721,773,429 | rust | Incorrect diagnostic `error[E0309]: the parameter type `T` may not live long enough`, suggests duplicate bound | ### Code
```Rust
use std::marker::PhantomData;
trait Unit<'t, T> {}
trait Trait {
fn frob<'a, T>() -> impl Unit<'a, T>;
}
struct Wrapper<T>(PhantomData<T>);
impl<'a, T> Unit<'a, T> for Wrapper<T> where T: 'a {}
struct Foo;
impl Trait for Foo {
fn frob<'a, T: 'a>() -> impl Unit<'a, T> {
Wrapper(PhantomData)
}
}
```
### Current output
```
error[E0309]: the parameter type `T` may not live long enough
--> src/lib.rs:13:29
|
13 | fn frob<'a, T: 'a>() -> impl Unit<'a, T> {
| -- ^^^^^^^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds...
| |
| the parameter type `T` must be valid for the lifetime `'a` as defined here...
|
note: ...that is required by this bound
--> src/lib.rs:13:20
|
13 | fn frob<'a, T: 'a>() -> impl Unit<'a, T> {
| ^^
help: consider adding an explicit lifetime bound
|
13 | fn frob<'a, T: 'a + 'a>() -> impl Unit<'a, T> {
| ++++
For more information about this error, try `rustc --explain E0309`.
error: could not compile `testing` (lib) due to 1 previous error
```
### Additional Context
The program compiles fine if `frob` is a free function, or if `frob` is not a trait implementation.
```rust
trait Unit<'t, T> {}
trait Trait {
fn frob<'a, T>() -> impl Unit<'a, T>;
}
struct Wrapper<T>(PhantomData<T>);
impl<'a, T> Unit<'a, T> for Wrapper<T> where T: 'a {}
// Not a trait impl
struct Foo;
impl Foo {
fn frob<'a, T: 'a>() -> impl Unit<'a, T> {
Wrapper(PhantomData)
}
}
// Regular function
fn frob<'a, T: 'a>() -> impl Unit<'a, T> {
Wrapper(PhantomData)
}
```
### Rust Version
```Shell
rustc 1.85.0-nightly (acabb5248 2024-12-04)
binary: rustc
commit-hash: acabb5248231987ae1f0c215208d1005a5db402d
commit-date: 2024-12-04
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.5
``` | A-diagnostics,A-lifetimes,A-trait-system,T-compiler,A-impl-trait,D-invalid-suggestion,S-has-mcve | low | Critical |
2,721,843,240 | go | x/build: add doas package on NetBSD systems | ### Go version
devel
### Output of `go env` in your module/workspace:
```shell
N/A
```
### What did you do?
#70702 reports that su asks for a password on some FreeBSD systems, causing the runtime test `TestSUID` to fail and leaving the terminal in a broken state. Using the `doas` command with the `-n` option avoids this problem. `doas` is available as a separate package on FreeBSD and perhaps also NetBSD. Let's add it to the builders, and switch the test to use it. It's OK to run the test on systems where there is no `doas` command; the test will be skipped.
### What did you see happen?
```
/bin/sh: doas: not found
```
### What did you expect to see?
A passing test.
CC @golang/release | OS-NetBSD,Builders,NeedsFix | low | Critical |
2,721,848,878 | vscode | Unit test failure: Ternary Search Tree | OOM from JavaScript heap https://dev.azure.com/monacotools/Monaco/_build/results?buildId=308771&view=logs&j=55ac390b-ffb1-50ea-3650-525dd9a3fd80&t=a0dd9636-50d2-5f8a-f17f-6bcaa254d802
Crash might not be from this suite given we run all suites in the same renderer process, but we should check for any abnormal memory allocation over the course of the run to eliminate any leaks or rule this out as a legitimate OOM | bug,freeze-slow-crash-leak,unit-test-failure | low | Critical |
2,721,889,787 | flutter | [Web] Shift + scroll wheel not working when run on macOS | ### Steps to reproduce
1. Using macbook and start run debug or release
2. Hold left shift an scroll by using scroll wheel
### Expected results
Web can start scroll horizontal
### Actual results
Cannot scroll horizontal
Note :
- it work normal when run on window
- When start on safari . it cannot scroll by hold shift but press + scroll at same time it work
### Code sample
<details open><summary>Code sample</summary>
```dart
Scrollbar(
thumbVisibility: true,
controller: controller,
child: SingleChildScrollView(
controller: controller,
scrollDirection: Axis.horizontal,
padding: EdgeInsets.symmetric(horizontal: 10.w),
child: ConstrainedBox(
constraints: BoxConstraints(
minWidth: 800,
maxWidth: max(context.width, 800),
minHeight: context.height,
),
child: CustomScrollView(
shrinkWrap: true,
key: UniqueKey(),
slivers: [
SliverToBoxAdapter(
child: SizedBox(height: 20.h),
),
SliverToBoxAdapter(child: widget.title),
SliverToBoxAdapter(
child: SizedBox(height: 12.h),
),
const SliverToBoxAdapter(
child: Divider(
color: Color(0xffDCDCDD),
),
),
SliverToBoxAdapter(
child: SizedBox(height: 20.h),
),
SliverToBoxAdapter(child: widget.header),
SliverPadding(
padding: EdgeInsets.symmetric(vertical: 15.h),
sliver: const SliverToBoxAdapter(
child: Divider(
color: Color(0xffEDEDF0),
),
),
),
SliverReorderableList(
itemCount: widget.count,
itemBuilder: (context, index) {
return SizedBox(
key: ValueKey(index),
child: widget.itemBuilder.call(context, index),
);
},
onReorder: (int oldIndex, int newIndex) {
widget.onReorder?.call(oldIndex, newIndex);
},
),
],
),
),
),
```
</details>
### Screenshots or Video
### Logs
There is no log
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.5, on macOS 14.5 23F79 darwin-arm64, locale
vi-VN)
• Flutter version 3.24.5 on channel stable at /Users/hieucg/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (3 weeks ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/hieucg/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /Users/hieucg/Library/Android/sdk
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 •
macOS 14.5 23F79 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin •
macOS 14.5 23F79 darwin-arm64
• Chrome (web) • chrome • web-javascript •
Google Chrome 131.0.6778.87
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: new feature,platform-web,a: mouse,has reproducible steps,P2,team-web,triaged-web,found in release: 3.24,found in release: 3.27 | low | Critical |
2,721,899,854 | pytorch | macOS tests have not been running for a few weeks | https://github.com/pytorch/pytorch/pull/135386 rendered the regular macOS test shard useless
<s>I.e. https://github.com/pytorch/pytorch/actions/runs/12191328925/job/34010247281?pr=141921 finishes in 18 sec for PR https://github.com/pytorch/pytorch/pull/141921 that could have some effect on Mac tests</s>
### Versions
CI
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @pytorch/pytorch-dev-infra | high priority,module: ci,triaged,module: regression | low | Major |
2,721,914,090 | TypeScript | [bug] declaration files have invalid class member generics with mixin & multiple layer inheritance | ### 🔎 Search Terms
declaration .d.ts invalid class member field generic mixin inherit
### 🕗 Version & Regression Information
- This changed between versions ______ and _______
- This changed in commit or PR _______
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about _________
- I was unable to test this on prior versions because _______
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/MYGwhgzhAEBCkFMA8AVAfNA3gKGn6EArgA4IBOAsggLYBG50AvFgL7STQrYvbAD2AOwgAXaBQCWAD3ECm0JABEwwsABpoAYXBRoCScIQCAJjAEIA7tAAUAOjtHlYAFzsBATwDaAXQCUTDPAQyEoqaGi4+FYAyiTkLlqQEH6MGKCJuvqGJtAxpGRYEfj41FIyVHQMzACMhdAsQA
### 💻 Code
```ts
class Base<T> {
superMember = {} as T
}
const Mixin = <Data, Class extends new (...data: any[]) => Base<Data>>
(Super: Class) => class extends Super {
mixinMember = 1
}
```
### 🙁 Actual behavior
```ts
declare class Base<T> {
superMember: T;
}
declare const Mixin: <Data, Class extends new (...data: any[]) => Base<Data>>(Super: Class) => {
new (...data: any[]): {
mixinMember: number;
superMember: T; // invalid generic variable in .d.ts
};
} & Class;
```
### 🙂 Expected behavior
`tsc` generates completely valid `.d.ts` files with mixin inheritance.
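Until the emitter handles this case, one possible workaround is to annotate the mixin's return type explicitly, so `tsc` writes the annotation into the `.d.ts` instead of printing the inferred class-expression type that leaks the out-of-scope generic. This is a hedged sketch, not from the issue: the `Mixed` helper type, the double assertion, and the `NumberBase` example are invented here.

```typescript
// Hedged workaround sketch: name the mixin's result type ourselves so the
// declaration emitter prints this annotation rather than an inferred type.
class Base<T> {
  superMember = {} as T;
}

// Invented helper type: the input constructor plus the mixin's extra member.
type Mixed<Data, Class extends new (...data: any[]) => Base<Data>> =
  Class & (new (...data: any[]) => Base<Data> & { mixinMember: number });

const Mixin = <Data, Class extends new (...data: any[]) => Base<Data>>(
  Super: Class,
): Mixed<Data, Class> =>
  // Double assertion: the anonymous class cannot be checked structurally
  // against the opaque type parameter `Class`.
  (class extends Super {
    mixinMember = 1;
  }) as unknown as Mixed<Data, Class>;

const NumberBase = Base as new () => Base<number>;
const WithMixin = Mixin(NumberBase);
const instance = new WithMixin();
console.log(instance.mixinMember); // 1
```

Because the return type is written out, the emitted declaration refers to `Mixed<Data, Class>` rather than an anonymous type that mentions `T`.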
### Additional information about the issue
_No response_ | Bug | low | Critical |
2,721,926,877 | pytorch | Segmentation fault (core dumped) in `convolution` | ### 🐛 Describe the bug
Under specific inputs, `convolution` triggered a crash.
```python
import torch
input = torch.full((3, 10, 6, 1, 1, 9, 1,), 1.11111e+15, dtype=torch.float32)
weight = torch.full((10, 2, 8, 1, 0, 7, 8,), 0, dtype=torch.float32)
bias = None
stride = [498444555]
padding = [1610637938]
dilation = [4]
transposed = True
output_padding = [1]
groups = 2
torch.ops.aten.convolution(input, weight, bias, stride, padding, dilation, transposed, output_padding, groups)
```
Output:
```
Segmentation fault (core dumped)
```
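For illustration only (this is not PyTorch's actual validation logic): a pre-flight argument check in plain Python can turn this class of malformed call into a clear `ValueError` before it reaches native code that can segfault.

```python
# Illustrative sketch only (not PyTorch code): validate convolution arguments
# in Python first, so malformed calls raise a clear ValueError instead of
# crashing the process inside the native kernel.
def check_conv_args(input_shape, weight_shape, stride, padding, dilation, groups):
    spatial_dims = len(input_shape) - 2  # assumed layout: (N, C, *spatial)
    for name, param in (("stride", stride), ("padding", padding), ("dilation", dilation)):
        if len(param) not in (1, spatial_dims):
            raise ValueError(f"{name} must have 1 or {spatial_dims} elements, got {len(param)}")
    if any(s <= 0 for s in stride) or any(d <= 0 for d in dilation):
        raise ValueError("stride and dilation entries must be positive")
    if 0 in weight_shape:
        raise ValueError(f"weight has an empty dimension: {weight_shape}")
    if weight_shape[0] % groups != 0:
        raise ValueError("weight's channel dimension must be divisible by groups")

# The arguments from the report fail fast instead of crashing:
try:
    check_conv_args((3, 10, 6, 1, 1, 9, 1), (10, 2, 8, 1, 0, 7, 8),
                    stride=[498444555], padding=[1610637938], dilation=[4], groups=2)
except ValueError as e:
    print(e)  # weight has an empty dimension: (10, 2, 8, 1, 0, 7, 8)
```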
### Versions
PyTorch version: 2.5.0a0+git32f585d
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.13.0 | packaged by Anaconda, Inc. | (main, Oct 7 2024, 21:29:38) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.3 MiB (40 instances)
L1i cache: 1.3 MiB (40 instances)
L2 cache: 40 MiB (40 instances)
L3 cache: 55 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+git32f585d
[conda] numpy 2.1.3 pypi_0 pypi
[conda] torch 2.5.0a0+git32f585d pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @jgong5 @mingfeima @sanchitintel @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @frank-wei | module: crash,module: convolution,triaged,module: mkldnn,module: intel,module: edge cases,module: empty tensor | low | Critical |
2,721,933,539 | PowerToys | FancyZones | ### Description of the new feature / enhancement
It would be great if FancyZones could snap application windows published from terminal servers (Terminal Server, Citrix, Parallels, Azure remote desktops, etc.) into zones.
Sorry if this is not the right forum for this request, but I felt the need to ask because I am a big fan of PowerToys. You do a fantastic job!
### Scenario when this would be used?
It's obvious
### Supporting information
_No response_ | Product-FancyZones,Needs-Triage | low | Minor |
2,721,946,038 | vscode | Opening a folder in `vscode.dev` with a domain (e.g. *.com) as its name through Remote Tunnels returns a "blocked" page |
Type: <b>Bug</b>
1. Open https://vscode.dev
2. Connect to a Remote Tunnel
3. Attempt to open a folder like `/home/demo/sime.demo.com`
4. Observe the page being blocked
5. Manually change the page URL to add a trailing slash
6. Observe the page loading successfully
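A hedged guess at a client-side mitigation (the function and heuristic below are invented for illustration, not VS Code's actual behavior): normalize the folder path by appending the trailing slash automatically whenever the last path segment looks like a hostname.

```typescript
// Illustrative sketch only: normalize a workspace path before building the
// vscode.dev URL, so a folder named like a domain is not mistaken for one.
function normalizeFolderPath(path: string): string {
  const last = path.split("/").pop() ?? "";
  // Heuristic: a segment ending in a dot plus 2+ letters looks like a
  // hostname (e.g. "sime.demo.com"); a trailing slash disambiguates it.
  const looksLikeHostname = /\.[a-z]{2,}$/i.test(last);
  return looksLikeHostname && !path.endsWith("/") ? path + "/" : path;
}

console.log(normalizeFolderPath("/home/demo/sime.demo.com")); // "/home/demo/sime.demo.com/"
console.log(normalizeFolderPath("/home/demo/project"));       // "/home/demo/project"
```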
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T20:05:02.650Z)
OS version:
Modes:
System Info: Mozilla/5.0 (iPad; CPU OS 18_1_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1.1 Mobile/15E148 Safari/604.1
Extensions: none
<!-- generated by issue reporter --> | vscode.dev | low | Critical |
2,721,956,296 | flutter | [in_app_purchase_storekit] Make StoreKit2 the default for all supported devices. | Once most of the bugs have been ironed out, we should make the StoreKit 2 APIs the default on supported devices. Devices below macOS 10 and iOS 15 should continue to use the original StoreKit API. | p: in_app_purchase,package,P2,team-ios,triaged-ios | low | Critical |