id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,813,365,866 | pytorch | Workaround for unsupported return of multiple tensors for `torch.cond` in a model intended for torch.onnx.export(dynamo=true,...) | ### 🐛 Describe the bug
Trying to `onnx.export` a `nn.Module` with a conditional in its computational graph. In essence similar to this example:
```py
import torch


class Wrapper(torch.nn.Module):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.cond_model = CondModel()

    def forward(self, x):
        nt = self.cond_model(x)
        return nt


class CondModel(torch.nn.Module):
    def forward(self, x):
        def true_fn(x, z):
            x = x + 1.0
            z = z * 0.0
            return x, z

        def false_fn(x, z):
            x = x - 1.0
            z = z * 1.0
            return x, z

        z = torch.rand(x.shape)
        nt = torch.cond(x.sum() > 0, true_fn, false_fn, [x, z])
        return nt
```
As per [the documentation](https://pytorch.org/docs/2.6/cond.html#torch._higher_order_ops.cond.cond), the return from `torch.cond` must be a single tensor. Is there a dirty workaround that allows getting multiple tensors from the return?
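One direction I have not verified against dynamo (an assumption, not a confirmed fix): pack both branch results into a single stacked tensor, so each branch returns one tensor with identical metadata, and unbind it after the `cond`:
```py
# Sketch only: assumes x and z share the same shape and dtype, so both
# branches return a single tensor with matching metadata.
def true_fn(x, z):
    return torch.stack((x + 1.0, z * 0.0))

def false_fn(x, z):
    return torch.stack((x - 1.0, z * 1.0))

def forward(self, x):
    z = torch.rand(x.shape)
    packed = torch.cond(x.sum() > 0, true_fn, false_fn, [x, z])
    x_out, z_out = packed.unbind(dim=0)
    return x_out, z_out
```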
I tried using nested tensors:
```py
def true_fn(x, z):
    x = x + 1.0
    z = z * 0.0
    nt = torch.nested.nested_tensor([x, z], layout=torch.jagged)
    return nt
```
But compilation fails when validating the `.shape` of the returned tensors (`.shape` on `NestedTensors` [loses precise meaning](https://pytorch.org/docs/2.6/nested.html#size)):
```pytb
torch._dynamo.exc.Unsupported: Expect branches to return tensors with same metadata but find pair[0] differ in 'shape: torch.Size([2, s1]) vs torch.Size([2, s2])', 'stride: (s1, 1) vs (s2, 1)', where lhs is TensorMetadata(shape=torch.Size([2, s1]), dtype=torch.float32, requires_grad=False, stride=(s1, 1), memory_format=None, is_quantized=False, qparams={}) and rhs is TensorMetadata(shape=torch.Size([2, s2]), dtype=torch.float32, requires_grad=False, stride=(s2, 1), memory_format=None, is_quantized=False, qparams={})
```
<details>
**<summary>Full traceback here</summary>**
```pytb
Traceback (most recent call last):
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 55, in graph_break_as_hard_error
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 906, in call_function
unimplemented(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/exc.py", line 356, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Expect branches to return tensors with same metadata but find pair[0] differ in 'shape: torch.Size([2, s1]) vs torch.Size([2, s2])', 'stride: (s1, 1) vs (s2, 1)', where lhs is TensorMetadata(shape=torch.Size([2, s1]), dtype=torch.float32, requires_grad=False, stride=(s1, 1), memory_format=None, is_quantized=False, qparams={}) and rhs is TensorMetadata(shape=torch.Size([2, s2]), dtype=torch.float32, requires_grad=False, stride=(s2, 1), memory_format=None, is_quantized=False, qparams={})
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/simple/example_conditional.py", line 34, in <module>
result = model(input_tensor)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/simple/example_conditional.py", line 9, in forward
nt = self.cond_model(x)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
File "/home/iony/DTU/f24/thesis/code/early_exit_vit/simple/example_conditional.py", line 28, in forward
nt = torch.cond(x.sum() > 0, true_fn, false_fn, [x,z])
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 201, in cond
return torch.compile(_cond_op_wrapper, backend=backend, fullgraph=True)(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1406, in __call__
return self._torchdynamo_orig_callable(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 566, in __call__
return _compile(
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1006, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 734, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 769, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1402, in transform_code_object
transformations(instructions, code_options)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 237, in _fn
return fn(*args, **kwargs)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 681, in transform
tracer.run()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run
super().run()
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1076, in run
while self.step():
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 986, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 683, in wrapper
return inner_fn(self, inst)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1763, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 921, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_dynamo/variables/higher_order_ops.py", line 58, in graph_break_as_hard_error
raise UncapturedHigherOrderOpError(reason + msg) from e
torch._dynamo.exc.UncapturedHigherOrderOpError: Cond doesn't work unless it is captured completely with torch.compile. Scroll up to find out what causes the graph break.
from user code:
File "/home/iony/miniconda3/envs/eevit/lib/python3.9/site-packages/torch/_higher_order_ops/cond.py", line 193, in _cond_op_wrapper
return cond_op(*args, **kwargs)
```
</details>
Is the feature not implemented, even in a nightly? Or is there another workaround that might work if I intend to operate only for inference?
### Error logs
_No response_
### Versions
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.19.2
[pip3] onnxscript==0.1.0.dev20241226
[pip3] torch==2.6.0.dev20241226+cu124
cc @chauhang @penguinwu @zou3519 @ydwu4 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @bdhirsh @yf225 | module: onnx,triaged,oncall: pt2,module: higher order operators,oncall: export | low | Critical |
2,813,420,147 | flutter | Investigate what android builds/tests can be faster by not installing xcode | "[Contributor] has upstreamed a change to our recipes that will allow mac targets in our .ci.yaml configs to opt out of installing Xcode (obviously provided your test doesn't require it) with a property"
https://flutter-review.googlesource.com/c/recipes/+/62500
https://groups.google.com/a/google.com/g/flutter-team/c/fKuizPWBI58/m/Z8IQT248EgAJ?utm_medium=email&utm_source=footer | team,platform-android,team-android,fyi-infra | low | Minor |
2,813,425,465 | rust | "expected opaque type, found ..." when trait impl with type_alias_impl_trait | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
type U32Iter = impl Iterator<Item = u32>;

struct Foo {
    iter: U32Iter,
}

impl From<u32> for Foo {
    fn from(value: u32) -> Self {
        Self {
            iter: std::iter::once(value),
        }
    }
}
```
I expected to see this happen: Compiles
Instead, this happened:
```
12 |             iter: std::iter::once(value),
   |                   ^^^^^^^^^^^^^^^^^^^^^^ expected opaque type, found `Once<u32>`
```
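The note in the backtrace below points at the underlying rule: the defining item must mention the opaque type in its signature. A sketch of the usual workaround based on that rule (my understanding of the current TAIT rules, not authoritative; assumes the `U32Iter` and `Foo` definitions above):
```rust
#![feature(type_alias_impl_trait)]

// Route construction through a function that names the opaque type in its
// signature, so it is allowed to register the hidden type.
fn make_iter(value: u32) -> U32Iter {
    std::iter::once(value)
}

impl From<u32> for Foo {
    fn from(value: u32) -> Self {
        Self { iter: make_iter(value) }
    }
}
```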
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```
   Compiling smth v0.1.0 (/tmp/smth)
error[E0308]: mismatched types
  --> src/main.rs:12:19
   |
3  | type U32Iter = impl Iterator<Item = u32>;
   |                ------------------------- the expected opaque type
...
12 |             iter: std::iter::once(value),
   |                   ^^^^^^^^^^^^^^^^^^^^^^ expected opaque type, found `Once<u32>`
   |
   = note: expected opaque type `U32Iter`
              found struct `std::iter::Once<u32>`
note: this item must have the opaque type in its signature in order to be able to register hidden types
  --> src/main.rs:10:8
   |
10 |     fn from(value: u32) -> Self {
   |        ^^^^
For more information about this error, try `rustc --explain E0308`.
error: could not compile `smth` (bin "smth") due to 1 previous error
```
</p>
</details>
| C-bug,needs-triage | low | Critical |
2,813,445,999 | godot | Objects whose `_init()` takes arguments fail to be decoded when sent over RPC | ### Tested versions
4.4dev7
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 32.0.15.6636) - AMD Ryzen 7 5800X 8-Core Processor (8 threads)
### Issue description
Let's say you have a `PlayerData` `Object` that for convenience defines a constructor:
```gdscript
class_name PlayerData extends Object

var position: Vector3
var rotation: Vector3

func _init(position_: Vector3, rotation_: Vector3) -> void:
    self.position = position_
    self.rotation = rotation_
```
When you try to send a `PlayerData` over RPC, the receiving side fails to decode it because it doesn't know what to put into these arguments.
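A workaround that appears to get past this (an assumption on my side, not confirmed by the docs): give every `_init` parameter a default value, so the decoder can construct the object with a zero-argument call:
```gdscript
# Sketch: defaults let the receiving side call _init() with no arguments
# when reconstructing the object.
func _init(position_: Vector3 = Vector3.ZERO, rotation_: Vector3 = Vector3.ZERO) -> void:
    self.position = position_
    self.rotation = rotation_
```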
### Steps to reproduce
Download the MRP. Open Debug > Customize Run Instances and configure it as shown here:

Run the project. Observe the errors.
### Minimal reproduction project (MRP)
[netbug.zip](https://github.com/user-attachments/files/18561226/netbug.zip) | discussion,documentation,topic:network | low | Critical |
2,813,466,695 | go | runtime: spurious SEGV in package initializer (darwin/amd64) | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/go/analysis/passes/loopclosure" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8724618820959648753)):
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x4d0537d]
goroutine 1 [running]:
go/types.init.2()
/Users/swarming/.swarming/w/ir/x/w/goroot/src/go/types/universe.go:282 +0x1dd
FAIL golang.org/x/tools/go/analysis/passes/loopclosure 0.030s
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation,Tools,compiler/runtime | low | Critical |
2,813,470,095 | pytorch | while_loop fails to export when strict=False | Repro:
Apply the following patch, then run `python test/export/test_export.py -k test_while_loop_simple` and the test fails when `strict=False`
```
diff --git a/test/export/test_export.py b/test/export/test_export.py
index 9b070141ac3..22096a68a1b 100755
--- a/test/export/test_export.py
+++ b/test/export/test_export.py
@@ -66,6 +66,8 @@ from torch.testing._internal.common_utils import (
     IS_MACOS,
     IS_SANDCASTLE,
     IS_WINDOWS,
+    instantiate_parametrized_tests,
+    parametrize,
     run_tests,
     skipIfCrossRef,
     skipIfXpu,
@@ -12162,6 +12164,26 @@ class TestExportCustomClass(TorchTestCase):
                 "torch.ops.aten.upsample_bilinear2d.vec", 1, exactly=True
             ).run(ep.graph_module.code)

+    @parametrize("strict", [False, True])
+    def test_while_loop_simple(self, strict):
+        class Simple(torch.nn.Module):
+            def forward(self, ci, a, b):
+                def cond_fn(i, x, y):
+                    return i > 0
+
+                def body_fn(i, x, y):
+                    return i - 1, x + y, y - x
+
+                return torch._higher_order_ops.while_loop(cond_fn, body_fn, [ci, a, b])
+
+        example_inputs = (
+            torch.tensor(1),
+            torch.randn(10, 20),
+            torch.randn(10, 20),
+        )
+        ep = export(Simple(), example_inputs, strict=strict)
+
+instantiate_parametrized_tests(TestExportCustomClass)
 if __name__ == "__main__":
     run_tests()
```
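For convenience, the same repro as a standalone script (my reduction of the patch above; names match the patch):
```py
import torch
from torch.export import export


class Simple(torch.nn.Module):
    def forward(self, ci, a, b):
        def cond_fn(i, x, y):
            return i > 0

        def body_fn(i, x, y):
            return i - 1, x + y, y - x

        return torch._higher_order_ops.while_loop(cond_fn, body_fn, [ci, a, b])


# Fails when strict=False per the report; passes with strict=True.
ep = export(
    Simple(),
    (torch.tensor(1), torch.randn(10, 20), torch.randn(10, 20)),
    strict=False,
)
```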
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,export-triaged,oncall: export | low | Minor |
2,813,478,413 | godot | Custom objects' fields are not transferred when sent over RPC | ### Tested versions
4.4dev7
### System information
Godot v4.4.dev7 - Windows 10 (build 19045) - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 32.0.15.6636) - AMD Ryzen 7 5800X 8-Core Processor (8 threads)
### Issue description
Let's say you have a `PlayerData` `Object`:
```gdscript
class_name PlayerData extends Object
var username: String = "default_user"
var position: Vector3
var rotation: Vector3
```
If we create a dummy `PlayerData` like this:
```gdscript
var data := PlayerData.new()
data.username = "lena"
data.position = Vector3.FORWARD
data.rotation = Vector3.BACK
```
and then try to send it over RPC, the receiving side will instead get a `PlayerData` object equivalent to just `PlayerData.new()`, i.e. the fields are not transferred.
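One workaround that appears to help (an assumption based on how object encoding handles property flags, not something confirmed by the docs): mark the fields with `@export`, which flags them for storage so the encoder includes them:
```gdscript
class_name PlayerData extends Object

# Sketch: @export flags the properties for storage, which the RPC object
# encoder appears to require before it will serialize them.
@export var username: String = "default_user"
@export var position: Vector3
@export var rotation: Vector3
```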
### Steps to reproduce
Download the MRP. Open Debug > Customize Run Instances and configure it as shown here:

Run the project. Observe the errors.
### Minimal reproduction project (MRP)
[netbug.zip](https://github.com/user-attachments/files/18561405/netbug.zip) | discussion,topic:network | low | Critical |
2,813,499,812 | pytorch | No broadcasting by default. | ### 🚀 The feature, motivation and pitch
I ran into a shape bug, but PyTorch assumed the mismatch was intentional broadcasting, and I wasted some time on it (hours, but still acceptable).
Then I decided to print the shapes and encode them in the variable names, like `g_reshaped__batch_1_output`.
I would prefer no broadcasting by default; when I need it, I will write a line to tell PyTorch to do it.
Although the variable names will still be very long in my code.
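For illustration, a tiny sketch of the failure mode (mine, not from the original incident):
```py
import torch

a = torch.randn(4, 1)
b = torch.randn(4)

# Intended: an elementwise product of two length-4 vectors.
# Actual: broadcasting silently expands the shapes to (4, 4),
# hiding the shape bug instead of raising an error.
print((a * b).shape)  # torch.Size([4, 4])
```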

### Alternatives
The default behavior now is the alternative.
Another alternative is to specify a batch dimension everywhere, so PyTorch knows which dim is the batch and never touches it.
### Additional context
It's ok if you guys decide not to make this change.
The only problem I can remember is that when I use `MSELoss`, or any loss that automatically reduces the shape to a scalar, I always have to print the shape to make sure it's correct.
My personal convention is an explicit batch dim everywhere, asserted in every function. It makes all my test code have one extra layer of [], but it makes me much more confident about my code.
2,813,541,652 | godot | Godot 4.3 becomes highly unresponsive, likely due to Windows 11 24H2 update | ### Tested versions
- This issue is on Godot 4.3.stable.official [77dcf97d8], unsure of other versions. This issue didn't seem to happen with the limited testing I did with v4.2.stable.official [46dc27791]
- Windows 11 version 24H2
### System information
Windows 11 - Godot 4.3.stable.official - Vulkan (Forward+) - AMD Ryzen 7 5800X3D 8-Core Processor - 3.40 GHz - NVIDIA GeForce RTX 2070
### Issue description
After my PC installed the latest Windows update yesterday (24H2), all Godot projects in 4.3 became highly unresponsive on launch. Even opening new blank projects seems to create unresponsiveness. Even after long load times, if a project does manage to launch to the editor, the editor is also highly unresponsive.
Fixes attempted:
- restarting the editor
- restarting the pc
- deleting the .godot folder and re-importing files
- deleting the entire project and re-cloning it from version control
- running Godot as administrator
### Steps to reproduce
On the newest feature update to Windows 11, 24H2. Create a new project and try to interact with the editor on v4.3.stable.official [77dcf97d8].
### Minimal reproduction project (MRP)
On the newest feature update to Windows 11, 24H2. Create a new project and try to interact with the editor on v4.3.stable.official [77dcf97d8]. | bug,platform:windows,topic:editor,needs testing,performance | low | Major |
2,813,542,959 | material-ui | [docs] add avatar upload example | ### Related page
https://mui.com/material-ui/react-avatar/
### Kind of issue
Missing information
### Issue description
I recently needed to implement avatar upload inside a form during the sign-up flow. I ended up with a somewhat hacky solution where I need a visually hidden input next to the clickable avatar, with the avatar acting as a label so that clicking the avatar triggers a click on the input. I also needed another hack so that input navigation with Tab would work. This seems like quite a popular use case. Adding a section to the docs on how to implement it could save a lot of time and in-code hacks for other people like me.
Just for reference, this is what I ended up with:
```tsx
import { memo, useId, useState } from 'react'
import type { AvatarProps, FormHelperTextProps } from '@mui/material'
import { Avatar, FormHelperText, Stack, styled } from '@mui/material'
import PersonIcon from '@mui/icons-material/Person'
import { avatarConstraints } from '../../model/contact-info-schema'
import { useFocus } from '@/shared/lib/react/useFocus'
import { useTab } from '@/shared/lib/react/useTab'

const VisuallyHiddenInput = styled('input')({
  clip: 'rect(0 0 0 0)',
  clipPath: 'inset(50%)',
  height: 1,
  overflow: 'hidden',
  position: 'absolute',
  bottom: 0,
  left: 0,
  whiteSpace: 'nowrap',
  width: 1,
})

const ClickableAvatar = styled(Avatar)(({ theme }) => ({
  width: 100,
  height: 100,
  cursor: 'pointer',
  transition: 'all .1s',
  '&[data-focused="true"]': {
    outline: `4px solid ${theme.palette.primary.main}`, // Replace with your desired style
    outlineOffset: '4px',
  },
  '&:hover': {
    filter: 'brightness(90%)',
  },
  '&:active': {
    scale: 0.9,
  },
}))

type ClickableAvatarProps = Omit<AvatarProps<'label'>, 'component'>

interface AvatarUploadProps {
  avatarProps?: ClickableAvatarProps
  inputProps?: React.InputHTMLAttributes<HTMLInputElement>
  helperTextProps?: FormHelperTextProps
}

export const AvatarUpload = memo(function AvatarUpload({
  avatarProps,
  inputProps,
  helperTextProps,
}: AvatarUploadProps) {
  const [imageSrc, setImageSrc] = useState<string>()
  const id = useId()
  const [isInputFocused, bindFocus] = useFocus()
  const { isTabLastKey } = useTab()

  const handleImageChange = (event: React.ChangeEvent<HTMLInputElement>) => {
    const file = event.target.files?.[0]
    if (file) {
      // Read the file as a data URL
      const reader = new FileReader()
      reader.onload = () => {
        setImageSrc(reader.result as string)
      }
      reader.readAsDataURL(file)
    }
    inputProps?.onChange?.(event)
  }

  return (
    <Stack alignItems={'center'}>
      <ClickableAvatar
        // @ts-expect-error it can't see component prop for some reason
        component='label'
        role={undefined}
        variant='circular'
        src={imageSrc}
        htmlFor={id}
        data-focused={isInputFocused && isTabLastKey.current}
        {...avatarProps}
      >
        <PersonIcon
          sx={{
            fontSize: '40px',
          }}
        />
      </ClickableAvatar>
      <VisuallyHiddenInput
        id={id}
        type='file'
        accept={avatarConstraints.type.join(', ')}
        multiple={false}
        {...inputProps}
        {...bindFocus}
        onChange={handleImageChange}
      />
      <FormHelperText {...helperTextProps} />
    </Stack>
  )
})
```
useFocus.ts (tracks whether the element is focused):
```tsx
import { useCallback, useMemo, useState } from 'react'

interface UseFocusBind {
  onBlur: () => void
  onFocus: () => void
}

export type UseFocusReturnType = [boolean, UseFocusBind]

/**
 * Tracks if an element is focused or not.
 * @returns {[boolean, {onBlur: () => void, onFocus: () => void}]}
 */
export const useFocus = (): UseFocusReturnType => {
  const [isFocused, setIsFocused] = useState(false)

  const onBlur = useCallback(() => {
    setIsFocused(false)
  }, [])

  const onFocus = useCallback(() => {
    setIsFocused(true)
  }, [])

  return useMemo(
    () => [isFocused, { onBlur, onFocus }],
    [isFocused, onBlur, onFocus],
  )
}
```
useTab.ts (tracks whether the Tab key was the last key pressed):
```tsx
import { useEffect, useRef } from 'react'

/**
 * Custom React Hook to track if the Tab key was the last key pressed.
 *
 * This hook sets up global event listeners for 'keydown' and 'mousedown' events.
 * It updates a ref `isTabLastKey` to determine if the Tab key was the last key pressed.
 * The use of a ref prevents unnecessary re-renders of your component when these events occur.
 *
 * @returns {Object} - An object containing:
 * - `isTabLastKey` (Ref<boolean>): A ref that is `true` if the last key pressed was Tab, `false` otherwise.
 *
 * @example
 * const { isTabLastKey } = useTab();
 * // You can now use isTabLastKey.current in your component logic
 */
export const useTab = () => {
  const isTabLastKey = useRef(false)

  useEffect(() => {
    const handleKeyDown = (event: KeyboardEvent) => {
      if (event.key === 'Tab') {
        isTabLastKey.current = true
      } else {
        isTabLastKey.current = false
      }
    }
    const handleMouseDown = () => {
      isTabLastKey.current = false
    }

    window.addEventListener('keydown', handleKeyDown)
    window.addEventListener('mousedown', handleMouseDown)

    return () => {
      window.removeEventListener('keydown', handleKeyDown)
      // Remove the same 'mousedown' listener that was registered above
      window.removeEventListener('mousedown', handleMouseDown)
    }
  }, [])

  return { isTabLastKey }
}
```
### Context
_No response_
**Search keywords**: mui avatar profile upload input hidden | waiting for ๐,ready to take,enhancement,support: docs-feedback | low | Critical |
2,813,578,916 | rust | Associated Type Bounds on Trait Bounds on GAT produce error | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I hope this is a genuine bug and not just working as intended 😅
I tried this code:
```rust
trait Assoc {
    type Type;
}

trait Static: 'static {
    type Same<T>: Assoc<Type: Static>;
}
```
I expected to see this happen: The code compiles.
Instead, this happened: The code produces the following compile error:
```
error[E0310]: the associated type `<<Self as Static>::Same<T> as Assoc>::Type` may not live long enough
 --> src/lib.rs:6:28
  |
6 |     type Same<T>: Assoc<Type: Static>;
  |                               ^^^^^^
  |                               |
  |                               the associated type `<<Self as Static>::Same<T> as Assoc>::Type` must be valid for the static lifetime...
  |                               ...so that the type `<<Self as Static>::Same<T> as Assoc>::Type` will meet its required lifetime bounds...
  |
note: ...that is required by this bound
 --> src/lib.rs:5:15
  |
5 | trait Static: 'static {
  |               ^^^^^^^
help: consider adding an explicit lifetime bound
  |
6 |     type Same<T>: Assoc<Type: Static> where <<Self as Static>::Same<T> as Assoc>::Type: 'static;
  |
```
Note that either removing `<T>` or replacing it with `<T: 'static>` makes the error go away.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
Also persists on latest nightly:
```
rustc 1.86.0-nightly (f85c6de55 2025-01-26)
binary: rustc
commit-hash: f85c6de55206dbee5ffedfd821df1503a7b92346
commit-date: 2025-01-26
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.7
``` | A-trait-system,A-associated-items,C-bug,F-associated_type_bounds,needs-triage | low | Critical |
2,813,642,720 | vscode | VS Code frequently stuck on "Saving 'somefile': Writing into file..." | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
Issue is sporadic so there is no way to reliably test.
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.4
- OS Version: Kubuntu 22.04, Plasma 5.24.7, KDE 5.92.0, QT: 5.15.3, Kernel: 6.8.0-51-generic (64-bit)
Steps to Reproduce:
1. Open a file.
2. Edit.
3. Save.
Code will frequently get stuck on "Saving 'somefile.ext': Writing into file...". It happens often enough that it is affecting my productivity.
Process:
1. open file from FTP using PCManFM-QT
2. make changes
3. Save the file
Most of the time the save is almost instant. Sometimes, it hangs as described. Code sometimes comes back eventually, occasionally as much as an hour later, and I can save the file again and it works; other times it may not come back after several hours and a system reboot is required.
A check of the logs shows:
2025-01-25 12:21:57.295 [error] Error: ESPIPE: invalid seek, copyfile '/run/user/1000/gvfs/ftp:host=some-host,user=some-user/inetpub-dev/webcontent/Admin/securityFx/loginform.css' -> '/home/some-user/.config/Code/User/History/5c371153/GGjc.css': Unknown (FileSystemError): Error: ESPIPE: invalid seek, copyfile '/run/user/1000/gvfs/ftp:host=some-host,user=some-user/inetpub-dev/webcontent/Admin/securityFx/loginform.css' -> '/home/some-user/.config/Code/User/History/5c371153/GGjc.css'
at f9.create (file:///usr/share/code/resources/app/out/main.js:38:28875)
at Vi (file:///usr/share/code/resources/app/out/main.js:38:26169)
at xc.ib (file:///usr/share/code/resources/app/out/main.js:49:22946)
at xc.fb (file:///usr/share/code/resources/app/out/main.js:49:22389) | info-needed | low | Critical |
2,813,646,962 | go | cmd/compile: ICE: unexpected types2.Invalid in writePkgStub | Observed in https://ci.chromium.org/ui/p/golang/builders/ci/x_tools-gotip-darwin-amd64-longtest/b8725796120163106641/overview, attached to umbrella issue https://github.com/golang/go/issues/70399#issuecomment-2589049515:
```
=== RUN TestVetStdlib
vet_std_test.go:101: go vet std failed (exit status 1):
# math/rand/v2 [math/rand/v2.test]
<unknown line number>: internal compiler error: unexpected types2.Invalid
goroutine 1 [running]:
runtime/debug.Stack()
runtime/debug/stack.go:26 +0x5e
cmd/compile/internal/base.FatalfAt({0x528d00?, 0xc0?}, {0x75726fe, 0x19}, {0x0, 0x0, 0x0})
cmd/compile/internal/base/print.go:230 +0x1ea
cmd/compile/internal/base.Fatalf(...)
cmd/compile/internal/base/print.go:195
cmd/compile/internal/noder.(*pkgWriter).typIdx(0xc000528d00, {0x77dfa70, 0x7d78560}, 0xc000183ea0)
cmd/compile/internal/noder/writer.go:533 +0x597
cmd/compile/internal/noder.(*writer).typ(0xc0007338c0, {0x77dfa70?, 0x7d78560?})
cmd/compile/internal/noder/writer.go:481 +0x2f
cmd/compile/internal/noder.(*writer).unionType(0xc0007338c0, 0xc0000106a8)
cmd/compile/internal/noder/writer.go:651 +0x6a
cmd/compile/internal/noder.(*pkgWriter).typIdx(0xc000528d00, {0x77dfb88, 0xc0000106a8}, 0xc000183ea0)
cmd/compile/internal/noder/writer.go:609 +0x42e
cmd/compile/internal/noder.(*writer).typ(0xc000733810, {0x77dfb88?, 0xc0000106a8?})
cmd/compile/internal/noder/writer.go:481 +0x2f
cmd/compile/internal/noder.(*writer).interfaceType(0xc000733810, 0xc000513310)
cmd/compile/internal/noder/writer.go:696 +0x2a5
cmd/compile/internal/noder.(*pkgWriter).typIdx(0xc000528d00, {0x77dfa20, 0xc000513310}, 0xc000183ea0)
cmd/compile/internal/noder/writer.go:605 +0x98a
cmd/compile/internal/noder.(*writer).typ(0xc000733550, {0x77dfa20?, 0xc000513310?})
cmd/compile/internal/noder/writer.go:481 +0x2f
cmd/compile/internal/noder.(*writer).doObj(0xc000733550, 0xc000733600, {0x77e7760, 0xc0003edbc0})
cmd/compile/internal/noder/writer.go:883 +0x318
cmd/compile/internal/noder.(*pkgWriter).objIdx(0xc000528d00, {0x77e7760, 0xc0003edbc0})
cmd/compile/internal/noder/writer.go:815 +0x84b
cmd/compile/internal/noder.(*pkgWriter).objInstIdx(0xc000528d00, {0x77e7760, 0xc0003edbc0}, 0x0, 0xc000183e00)
cmd/compile/internal/noder/writer.go:756 +0xf4
cmd/compile/internal/noder.(*writer).obj(0xc0007334a0, {0x77e7760?, 0xc0003edbc0?}, 0xc000554380?)
cmd/compile/internal/noder/writer.go:730 +0x33
cmd/compile/internal/noder.(*writer).namedType(0xc0007334a0, 0xc0003edbc0, 0x0)
cmd/compile/internal/noder/writer.go:631 +0x52
cmd/compile/internal/noder.(*pkgWriter).typIdx(0xc000528d00, {0x77df9a8, 0xc000554380}, 0xc000183e00)
cmd/compile/internal/noder/writer.go:550 +0x8cc
cmd/compile/internal/noder.(*writer).typ(0xc000733290, {0x77df9a8?, 0xc000554380?})
cmd/compile/internal/noder/writer.go:481 +0x2f
cmd/compile/internal/noder.(*writer).objDict(0xc000733290, {0x77e7800, 0xc00054e620}, 0xc000183e00)
cmd/compile/internal/noder/writer.go:914 +0xea
cmd/compile/internal/noder.(*pkgWriter).objIdx(0xc000528d00, {0x77e7800, 0xc00054e620})
cmd/compile/internal/noder/writer.go:823 +0x8ff
cmd/compile/internal/noder.(*pkgWriter).objInstIdx(0xc000528d00, {0x77e7800, 0xc00054e620}, 0x0, 0x0)
cmd/compile/internal/noder/writer.go:756 +0xf4
cmd/compile/internal/noder.(*writer).obj(0xc0001a29a0, {0x77e7800?, 0xc00054e620?}, 0x0?)
cmd/compile/internal/noder/writer.go:730 +0x33
cmd/compile/internal/noder.writePkgStub({0x0?, {0x0?, 0x0?}}, {0xc000431200, 0x7, 0x7})
cmd/compile/internal/noder/unified.go:343 +0x6fa
cmd/compile/internal/noder.unified({0x0?, {0x0?, 0x0?}}, {0xc000431200?, 0x770eaa0?, 0x0?})
cmd/compile/internal/noder/unified.go:195 +0xb3
cmd/compile/internal/noder.LoadPackage({0xc000022460, 0x7, 0x8})
cmd/compile/internal/noder/noder.go:77 +0x43a
cmd/compile/internal/gc.Main(0x77d76e0)
cmd/compile/internal/gc/main.go:208 +0xcc5
main.main()
cmd/compile/main.go:57 +0xf9
--- FAIL: TestVetStdlib (310.69s)
``` | NeedsInvestigation,compiler/runtime,BugReport | low | Critical |
2,813,657,663 | vscode | Test: SSH connection to unsupported distros with custom sysroot | Refs: https://github.com/microsoft/vscode/pull/235232
- [ ] anyOS @chrisdias
- [x] anyOS @alexr00
Complexity: 4
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23238873%0A%0A&assignees=deepak1556)
---
We plan to provide a path for users on legacy servers to continue using the latest version of the remote server if the system contains a sysroot with the required libraries. The necessary steps to achieve this are documented in https://github.com/microsoft/vscode-docs/pull/7953/files. This TPI aims to validate this flow,
**Prerequisites:**
* Install OpenSSH compatible SSH client, refs https://code.visualstudio.com/docs/remote/troubleshooting#_installing-a-supported-ssh-client
* Unreleased VSCode client from https://github.com/microsoft/vscode/pull/235232#issuecomment-2615764717
* Latest Pre-release of Remote - SSH extension, `version >= 0.117.2025012315`
* Sysroot
- For testing purpose you can use the pre-compiled version that has `-glibc-2.28` suffix from https://github.com/microsoft/vscode-linux-build-agent/releases/tag/v20240129-253798
- If you are feeling adventurous try `Build the sysroot` step from https://github.com/microsoft/vscode-docs/pull/7953/files
* Patchelf https://github.com/NixOS/patchelf/releases/tag/0.18.0
* Container with SSH server running, you can use the following sample container or spin your own but make sure it is a distro version that used to run legacy servers (Ubuntu <= 18, Debian <= 9, RHEL <= 7)
```docker
FROM amazonlinux:2
RUN yum update -y && yum install -y sudo openssh-server tar procps-ng
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN useradd -m -s /bin/bash test-ssh-user
RUN usermod -a -G wheel test-ssh-user
RUN echo "test-ssh-user:test123" | chpasswd
EXPOSE 22
```
Once the container is created, run the following commands to start the ssh server
```
su test-ssh-user
sudo ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
sudo /usr/sbin/sshd -D
```
**Testing:**
* Use the ssh extension to connect to the host, command should end up being `ssh test-ssh-user@localhost -p <forwarded port>`
* Server connection should fail with error about server missing glibc and libstdc++ prerequisites
* Close the remote connection
* Set the environment variables from step 4) of https://github.com/microsoft/vscode-docs/pull/7953/files to the users `~/.profile` or `~/.bash_profile`
* Retry connecting to the host via the SSH extension
* Connection should be successful, you should see a prompt about connecting to an unsupported server | testplan-item | low | Critical |
2,813,694,029 | langchain | Azure AI Search Client Component Calling get_index function twice | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
We have the following code:
```python
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import AzureOpenAIEmbeddings

openai_api_key: str = "PLACEHOLDER FOR YOUR API KEY"
openai_api_version: str = "2023-05-15"
model: str = "text-embedding-ada-002"

azure_endpoint: str = "PLACEHOLDER FOR YOUR AZURE OPENAI ENDPOINT"
azure_openai_api_key: str = "PLACEHOLDER FOR YOUR AZURE OPENAI KEY"
azure_openai_api_version: str = "2023-05-15"
azure_deployment: str = "text-embedding-ada-002"

vector_store_address: str = "YOUR_AZURE_SEARCH_ENDPOINT"
vector_store_password: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"
index_name: str = "langchain-vector-demo"

embeddings: AzureOpenAIEmbeddings = AzureOpenAIEmbeddings(
    azure_deployment=azure_deployment,
    openai_api_version=azure_openai_api_version,
    azure_endpoint=azure_endpoint,
    api_key=azure_openai_api_key,
)

vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
    # Configure max retries for the Azure client
    additional_search_client_options={"retry_total": 4},
)

docs = vector_store.similarity_search(
    query="What did the president say about Ketanji Brown Jackson",
    k=3,
    search_type="similarity",
)
print(docs[0].page_content)
```
In AzureSearch, the `get_index` function is called twice, which is not needed: once for the initialization of `client` and a second time for the initialization of `client_async`. Please check this line: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/azuresearch.py#L361
### Error Message and Stack Trace (if applicable)
It just adds latency to the AzureSearch component, since the `get_index` function is called twice.
### Description
I am trying to use LangChain to integrate with Azure AI Search and noticed that the `get_index` function is called twice when initializing AzureSearch.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT Thu Jun 30 08:18:26 UTC 2022
> Python Version: 3.10.14 (main, Jul 2 2024, 22:06:13) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.3.31
> langchain: 0.3.15
> langchain_community: 0.3.15
> langsmith: 0.3.2
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> orjson: 3.10.15
> packaging: 24.1
> pydantic: 2.10.4
> pydantic-settings: 2.7.1
> pytest: Installed. No version info available.
> PyYAML: 6.0.1
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> rich: 13.9.4
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> typing-extensions: 4.12.2
> zstandard: 0.23.0 | โฑญ: vector store | low | Critical |
2,813,711,857 | langchain | docs: pgvector docs issues | https://python.langchain.com/docs/integrations/vectorstores/pgvector/ install instructions in these docs are wrong, you get this error if you follow them
```
ImportError: no pq wrapper available.
Attempts made:
- couldn't import psycopg 'c' implementation: No module named 'psycopg_c'
- couldn't import psycopg 'binary' implementation: No module named 'psycopg_binary'
- couldn't import psycopg 'python' implementation: libpq library not found
```
Easiest fix is to tell the user to install the binary extra too (e.g. `pip install "psycopg[binary]"`), and add a link to the psycopg docs for an explanation of additional options | 🤖:docs,investigate | low | Critical |
2,813,712,255 | flutter | Release process must check in an engine.version file | When trying to publish the 3.29.0-0.1.pre beta release, packaging builds failed to find the binaries: https://ci.chromium.org/ui/p/dart-internal/builders/flutter/Linux%20packaging_release_builder/276/infra
I believe the long-term fix is that the conductor should check in an up-to-date engine.version on release branches.
2,813,717,608 | godot | A Container Node with a RichTextLabel child will resize incorrectly if the position/size is altered on the same frame it is instanced | ### Tested versions
Tested Versions:
- Godot 4.4.beta
- Godot 4.3.stable
- Godot 4.0.stable
I think it's highly probable that this bug has been present since 4.0.
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.22631 - Vulkan (Forward+) - dedicated AMD Radeon RX 7900 XTX (Advanced Micro Devices, Inc.; 32.0.12033.1030) - AMD Ryzen 7 7700X 8-Core Processor (16 Threads)
### Issue description
### The Problem
This bug seems to be very specific, but consistent.
When a `Container` is instanced with a `RichTextLabel` within it, if the `Container`'s `position` or `size` are changed via scripting in the same frame it is instanced, the `RichTextLabel` will incorrectly resize itself for a single frame (?) as if it had a width of 0, rather than the width of the `Container`, therefore increasing the `Container`'s height dramatically.
**Note**: I recall this has happened to me in the past when doing other actions that aren't directly altering `position` and `size`, e.g. calling `set_anchors_and_offsets_preset()` on the `Container`, but for reproduction purposes altering `position` will do the trick.
### Samples
A couple samples showcasing the problem using the MRP (added below).
When I run the scene without altering any `position` or `size` via code, everything works as expected, meaning the Containers have their appropriate size (set by the PackedScene data or the size of their contents).

(Note that the "squished and tall" `Containers` with vertical text are not bugged. I've manually set them to about ~5 width to compare heights in the sample below this one.)
In the next sample, I alter the `position` of the code-instanced Containers via code in the same frame I instance them, and the bug occurs. And, as you can see, the bugged containers match the height of their "minimal width" counterparts. I.e., the Containers became exactly as tall as needed for the `RichTextLabel` to be auto-wrapped if the Container had a width of 0.

### Notes
This bug occurs even if there's a chain of `Containers` (child of a child of a child) with a `RichTextLabel` at the middle or end of it. The `RichTextLabel` does _not_ need to be a direct child of the moved/resized `Container` for the bug to occur. However, there _must_ be a chain of `Containers`. If there's a non-container involved, such as a regular Control, the bug does not reproduce.
Additionally, the bug occurs only if the `Container` is wider than the `RichTextLabel`'s `minimum_size`. If the `RichTextLabel` has a `minimum_size` equal to or greater than the `Container`'s, the bug does not occur. This matches the theory that the `RichTextLabel` resizing itself to 0 (its minimum size) is causing the bug: if the min width is >= the Container's width, the bug is not observable.
### Why is this a Problem
This bug creates the issue that if you instantiate a **PackedScene** with an auto-wrapped `RichTextLabel` within a `Container` setup and move it on the same frame (an event fairly common for UI such as custom Popups or Tooltips), the Container will increase its height dramatically to accommodate the `RichTextLabel` as if it had a width of 0 (therefore auto-wrapping every single letter or word).
The only way to work around this is to call `reset_size()` on the instanced Container post-creation, or to wait 1-2 frames before editing the Container's position or size; workarounds which should not be necessary.
### Steps to reproduce
1. All the following steps must occur on the same frame.
2. Instantiate a `Container` (such as a `PanelContainer`) with a `RichTextLabel` as its child. This can be done by instantiating each Node separately and setting the latter as a child of the former, all via code. Alternatively, you can instantiate a PackedScene with that Node setup already prepared.
2.1. The `Container` should have a `size` or `custom_minimum_size` larger than 0. For easier bug visibility, 200x0 is recommended.
2.2. The `RichTextLabel` must have `fit_content` set to `true`, `autowrap` set to anything other than `off`, and a `minimum_size` of `(0,0)`.
3. Afterwards, alter the `Container`'s `position` or `size` via code.
The result will be a `Container` that has resized its height to fit the `RichTextLabel` as if it had a width of 0, and since it's auto-wrapped, the `RichTextLabel` becomes massively tall. On the same or next frame, the `RichTextLabel` "fixes" itself, but the `Container` remains at the excessive height.
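A condensed sketch of those steps in code (node names and text are mine, for illustration only):
```gdscript
# Sketch: all of this runs on the same frame.
var panel := PanelContainer.new()
panel.custom_minimum_size = Vector2(200, 0)

var label := RichTextLabel.new()
label.fit_content = true
label.autowrap_mode = TextServer.AUTOWRAP_WORD_SMART
label.text = "Some reasonably long text that should wrap at 200px wide."
panel.add_child(label)

add_child(panel)
panel.position = Vector2(100, 100)  # same-frame move triggers the bug
# Workaround: panel.reset_size(), or defer the move by a frame or two.
```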

### Minimal reproduction project (MRP)
The following project is made in `4.4.beta`, but should work in any version down to `4.0.stable`.
In it, I messed around to gather all the data I mention above. In the main scene root, there is a script with a simple exported boolean to toggle if you want the bug to trigger or not (`position` change on the `Container`).
[ui-resize-test-4.4.beta.zip](https://github.com/user-attachments/files/18562510/ui-resize-test-4.4.beta.zip) | bug,topic:gui | low | Critical |
2,813,726,762 | ollama | /clear not actually clearing | ### What is the issue?
Steps that I've done, which shows the bug. There might be a simpler sequence but this is mine.
1. ollama run hf.co/mradermacher/DS-R1-Distill-Q2.5-7B-RP-GGUF:latest
2. /set parameter num_ctx 16384
3. save chrisdeepseek
4. /bye
5. ollama run chrisdeepseek
6. create flappybird.py code.
7. (do some testing extra)
8. /bye
9. ollama run chrisdeepseek
10. flappy bird code comes back!
11. /clear
12. /bye
13. ollama run chrisdeepseek
14. flappy bird code comes back!
Seems like the larger context doesn't actually get cleared.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7 | bug | low | Critical |
2,813,727,316 | PowerToys | File Explorer Add-ons Preview Pane Monaco | ### Description of the new feature / enhancement
Monaco should not preview .txt files because Monaco takes a few more milliseconds to preview these files than the native reader. Alternative previewer should be set for txt files.
Monaco is a source code previewer that opens files in the preview pane for quick reading. Monaco reads .txt files in addition to other source code type files like cpp, py, json, xml... Monaco takes a few more milliseconds to preview these files.
### Scenario when this would be used?
File Explorer add-on would have one switch for Monaco supported files. And another switch for .txt files where a different previewer is used. Each previewer can be toggled on and off as an option for File explorer add-ons.
### Supporting information
| Needs-Triage | low | Minor |
2,813,737,711 | pytorch | TorchScript run_method fails from 2.5.0 onward on Ubuntu | ### 🐛 Describe the bug
Using https://download.pytorch.org/libtorch/cu118/libtorch-shared-with-deps-2.5.1%2Bcu118.zip or https://download.pytorch.org/libtorch/cu118/libtorch-shared-with-deps-2.5.0%2Bcu118.zip the following code does not return. Console prints the "run..." but never seems to finish execution of the `run_method` line.
```cpp
TEST(MLLib, JITForward) {
  // Here we use compile on a small TorchScript snippet.
  auto identity_module = torch::jit::compile(R"JIT(
    def forward(x):
        return x
  )JIT");

  // Run forward
  std::cout << "run..." << std::endl;
  auto output = identity_module->run_method("forward", torch::ones({2, 3}));
  auto t = output.toTensor();
  std::cout << "Output size: " << t.sizes() << std::endl;
  std::cout << "Output:\n" << t << std::endl;
}
```
### Versions
libtorch 2.5.0 / 2.5.1
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit,module: deadlock | low | Critical |
2,813,739,415 | flutter | [shared_preferences] Android integration tests running twice | While investigating another issue, I noticed that FTL was showing the integration tests for `shared_preferences` running twice: once via `MainActivityTest`, and once via `FlutterActivityTest`. The plugin has both files, and they have essentially identical content, which seems like it's probably a mistake. We should figure out why that's the case, remove one if it is just an accident, and check other plugins to see if they have this issue. | a: tests,team,p: shared_preferences,package,team-ecosystem,P1,triaged-ecosystem | medium | Minor |
2,813,755,167 | rust | Misleading error message - Returning Option<T> when Clone not implemented | When a function returns `Option<T>` and the function is basically a wrapper on an `Arc<RwLock<Option<T>>>` but `T` does not implement `Clone` then the code `self.value.read().unwrap().as_ref().map(|value| value.clone())` will show a compile error `"mismatched types"` as the return type is `Option<_> and found Option<&_>`. This suggests that I need to clone the value, but I am already doing that. Trying to dereference the value also fails with the error that the `T` does not implement `Copy` (nor do I want it to).
Example:
```
use std::sync::{Arc, RwLock};

struct MyStruct<T> {
    value: Arc<RwLock<Option<T>>>,
}

impl<T> MyStruct<T> {
    fn get_value(&self) -> Option<T> {
        let locked_value = self.value.read().unwrap();
        locked_value.as_ref().map(|value| value.clone())
    }
}
```
https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=9ab38c00a0a33a5c07b2aa42fa510762
I expected to see this happen:
The underlying issue of `"no method named clone found for type"` is the true underlying problem that is hidden from the developer.
Instead, this happened:
I was misdirected to the return type being invalid.
I was under the impression that bad or misleading error messages in Rust are considered bugs, but if that is not the case please let me know what I need to do to help resolve this issue. Thank you.
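For completeness, a sketch of the version that compiles once the missing bound is added (the actual fix for the code above, which the "mismatched types" message never points to):
```rust
// With T: Clone in scope, value.clone() clones the T instead of the &T,
// so the closure returns Option<T> as intended.
impl<T: Clone> MyStruct<T> {
    fn get_value(&self) -> Option<T> {
        self.value.read().unwrap().as_ref().map(|value| value.clone())
    }
}
```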
Edit:
While constructing the more complete example, I noticed that omitting the `.as_ref()` caused the compiler to properly call out the lack of `Clone`. When I added `T: Clone`, an error would appear since I was missing `.as_ref()`. This bug demonstrates that if you already start with `.as_ref()` included, the compiler fails to point you to the lack of `Clone`.
| A-diagnostics,T-compiler,D-terse | low | Critical |
2,813,777,246 | yt-dlp | Getting the needed info from a video in json format | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
I want to make a json file which has a few (twitch) vods and only 3 fields for each: title, url and image (thumbnail). Ideally it should follow this layout
`{"title": "vod_title", "url": "vod_url", "image": "vod_image"}`
The entry for the vod in the example below should look like this (jq was used to make it human readable).
```
$ cat test.json | jq .
{
"title": "Liquid vs. Heroic - Map 1 [Ancient] - IEM Chengdu 2024 - Group A",
"url": "https://dgeft87wbj63p.cloudfront.net/53796fb12af8b7e13676_eslcs_41137427548_3955492727/720p60/highlight-2114118194.m3u8",
"image": "https://static-cdn.jtvnw.net/cf_vods/d2nvs31859zcd8/53796fb12af8b7e13676_eslcs_41137427548_3955492727/thumb/custom-a24f6a98-c704-4ca9-b690-b948993a1d0d-0x0.jpeg"
}
```
How can I do that? The `--print-json` parameter outputs all sorts of extra fields that I have to filter out.
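One approach that should work (untested; the field names are taken from the output-template docs, so treat them as assumptions) is to skip `--print-json` and assemble the JSON line directly with `--print`, using the `j` conversion for proper escaping:
```shell
yt-dlp -f 720p60 --print '{"title": %(title)j, "url": %(urls)j, "image": %(thumbnail)j}' \
  'https://m.twitch.tv/videos/2114118194' >> test.json
```
Each run appends one JSON object per video, matching the layout above.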
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
$ yt-dlp -vU -f 720p60 --get-url --get-title --get-thumbnail https://m.twitch.tv/videos/2114118194
[debug] Command-line config: ['-vU', '-f', '720p60', '--get-url', '--get-title', '--get-thumbnail', 'https://m.twitch.tv/videos/2114118194']
[debug] User config "/home/user/.config/yt-dlp/config": ['--format-sort', 'vcodec:h264,res:720,acodec:m4a', '--no-cache-dir', '--force-ipv4', '--restrict-filenames']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [3b4531934] (zip)
[debug] Python 3.13.1 (CPython x86_64 64bit) - Linux-6.12.10-amd64-x86_64-with-glibc2.40 (OpenSSL 3.4.0 22 Oct 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.1 (fdk,setts), ffprobe 7.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.12.14, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[twitch:vod] Extracting URL: https://m.twitch.tv/videos/2114118194
[twitch:vod] 2114118194: Downloading stream metadata GraphQL
[twitch:vod] 2114118194: Downloading video access token GraphQL
[twitch:vod] 2114118194: Downloading m3u8 information
[twitch:vod] 2114118194: Downloading storyboard metadata JSON
[debug] Sort order given by user: vcodec:h264, res:720, acodec:m4a
[debug] Formats sorted by: hasvid, ie_pref, vcodec:h264(7), res:720(720.0), acodec:m4a(9), lang, quality, fps, hdr:12(7), channels, size, br, asr, proto, vext, aext, hasaud, source, id
[info] v2114118194: Downloading 1 format(s): 720p60
Liquid vs. Heroic - Map 1 [Ancient] - IEM Chengdu 2024 - Group A
https://dgeft87wbj63p.cloudfront.net/53796fb12af8b7e13676_eslcs_41137427548_3955492727/720p60/highlight-2114118194.m3u8
https://static-cdn.jtvnw.net/cf_vods/d2nvs31859zcd8/53796fb12af8b7e13676_eslcs_41137427548_3955492727/thumb/custom-a24f6a98-c704-4ca9-b690-b948993a1d0d-0x0.jpeg
``` | question | low | Critical |
2,813,777,732 | ui | [bug]: Shadcn blocks missing | ### Describe the bug
Previously there were a ton of blocks available, such as tables etc., but now there are only a few sidebar, authentication and login blocks.
### Affected component/components
Tables and more
### How to reproduce
1. Go to https://ui.shadcn.com/blocks
2. Scroll down and see how many blocks you find
3. You see a few more if you click the tabs ("Sidebars, Authentication, Login") but not many
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
Safari
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,813,831,969 | ollama | [v0.5.4] Download timeouts cause download cache corruption. Any download that needs to be retried by re-running ollama ends up corrupted at 100% download(file sha256-sha256hash-partial-0 not found). | ### What is the issue?
Running ollama through Alpaca.
I'm aware this is a separate project; I will mirror the bug report there.
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.4 | bug | low | Critical |
2,813,854,203 | node | flaky: test-fs-cp | ### Test
test-fs-cp
### Platform
Windows x64
### Console output
```console
not ok 304 parallel/test-fs-cp
12:01:46 ---
12:01:46 duration_ms: 520.01000
12:01:46 severity: fail
12:01:46 exitcode: 1
12:01:46 stack: |-
12:01:46 node:assert:128
12:01:46 throw new AssertionError(obj);
12:01:46 ^
12:01:46
12:01:46 AssertionError [ERR_ASSERTION]: Expected values to be strictly equal:
12:01:46 + actual - expected
12:01:46
12:01:46 + 'ERR_FS_EISDIR'
12:01:46 - 'ERR_FS_CP_EINVAL'
12:01:46 ^
12:01:46
12:01:46 at file:///c:/workspace/node-test-binary-windows-js-suites/node/test/parallel/test-fs-cp.mjs:687:12
12:01:46 at c:\workspace\node-test-binary-windows-js-suites\node\test\common\index.js:491:15
12:01:46 at node:fs:190:23
12:01:46 at callbackifyOnRejected (node:util:230:10)
12:01:46 at process.processTicksAndRejections (node:internal/process/task_queues:90:21) {
12:01:46 generatedMessage: true,
12:01:46 code: 'ERR_ASSERTION',
12:01:46 actual: 'ERR_FS_EISDIR',
12:01:46 expected: 'ERR_FS_CP_EINVAL',
12:01:46 operator: 'strictEqual'
12:01:46 }
12:01:46
12:01:46 Node.js v24.0.0-pre
12:01:46 ...
```
### Build links
- https://ci.nodejs.org/job/node-test-binary-windows-js-suites/32255/RUN_SUBSET=0,nodes=win11-COMPILED_BY-vs2022/testReport/junit/(root)/parallel/test_fs_cp/
### Additional information
_No response_ | flaky-test | low | Critical |
2,813,857,913 | godot | SVG import options are ignored during conversion to a Windows icon on export | ### Tested versions
- Reproducible in: 4.3.stable, 4.4.beta1
### System information
Godot v4.3.stable - Windows 10.0.26100 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4090 (NVIDIA; 32.0.15.6636) - 13th Gen Intel(R) Core(TM) i9-13900K (32 Threads)
### Issue description
When you use an SVG file as an icon in a Windows export preset, its import options like **SVG > Scale** or **Normal Map Invert Y** are ignored, as seen in this video:
https://github.com/user-attachments/assets/bec266df-3976-49e4-879e-cf757719e2e5
This means that if your SVG file has a low resolution defined in its metadata, increasing SVG scale will still result in a blurry icon once exported.
This is likely where the issue should be fixed: https://github.com/godotengine/godot/blob/e5498020b6250d9284efcdcf0f85f39d48d8548f/platform/windows/export/export_plugin.cpp#L96-L107
This is not just a resolution issue, as **Normal Map Invert Y** (which turns a white icon magenta) is also ignored.
### Steps to reproduce
- Import a low-resolution SVG in the project ([editor icons](https://github.com/godotengine/godot/tree/master/editor/icons) are a good fit for this since they're 16x16). Set their SVG scale on import to a value like 16 (so they're imported as 256x256).
- Configure rcedit in the Editor Settings.
- Create a Windows export preset, set the icon path and export the project.
- Look at the resulting executable icon in Windows Explorer with zoom (<kbd>Ctrl + Mouse wheel</kbd>).
### Minimal reproduction project (MRP)
[test-icon-export.zip](https://github.com/user-attachments/files/18562508/test-icon-export.zip) | bug,platform:windows,topic:editor,topic:export | low | Minor |
2,813,862,395 | ui | [feat]: Add support for Radix's Data List | ### Feature description
Most components from Radix have a ShadCN equivalent. However, there is no equivalent for their Data List component.
It's a component-equivalent of the `<dl>`, `<dt>`, and `<dd>` elements.
While it's not a very involved component and so could be made by end users, it's still something that is used commonly enough that it'd be a worthwhile addition to the library.
### Affected component/components
_No response_
### Additional Context
_No response_
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues and PRs | area: request | low | Minor |
2,813,869,786 | flutter | [monorepo] better description for the `emergency` label | The `emergency` label should have a better description, and link to more detailed docs about what it does and when it should be used. | team-infra,P1,triaged-infra,monorepo | medium | Minor |
2,813,870,788 | vscode | I dont install it, it keep my copy and paste func lagging a lot |
Type: <b>Bug</b>
I didn't install it, but it keeps making my copy and paste function lag a lot
VS Code version: Code 1.96.4 (Universal) (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Darwin x64 24.0.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-8257U CPU @ 1.40GHz (8 x 1400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|4, 5, 4|
|Memory (System)|16.00GB (0.06GB free)|
|Process Argv|--crash-reporter-id 0d24a37a-f78b-49fc-8c0a-38d18982d114|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (42)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-django|bat|1.15.0
vscode-intelephense-client|bme|1.12.6
tailwindshades|bou|0.0.5
vscode-tailwindcss|bra|0.14.1
doxdocgen|csc|1.4.0
vscode-eslint|dba|3.0.10
python-environment-manager|don|1.2.7
python-extension-pack|don|1.7.0
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
code-runner|for|0.12.2
vscode-pull-request-github|Git|0.102.0
better-cpp-syntax|jef|1.27.1
vsc-python-indent|Kev|1.19.0
debugpy|ms-|2024.14.0
python|ms-|2024.23.2025012401
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-powertoys|ms-|0.1.1
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.394.0
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
cmake-tools|ms-|1.19.52
cpptools|ms-|1.23.4
cpptools-extension-pack|ms-|1.3.0
remote-explorer|ms-|0.4.3
autodocstring|njp|0.6.1
postman-for-vscode|Pos|1.6.1
LiveServer|rit|5.7.9
open-in-browser|tec|2.0.0
bootstrap4-vscode|the|6.1.0
pdf|tom|1.2.2
cmake|twx|0.0.17
vscode-lldb|vad|1.11.2
vscodeintellicode|Vis|1.3.2
volar|Vue|2.2.0
jinja|who|0.0.8
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
hdaa2157:31222309
copilot_t_ci:31222730
```
</details>
<!-- generated by issue reporter --> | triage-needed | low | Critical |
2,813,876,211 | ui | [bug]: JSON Serialization Error Due to Circular Reference in DOM Element and React FiberNode | ### Describe the bug
This type error runs in a loop:
TypeError: Converting circular structure to JSON
--> starting at object with constructor 'HTMLButtonElement'
| property '__reactFiber$t0tlvvefhtf' -> object with constructor 'FiberNode'
--- property 'stateNode' closes the circle
at JSON.stringify (<anonymous>)
The same happens with `HTMLDivElement`, `HTMLButtonElement`, `HTMLUListElement`, etc.
### Affected component/components
Button , etc
### How to reproduce

### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
Uncaught (in promise) TypeError: Converting circular structure to JSON
--> starting at object with constructor 'HTMLButtonElement'
| property '__reactFiber$t0tlvvefhtf' -> object with constructor 'FiberNode'
--- property 'stateNode' closes the circle
at JSON.stringify (<anonymous>)
at backend.bundle.js:1:303762
at backend.bundle.js:1:303832
at Generator.next (<anonymous>)
at backend.bundle.js:1:298495
at new Promise (<anonymous>)
at g (backend.bundle.js:1:298240)
at backend.bundle.js:1:298719
at backend.bundle.js:1:298580
at backend.bundle.js:1:305524
(anonymous) @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1
g @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1
g @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1
(anonymous) @ backend.bundle.js:1Understand this errorAI
content.bundle.js:1 Port received message: {action: 'devTools', payload: {โฆ}}
content.bundle.js:1 Port received message: {action: 'aReactApp', payload: {โฆ}}
backend.bundle.js:1 Uncaught (in promise) TypeError: Converting circular structure to JSON
--> starting at object with constructor 'HTMLButtonElement'
| property '__reactFiber$t0tlvvefhtf' -> object with constructor 'FiberNode'
--- property 'stateNode' closes the circle
at JSON.stringify (<anonymous>)
at backend.bundle.js:1:303762
at backend.bundle.js:1:303832
at Generator.next (<anonymous>)
at backend.bundle.js:1:298495
at new Promise (<anonymous>)
at g (backend.bundle.js:1:298240)
at backend.bundle.js:1:298719
at a (backend.bundle.js:1:298644)
```
### System Info
```bash
Browser
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,813,877,447 | pytorch | Performance degradation in scaled_dot_product_attention with attn_mask and sequence length multiples of 16 | ### ๐ Describe the bug
There's a performance degradation when using `scaled_dot_product_attention` with the `attn_mask` argument, when the sequence length is a multiple of 16. This issue can be reproduced using the following code snippet.
**Reproducer Code**
```python
import torch
from torch._inductor.runtime.benchmarking import benchmarker
from torch.nn import functional as F

def run(seqlen):
    with torch.device("cuda"):
        def f(q, k, v, mask):
            return F.scaled_dot_product_attention(
                q, k, v, attn_mask=mask, dropout_p=0.0
            )

        f_compiled = torch.compile(f)

        # Create inputs
        bsz = 32
        q = torch.randn(bsz, 16, seqlen, 64, dtype=torch.bfloat16)
        k = torch.randn(bsz, 16, seqlen, 64, dtype=torch.bfloat16)
        v = torch.randn(bsz, 16, seqlen, 64, dtype=torch.bfloat16)
        mask = torch.ones([bsz, 1, seqlen, seqlen], dtype=torch.bool)
        inputs = [q, k, v, mask]

        # Benchmark
        time = benchmarker.benchmark_gpu(lambda: f_compiled(*inputs), warmup=5, rep=50)
        return time

for seqlen_start in [1008, 1024, 2048, 4096]:
    for offset in range(-1, 2):
        seqlen = seqlen_start + offset
        torch._dynamo.reset()
        time = run(seqlen)
        print(seqlen, time)
    print()
```
**Output on H100 GPU**
```
1007 1.569983959197998
1008 2.0037760734558105
1009 1.5577600002288818
1023 1.553056001663208
1024 2.000607967376709
1025 1.7111680507659912
2047 6.071455955505371
2048 8.064703941345215
2049 6.349376201629639
4095 23.773408889770508
4096 35.05900955200195
4097 24.331039428710938
```
**Analysis**
The results show that incrementing the sequence length from multiples of 16 (1024, 2048, 4096) to non-multiples of 16 (1025, 2049, 4097) results in up to 1.43x speedup. This is counterintuitive and suggests that the selected/generated kernel for sequence lengths of multiples of 16 could be improved.
**Expected**
I'd expect multiples of 16 to perform equal to, or even better than, neighboring sizes, because of e.g. better divisibility for tiling.
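As a hedged experiment (a sketch, not a fix): pad the sequence by a single masked position so the kernel sees a non-multiple-of-16 length, then slice the padding away. `sdpa_padded` is a hypothetical helper built only on the reproducer's shapes:

```python
import torch
import torch.nn.functional as F

def sdpa_padded(q, k, v, mask):
    seqlen = q.shape[-2]
    # pad q/k/v by one position along the sequence dim
    q = F.pad(q, (0, 0, 0, 1))
    k = F.pad(k, (0, 0, 0, 1))
    v = F.pad(v, (0, 0, 0, 1))
    # real queries must not attend to the padded key; the padded query
    # row attends to everything so no row is fully masked (avoids NaNs)
    mask = F.pad(mask, (0, 1, 0, 1), value=False)
    mask[..., -1, :] = True
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0)
    return out[..., :seqlen, :]  # drop the padded query row
```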
### Error logs
_No response_
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 184
On-line CPU(s) list: 0-183
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 184
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 11.5 MiB (184 instances)
L1i cache: 11.5 MiB (184 instances)
L2 cache: 92 MiB (184 instances)
L3 cache: 2.9 GiB (184 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-183
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @eellison @yanboliang @ezyang @BoyuanFeng | triaged,oncall: pt2,module: inductor | low | Critical |
2,813,881,054 | flutter | Some macbots look unhealthy | Sampling the last 9 days (accounts for the weekend):
bots looking very sad:
* build804-m9
* build810-m9
* build815-m9
bots questionable:
* build812-m9
* build802-m9
* build810-m9
* build805-m9
* build806-m9
| team-infra | low | Minor |
2,813,888,696 | pytorch | dynamo cannot trace global op_set .__contains__ | ### ๐ Describe the bug
This gives a graph break:
```python
import torch
op_set = {
    torch._C._set_grad_enabled,
    torch.amp._enter_autocast,
    torch.amp._exit_autocast,
}

def f(x):
    if torch.ops.aten.add in op_set:
        return x.sin()
    return x.cos()
torch.compile(f, fullgraph=True, backend="eager")(torch.randn(3,4))
```
error message:
```
Traceback (most recent call last):
File "/data/users/yidi/pytorch/test.py", line 13, in <module>
torch.compile(f, fullgraph=True, backend="eager")(torch.randn(3,4))
File "/data/users/yidi/pytorch/torch/_dynamo/eval_frame.py", line 566, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 1400, in __call__
return self._torchdynamo_orig_callable(
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 565, in __call__
return _compile(
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 997, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/data/users/yidi/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 726, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 760, in _compile_inner
out_code = transform_code_object(code, transform)
File "/data/users/yidi/pytorch/torch/_dynamo/bytecode_transformation.py", line 1404, in transform_code_object
transformations(instructions, code_options)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 236, in _fn
return fn(*args, **kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/convert_frame.py", line 680, in transform
tracer.run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 2914, in run
super().run()
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 1084, in run
while self.step():
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 994, in step
self.dispatch_table[inst.opcode](self, inst)
File "/data/users/yidi/pytorch/torch/_dynamo/symbolic_convert.py", line 2197, in CONTAINS_OP
self.push(right.call_method(self, "__contains__", [left], {}))
File "/data/users/yidi/pytorch/torch/_dynamo/variables/user_defined.py", line 818, in call_method
return super().call_method(tx, name, args, kwargs)
File "/data/users/yidi/pytorch/torch/_dynamo/variables/base.py", line 413, in call_method
unimplemented(f"call_method {self} {name} {args} {kwargs}")
File "/data/users/yidi/pytorch/torch/_dynamo/exc.py", line 361, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: call_method UserDefinedObjectVariable(set) __contains__ [TorchInGraphFunctionVariable(aten.add)] {}
from user code:
File "/data/users/yidi/pytorch/test.py", line 10, in f
if torch.ops.aten.add in op_set:
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
However, when the op_set is a local variable, it works fine.
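For reference, a minimal sketch of the working local-variable variant (same ops, same compile flags):

```python
import torch

def f_local(x):
    op_set = {
        torch._C._set_grad_enabled,
        torch.amp._enter_autocast,
        torch.amp._exit_autocast,
    }
    # presumably traced as a local set, so the membership test can be
    # evaluated at trace time instead of hitting the global-set path
    if torch.ops.aten.add in op_set:
        return x.sin()
    return x.cos()

torch.compile(f_local, fullgraph=True, backend="eager")(torch.randn(3, 4))
```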
### Versions
on master
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | oncall: pt2,module: dynamo,dynamo-triage-jan2025 | low | Critical |
2,813,916,179 | excalidraw | creating an elbow arrow from a frame adds it to the frame incorrectly | When creating an elbow arrow by clicking a bit inside the shape (in this case the frame), it binds it to the shape as expected, but in case of a frame it also adds it to the frame as a child because you initially clicked inside the frame when creating the elbow arrow. This puts the arrow completely outside the frame and doesn't render it.
Instead, if we detect the the arrow is being bound to the frame during creation, we should never add it to the frame as a child (-> `frameId: null`)
 | bug | low | Minor |
2,813,917,717 | TypeScript | "No inputs were found in config file", when looking at some files within node_modules | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.4
- OS Version: Darwin arm64 23.5.0
When a node_module bundles in a tsconfig file, I end up getting this error. Most packages do not do this; it makes sense, since the tsconfig is really just for builds. BUT occasionally they do bundle it, and when that happens the sources obviously cannot be found. I'll contact the offending package, but a built-in solution would be nice as well, since this is a frequent problem: it happens multiple times per year for me.
**Example repo:** https://github.com/deca-inc/vscode-repro-issue-238768
**Steps to Reproduce:**
1. Make any basic typescript project, with any package manager
2. Install a package that has a tsconfig.json within it, where the inputs won't be found
3. Import that node_module then try to "meta/command+click" into the definition of that import
**Outcome:** A "Problem" appears under the Problem tab. There's no way to make this problem go away unless you close the files and close vscode completely.
**Possible solutions:**
1. Don't look for tsconfig files under node_modules ever
2. Document how to ignore specific offending packages - it's pretty hard to find the answer for this out there on search engines.
| Help Wanted,Possible Improvement | low | Critical |
2,813,933,878 | go | proposal: allow eliding "_" fields in unkeyed struct literals | ### Proposal Details
Change treatment of `_` fields so that they may be omitted from unkeyed struct literals and (if omitted) do not count as unexported fields. In the case of a struct with multiple `_` fields, either all must be present or all must be omitted, to avoid ambiguity.
The reason for this is to make it easier to use `_` fields for things like establishing field alignment, modifying layout, and [other type attributes](https://go.dev/issue/67265). Without this change, `_` fields require providing pointless initial "values" in unkeyed literals (for example, given `type T struct { _ structs.HostLayout; X, Y int }`, a caller today must write `T{structs.HostLayout{}, 1, 2}` where `T{1, 2}` would do), and make it impossible to use unkeyed literals across package boundaries (it may be bad style, but it is legal Go and probably exists -- a particular use case is Windows). One specific motivation was looking at the work required and effects on code/tools/usability to add `structs.HostLayout` annotations to the code generated by cgo; it was messy and unpleasant.
This breaks no Go code, so it is compatible.
It also "potentially affects" very little code anyway; unkeyed literals for structs with `_` fields are rare. I ran an ecosystem analysis of 44,000 Go projects that were imported 5 or more times. Of those, there were only 137 instances of an unkeyed literal "initializing" a `_` field. Of that 137, 127 were in tests, of of that 127, 75 were literals for the same struct type. Of the 10 "non-test" examples, it turned out to really be only 5 because of duplication in the report, appearing in only 3 p[rojects, and 2 of that 3 involved VNC (i.e., derivative code).
In any case where the export-forbidding properties of `_` were desirable, renaming that field to any other export-forbidding name (e.g., `__` or `underscore`) obtains the same effect.
This will require a change to the Go compiler, and to tools that analyze Go code, so that both forms of struct literals are recognized. | LanguageChange,Proposal,LanguageChangeReview,LanguageProposal | low | Minor |
2,813,938,328 | puppeteer | [Bug]: Dblclick events can get dropped when using WebDriver BiDi with Chrome | ### Minimal, reproducible example
```TypeScript
import puppeteer from 'puppeteer';
const puppeteerOptions = {
//browser: 'firefox',
headless: false,
protocol: 'webDriverBiDi',
}
function timeout(time) {
return page.evaluate(time => {
return new Promise(r => {
setTimeout(r, time)
})
}, time)
}
async function enterValue(selector, value) {
const el = (await page.$(selector))
await el.evaluate(node => ((node).value = ''))
await el.type(value)
await el.press('Enter')
}
const browser = await puppeteer.launch(puppeteerOptions);
const page = await browser.newPage();
await page.goto(String.raw`data:text/html,
<style>
.todo-list li.editing .edit {
display: block;
}
.todo-list li.editing .view {
display: none;
}
.todo-list li label {
display: block;
}
.todo-list li .edit {
display: none;
}
</style>
<script>
function editTodo() {
item.className = 'editing';
}
function keydup(event) {
if (event.key === 'Enter') {
item.className = '';
document.querySelector('label').innerHTML = item.querySelector('.edit').value;
}
}
</script>
<ul class="todo-list">
<li id=item>
<div class="view">
<input class="toggle" type="checkbox" />
<label ondblclick="editTodo(event)">Foo</label>
<button class="destroy"></button>
</div>
<input class="edit" onkeyup="keydup(event)" type="text" />
</li>
</ul>
<div id="output"></div>
`)
await page.click('label', { count: 2 });
await enterValue('.edit', 'edited!')
await page.click('label', { count: 2 });
await enterValue('.edit', 'edited again!')
console.log(await page.evaluate(() => document.querySelector("label").innerText));
await browser.close();
```
### Background
This is a reduced test case from the vue e2e test suite.
### Expectation
The script should print `edited again!`
### Reality
It prints `edited!`. In Chrome without WebDriver BiDi and in Firefox with WebDriver BiDi it prints `edited again!`
### Puppeteer configuration file (if used)
```TypeScript
```
### Puppeteer version
24.1.1
### Node version
23.5.0
### Package manager
yarn
### Package manager version
1.22.22
### Operating system
macOS | bug,P1,confirmed,bidi | medium | Critical |
2,813,943,892 | ollama | Problems with deepseek-r1:671b, ollama keeps crashing on long answers | ### What is the issue?
Hi all,
I'm using an R960 with 2 TB of RAM, so RAM is not a problem here. I'm experiencing constant crashes with ollama 0.5.7 and deepseek-r1:671b, even after increasing the context window with the command `/set parameter num_ctx 4096`.
I also tried a second system, an R670 CSP with 1 TB of RAM, but the problem occurs in the same way.
I'm not able to use a GPU due to the massive size of the model; anyway, plenty of cores do the job for my current purposes.
The OSes are Ubuntu 22.04.5 and 24.04.1.
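For what it's worth, a minimal sketch (assuming the official `ollama` Python client) of pinning the same setting per request instead of interactively:

```python
import ollama  # official Python client, assumed installed

resp = ollama.chat(
    model="deepseek-r1:671b",
    messages=[{"role": "user", "content": "hello"}],
    options={"num_ctx": 4096},  # same as `/set parameter num_ctx 4096`
)
print(resp.message.content)  # attribute access on recent client versions
```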
### OS
Linux
### GPU
_No response_
### CPU
Intel
### Ollama version
0.5.7 | bug | low | Critical |
2,813,947,969 | tailwindcss | [v4] Ignore certain paths from scanning no longer supported | **What version of Tailwind CSS are you using?**
v4.0.0
**What build tool (or framework if it abstracts the build tool) are you using?**
Vite 6
**What version of Node.js are you using?**
22.13
**Describe your issue**
In v3 one could add a line with an exclamation mark `!` prefix in the `tailwind.config.js` to exclude certain files or paths from being scanned:
```js
content: [
  './resources/views/**/*.blade.php',
  '!./resources/views/somefoldername/*',
],
```
but that option seems to have vanished in v4. I know I can do `@import 'tailwindcss' source(none);` and then `@source '../../../resources/views';` which works but there is no way to exclude a subfolder from that path.
One workaround is to add all the subfolders by hand, but that's a bit of a PITA when the project grows, and you have to remember to add every new view subfolder to the config CSS.
2,813,973,259 | next.js | Webpack hot reloading stops working if projects is located in a folder starting with ".git" | ### Link to the code that reproduces this issue
https://github.com/lorissikora/empty-nexts-template
### To Reproduce
1. Create or navigate to a folder starting with `.git` (example `.git-projects`)
2. Create a new project inside of it (example `npx create-next-app@canary` and select defaults)
3. `next dev` (without `--turbo`)
4. Open example page in browser
5. Change `src/app/page.tsx`
6. Observe no change
### Current vs. Expected behavior
**Current**
If the parent folders of a nextjs application start with ".git", the webpack dev server (without `--turbo`) will not hot reload any changes made to the app.
Example paths of this might be:
- `/path/to/my/.git-projects/nextjs-app` (this is my setup, which is how I found this in the first place)
- `/path/to/my/.git-projects/a/b/c/nextjs-app`
- `/path/to/my/.git/nextjs-app`
This is not an issue when using Turbopack.
**Expected**
Hot reloading should work.
---
I honestly don't expect this to be fixed, but I would love for others who are facing a similar issue to find this and be spared multiple hours of wondering why it isn't working as expected.
### Provide environment information
```bash
/bin/sh: pnpm: command not found
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:11 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6020
Available memory (MB): 32768
Available CPU cores: 12
Binaries:
Node: 22.9.0
npm: 10.8.3
Yarn: 1.22.19
pnpm: N/A
Relevant Packages:
next: 15.2.0-canary.27 // Latest available version is detected (15.2.0-canary.27).
eslint-config-next: 15.2.0-canary.27
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
Was tested on `@latest` and `@canary`. Ignore the github repo, it is simply the default template of the current canary version. | Webpack | low | Minor |
2,813,973,892 | ollama | Ollama: torch.OutOfMemoryError: CUDA out of memory | ### What is the issue?
Running some tests using pytest with the following 6 models. What I find is that if I run all tests with each model before going on to the next model, the tests mostly work fine: 123/126 passed. But if I run each test against all 6 models sequentially and then go on to the next test, I see hangs or out-of-memory errors. Is this a known issue? I expect the order of running tests using Ollama should not matter.
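A minimal sketch of forcing the first (working) ordering in pytest, assuming the model name is a `parametrize` argument called `model` (hypothetical; the actual test suite isn't shown here):

```python
# conftest.py
def pytest_collection_modifyitems(session, config, items):
    """Group parametrized tests so that all tests for one model run
    before moving on to the next model."""
    def model_of(item):
        callspec = getattr(item, "callspec", None)
        return str(callspec.params.get("model", "")) if callspec else ""
    items.sort(key=model_of)  # stable sort keeps test order within a model
```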
ollama version is 0.5.7
pytest 8.3.4
| NVIDIA-SMI 550.144.03 Driver Version: 550.144.03 CUDA Version: 12.4 |
| 0 NVIDIA GeForce RTX 4070 Ti Off | 00000000:01:00.0 On | N/A |
Linux kennethpc 6.8.0-51-generic #52-Ubuntu SMP PREEMPT_DYNAMIC Thu Dec 5 13:09:44 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
qwen2:latest
qwen2.5:latest
mistral:latest
llama3-groq-tool-use:latest
llama3.2:latest
llama3.2:latest
Here are some examples of the errors. Sometimes I simply see hangs:
FAILED tests/_1_misc_test.py::test_t6_func[mistral:latest] - assert None is not None
FAILED tests/_1_misc_test.py::test_t6_func[llama3-groq-tool-use:latest] - assert None is not None
FAILED tests/_2_rag_test.py::test_t7_func[qwen2:latest-chroma] - assert 768 == 384
FAILED tests/_2_rag_test.py::test_t7_func[qwen2.5:latest-chroma] - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 11.72 GiB of which 41.50 MiB is free. Process 263255 has 3.40 GiB memory in use. Process 263532 has 6.09 GiB memory in use. In...
FAILED tests/_2_rag_test.py::test_t7_func[qwen2.5:latest-huggingface] - torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 11.72 GiB of which 41.50 MiB is free. Process 263255 has 3.40 GiB memory in use. Process 263532 has 6.09 GiB memory in use. In...
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7 | bug | low | Critical |
2,813,981,617 | puppeteer | [Feature]: make DOM events available | ### Feature description
#12682 proposes to add a `waitForEvent` to `Page`. This idea is a combination of two ideas:
1. Listening to events that are emitted by an element on the page and getting them out to the Puppeteer context
2. Finding the element itself and creating a Promise that resolves once an event (or a number of events) have been emitted/received.
I would like to focus on idea number 1 and take it a step further: get to a situation where we can say 'Puppeteer, please listen to events of type so-and-so that are emitted by in-page object such-and-such, and expose them in the Puppeteer context'. In other words:
```ts
// getting a handle
const button = await page.evaluateHandle(() => document.getElementById('button'))
const buttonThatEmitsClickEvents = await doSomethingInvolving(
button, // probably, right? unless this could become an 'instance method' on `ElementHandle` or `JSHandle`...
'click', // definitely, because we need to know what to listen to, we can't listen to all types of events
/* and maybe something else */
);
// the idea being that we can now
buttonThatEmitsClickEvents.addEventListener('click', e => {
// now the question is of course: which properties of the original click event made their way to this `e`? Probably not
// `preventDefault()`, but I think we should be able to get `offsetX` for example.
})
```
As you can see, I'm not sure what the API would look like exactly. Also I don't know if this even makes sense as a feature for Puppeteer. I suppose it could be argued that there is no need to get events from the page, because any event that happens in the page was probably caused by Puppeteer itself, so this adds no new information. On the other hand, it could also be argued that someone might want to test some functionality that produces custom DOM-like events.
What do you think?
(By the way: I really needed this for one of my projects, so I created [an npm package](https://www.npmjs.com/package/puppeteer-event-target-handle) that provides a form of the functionality that I am proposing.) | feature | low | Minor |
2,813,990,519 | pytorch | Dynamo performance test benchmarks reuse model state between eager and compiled | ### ๐ Describe the bug
See `model`:
https://github.com/pytorch/pytorch/blob/64cd81712ddf867d1c4dc46ba4554d40d6d7d610/benchmarks/dynamo/common.py#L3431-L3470
Later, it is also used by the `speedup_experiment`. This can cause issues with stateful models like convit_base: https://github.com/huggingface/pytorch-image-models/blob/d81da93c1640a504977b0ee494791e5c634ec63c/timm/models/convit.py#L67-L70, or DDP/FSDP wrappers.
If we deepcopy the model, a few cudagraphs perf benchmarks start to fail e.g. convit_base, llama, cm3leon_generate
`RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run.`
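For reference, a minimal sketch of the deepcopy isolation that triggers this; `model` and `example_inputs` are hypothetical stand-ins for the benchmark harness objects:

```python
import copy
import torch

eager_model = copy.deepcopy(model)                    # eager baseline gets its own state
compiled_model = torch.compile(copy.deepcopy(model))  # compiled run gets another copy

expected = eager_model(*example_inputs)
actual = compiled_model(*example_inputs)  # cudagraphs path fails here for some models
```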
### Versions
main
cc @chauhang @penguinwu | oncall: pt2 | low | Critical |
2,813,995,730 | PowerToys | SLNX solution files | ### Description of the new feature / enhancement
So, today I learned about the existence of a new format: SLNX. I opened this issue to track the what, when and why. I presume it'll be a big task.
### Scenario when this would be used?
Using new solutions?
### Supporting information
I find it strange to see how little info is online.
https://blog.ndepend.com/slnx-the-new-net-solution-xml-file-format/ | Area-Build,Needs-Triage | low | Minor |
2,814,009,755 | ollama | Support Request for jonatasgrosman/wav2vec2-large-xlsr-53-italian | (.venv) raphy@raohy:~/llama.cpp$ git clone https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian
Cloning into 'wav2vec2-large-xlsr-53-italian'...
remote: Enumerating objects: 99, done.
remote: Total 99 (delta 0), reused 0 (delta 0), pack-reused 99 (from 1)
Unpacking objects: 100% (99/99), 545.41 KiB | 1.55 MiB/s, done.
Filtering content: 100% (2/2), 2.35 GiB | 92.80 MiB/s, done.
(.venv) raphy@raohy:~/llama.cpp$ ollama create Modelfile
transferring model data
unpacking model metadata
Error: Models based on 'Wav2Vec2ForCTC' are not yet supported
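In the meantime, a minimal sketch of running the model directly with the `transformers` ASR pipeline (the audio filename is a hypothetical example):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/wav2vec2-large-xlsr-53-italian",
)
print(asr("sample_it.wav")["text"])  # hypothetical local audio file
```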
| model request | low | Critical |
2,814,039,333 | ollama | Support Janus-Pro-7b for vision models | Just announced and performing great with OCR
https://huggingface.co/deepseek-ai/Janus-Pro-7B | feature request | high | Critical |
2,814,052,976 | pytorch | Error instead of silent specializing when an unbacked dimension has a value range cardinality of one | ```
import torch
torch._dynamo.config.automatic_dynamic_local_pgo = False
@torch.compile()
def fn(x):
    return torch.cat([x, torch.ones(5, 5)])
x = torch.ones(5, 5)
torch._dynamo.decorators.mark_unbacked(x, 0)
torch._dynamo.decorators.mark_unbacked(x, 1)
fn(x)
```
Results in
```
L_x_: "f32[u0, 5][5, 1]cpu"
```
even though we explicitly marked x.size()[1] as unbacked.
We should error, similar to the constraint tests in `mark_dynamic`.
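For comparison, a minimal sketch (my own, not from the report above) of the analogous `mark_dynamic` repro that does error:

```python
import torch

@torch.compile()
def g(x):
    return torch.cat([x, torch.ones(5, 5)])

y = torch.ones(5, 5)
torch._dynamo.mark_dynamic(y, 1)  # dim 1 is required to stay dynamic
g(y)  # expected to raise ConstraintViolationError: cat forces y.size(1) == 5
```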
cc @chauhang @penguinwu @ezyang | oncall: pt2,module: dynamic shapes | low | Critical |
2,814,059,365 | next.js | Calling a server action during render tries to update `Router` component, causing an error | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/crazy-field-gnyj7q
### To Reproduce
1. Visit the reproduction app.
2. In the app preview, see the Next.js error pop-up:
> __Console Error__
>
> Cannot update a component (`Router`) while rendering a different component (`Demo`). To locate the bad setState() call inside `Demo`, follow the stack trace as described in https://react.dev/link/setstate-in-render
Note that there is no explicit `setState()` call inside the component.
### Current vs. Expected behavior
Currently, calling a server action during a render causes a React error (see above).
The expected behavior is that the server action returns a `Promise` (without causing a React error), and that the `Promise`, if properly cached, can be consumed with `use` (without causing a React error).
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.2.0-canary.27 // Latest available version is detected (15.2.0-canary.27).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
_No response_ | Runtime | low | Critical |
2,814,092,472 | transformers | meta-llama/Llama-3.2-11B-Vision-Instruct, device_map = 'auto', padding ruins _prepare_4d_causal_attention_mask_with_cache_position | ### System Info
### OS info
- `transformers` version: 4.48.1
- Platform: Linux-5.4.0-192-generic-x86_64-with-glibc2.31
- Python version: 3.12.0
- Huggingface_hub version: 0.26.5
- Safetensors version: 0.4.5
- Accelerate version: 1.2.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using parallel device setup
- Using GPU in script (8 gpu's in my case)
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
@amyeroberts , @qubvel @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
## Steps to replicate
```python
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["<|image|><|begin_of_text|>What do you see here?", "<|image|><|begin_of_text|>What do you see here but longer?"]
images = [image, image]
repo_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
processor = AutoProcessor.from_pretrained(repo_id)
model = MllamaForConditionalGeneration.from_pretrained(repo_id, device_map = 'auto')
batch = processor(text=texts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    model_output = model(
        input_ids = batch['input_ids'],
        attention_mask = batch['attention_mask'],
        pixel_values = batch['pixel_values'],
        aspect_ratio_ids = batch['aspect_ratio_ids'],
        aspect_ratio_mask = batch['aspect_ratio_mask'],
        cross_attention_mask = batch['cross_attention_mask'],
    )
```
### Full traceback:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[8], line 2
1 with torch.no_grad():
----> 2 model_output = model(
3 input_ids = batch['input_ids'],
4 attention_mask = batch['attention_mask'],
5 pixel_values = batch['pixel_values'],
6 aspect_ratio_ids = batch['aspect_ratio_ids'],
7 aspect_ratio_mask = batch['aspect_ratio_mask'],
8 cross_attention_mask = batch['cross_attention_mask'],
9 # labels = batch['labels']
10 )
12 # model_output = model.generate(**batch, max_new_tokens = 64)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
168 output = module._old_forward(*args, **kwargs)
169 else:
--> 170 output = module._old_forward(*args, **kwargs)
171 return module._hf_hook.post_forward(module, output)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:2131, in MllamaForConditionalGeneration.forward(self, input_ids, pixel_values, aspect_ratio_mask, aspect_ratio_ids, attention_mask, cross_attention_mask, cross_attention_states, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, cache_position, num_logits_to_keep)
2128 cross_attention_mask = cross_attention_mask[:, :, cache_position]
2129 full_text_row_masked_out_mask = full_text_row_masked_out_mask[:, :, cache_position]
-> 2131 outputs = self.language_model(
2132 input_ids=input_ids,
2133 attention_mask=attention_mask,
2134 position_ids=position_ids,
2135 cross_attention_states=cross_attention_states,
2136 cross_attention_mask=cross_attention_mask,
2137 full_text_row_masked_out_mask=full_text_row_masked_out_mask,
2138 past_key_values=past_key_values,
2139 use_cache=use_cache,
2140 inputs_embeds=inputs_embeds,
2141 labels=labels,
2142 output_hidden_states=output_hidden_states,
2143 output_attentions=output_attentions,
2144 return_dict=return_dict,
2145 cache_position=cache_position,
2146 num_logits_to_keep=num_logits_to_keep,
2147 )
2149 return outputs
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:1939, in MllamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, cross_attention_states, cross_attention_mask, full_text_row_masked_out_mask, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, cache_position, num_logits_to_keep, **loss_kwargs)
1936 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1938 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
-> 1939 outputs = self.model(
1940 input_ids=input_ids,
1941 cross_attention_states=cross_attention_states,
1942 attention_mask=attention_mask,
1943 position_ids=position_ids,
1944 cross_attention_mask=cross_attention_mask,
1945 full_text_row_masked_out_mask=full_text_row_masked_out_mask,
1946 past_key_values=past_key_values,
1947 inputs_embeds=inputs_embeds,
1948 use_cache=use_cache,
1949 output_attentions=output_attentions,
1950 output_hidden_states=output_hidden_states,
1951 return_dict=return_dict,
1952 cache_position=cache_position,
1953 )
1955 hidden_states = outputs[0]
1956 logits = self.lm_head(hidden_states[:, -num_logits_to_keep:, :]).float()
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1553, in Module._wrapped_call_impl(self, *args, **kwargs)
1551 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1552 else:
-> 1553 return self._call_impl(*args, **kwargs)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1562, in Module._call_impl(self, *args, **kwargs)
1557 # If we don't have any hooks, we want to skip the rest of the logic in
1558 # this function, and just call forward.
1559 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1560 or _global_backward_pre_hooks or _global_backward_hooks
1561 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1562 return forward_call(*args, **kwargs)
1564 try:
1565 result = None
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:1758, in MllamaTextModel.forward(self, input_ids, attention_mask, position_ids, cross_attention_states, cross_attention_mask, full_text_row_masked_out_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict, cache_position)
1755 if position_ids is None:
1756 position_ids = cache_position.unsqueeze(0)
-> 1758 causal_mask = self._update_causal_mask(
1759 attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
1760 )
1762 # create position embeddings to be shared across the decoder layers
1763 position_embeddings = self.rotary_emb(hidden_states, position_ids)
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:1111, in MllamaPreTrainedModel._update_causal_mask(self, attention_mask, input_tensor, cache_position, past_key_values, output_attentions)
1104 target_length = (
1105 attention_mask.shape[-1]
1106 if isinstance(attention_mask, torch.Tensor)
1107 else past_seen_tokens + sequence_length + 1
1108 )
1110 # In case the provided `attention` mask is 2D, we generate a causal mask here (4D).
-> 1111 causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position(
1112 attention_mask,
1113 sequence_length=sequence_length,
1114 target_length=target_length,
1115 dtype=dtype,
1116 device=device,
1117 cache_position=cache_position,
1118 batch_size=input_tensor.shape[0],
1119 )
1121 if (
1122 self.config._attn_implementation == "sdpa"
1123 and attention_mask is not None
(...)
1128 # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
1129 # Details: https://github.com/pytorch/pytorch/issues/110213
1130 min_dtype = torch.finfo(dtype).min
File ~/mtrix2/mt-rix/.venv/lib/python3.12/site-packages/transformers/models/mllama/modeling_mllama.py:1187, in MllamaPreTrainedModel._prepare_4d_causal_attention_mask_with_cache_position(attention_mask, sequence_length, target_length, dtype, device, cache_position, batch_size, **kwargs)
1185 causal_mask = causal_mask.clone() # copy to contiguous memory for in-place edit
1186 mask_length = attention_mask.shape[-1]
-> 1187 padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
1188 padding_mask = padding_mask == 0
1189 causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
1190 padding_mask, min_dtype
1191 )
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
```
### Expected behavior
## Expected Behaviour
Described in the "Potential mitigation" note below: basically, the causal_mask should handle device placement the same way the attention_mask does in multi-GPU setups.
## Notes:
1. I've also tried using ```processor.tokenizer.padding_side = 'left'```; it throws the same error
2. Potential mitigation: transfer `attention_mask` (and `cache_position`) within `MllamaPreTrainedModel._prepare_4d_causal_attention_mask_with_cache_position` to the target `device` if provided; updated code:
```python
def _prepare_4d_causal_attention_mask_with_cache_position(
    attention_mask: torch.Tensor,
    sequence_length: int,
    target_length: int,
    dtype: torch.dtype,
    device: torch.device,
    cache_position: torch.Tensor,
    batch_size: int,
    **kwargs,
):
    """
    Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape
    `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing.

    Args:
        attention_mask (`torch.Tensor`):
            A 2D attention mask of shape `(batch_size, key_value_length)` or a 4D attention mask of shape
            `(batch_size, 1, query_length, key_value_length)`.
        sequence_length (`int`):
            The sequence length being processed.
        target_length (`int`):
            The target length: when generating with static cache, the mask should be as long as the static cache,
            to account for the 0 padding, the part of the cache that is not filled yet.
        dtype (`torch.dtype`):
            The dtype to use for the 4D attention mask.
        device (`torch.device`):
            The device to place the 4D attention mask on.
        cache_position (`torch.Tensor`):
            Indices depicting the position of the input sequence tokens in the sequence.
        batch_size (`torch.Tensor`):
            Batch size.
    """
    if attention_mask is not None:
        attention_mask = attention_mask.to(device)
        cache_position = cache_position.to(device)
    if attention_mask is not None and attention_mask.dim() == 4:
        # In this case we assume that the mask comes already in inverted form and requires no inversion or slicing.
        causal_mask = attention_mask
    else:
        min_dtype = torch.finfo(dtype).min
        causal_mask = torch.full(
            (sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device
        )
        if sequence_length != 1:
            causal_mask = torch.triu(causal_mask, diagonal=1)
        causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
        causal_mask = causal_mask[None, None, :, :].expand(batch_size, 1, -1, -1)
        if attention_mask is not None:
            causal_mask = causal_mask.clone()  # copy to contiguous memory for in-place edit
            mask_length = attention_mask.shape[-1]
            padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
            padding_mask = padding_mask == 0
            causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
                padding_mask, min_dtype
            )
    return causal_mask
``` | bug | low | Critical |
2,814,107,366 | ollama | s390x support for the install.sh | It would be good if the install.sh script be capable of ollama install in s390x machines, it would make life easier for multiple developers currently working on it.
curl -fsSL https://ollama.com/install.sh | sh
You may say that you currently don't have an s390x to develop on; you can get a trial one for 120 days [here](https://community.ibm.com/zsystems/form/l1cc-oss-vm-request/).
Building from source with the `make -j 5` command works fine, just as it does on other Linux arch/x86/arm machines.
Thanks | feature request | low | Minor |
2,814,113,911 | flutter | Attestation failed while packaging Flutter 3.29.0-0.2.pre beta release | Google internal only logs: https://ci.chromium.org/ui/p/dart-internal/builders/flutter/Windows%20Production%20Engine%20Drone/8248/infra
```
flutter_windows_3.29.0-0.2.pre-beta.zip provenance rejected: NoAttestationsProvided: Caller didn't provide any attestation with subject matching the digest of the requested artifact: '_:{sha256:a475487459ea10bd47a58763387b1d4b3545670e8b818637145e57f7c0ee485e}'.
``` | team-release | low | Critical |
2,814,123,483 | yt-dlp | YouTube Clipped audio? (over 0dBFS) | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Hi all!
I have recently been investigating something interesting I noticed with the music I downloaded from YouTube using various tools such as this one.
The issue is that some of the songs I download go over 0dBFS! (here is the one from the example: [Euphoria](https://music.youtube.com/watch?v=KZ-daHDx6HA&si=0MnqbM0U8FqXOygB))
That data, which could seem to be lost because of the clipping, can actually be recovered if I bring the volume down using an audio editor such as Adobe Audition.
Here is the raw audio as it comes from youtubedl in webm format, encoded with OPUS: [image](https://imgur.com/a/VD7MHdu)
And here it is when I lower its overall volume: [image](https://imgur.com/a/GTso9pK)
As you can see, the clipped data can actually be recovered; that is, I think, because of the 32-bit float properties the file has, as you can see here: [image](https://imgur.com/a/pxswje2)
I have yet to try capturing YouTube's audio with a 32-bit-capable sound card, but still, it would NOT make any sense for the audio played by YouTube to be anything like the downloaded file, i.e. clipped.
I have been behind this for a while but don't seem to be getting any closer.
What are your thoughts on this?
There is obviously no way YouTube songs are uploaded in a 32-bit floating point format... The standard is 24-bit, and 16-bit for distribution.
Are we maybe looking at what is called the "loudness war"?
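For anyone who wants to reproduce the measurement, a minimal sketch (assuming `ffmpeg` and `numpy` are available; the filename is a hypothetical example) that decodes the download to 32-bit float PCM and reports the sample peak:

```python
import subprocess
import numpy as np

def peak_dbfs(path):
    # decode to mono 32-bit float PCM on stdout; OPUS decodes to float,
    # so sample values above 1.0 (i.e. above 0 dBFS) are preserved
    raw = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path, "-f", "f32le", "-ac", "1", "-"],
        capture_output=True, check=True,
    ).stdout
    samples = np.frombuffer(raw, dtype=np.float32)
    return 20 * np.log10(np.abs(samples).max())

print(f"peak: {peak_dbfs('euphoria.webm'):+.2f} dBFS")  # hypothetical filename
```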
### Provide verbose output that clearly demonstrates the problem
(It's not actually a problem with yt-dlp itself)
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
``` | question | low | Critical |
2,814,126,756 | flutter | When releasing 3.29 beta provenance rejected | https://ci.chromium.org/ui/p/dart-internal/builders/flutter/Windows%20Production%20Engine%20Drone/8248/infra
```
flutter_windows_3.29.0-0.2.pre-beta.zip provenance rejected: NoAttestationsProvided: Caller didn't provide any attestation with subject matching the digest of the requested artifact: '_:{sha256:a475487459ea10bd47a58763387b1d4b3545670e8b818637145e57f7c0ee485e}'
```
Google owned bug that covers some of how provenance works in builds https://b.corp.google.com/issues/309500637
Google owned document about provenance. [go/bcid-for-flutter-in-google3](http://goto.google.com/bcid-for-flutter-in-google3) | c: regression,a: release,team-release | low | Critical |
2,814,127,167 | pytorch | Behavior Difference with Flex Attention + Sequence Packing | ### ๐ Describe the bug
### Issue:
I'm observing odd attention map patterns in my flex attention sequence packing runs that aren't there on (what should be) identical F.SDPA runs:
F.SDPA maps at step 24k

Flex Attention maps a step 24K
ย 
I created these maps with this function: https://gist.github.com/zaptrem/932adb082755574409e0084e8647757c
It calculates attention manually (so we can see the attention maps), compares the final result to that of Flex Attention to verify the maps are accurate, then plots them.
Here is the image it outputs while running it on my model:

And here is the text it prints:
https://gist.github.com/zaptrem/cd8147ae21b569287dfa841eba519148
### Background:
- I'm training a causal transformer. In order to enable efficient training on variable sequence lengths I decided to add sequence packing with flex attention + BlockMasks.
- I trained two models for comparison: F.SDPA with batch size 44 where each sequence is 512 long, and Flex Attention batch size 22 where each sequence is length 1024 (two 512-long sequences packed together). Causal+document based BlockMasks are applied as specified here: https://gist.github.com/zaptrem/ddf6fb358104dda3866597ba1c34fa40
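For context, here is a minimal sketch of the causal + document mask pattern described above; this is my own reconstruction under the stated shapes (22 rows of two packed 512-token documents), not the exact code from the gist:

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

B, S = 22, 1024
doc_ids = torch.zeros(B, S, dtype=torch.long, device="cuda")
doc_ids[:, 512:] = 1  # two 512-long sequences packed per row

def causal_document_mask(b, h, q_idx, kv_idx):
    causal = q_idx >= kv_idx
    same_doc = doc_ids[b, q_idx] == doc_ids[b, kv_idx]
    return causal & same_doc

block_mask = create_block_mask(causal_document_mask, B=B, H=None, Q_LEN=S, KV_LEN=S)
# used as: out = flex_attention(q, k, v, block_mask=block_mask)
```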
The losses are similar (flex attention is consistently slightly lower), but the attention maps are not, and though it's hard to tell this early in training, I believe the outputs from the flex attention model are worse.
### Versions
poetry run python collect_env.py
Collecting environment information...
PyTorch version: 2.7.0.dev20250127+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1023-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 104
On-line CPU(s) list: 0-103
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (SapphireRapids)
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 104
Socket(s): 1
Stepping: 4
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3.3 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 416 MiB (104 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-103
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] audio-diffusion-pytorch==0.1.3
[pip3] ema-pytorch==0.6.5
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnxruntime==1.18.1
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-metric-learning==2.5.0
[pip3] pytorch-triton==3.2.0+gitb2684bf3
[pip3] torch==2.7.0.dev20250127+cu124
[pip3] torch-audiomentations==0.11.1
[pip3] torch-optimi==0.2.1
[pip3] torch-pitch-shift==1.2.4
[pip3] torch-stoi==0.2.3
[pip3] torchaudio==2.6.0.dev20250127+cu124
[pip3] torchcde==0.2.5
[pip3] torchcfm==1.0.5
[pip3] torchdiffeq==0.2.4
[pip3] torchdyn==1.0.6
[pip3] torchmetrics==1.4.0.post0
[pip3] torchsde==0.2.6
[pip3] torchvision==0.22.0.dev20250127+cu124
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,814,140,927 | rust | generic_const_items: Generic function pointer leads to ICEs during codegen | Reproducer:
```rs
#![feature(generic_const_items)]
const _IDENTITY<T>: fn(T) -> T = |x| x;
fn main() {}
```
Start of compiler output (stopping before backtrace):
```
warning: the feature `generic_const_items` is incomplete and may not be safe to use and/or cause compiler crashes
--> oi.rs:1:12
|
1 | #![feature(generic_const_items)]
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #113521 <https://github.com/rust-lang/rust/issues/113521> for more information
= note: `#[warn(incomplete_features)]` on by default
error: internal compiler error: compiler/rustc_codegen_llvm/src/context.rs:1236:21: `fn_abi_of_instance(<{[email protected]:3:34: 3:37} as FnOnce<(T,)>>::call_once - shim, [])` failed: Layout(Unknown(T/#0))
--> /home/fmease/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
|
250 | extern "rust-call" fn call_once(self, args: Args) -> Self::Output;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
<details><summary>Rest of compiler output (includes backtrace)</summary>
```
warning: the feature `generic_const_items` is incomplete and may not be safe to use and/or cause compiler crashes
--> oi.rs:1:12
|
1 | #![feature(generic_const_items)]
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #113521 <https://github.com/rust-lang/rust/issues/113521> for more information
= note: `#[warn(incomplete_features)]` on by default
error: internal compiler error: compiler/rustc_codegen_llvm/src/context.rs:1236:21: `fn_abi_of_instance(<{[email protected]:3:34: 3:37} as FnOnce<(T,)>>::call_once - shim, [])` failed: Layout(Unknown(T/#0))
--> /home/fmease/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
|
250 | extern "rust-call" fn call_once(self, args: Args) -> Self::Output;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
thread 'rustc' panicked at compiler/rustc_codegen_llvm/src/context.rs:1236:21:
Box<dyn Any>
stack backtrace:
0: 0x74ca1ac8ea90 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h2f23fd9f9d9d249c
1: 0x74ca1b414f26 - core::fmt::write::hda581013c22cc38e
2: 0x74ca1c378bd1 - std::io::Write::write_fmt::h7b67a2c48701ad74
3: 0x74ca1ac8e8f2 - std::sys::backtrace::BacktraceLock::print::hf557d5f06e408e4b
4: 0x74ca1ac90d72 - std::panicking::default_hook::{{closure}}::h003adb2133b1767b
5: 0x74ca1ac90bfa - std::panicking::default_hook::h16009a902eb48a3c
6: 0x74ca19e42759 - std[b112ec976dab40f4]::panicking::update_hook::<alloc[fdb5804898039ca3]::boxed::Box<rustc_driver_impl[8694186ee707eb2f]::install_ice_hook::{closure#1}>>::{closure#0}
7: 0x74ca1ac918f3 - std::panicking::rust_panic_with_hook::h1d06543d16cce998
8: 0x74ca19e7d421 - std[b112ec976dab40f4]::panicking::begin_panic::<rustc_errors[98d48f3eae9e9f2e]::ExplicitBug>::{closure#0}
9: 0x74ca19e72326 - std[b112ec976dab40f4]::sys::backtrace::__rust_end_short_backtrace::<std[b112ec976dab40f4]::panicking::begin_panic<rustc_errors[98d48f3eae9e9f2e]::ExplicitBug>::{closure#0}, !>
10: 0x74ca19e720df - std[b112ec976dab40f4]::panicking::begin_panic::<rustc_errors[98d48f3eae9e9f2e]::ExplicitBug>
11: 0x74ca19e87351 - <rustc_errors[98d48f3eae9e9f2e]::diagnostic::BugAbort as rustc_errors[98d48f3eae9e9f2e]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x74ca1a3e646c - <rustc_errors[98d48f3eae9e9f2e]::DiagCtxtHandle>::span_bug::<rustc_span[863c7cf5cfba53d9]::span_encoding::Span, alloc[fdb5804898039ca3]::string::String>
13: 0x74ca1a46c4b7 - rustc_middle[a35b023f906b1e15]::util::bug::opt_span_bug_fmt::<rustc_span[863c7cf5cfba53d9]::span_encoding::Span>::{closure#0}
14: 0x74ca1a451bfa - rustc_middle[a35b023f906b1e15]::ty::context::tls::with_opt::<rustc_middle[a35b023f906b1e15]::util::bug::opt_span_bug_fmt<rustc_span[863c7cf5cfba53d9]::span_encoding::Span>::{closure#0}, !>::{closure#0}
15: 0x74ca1a451a8b - rustc_middle[a35b023f906b1e15]::ty::context::tls::with_context_opt::<rustc_middle[a35b023f906b1e15]::ty::context::tls::with_opt<rustc_middle[a35b023f906b1e15]::util::bug::opt_span_bug_fmt<rustc_span[863c7cf5cfba53d9]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
16: 0x74ca18ed1a57 - rustc_middle[a35b023f906b1e15]::util::bug::span_bug_fmt::<rustc_span[863c7cf5cfba53d9]::span_encoding::Span>
17: 0x74ca19cc5786 - <rustc_codegen_llvm[5b2e353c6d9ab786]::context::CodegenCx as rustc_middle[a35b023f906b1e15]::ty::layout::FnAbiOfHelpers>::handle_fn_abi_err
18: 0x74ca19c9c6c8 - <rustc_codegen_llvm[5b2e353c6d9ab786]::context::CodegenCx as rustc_middle[a35b023f906b1e15]::ty::layout::FnAbiOf>::fn_abi_of_instance::{closure#0}
19: 0x74ca1c31a5ce - <rustc_codegen_llvm[5b2e353c6d9ab786]::context::CodegenCx as rustc_codegen_ssa[763a256790c9aaeb]::traits::declare::PreDefineCodegenMethods>::predefine_fn
20: 0x74ca1c317b2e - rustc_codegen_llvm[5b2e353c6d9ab786]::base::compile_codegen_unit::module_codegen
21: 0x74ca1c402008 - <rustc_codegen_llvm[5b2e353c6d9ab786]::LlvmCodegenBackend as rustc_codegen_ssa[763a256790c9aaeb]::traits::backend::ExtraBackendMethods>::compile_codegen_unit
22: 0x74ca1c3fe371 - <rustc_codegen_llvm[5b2e353c6d9ab786]::LlvmCodegenBackend as rustc_codegen_ssa[763a256790c9aaeb]::traits::backend::CodegenBackend>::codegen_crate
23: 0x74ca1c405074 - <rustc_interface[7b6200b50ffd4527]::queries::Linker>::codegen_and_build_linker
24: 0x74ca1c388a5d - rustc_interface[7b6200b50ffd4527]::passes::create_and_enter_global_ctxt::<core[1c802f461cf195ce]::option::Option<rustc_interface[7b6200b50ffd4527]::queries::Linker>, rustc_driver_impl[8694186ee707eb2f]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
25: 0x74ca1c3c3dd3 - rustc_interface[7b6200b50ffd4527]::interface::run_compiler::<(), rustc_driver_impl[8694186ee707eb2f]::run_compiler::{closure#0}>::{closure#1}
26: 0x74ca1c2857f5 - std[b112ec976dab40f4]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[7b6200b50ffd4527]::util::run_in_thread_with_globals<rustc_interface[7b6200b50ffd4527]::util::run_in_thread_pool_with_globals<rustc_interface[7b6200b50ffd4527]::interface::run_compiler<(), rustc_driver_impl[8694186ee707eb2f]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
27: 0x74ca1c2854d9 - <<std[b112ec976dab40f4]::thread::Builder>::spawn_unchecked_<rustc_interface[7b6200b50ffd4527]::util::run_in_thread_with_globals<rustc_interface[7b6200b50ffd4527]::util::run_in_thread_pool_with_globals<rustc_interface[7b6200b50ffd4527]::interface::run_compiler<(), rustc_driver_impl[8694186ee707eb2f]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[1c802f461cf195ce]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
28: 0x74ca1c284c6f - std::sys::pal::unix::thread::Thread::new::thread_start::hf2e53aa54a2bf93d
29: 0x74ca166a339d - <unknown>
30: 0x74ca1672849c - <unknown>
31: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.86.0-nightly (1e9b0177d 2025-01-24) running on x86_64-unknown-linux-gnu
query stack during panic:
end of query stack
error: aborting due to 1 previous error; 1 warning emitted
error: process exited unsuccessfully: exit status: 101
```
</details>
Interestingly(?), this ICE doesn't occur if we don't coerce the closure to a function pointer:
```rs
#![feature(generic_const_items, type_alias_impl_trait)]
type Fn<T> = impl FnOnce(T) -> T;
const _IDENTITY<T>: Fn<T> = |x| x;
fn main() { /* _IDENTITY/*::<i32>*/(23); */ }
``` | I-ICE,T-compiler,C-bug,requires-incomplete-features,A-monomorphization,F-generic_const_items | low | Critical |
2,814,163,331 | pytorch | export serde turns hop's tuple arg into list | ### 🐛 Describe the bug
See below repro:
```python
import io
import torch
class Simple(torch.nn.Module):
def forward(self, ci, a, b):
def cond_fn(i, x, y):
return i > 0
def body_fn(i, x, y):
return i - 1, x + y, y - x
return torch._higher_order_ops.while_loop(cond_fn, body_fn, (ci, a, b))
example_inputs = (
torch.tensor(1),
torch.randn(10, 20),
torch.randn(10, 20),
)
ep = torch.export.export(Simple(), example_inputs)
print(ep)
buffer = io.BytesIO()
torch.export.save(ep, buffer)
buffer.seek(0)
loaded_ep = torch.export.load(buffer)
print(loaded_ep)
```
Before:
```
while_loop = torch.ops.higher_order.while_loop(while_loop_cond_graph_0, while_loop_body_graph_0, (ci, a, b), ()); while_loop_cond_graph_0 = while_loop_body_graph_0 = ci = a = b = None
```
After serde:
```
while_loop = torch.ops.higher_order.while_loop(while_loop_cond_graph_0, while_loop_body_graph_0, [ci, a, b], []); while_loop_cond_graph_0 = while_loop_body_graph_0 = ci = a = b = None
```
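For reference, the mismatch can also be observed programmatically; a quick sketch, run after the repro above, that just inspects the deserialized graph:
```python
# Find the while_loop call in the round-tripped graph and check how its
# carried-inputs argument was deserialized.
node = next(
    n for n in loaded_ep.graph.nodes
    if n.op == "call_function" and n.target is torch.ops.higher_order.while_loop
)
print(type(node.args[2]))  # expected tuple, but comes back as list after serde
```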
### Versions
on master
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo | oncall: pt2,oncall: export | low | Critical |
2,814,178,540 | ollama | Add support for Qwen 2.5 VL models (3B, 7B and 32B) instruct versions | Hello, just hours after their release, I would like to suggest adding support for the Qwen2.5-VL models.
**[Qwen2.5-VL - Hugging Face collection](https://huggingface.co/collections/Qwen/qwen25-vl-6795ffac22b334a837c0f9a5)** | model request | low | Major |
2,814,190,751 | deno | AWS S3 PutObjectCommand extremely slow | Version: Deno 2.1.7
I am using `"@aws-sdk/client-s3": "npm:@aws-sdk/[email protected]",` to upload files to S3.
Here is sample code, uploading a 4KB file takes 6 seconds. The time is taken on step `s3Client.send(command);`.
I also tried uploading a 100MB file, took ~7.5 second. So the upload speed seems fine. Could it be issue with connection establish?
I ran the same code with bun, took 0.8ms to upload a 4KB file.
```ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
// Configure the S3 client
const s3Client = new S3Client({
region: Deno.env.get('AWS_REGION') || 'us-east-1',
credentials: {
accessKeyId: Deno.env.get('AWS_ACCESS_KEY_ID') || '',
secretAccessKey: Deno.env.get('AWS_SECRET_ACCESS_KEY') || ''
}
});
// Function to upload a file to S3
async function uploadFileToS3(bucketName: string, key: string, filePath: string) {
try {
// Read the file
const fileContent = await Deno.readFile(filePath);
const command = new PutObjectCommand({
Bucket: bucketName,
Key: key,
Body: fileContent
});
console.time('send');
const response = await s3Client.send(command);
console.timeEnd('send');
console.log('File uploaded successfully:', response);
} catch (error) {
console.error('Error uploading file:', error);
throw error; // Re-throw the error for better error handling
}
}
// Example usage
const bucketName = Deno.env.get('AWS_BUCKET_NAME') || '';
await uploadFileToS3(
bucketName,
'wacv24-2686.mp4',
'/Users/hk/Downloads/wacv24-2686.mp4'
);
``` | bug,node compat | low | Critical |
2,814,194,397 | yt-dlp | "Error parsing Opus packet header." + "Error during demuxing: Error number -10054 occurred" (while using "--download-sections") | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting a bug unrelated to a specific site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
[Error during demuxing: Error number -10054 occurred]: When trying to **only partially** download the YouTube video (even when '--download-sections' is "*0-inf", i.e. the whole video), the muxed audio in the MKV is corrupted: only part of the audio stream plays, and after some point the video becomes silent. (The problem does not occur if '--download-sections' is not specified at all.)

[Command Used]:
yt-dlp -f "299+251" "https://www.youtube.com/watch?v=VQSL9kGcjuw" --download-sections "*0-1:40" -vU
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-f', '299+251', 'https://www.youtube.com/watch?v=VQSL9kGcjuw', '--download-sections', '*0-1:40', '-vU']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [3b4531934] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg N-118369-g959b799c8d-20250127 (setts), ffprobe N-118369-g959b799c8d-20250127, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[youtube] Extracting URL: https://www.youtube.com/watch?v=VQSL9kGcjuw
[youtube] VQSL9kGcjuw: Downloading webpage
[youtube] VQSL9kGcjuw: Downloading tv client config
[youtube] VQSL9kGcjuw: Downloading player 37364e28
[youtube] VQSL9kGcjuw: Downloading tv player API JSON
[youtube] VQSL9kGcjuw: Downloading ios player API JSON
[debug] Loading youtube-nsig.37364e28 from cache
[debug] [youtube] Decrypted nsig NLkeZs0AV9UD8G6r => gLpmXbsDO4v26A
[debug] Loading youtube-nsig.37364e28 from cache
[debug] [youtube] Decrypted nsig nj6OBraCMFPNja4b => iC1YrLCUkQcavA
[debug] [youtube] VQSL9kGcjuw: ios client https formats require a GVS PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a GVS PO Token for this client with --extractor-args "youtube:po_token=ios.gvs+XXX". For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[youtube] VQSL9kGcjuw: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[info] VQSL9kGcjuw: Downloading 1 format(s): 299+251
[info] VQSL9kGcjuw: Downloading 1 time ranges: 0.0-100.0
[debug] Invoking ffmpeg downloader on "https://rr4---sn-ivuoxu-ua86.googlevideo.com/videoplayback?ex pire=1738038428&ei=PAiYZ-SMNsSrp-oPu_DO2A8&ip=2a00%3Aa040%3A1a5%3A21c3%3A39b5%3Ad8bd%3A7e44%3Ad851&i d=o-AJTgL_gC0gQO_dNGX4vy0byPW_bzDj3M6rYENOFSDsxl&itag=299&aitags=134%2C136%2C160%2C243%2C298%2C299&s ource=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1738016828%2C&mh=YU&mm=31%2C29&mn=sn-ivuoxu-ua 86%2Csn-ua87sn76&ms=au%2Crdu&mv=m&mvi=4&pl=52&rms=au%2Cau&initcwndbps=3415000&bui=AY2Et-O6bK1JzzCqSt VsEOEZbY57LtASan_N9J1w7VtyRVjAFyivbMcWD7Fm6qchepL7aM0VI1QUKqss&vprv=1&svpuc=1&mime=video%2Fmp4&ns=cq IO7yImhOcGUBGH09SvWrYQ&rqh=1&gir=yes&clen=64897328&dur=114.316&lmt=1663275351105969&mt=1738016446&fv ip=5&keepalive=yes&lmw=1&fexp=51326932%2C51353498%2C51371294&c=TVHTML5&sefc=1&txp=5319224&n=iC1YrLCU kQcavA&sparams=expire%2Cei%2Cip%2Cid%2Caitags%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cmim e%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRAIgPkG3uCyPAdmIuK_B5iHd9yoOhbF55mlb42BA6BpG2usCIC XZ3L7scth0s_z_DMRQz-a1ZkZmZ38BWF0b4t1S3yrK&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2C initcwndbps&lsig=AGluJ3MwRAIgfGa-i0FsEuko43Dw8fvHKm71hTvbX8PmxRtty7pL7vsCIAS7iNEiGgIRq5rNtRDNLIl0mDP QWXDgUsgqnDpfBv8y", "https://rr4---sn-ivuoxu-ua86.googlevideo.com/videoplayback?expire=1738038428&ei =PAiYZ-SMNsSrp-oPu_DO2A8&ip=2a00%3Aa040%3A1a5%3A21c3%3A39b5%3Ad8bd%3A7e44%3Ad851&id=o-AJTgL_gC0gQO_d NGX4vy0byPW_bzDj3M6rYENOFSDsxl&itag=251&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=17380 16828%2C&mh=YU&mm=31%2C29&mn=sn-ivuoxu-ua86%2Csn-ua87sn76&ms=au%2Crdu&mv=m&mvi=4&pl=52&rms=au%2Cau&i nitcwndbps=3415000&bui=AY2Et-O6bK1JzzCqStVsEOEZbY57LtASan_N9J1w7VtyRVjAFyivbMcWD7Fm6qchepL7aM0VI1QUK qss&vprv=1&svpuc=1&mime=audio%2Fwebm&ns=cqIO7yImhOcGUBGH09SvWrYQ&rqh=1&gir=yes&clen=1870769&dur=114. 361&lmt=1663275544532607&mt=1738016446&fvip=5&keepalive=yes&lmw=1&fexp=51326932%2C51353498%2C5137129 4&c=TVHTML5&sefc=1&txp=5318224&n=iC1YrLCUkQcavA&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequ iressl%2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRAIgISEmtOv xfrrfmpd-8nLs8VkV_tWazpnj-vj9KqHnbO8CIFxqMZmsRoe_SxWmFo5kn5_6BhJYkPeOaflZCdXgLNPV&lsparams=met%2Cmh% 2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRAIgfGa-i0FsEuko43Dw8fvHKm71hTvbX8Pm xRtty7pL7vsCIAS7iNEiGgIRq5rNtRDNLIl0mDPQWXDgUsgqnDpfBv8y"
[download] Destination: Koyori Hips [VQSL9kGcjuw].mkv
[debug] ffmpeg command line: ffmpeg -y -loglevel verbose -headers "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Sec-Fetch-Mode: navigate
" -t 100.0 -i "https://rr4---sn-ivuoxu-ua86.googlevideo.com/videoplayback?expire=1738038428&ei=PAiYZ -SMNsSrp-oPu_DO2A8&ip=2a00%3Aa040%3A1a5%3A21c3%3A39b5%3Ad8bd%3A7e44%3Ad851&id=o-AJTgL_gC0gQO_dNGX4vy 0byPW_bzDj3M6rYENOFSDsxl&itag=299&aitags=134%2C136%2C160%2C243%2C298%2C299&source=youtube&requiressl =yes&xpc=EgVo2aDSNQ%3D%3D&met=1738016828%2C&mh=YU&mm=31%2C29&mn=sn-ivuoxu-ua86%2Csn-ua87sn76&ms=au%2 Crdu&mv=m&mvi=4&pl=52&rms=au%2Cau&initcwndbps=3415000&bui=AY2Et-O6bK1JzzCqStVsEOEZbY57LtASan_N9J1w7V tyRVjAFyivbMcWD7Fm6qchepL7aM0VI1QUKqss&vprv=1&svpuc=1&mime=video%2Fmp4&ns=cqIO7yImhOcGUBGH09SvWrYQ&r qh=1&gir=yes&clen=64897328&dur=114.316&lmt=1663275351105969&mt=1738016446&fvip=5&keepalive=yes&lmw=1 &fexp=51326932%2C51353498%2C51371294&c=TVHTML5&sefc=1&txp=5319224&n=iC1YrLCUkQcavA&sparams=expire%2C ei%2Cip%2Cid%2Caitags%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Ccle n%2Cdur%2Clmt&sig=AJfQdSswRAIgPkG3uCyPAdmIuK_B5iHd9yoOhbF55mlb42BA6BpG2usCICXZ3L7scth0s_z_DMRQz-a1Zk ZmZ38BWF0b4t1S3yrK&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3M wRAIgfGa-i0FsEuko43Dw8fvHKm71hTvbX8PmxRtty7pL7vsCIAS7iNEiGgIRq5rNtRDNLIl0mDPQWXDgUsgqnDpfBv8y" -head ers "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Ch rome/91.0.4472.114 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Sec-Fetch-Mode: navigate
" -t 100.0 -i "https://rr4---sn-ivuoxu-ua86.googlevideo.com/videoplayback?expire=1738038428&ei=PAiYZ -SMNsSrp-oPu_DO2A8&ip=2a00%3Aa040%3A1a5%3A21c3%3A39b5%3Ad8bd%3A7e44%3Ad851&id=o-AJTgL_gC0gQO_dNGX4vy 0byPW_bzDj3M6rYENOFSDsxl&itag=251&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1738016828% 2C&mh=YU&mm=31%2C29&mn=sn-ivuoxu-ua86%2Csn-ua87sn76&ms=au%2Crdu&mv=m&mvi=4&pl=52&rms=au%2Cau&initcwn dbps=3415000&bui=AY2Et-O6bK1JzzCqStVsEOEZbY57LtASan_N9J1w7VtyRVjAFyivbMcWD7Fm6qchepL7aM0VI1QUKqss&vp rv=1&svpuc=1&mime=audio%2Fwebm&ns=cqIO7yImhOcGUBGH09SvWrYQ&rqh=1&gir=yes&clen=1870769&dur=114.361&lm t=1663275544532607&mt=1738016446&fvip=5&keepalive=yes&lmw=1&fexp=51326932%2C51353498%2C51371294&c=TV HTML5&sefc=1&txp=5318224&n=iC1YrLCUkQcavA&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl %2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRAIgISEmtOvxfrrfm pd-8nLs8VkV_tWazpnj-vj9KqHnbO8CIFxqMZmsRoe_SxWmFo5kn5_6BhJYkPeOaflZCdXgLNPV&lsparams=met%2Cmh%2Cmm%2 Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRAIgfGa-i0FsEuko43Dw8fvHKm71hTvbX8PmxRtty7 pL7vsCIAS7iNEiGgIRq5rNtRDNLIl0mDPQWXDgUsgqnDpfBv8y" -c copy -map 0:0 -map 1:0 -f matroska "file:Koyo ri Hips [VQSL9kGcjuw].mkv.part"
ffmpeg version N-118369-g959b799c8d-20250127 Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 14.2.0 (crosstool-NG 1.26.0.120_4d36f27)
  configuration: --prefix=/ffbuild/prefix --pkg-config-flags=--static --pkg-config=pkg-config --cross-prefix=x86_64-w64-mingw32- --arch=x86_64 --target-os=mingw32 --enable-gpl --enable-version3 --disable-debug --disable-w32threads --enable-pthreads --enable-iconv --enable-zlib --enable-libfreetype --enable-libfribidi --enable-gmp --enable-libxml2 --enable-lzma --enable-fontconfig --enable-libharfbuzz --enable-libvorbis --enable-opencl --disable-libpulse --enable-libvmaf --disable-libxcb --disable-xlib --enable-amf --enable-libaom --enable-libaribb24 --enable-avisynth --enable-chromaprint --enable-libdav1d --enable-libdavs2 --enable-libdvdread --enable-libdvdnav --disable-libfdk-aac --enable-ffnvcodec --enable-cuda-llvm --enable-frei0r --enable-libgme --enable-libkvazaar --enable-libaribcaption --enable-libass --enable-libbluray --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librist --enable-libssh --enable-libtheora --enable-libvpx --enable-libwebp --enable-libzmq --enable-lv2 --enable-libvpl --enable-openal --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-librav1e --enable-librubberband --enable-schannel --enable-sdl2 --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libsvtav1 --enable-libtwolame --enable-libuavs3d --disable-libdrm --enable-vaapi --enable-libvidstab --enable-vulkan --enable-libshaderc --enable-libplacebo --disable-libvvenc --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libzimg --enable-libzvbi --extra-cflags=-DLIBTWOLAME_STATIC --extra-cxxflags= --extra-libs=-lgomp --extra-ldflags=-pthread --extra-ldexeflags= --cc=x86_64-w64-mingw32-gcc --cxx=x86_64-w64-mingw32-g++ --ar=x86_64-w64-mingw32-gcc-ar --ranlib=x86_64-w64-mingw32-gcc-ranlib --nm=x86_64-w64-mingw32-gcc-nm --extra-version=20250127
libavutil 59. 56.100 / 59. 56.100
libavcodec 61. 31.101 / 61. 31.101
libavformat 61. 9.106 / 61. 9.106
libavdevice 61. 4.100 / 61. 4.100
libavfilter 10. 9.100 / 10. 9.100
libswscale 8. 13.100 / 8. 13.100
libswresample 5. 4.100 / 5. 4.100
libpostproc 58. 4.100 / 58. 4.100
[tcp @ 00000188bd676940] Starting connection attempt to 2a00:a040:3:8::f port 443
[tcp @ 00000188bd676940] Successfully connected to 2a00:a040:3:8::f port 443
[mov,mp4,m4a,3gp,3g2,mj2 @ 00000188bd6c8340] advanced_editlist does not work with fragmented MP4. disabling.
[h264 @ 00000188bd6db2c0] Reinit context to 1920x1088, pix_fmt: yuv420p
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://rr4---sn-ivuoxu-ua86.googlevideo.com/videoplayback? expire=1738038428&ei=PAiYZ-SMNsSrp-oPu_DO2A8&ip=2a00%3Aa040%3A1a5%3A21c3%3A39b5%3Ad8bd%3A7e44%3Ad851 &id=o-AJTgL_gC0gQO_dNGX4vy0byPW_bzDj3M6rYENOFSDsxl&itag=299&aitags=134%2C136%2C160%2C243%2C298%2C299 &source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1738016828%2C&mh=YU&mm=31%2C29&mn=sn-ivuoxu- ua86%2Csn-ua87sn76&ms=au%2Crdu&mv=m&mvi=4&pl=52&rms=au%2Cau&initcwndbps=3415000&bui=AY2Et-O6bK1JzzCq StVsEOEZbY57LtASan_N9J1w7VtyRVjAFyivbMcWD7Fm6qchepL7aM0VI1QUKqss&vprv=1&svpuc=1&mime=video%2Fmp4&ns= cqIO7yImhOcGUBGH09SvWrYQ&rqh=1&gir=yes&clen=64897328&dur=114.316&lmt=1663275351105969&mt=1738016446& fvip=5&keepalive=yes&lmw=1&fexp=51326932%2C51353498%2C51371294&c=TVHTML5&sefc=1&txp=5319224&n=iC1YrL CUkQcavA&sparams=expire%2Cei%2Cip%2Cid%2Caitags%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cm ime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRAIgPkG3uCyPAdmIuK_B5iHd9yoOhbF55mlb42BA6BpG2usC ICXZ3L7scth0s_z_DMRQz-a1ZkZmZ38BWF0b4t1S3yrK&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms% 2Cinitcwndbps&lsig=AGluJ3MwRAIgfGa-i0FsEuko43Dw8fvHKm71hTvbX8PmxRtty7pL7vsCIAS7iNEiGgIRq5rNtRDNLIl0m DPQWXDgUsgqnDpfBv8y':
Metadata:
major_brand : dash
minor_version : 0
compatible_brands: iso6avc1mp41
creation_time : 2022-09-15T20:55:39.000000Z
Duration: 00:01:54.32, start: 0.000000, bitrate: 4541 kb/s
  Stream #0:0[0x1](und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, progressive, left), 1920x1080 [SAR 1:1 DAR 16:9], 5221 kb/s, 60 fps, 60 tbr, 15360 tbn (default)
Metadata:
creation_time : 2022-09-15T20:55:39.000000Z
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
[tcp @ 00000188bd676880] Starting connection attempt to 2a00:a040:3:8::f port 443
[tcp @ 00000188bd676880] Successfully connected to 2a00:a040:3:8::f port 443
Input #1, matroska,webm, from 'https://rr4---sn-ivuoxu-ua86.googlevideo.com/videoplayback?expire=173 8038428&ei=PAiYZ-SMNsSrp-oPu_DO2A8&ip=2a00%3Aa040%3A1a5%3A21c3%3A39b5%3Ad8bd%3A7e44%3Ad851&id=o-AJTg L_gC0gQO_dNGX4vy0byPW_bzDj3M6rYENOFSDsxl&itag=251&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D &met=1738016828%2C&mh=YU&mm=31%2C29&mn=sn-ivuoxu-ua86%2Csn-ua87sn76&ms=au%2Crdu&mv=m&mvi=4&pl=52&rms =au%2Cau&initcwndbps=3415000&bui=AY2Et-O6bK1JzzCqStVsEOEZbY57LtASan_N9J1w7VtyRVjAFyivbMcWD7Fm6qchepL 7aM0VI1QUKqss&vprv=1&svpuc=1&mime=audio%2Fwebm&ns=cqIO7yImhOcGUBGH09SvWrYQ&rqh=1&gir=yes&clen=187076 9&dur=114.361&lmt=1663275544532607&mt=1738016446&fvip=5&keepalive=yes&lmw=1&fexp=51326932%2C51353498 %2C51371294&c=TVHTML5&sefc=1&txp=5318224&n=iC1YrLCUkQcavA&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csou rce%2Crequiressl%2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswR AIgISEmtOvxfrrfmpd-8nLs8VkV_tWazpnj-vj9KqHnbO8CIFxqMZmsRoe_SxWmFo5kn5_6BhJYkPeOaflZCdXgLNPV&lsparams =met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRAIgfGa-i0FsEuko43Dw8fvHKm 71hTvbX8PmxRtty7pL7vsCIAS7iNEiGgIRq5rNtRDNLIl0mDPQWXDgUsgqnDpfBv8y':
Metadata:
encoder : google/video-file
Duration: 00:01:54.36, start: -0.007000, bitrate: 130 kb/s
Stream #1:0(eng): Audio: opus, 48000 Hz, stereo, fltp, delay 312 (default)
[out#0/matroska @ 00000188bd711b00] Adding streams from explicit maps...
[vost#0:0/copy @ 00000188bd758380] Created video stream from input stream 0:0
[aost#0:1/copy @ 00000188bd758600] Created audio stream from input stream 1:0
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:0 -> #0:1 (copy)
Output #0, matroska, to 'file:Koyori Hips [VQSL9kGcjuw].mkv.part':
Metadata:
major_brand : dash
minor_version : 0
compatible_brands: iso6avc1mp41
encoder : Lavf61.9.106
  Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, progressive, left), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 5221 kb/s, 60 fps, 60 tbr, 1k tbn (default)
Metadata:
creation_time : 2022-09-15T20:55:39.000000Z
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
  Stream #0:1(eng): Audio: opus ([255][255][255][255] / 0xFFFFFFFF), 48000 Hz, stereo, fltp, delay 312 (default)
[out#0/matroska @ 00000188bd711b00] Starting thread...
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 00000188bd6a1f40] Starting thread...
[in#1/matroska,webm @ 00000188bd6a0a80] Starting thread...
Press [q] to stop, [?] for help
[tls @ 00000188bd677680] Unable to read from socket:00:50.11 bitrate=4393.8kbits/s speed=1.95x
Last message repeated 2 times
[opus @ 00000188bf468180] Error parsing Opus packet header.
[in#1/matroska,webm @ 00000188bd6a0a80] Error during demuxing: Error number -10054 occurred
[in#1/matroska,webm @ 00000188bd6a0a80] Terminating thread with return code 0 (success)
[vist#0:0/h264 @ 00000188bd75ee00] All consumers of this stream are done4.8kbits/s speed=1.97x
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 00000188bd6a1f40] All consumers are done
[out#0/matroska @ 00000188bd711b00] All streams finished
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 00000188bd6a1f40] Terminating thread with return code 0 (success)
[out#0/matroska @ 00000188bd711b00] Terminating thread with return code 0 (success)
[AVIOContext @ 00000188bd66f900] Statistics: 58183196 bytes written, 2 seeks, 223 writeouts
[out#0/matroska @ 00000188bd711b00] Output file #0 (file:Koyori Hips [VQSL9kGcjuw].mkv.part):
[out#0/matroska @ 00000188bd711b00] Output stream #0:0 (video): 6001 packets muxed (57301864 bytes);
[out#0/matroska @ 00000188bd711b00] Output stream #0:1 (audio): 2542 packets muxed (817184 bytes);
[out#0/matroska @ 00000188bd711b00] Total: 8543 packets (58119048 bytes) muxed
[out#0/matroska @ 00000188bd711b00] video:55959KiB audio:798KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: 0.108909%
frame= 6001 fps=117 q=-1.0 Lsize= 56819KiB time=00:00:50.84 bitrate=9155.2kbits/s speed=0.989x
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 00000188bd6a1f40] Input file #0 (https://rr4---sn-ivuoxu-ua86.googlevideo.com/videoplayback?expire=1738038428&ei=PAiYZ-SMNsSrp-oPu_DO2A8&ip=2a00%3Aa040%3A1a5%3A21c3%3A39b5%3Ad8bd%3A7e44%3Ad851&id=o-AJTgL_gC0gQO_dNGX4vy0byPW_bzDj3M6rYENOFSDsxl&itag=299&aitags=134%2C136%2C160%2C243%2C298%2C299&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1738016828%2C&mh=YU&mm=31%2C29&mn=sn-ivuoxu-ua86%2Csn-ua87sn76&ms=au%2Crdu&mv=m&mvi=4&pl=52&rms=au%2Cau&initcwndbps=3415000&bui=AY2Et-O6bK1JzzCqStVsEOEZbY57LtASan_N9J1w7VtyRVjAFyivbMcWD7Fm6qchepL7aM0VI1QUKqss&vprv=1&svpuc=1&mime=video%2Fmp4&ns=cqIO7yImhOcGUBGH09SvWrYQ&rqh=1&gir=yes&clen=64897328&dur=114.316&lmt=1663275351105969&mt=1738016446&fvip=5&keepalive=yes&lmw=1&fexp=51326932%2C51353498%2C51371294&c=TVHTML5&sefc=1&txp=5319224&n=iC1YrLCUkQcavA&sparams=expire%2Cei%2Cip%2Cid%2Caitags%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRAIgPkG3uCyPAdmIuK_B5iHd9yoOhbF55mlb42BA6BpG2usCICXZ3L7scth0s_z_DMRQz-a1ZkZmZ38BWF0b4t1S3yrK&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRAIgfGa-i0FsEuko43Dw8fvHKm71hTvbX8PmxRtty7pL7vsCIAS7iNEiGgIRq5rNtRDNLIl0mDPQWXDgUsgqnDpfBv8y):
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 00000188bd6a1f40] Input stream #0:0 (video): 6002 packets read (57303720 bytes);
[in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 00000188bd6a1f40] Total: 6002 packets (57303720 bytes) demuxed
[AVIOContext @ 00000188bd6de7c0] Statistics: 57392720 bytes read, 0 seeks
[in#1/matroska,webm @ 00000188bd6a0a80] Input file #1 (https://rr4---sn-ivuoxu-ua86.googlevideo.com/videoplayback?expire=1738038428&ei=PAiYZ-SMNsSrp-oPu_DO2A8&ip=2a00%3Aa040%3A1a5%3A21c3%3A39b5%3Ad8bd%3A7e44%3Ad851&id=o-AJTgL_gC0gQO_dNGX4vy0byPW_bzDj3M6rYENOFSDsxl&itag=251&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1738016828%2C&mh=YU&mm=31%2C29&mn=sn-ivuoxu-ua86%2Csn-ua87sn76&ms=au%2Crdu&mv=m&mvi=4&pl=52&rms=au%2Cau&initcwndbps=3415000&bui=AY2Et-O6bK1JzzCqStVsEOEZbY57LtASan_N9J1w7VtyRVjAFyivbMcWD7Fm6qchepL7aM0VI1QUKqss&vprv=1&svpuc=1&mime=audio%2Fwebm&ns=cqIO7yImhOcGUBGH09SvWrYQ&rqh=1&gir=yes&clen=1870769&dur=114.361&lmt=1663275544532607&mt=1738016446&fvip=5&keepalive=yes&lmw=1&fexp=51326932%2C51353498%2C51371294&c=TVHTML5&sefc=1&txp=5318224&n=iC1YrLCUkQcavA&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cvprv%2Csvpuc%2Cmime%2Cns%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRAIgISEmtOvxfrrfmpd-8nLs8VkV_tWazpnj-vj9KqHnbO8CIFxqMZmsRoe_SxWmFo5kn5_6BhJYkPeOaflZCdXgLNPV&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRAIgfGa-i0FsEuko43Dw8fvHKm71hTvbX8PmxRtty7pL7vsCIAS7iNEiGgIRq5rNtRDNLIl0mDPQWXDgUsgqnDpfBv8y):
[in#1/matroska,webm @ 00000188bd6a0a80] Input stream #1:0 (audio): 2542 packets read (817184 bytes);
[in#1/matroska,webm @ 00000188bd6a0a80] Total: 2542 packets (817184 bytes) demuxed
[AVIOContext @ 00000188bd6e8bc0] Statistics: 835584 bytes read, 0 seeks
[tls @ 00000188bd677680] Failed to send close message
[download] 100% of 55.49MiB in 00:00:51 at 1.07MiB/s
``` | external issue,site-bug,triage,site:youtube | low | Critical |
2,814,235,778 | godot | export_range(x, y) number defaults to 0 when uninitialized in code even if x > 0 | ### Tested versions
Reproducible in 4.3.stable
### System information
Godot v4.3.stable unknown - EndeavourOS #1 SMP PREEMPT_DYNAMIC Sat, 18 Jan 2025 02:26:57 +0000 - Wayland - GLES3 (Compatibility) - Mesa Intel(R) Iris(R) Xe Graphics (RPL-P) - 13th Gen Intel(R) Core(TM) i7-1360P (16 Threads)
### Issue description
When `@export_range` is used on a variable that is left uninitialized (or initialized to a value below the first parameter) in GDScript, it defaults to 0 in code, even if the first parameter is larger than 0.
Ex:
```gdscript
@export_range(1, 2) var test_variable: float #left uninitialized, editor displays the value is 1, code prints 0
@export_range(1, 2) var test_variable2: float = 0.5 # initialized, editor still displays 1, code still prints 0
```
I've also attached a small reproduction project.
### Steps to reproduce
```gdscript
extends Node
@export_range(1, 2) var test_variable: float #left uninitialized, editor displays the value is 1, code prints 0
@export_range(1, 2) var test_variable2: float = 0.5 # initialized, editor still displays 1, code still prints 0
func _ready():
print(test_variable)
print(test_variable2)
```
Attaching this script to a node will have the editor display both values as 1.
### Minimal reproduction project (MRP)
[bugtest.zip](https://github.com/user-attachments/files/18565636/bugtest.zip) | bug,discussion,topic:gdscript,topic:editor | low | Critical |
2,814,241,175 | flutter | Flutter Web: Status messages require role="status" (Accessibility) | ### Use case
Any message that makes users aware of important changes in content, without itself receiving focus, requires the `status` role.
### Proposal
Add the WAI-ARIA role `status` to the the available semantics roles in existing status message-related widget libraries.
Examples of messages that makes users aware of important changes in content include (but are not limited to):
- Search Results
- After a user presses a Search button, the page content is updated to include the results of the search, which are displayed in a section below the Search button. The change to content also includes the message "5 results returned" near the top of this new content. This text is given an appropriate role for a status message. A screen reader announces, "Five results returned".
- Loading messages (example: "Loading...", "Please wait...", "Busy...")
- After a user activates a process, an icon symbolizing 'busy' appears on the screen. The screen reader announces "application busy".
- Success messages
- After a user presses an Add to Shopping Cart button, a section of content near the Shopping Cart icon adds the text "5 items". A screen reader announces "Five items" or "Shopping cart, five items".
- After a user submits a form, text is added to the existing form which reads, "Your form was successfully submitted." The screen reader announces the same message.
- After a user puts a photo in an album in an online photo app, a snackbar displays the message "Saved in 'Wedding' album", which is also read by a screen reader. | c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,814,269,986 | flutter | Flutter Web: Alert messages require role="alert" (Accessibility) | ### Use case
Any element that displays a brief, important, time-sensitive message in a way that attracts the user's attention without interrupting the user's task requires the `alert` role.
### Proposal
Add the WAI-ARIA role `alert` to the the available semantics roles in existing error message-related widget libraries.
Examples of error messages include (but are not limited to):
- After a user unsuccessfully fills in a form because some of the data is in the incorrect format, text is added to the existing form which reads "5 errors on page". The screen reader announces the same message.
- Text description that identifies required fields that were not completed
- Text description when the user provides information that is not in the list of allowed values
- Text description when user input falls outside the required format or values
- Providing suggested correction text
- Providing spell checking and suggestions for text input | c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Critical |
2,814,289,543 | iptv | Add: TinyPop.uk | ### Channel ID (required)
TinyPop.uk
### Stream URL (required)
https://jmp2.uk/sam-GBBD3200003T6.m3u8
### Quality
1080p
### Label
None
### Timeshift
_No response_
### HTTP User Agent
_No response_
### HTTP Referrer
_No response_
### Notes
_No response_
### Contributing Guide
- [x] I have read [Contributing Guide](https://github.com/iptv-org/iptv/blob/master/CONTRIBUTING.md) | approved,streams:add | low | Minor |
2,814,289,999 | iptv | Add: PopUp.uk | ### Channel ID (required)
PopUp.uk
### Stream URL (required)
https://jmp2.uk/sam-GB25000016G.m3u8
### Quality
1080p
### Label
None
### Timeshift
_No response_
### HTTP User Agent
_No response_
### HTTP Referrer
_No response_
### Notes
_No response_
### Contributing Guide
- [x] I have read [Contributing Guide](https://github.com/iptv-org/iptv/blob/master/CONTRIBUTING.md) | approved,streams:add | low | Minor |
2,814,290,586 | iptv | Add: Pop.uk | ### Channel ID (required)
Pop.uk
### Stream URL (required)
https://jmp2.uk/sam-GBBC430000128.m3u8
### Quality
1080p
### Label
None
### Timeshift
_No response_
### HTTP User Agent
_No response_
### HTTP Referrer
_No response_
### Notes
_No response_
### Contributing Guide
- [x] I have read [Contributing Guide](https://github.com/iptv-org/iptv/blob/master/CONTRIBUTING.md) | approved,streams:add | low | Minor |
2,814,300,844 | rust | Unexpected panic on Aarch64 |
```Rust
// for the code and crate that produces the error (in the form github actions log info)
// see here: https://github.com/mert-kurttutan/mathfun-rs/actions/runs/13000183891/job/36257044721
```
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07) running on aarch64-unknown-linux-gnu
```
### Error output
```
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.84.0 (9fc6b4312 2025-01-07) running on aarch64-unknown-linux-gnu
note: compiler flags: -C opt-level=3 -C embed-bitcode=no -C strip=debuginfo
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [module_children] collecting child items of module `std`
#1 [resolver_for_lowering_raw] getting the resolver for lowering
end of query stack
error: could not compile `mathfun` (lib test)
Caused by:
  process didn't exit successfully: `/home/runner/.rustup/toolchains/stable-aarch64-unknown-linux-gnu/bin/rustc --crate-name mathfun --edition=2021 src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --emit=dep-info,link -C opt-level=3 -C embed-bitcode=no --test --cfg 'feature="default"' --cfg 'feature="std"' --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values("default", "std"))' -C metadata=20468bf63416aaa8 -C extra-filename=-20468bf63416aaa8 --out-dir /home/runner/work/mathfun-rs/mathfun-rs/target/release/deps -C strip=debuginfo -L dependency=/home/runner/work/mathfun-rs/mathfun-rs/target/release/deps --extern criterion=/home/runner/work/mathfun-rs/mathfun-rs/target/release/deps/libcriterion-568a2f1ed69f7c25.rlib --extern libc=/home/runner/work/mathfun-rs/mathfun-rs/target/release/deps/liblibc-070c2e4fd579957f.rlib --extern libloading=/home/runner/work/mathfun-rs/mathfun-rs/target/release/deps/liblibloading-ff471d9e02bcfa6e.rlib --extern once_cell=/home/runner/work/mathfun-rs/mathfun-rs/target/release/deps/libonce_cell-bea0138bc7ff6d31.rlib --extern rand=/home/runner/work/mathfun-rs/mathfun-rs/target/release/deps/librand-ac532ae89949bc32.rlib --extern raw_cpuid=/home/runner/work/mathfun-rs/mathfun-rs/target/release/deps/libraw_cpuid-20c384d344ab9c34.rlib -Dwarnings` (exit status: 101)
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
<backtrace>
```
</p>
</details>
Some additional details:
This occurred when testing some SIMD-enabled functions (using NEON intrinsics) on the GitHub runner machine `ubuntu-24.04-arm`.
As you can check on the jobs page, it gives the error on stable (1.84.0) but not on 1.81.0.
I know this is not ideal for reproducibility, but since the message claims it is a bug, I wanted to report it early.
2,814,305,724 | node | Math.atan() behavior change | ### Version
v20.18.1
### Platform
```text
Darwin foo 24.2.0 Darwin Kernel Version 24.2.0: Fri Dec 6 19:01:59 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6000 arm64
```
### Subsystem
Math
### What steps will reproduce the bug?
Math.atan(2.859624123917431)
### How often does it reproduce? Is there a required condition?
It happens all the time on Mac OS X.
### What is the expected behavior? Why is that the expected behavior?
1.2343920821908787
This is what older versions of node.js returned, and what all versions of node.js on Linux x86 return.
### What do you see instead?
1.234392082190879
### Additional information
`1.234392082190879` is actually the more correct result and agrees with Python and Wolfram Alpha. But the concerning thing is that the result changed between v20.18.0 and v20.18.1 on Mac OS X arm64 only. On Linux x86 all versions return the less correct result of `1.2343920821908787`. It would be nice if node.js on all platforms returned the same value. The difference is small but it is causing some headaches in our unit tests between platforms and node versions.
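To put the difference in perspective, the two results are only about one ULP (unit in the last place) apart, which a small Python sketch can show (assuming Python >= 3.9 for `math.ulp`):
```python
import math

a = 1.2343920821908787  # Linux x86 / older Node result
b = 1.234392082190879   # macOS arm64 result since v20.18.1
print(math.ulp(a))               # ~2.2e-16, the spacing of doubles at this magnitude
print(abs(a - b) / math.ulp(a))  # how many representable doubles apart the two results are (~1)
```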
Here is what Wolfram Alpha shows with more significant digits:
`1.23439208219087881390254524370678138808146132211060418165008615225322232` | v8 engine | low | Critical |
2,814,328,164 | flutter | Flutter Web: Allow user to override text spacing via user stylesheet, bookmarklet, extension, or application (Accessibility) | ### Use case
Content must adapt to user-specified increased spacing between lines, words, letters, and paragraphs. Any combination of these may assist a user with effectively reading text. This is typically achieved via user stylesheet, bookmarklet, extension, or application.
Reference: [WCAG 1.4.12: Text Spacing](https://www.w3.org/WAI/WCAG22/Understanding/text-spacing.html)
### Proposal
Ensure users can modify text style properties to improve their reading experience:
- Line height (line spacing) to at least 1.5 times the font size
- Spacing following paragraphs to at least 2 times the font size
- Letter spacing (tracking) to at least 0.12 times the font size
- Word spacing to at least 0.16 times the font size | c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,814,332,684 | tauri | [bug] Android App often flashes back. | ### Describe the bug
My APK is 700MB in size after installation. From time to time, the app crashes back to the home screen. I suspect this is related to WebView's memory management mechanism; small apps do not crash like this. Is there any good solution?
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
npm run tauri info
```
### Stack trace
```text
```
### Additional context
_No response_ | type: bug,status: needs triage,platform: Android | low | Critical |
2,814,337,096 | PowerToys | File Explorer doesn't work with Workspaces | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Workspaces
### Steps to reproduce
1) Created a FancyZones layout with 4 apps.
2) Moved each app to its place.
3) Launched Workspaces.
4) Created a workspace with those 4 apps open.
### ✔️ Expected Behavior
When launching the saved workspace, it opens the 4 apps and places them in their assigned positions.
### ❌ Actual Behavior
Only 3 apps open and go to their assigned positions; File Explorer doesn't launch. Or, if File Explorer is already open, it does nothing.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,814,337,482 | pytorch | torch.linalg.eigh fails on CPU | ### 🐛 Describe the bug
Based on this issue https://github.com/pytorch/pytorch/issues/94772, we see a failure on CPU since the PyTorch 2.4.0 release.
Minimum test; requires [fc_layer_tensor.pt.zip](https://github.com/user-attachments/files/18566154/fc_layer_tensor.pt.zip):
```python
import torch
SEED = 123
torch.manual_seed(SEED)
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
num_classes = 10
num_features = 28**2
fc_layer = torch.nn.Linear(in_features=num_features, out_features=num_classes, bias=False).to(DEVICE)
fc_layer.weight.grad = loaded_tensor = torch.load('fc_layer_tensor.pt')
vec_grad = torch.flatten(fc_layer.weight.grad)
precond_adagrad = torch.outer(vec_grad, vec_grad)
evals_adagrad, evecs_adagrad = torch.linalg.eigh(precond_adagrad.cpu())
```
Output:
```
python3 test5.py
/home/ubuntu/test5.py:12: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
fc_layer.weight.grad = loaded_tensor = torch.load('fc_layer_tensor.pt')
Intel oneMKL ERROR: Parameter 8 was incorrect on entry to SSYEVD.
Traceback (most recent call last):
File "/home/ubuntu/test5.py", line 15, in <module>
evals_adagrad, evecs_adagrad = torch.linalg.eigh(precond_adagrad.cpu())
RuntimeError: false INTERNAL ASSERT FAILED at "../aten/src/ATen/native/BatchLinearAlgebra.cpp":1538, please report a bug to PyTorch. linalg.eigh: Argument 8 has illegal value. Most certainly there is a bug in the implementation calling the backend library.
```
Full test
```python
import torch
from torchvision import datasets, transforms
SEED = 123
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
batch_size = 512
num_classes = 10
num_features = 28**2
loss_fn = torch.nn.CrossEntropyLoss()
tforms = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
dataset = datasets.MNIST("~/data/", download=False, train=True, transform=tforms)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=False)
fc_layer = torch.nn.Linear(in_features=num_features, out_features=num_classes, bias=False).to(DEVICE)
for batch_ix, (inputs, targets) in enumerate(train_loader):
inputs, targets = inputs.to(DEVICE), targets.to(DEVICE)
fc_layer.weight.grad = None
logits = fc_layer(inputs.view(inputs.shape[0], -1))
loss = loss_fn(logits, targets)
loss.backward()
vec_grad = torch.flatten(fc_layer.weight.grad)
precond_adagrad = torch.outer(vec_grad, vec_grad)
# CPU computation works fine
evals_adagrad, evecs_adagrad = torch.linalg.eigh(precond_adagrad.cpu())
# But eigh computation on GPU fails
evals_adagrad, evecs_adagrad = torch.linalg.eigh(precond_adagrad)
```
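The error comes from MKL's single-precision `SSYEVD` routine, so one workaround sketch (an assumption on my part, not a verified fix) is to upcast to float64 so the double-precision path is taken instead:
```python
# Workaround sketch: run the eigendecomposition in float64 and cast back.
# This sidesteps the single-precision SSYEVD call that raises the error,
# at the cost of extra memory and time.
evals64, evecs64 = torch.linalg.eigh(precond_adagrad.to(torch.float64).cpu())
evals_adagrad = evals64.to(precond_adagrad.dtype)
evecs_adagrad = evecs64.to(precond_adagrad.dtype)
```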
### Versions
2.7.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | high priority,triage review,module: error checking,module: regression,module: linear algebra | low | Critical |
2,814,352,361 | flutter | Flutter Web: Support the user's settings for units of measurement, color, contrast, font type, font size, and focus cursor (Accessibility) | ### Use case
Content must adapt to the user's operating system or browser settings for units of measurement, color, contrast, font type, font size, and focus cursor.
### Proposal
When the user has made specific selections in their operating system or browser such as unit of measure, color, contrast, font size and type, and focus cursor indicators, they will be applied to the Flutter-based web application. | c: new feature,a: accessibility,platform-web,c: proposal,customer: castaway,team-accessibility | low | Minor |
2,814,355,305 | vscode | Test: Support for custom markdown fonts in md cells | Refs: https://github.com/microsoft/vscode/issues/143907
- [ ] anyOS @kieferrm
- [x] anyOS @dbaeumer
Complexity: 1
author: @Yoyokrazy
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23238907%0A%0A&assignees=Yoyokrazy)
---
## Summary
A setting has been introduced to allow users control over the font family for rendered markdown within md cells.
## Steps to Test:
- Change the setting `notebook.markupFontFamily` to your favorite font (I personally prefer `Comic Sans MS` for my markdown reading pleasure)
- Close and reopen the notebook, ensuring it renders correctly.
- Revert the setting (should leave it blank), repeat the close/reopen, and ensure we fall back to the default workbench font
## Known Limitations:
- notebooks need to be closed and reopened to re-render with the correct font: https://github.com/microsoft/vscode/issues/238908
---
Thanks for testing! | testplan-item | low | Minor |
2,814,357,231 | vscode | Setting change does not re-render notebook markdown | re: https://github.com/microsoft/vscode/issues/238907
Changing the font family via `notebook.markupFontFamily` requires the editor to be closed and reopened to render with the correct font. | debt,notebook-markdown | low | Minor |
2,814,376,523 | neovim | Various treesitter related test failures on armel / armhf / i386 | ### Problem
The Debian build of neovim is failing on [armel] / [armhf] / [i386] in various treesitter-related tests. These happen to be the only 32-bit architectures that neovim is currently tested against.
[armel]: https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=armel&ver=0.10.3-1&stamp=1736066993&raw=0
[armhf]: https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=armhf&ver=0.10.3-1&stamp=1736068430&raw=0
[i386]: https://buildd.debian.org/status/fetch.php?pkg=neovim&arch=i386&ver=0.10.3-1&stamp=1736057447&raw=0
<details><summary>Test failure summary</summary>
```
FAILED 6 tests, listed below:
FAILED test/functional/lua/comment_spec.lua @ 243: commenting toggle_lines() respects tree-sitter injections
test/functional/lua/comment_spec.lua:260: Expected objects to be the same.
Passed in:
(string) '-- print(1)'
Expected:
(string) '"print(1)'
stack traceback:
test/functional/lua/comment_spec.lua:260: in function 'validate'
test/functional/lua/comment_spec.lua:265: in function <test/functional/lua/comment_spec.lua:243>
FAILED test/functional/lua/comment_spec.lua @ 452: commenting Operator respects tree-sitter injections
test/functional/lua/comment_spec.lua:470: Expected objects to be the same.
Passed in:
(string) '-- print(1)'
Expected:
(string) '"print(1)'
stack traceback:
test/functional/lua/comment_spec.lua:470: in function 'validate'
test/functional/lua/comment_spec.lua:475: in function <test/functional/lua/comment_spec.lua:452>
FAILED test/functional/lua/comment_spec.lua @ 503: commenting Operator recomputes local 'commentstring' based on cursor position
test/functional/lua/comment_spec.lua:520: Expected objects to be the same.
Passed in:
(string) ' -- print(1)'
Expected:
(string) ' "print(1)'
stack traceback:
test/functional/lua/comment_spec.lua:520: in function <test/functional/lua/comment_spec.lua:503>
FAILED test/functional/lua/comment_spec.lua @ 569: commenting Current line respects tree-sitter injections
test/functional/lua/comment_spec.lua:587: Expected objects to be the same.
Passed in:
(table: 0xf16c0be8) {
[1] = '"set background=dark'
[2] = 'lua << EOF'
*[3] = '-- print(1)'
[4] = 'EOF' }
Expected:
(table: 0xf16c0b70) {
[1] = '"set background=dark'
[2] = 'lua << EOF'
*[3] = '"print(1)'
[4] = 'EOF' }
stack traceback:
test/functional/lua/comment_spec.lua:587: in function <test/functional/lua/comment_spec.lua:569>
FAILED test/functional/lua/comment_spec.lua @ 629: commenting Textobject respects tree-sitter injections
test/functional/lua/comment_spec.lua:648: Expected objects to be the same.
Passed in:
(table: 0xf591f1f0) {
[1] = 'lua << EOF'
*[2] = 'EOF' }
Expected:
(table: 0xf591f178) {
[1] = 'lua << EOF'
*[2] = '-- print(1)'
[3] = '-- print(2)'
[4] = 'EOF' }
stack traceback:
test/functional/lua/comment_spec.lua:648: in function <test/functional/lua/comment_spec.lua:629>
FAILED test/functional/treesitter/language_spec.lua @ 138: treesitter language API retrieve the node given a range
test/functional/treesitter/language_spec.lua:149: Expected objects to be the same.
Passed in:
(string) 'nil'
Expected:
(string) '<node primitive_type>'
stack traceback:
test/functional/treesitter/language_spec.lua:149: in function <test/functional/treesitter/language_spec.lua:138>
ERROR 4 errors, listed below:
ERROR test/functional/treesitter/language_spec.lua @ 110: treesitter language API retrieve the tree given a range
test/functional/testnvim.lua:127: Error executing lua: [string "<nvim>"]:1: attempt to index global 'tree' (a nil value)
stack traceback:
[string "<nvim>"]:1: in main chunk
stack traceback:
test/functional/testnvim.lua:127: in function 'exec_lua'
test/functional/treesitter/language_spec.lua:121: in function <test/functional/treesitter/language_spec.lua:110>
ERROR test/functional/treesitter/language_spec.lua @ 124: treesitter language API retrieve the tree given a range when range is out of bounds relative to buffer
test/functional/testnvim.lua:127: Error executing lua: [string "<nvim>"]:1: attempt to index global 'tree' (a nil value)
stack traceback:
[string "<nvim>"]:1: in main chunk
stack traceback:
test/functional/testnvim.lua:127: in function 'exec_lua'
test/functional/treesitter/language_spec.lua:135: in function <test/functional/treesitter/language_spec.lua:124>
ERROR test/functional/treesitter/node_spec.lua @ 19: treesitter node API double free tree
test/functional/testnvim.lua:127: Error executing lua: [string "<nvim>"]:2: attempt to index a nil value
stack traceback:
[string "<nvim>"]:2: in main chunk
stack traceback:
test/functional/testnvim.lua:127: in function 'exec_lua'
test/functional/treesitter/node_spec.lua:21: in function <test/functional/treesitter/node_spec.lua:19>
ERROR test/functional/treesitter/node_spec.lua @ 44: treesitter node API get_node() with lang given
test/functional/testnvim.lua:127: Error executing lua: ...ucible-path/neovim-0.10.3/runtime/lua/vim/treesitter.lua:182: attempt to index local 'node' (a nil value)
stack traceback:
...ucible-path/neovim-0.10.3/runtime/lua/vim/treesitter.lua:182: in function 'get_range'
...ucible-path/neovim-0.10.3/runtime/lua/vim/treesitter.lua:217: in function <...ucible-path/neovim-0.10.3/runtime/lua/vim/treesitter.lua:210>
stack traceback:
test/functional/testnvim.lua:127: in function 'lua_eval'
test/functional/treesitter/node_spec.lua:54: in function <test/functional/treesitter/node_spec.lua:44>
44 SKIPPED TESTS
6 FAILED TESTS
4 ERRORS
```
</details>
### Steps to reproduce
```
make
make functionaltest TEST_FILE=test/functional/treesitter/node_spec.lua
```
### Expected behavior
Tests pass
### Nvim version (nvim -v)
v0.10.3 and v0.11.0-dev-1649+gc47496791a
### Vim (not Nvim) behaves the same?
n/a
### Operating system/version
Debian Trixie
### Terminal name/version
n/a
### $TERM environment variable
n/a
### Installation
build from repo | treesitter,platform:arm | low | Critical |
2,814,379,225 | vscode | Test: Notebook Inline Values after cell execution | Refs: https://github.com/microsoft/vscode/issues/237263
- [x] anyOS @lszomoru
- [ ] anyOS @meganrogge
Complexity: 4
author: @Yoyokrazy
[Create Issue](https://github.com/microsoft/vscode/issues/new?body=Testing+%23238911%0A%0A&assignees=Yoyokrazy)
---
## Summary
Support for inline values has been added for notebooks via the setting `notebook.inlineValues`. When enabled, inline values are displayed at the end of each line after a cell is executed, supplied either by a registered `InlineValueProvider` or by the default fallback, which simply regex-matches against the variables stored in the kernel after that execution. The latter may show incorrect values, especially for variables shadowed inside local functions (see Known Issues below).
## Steps to Test:
Part 1:
- Enable the setting `notebook.inlineValues`
- Execute a cell, ensuring that inline values are presented similar to how they would be during the debugging process, with different styling.
- when running more complex cells, it is expected that these values can and will be incorrect. This is due to the values all being retrieved from the kernel state after the cell has been executed. Local functions will have clearly incorrect values, as they are just matched via regex according to the variable name, without any concept of scope.
Part 2:
- Create an extension that contributes an inline value provider. A very simple example can be found here: https://github.com/Yoyokrazy/clean-nb-imports-ext/blob/inline-tpi/src/extension.ts
- More advanced examples can perform more effective symbol matching, ignore python keywords, avoid local functions, etc. Values can be retrieved from the kernel using inline variable lookups, or statically determined and returned as inline variable text. Be sure to test both.
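For orientation, here is a minimal provider sketch (the `'python'` selector and the assignment regex are illustrative assumptions; `registerInlineValuesProvider` and `InlineValueVariableLookup` are the standard VS Code APIs). More advanced providers would add scope awareness here instead of line-local regexes:

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.languages.registerInlineValuesProvider('python', {
      provideInlineValues(document, viewport) {
        const values: vscode.InlineValue[] = [];
        for (let line = viewport.start.line; line <= viewport.end.line; line++) {
          const text = document.lineAt(line).text;
          // Only surface simple top-level assignments like `name = ...`.
          const match = /^([A-Za-z_]\w*)\s*=[^=]/.exec(text);
          if (match) {
            const range = new vscode.Range(line, 0, line, text.length);
            // Ask the host to resolve the variable's current value by name.
            values.push(new vscode.InlineValueVariableLookup(range, match[1]));
          }
        }
        return values;
      },
    })
  );
}
```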
An example of a more complex cell, and the difference between the standard regex fallback vs the pylance provider:
pylance provider

fallback (regex matching)

cell content
```python
a = {'value': 5}
b = 'world'
c = 10
def local_fn():
a = 5
b = 100
c = a + b
return c
result = local_fn()
a['pie'] = 3.14
test = 'hello'
print(a)
local_fn()
```
## Known Issues
- the fallback regex matching will often show more incorrect values than correct ones, leading to visual fatigue and not much useful info overall (values lower in the cell tend to be more accurate): https://github.com/microsoft/vscode/issues/238912
- inline values for dataframes can stretch past the viewport
- dataframes can render with broken LF characters (`\n`) and an excessive amount of whitespace
---
Thanks for testing! | testplan-item | low | Critical |
2,814,379,657 | pytorch | DISABLED test_return_advanced_contextmanager (__main__.ContextlibContextManagerTests) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_return_advanced_contextmanager&suite=ContextlibContextManagerTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36249687168).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_return_advanced_contextmanager`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_ctx_manager.py", line 2400, in test_return_advanced_contextmanager
with self.assertRaises(InternalTorchDynamoError):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__
self._raiseFailure("{} not raised".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: InternalTorchDynamoError not raised
To execute this test, run the following from the base repo dir:
python test/dynamo/test_ctx_manager.py ContextlibContextManagerTests.test_return_advanced_contextmanager
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_ctx_manager.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | triaged,module: flaky-tests,skipped,oncall: pt2,module: dynamo | low | Critical |
2,814,379,746 | pytorch | DISABLED test_distributed_checkpoint_state_dict_type0_cuda (__main__.TestDistributedCheckpointCUDA) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_distributed_checkpoint_state_dict_type0_cuda&suite=TestDistributedCheckpointCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/36247589446).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 7 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_distributed_checkpoint_state_dict_type0_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 597, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 837, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 886, in _check_return_codes
raise RuntimeError(error)
RuntimeError: Process 1 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 726, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 599, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3120, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 474, in instantiated_test
raise rte
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 199, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/checkpoint_utils.py", line 152, in wrapper
func(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/distributed/fsdp/test_distributed_checkpoint.py", line 71, in test_distributed_checkpoint
state_dict = model.state_dict()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2226, in state_dict
module.state_dict(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2226, in state_dict
module.state_dict(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2226, in state_dict
module.state_dict(
[Previous line repeated 1 more time]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2232, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 715, in _post_state_dict_hook
processed_state_dict = _post_state_dict_hook_fn[fsdp_state._state_dict_type](
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 433, in _local_post_state_dict_hook
sharded_tensor = init_from_local_shards(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py", line 407, in init_from_local_shards
return ShardedTensor._init_from_local_shards(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/api.py", line 755, in _init_from_local_shards
dist.all_gather_object(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3037, in all_gather_object
input_tensor.resize_(max_object_size)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory.
To execute this test, run the following from the base repo dir:
python test/distributed/fsdp/test_distributed_checkpoint.py TestDistributedCheckpointCUDA.test_distributed_checkpoint_state_dict_type0_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `distributed/fsdp/test_distributed_checkpoint.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr | oncall: distributed,module: flaky-tests,skipped | low | Critical |
2,814,381,349 | vscode | Improve notebook inline value fallback | Re: https://github.com/microsoft/vscode/issues/237263
Values can be thinned out, and the local-vs-global variable issues avoided, by using our built-in find-all-references or a similar approach. This could be costly performance-wise. | debt,notebook-execution | low | Minor |
2,814,387,621 | vscode | Dataframes render with excessive whitespace, and lose a majority of info | Re: https://github.com/microsoft/vscode/issues/237263
Currently, after a variable is converted to a dataframe, it is rendered with an excessive amount of whitespace, so much of the information about it is lost until hover.
Perhaps there can be a generic approach to recognizing dataframes, and parsing them into a more readable single line format that can be shared between the two features.
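As a rough sketch of that idea (every name and heuristic below is an assumption, not an existing API), the shared normalization could start as simply as:

```typescript
// Collapse a dataframe repr into one bounded, readable line, so the variables
// view and inline values can share the same summary. Heuristics only.
function summarizeDataframeRepr(repr: string, maxLength = 120): string {
  const oneLine = repr
    .replace(/\\n/g, ' ') // literal "\n" sequences from the broken LF rendering
    .replace(/\s+/g, ' ') // runs of spaces and real newlines
    .trim();
  return oneLine.length > maxLength
    ? `${oneLine.slice(0, maxLength - 1)}…`
    : oneLine;
}
```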
 | under-discussion,notebook-execution,notebook-variables | low | Minor |
2,814,471,100 | vscode | The order of configuration items in an extension's settings, as defined in the package.json file, is not respected when a search is done in the settings view | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
Version: 1.96.3 (user setup)
Commit: 91fbdddc47bc9c09064bf7acf133d22631cbf083
Date: 2025-01-09T18:14:09.060Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.19045
The order of configuration items in an extension's settings is not respected when a search is done in the settings panel.
In the table of contents in Settings, if you click on an extension (or expand the extension and choose one of its setting's categories), then the order will be correct. However, if you do a search for the extension name, then the settings are shown in the incorrect order. In this scenario they aren't even shown in proper lexicographic (alphabetical) order.
Steps to Reproduce:
1. Open settings
2. Search for the name of an extension
3. Observe that the order of the configuration items is not the same as the order you would see if you did NOT search and had simply clicked on the extension name in the table of contents. When searching, the order does not follow what is defined in the extension's package.json file.
In the screenshots, the name of the extension is "AtomicViz" and the search is done for the string "atomicviz". So the search results should be properly ordered.


| bug,settings-editor,settings-search | low | Critical |
2,814,476,867 | go | os/exec: TestWaitid failures | ```
#!watchflakes
default <- pkg == "os/exec" && test == "TestWaitid"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8724559873209158081)):
```
=== RUN TestWaitid
=== PAUSE TestWaitid
=== CONT TestWaitid
```
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,814,486,398 | godot | [Godot 4.4] Off-thread access error spam in specific case from AnimationTree | ### Tested versions
Reproducible on and after [53f3143028134bef517f1f598acd7f33a8bf8cf6], but I have no idea why this commit would be related. Bizarrely, while initially bisecting I could only reproduce the error on mono builds and not standard builds, but after a `git clean -fdx` and a rebuild to make sure, I could only reproduce on standard and not mono.
### System information
Godot v4.4.beta.mono (6dc78c8aa) - macOS Sequoia (15.2.0) - Multi-window, 1 monitor - Metal (Forward+) - integrated Apple M2 (Apple8) - Apple M2 (8 threads)
### Issue description
This error gets printed out once per animation node within an animation tree (at least a blend tree) when opening the project and sometimes when running the project/opening a scene as well:
```
ERROR: This function in this node (/root/@EditorNode@21262/@Panel@14/@VBoxContainer@15/DockHSplitLeftL/DockHSplitLeftR/DockHSplitMain/@VBoxContainer@26/DockVSplitCenter/@EditorBottomPanel@7939/@VBoxContainer@7924/@AnimationTreeEditor@15287) can only be accessed from either the main thread or a thread group. Use call_deferred() instead.
at: is_visible (scene/main/canvas_item.cpp:124)
```
At least in this case (I'm not sure if there are other triggering cases), `is_visible` is called by `AnimationTreeEditor::get_animation_list`. The error can be avoided by changing the condition in that method to:
```c++
if (!singleton->tree || !singleton->is_accessible_from_caller_thread() || !singleton->is_visible()) {
    [ .. ]
}
```
However, this feels like a bad hack, especially without understanding the root cause.
### Steps to reproduce
In an affected version, create an AnimationTree node with a blend tree root and add animation playback nodes to the tree. There doesn't need to be an animation library or an animation set on the node for the error to print. Save the scene and close + reopen the editor.
### Minimal reproduction project (MRP)
n/a, and I'm not sure about cross platform repro yet anyways | bug,topic:editor,regression,topic:animation | low | Critical |
2,814,496,752 | PowerToys | Feature Request: Ability to move between previewed images with the mouse scroll wheel | ### Description of the new feature / enhancement
Scrolling the mouse wheel up would go to the previous image in the list (replicating a press of the left or up arrow key), and scrolling the mouse wheel down would go to the next image in the list (replicating a press of the down or right arrow key). This would allow changing the previewed image without needing to focus the Peek window.
### Scenario when this would be used?
Every day I process images via search results for many folders at once (I do a search for '.' to get every png and jpg into one search result window) and I often add metadata to the files while they are still downloading, so the search results are always updating. Windows Photos has worked so far, but a recent update killed it. Peek *almost* works just as well, but I can't scroll through images with the mouse wheel like I could with Photos. This means I have to do extra button presses and alt-tab between applications every time I want to preview a new image, which over time adds quite a lot of extra time and activity to what was a relatively smooth process.
Being able to scroll between image files without needing to focus on the Peek app would return a lot of lost productivity to me and save me a lot of time daily.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,814,510,370 | flutter | Add Callbacks for Clipboard-Related Events (Paste, Cut, Copy, etc.) | ### Use case
Currently, with AdaptiveTextSelectionToolbar.buttonItems, we can access the `List<ContextMenuButtonItem>? buttonItems`, which allows us to add, remove, or customize each button. This makes it possible to rewrite the behavior of actions, such as the Paste button, as needed.
However, SystemContextMenu does not offer the same functionality. I am using SystemContextMenu specifically to avoid the default copy-paste dialog on iOS. In my use case, I would like to rewrite the Paste action so that my app can handle pasting images from the Clipboard, while still utilizing the native context menu.
While it might be challenging to rewrite the exact behavior of the system buttons, having a way to receive callbacks for each event (e.g., Copy, Cut, Paste) would be a good compromise. This would allow developers to respond to these events as needed without overriding the native menu behavior.
I am not sure how Flutter handles these events internally, but since actions like Paste seem to update the text selection handlers, there must already be a mechanism alerting Flutter about what was pasted. My request is to expose these events through callbacks so that developers can leverage them in their custom implementations.
### Proposal
Currently, we have the following for SystemContextMenu:
```dart
SystemContextMenu.editableText(
  editableTextState: editableTextState,
);
```
My suggestion is to extend this API to include optional callbacks for handling specific actions like so:
```dart
SystemContextMenu.editableText(
  editableTextState: editableTextState,
  onPaste: (text) => ..., // Callback when the Paste action is triggered
  onCut: (text) => ..., // Callback when the Cut action is triggered
  onCopy: (text) => ..., // Callback when the Copy action is triggered
);
```
Benefits
1. Developers can implement custom functionality for each system action without fully overriding the native menu.
2. This approach would align with Flutterโs philosophy of providing flexible, customizable components.
3. It opens up use cases like pasting images or other non-text data from the Clipboard while maintaining native system interactions.
OR
Instead of extending SystemContextMenu, we could enhance TextField (or EditableText) to include callbacks for these context menu actions. This would allow the events to be handled more directly by the text input widget itself. For example:
```dart
TextField(
  controller: textEditingController,
  onPaste: (text) {
    // Handle paste action
  },
  onCut: (text) {
    // Handle cut action
  },
  onCopy: (text) {
    // Handle copy action
  },
);
```
| a: text input,c: new feature,framework,c: proposal,team-text-input | low | Minor |
2,814,513,932 | PowerToys | bug report button doesn't work | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
General, System tray interaction
### Steps to reproduce
it's immediately available...yet does nothing (understandable)
if you're being flooded with feedback maybe consider burying it deeper in the program
### โ๏ธ Expected Behavior
https://github.com/microsoft/PowerToys/issues/
to be opened in MS Edge. (The documentation button works.)
### โ Actual Behavior
nothing happens.
### Other Software
_No response_ | Issue-Bug,Product-Settings,Needs-Triage | low | Critical |
2,814,515,102 | ui | [bug]: Tailwind V4 installation | ### Describe the bug
Tailwind v4 has a new installation process, so the current installation instructions no longer apply: https://tailwindcss.com/docs/installation/using-vite
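For reference, the v4 flow on that page now centers on the `@tailwindcss/vite` plugin rather than a PostCSS config (a sketch; exact file names vary per project):

```typescript
// vite.config.ts: Tailwind v4 setup per the linked page
// (after `npm install tailwindcss @tailwindcss/vite`):
import { defineConfig } from 'vite';
import tailwindcss from '@tailwindcss/vite';

export default defineConfig({
  plugins: [tailwindcss()],
});

// The CSS entry point is now a single `@import "tailwindcss";` instead of the
// old @tailwind base/components/utilities directives.
```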
### Affected component/components
All
### How to reproduce
N/A
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
N/A
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,814,520,394 | ollama | Support for Zero-shot Text Classification Models | It would be helpful to developers if ollama supported zero-shot text classification models, such as [`deberta-v3-large-tasksource-nli`](https://huggingface.co/sileod/deberta-v3-large-tasksource-nli) or other offshoots of BERT, which are fairly small models, that allow you do things like pass in a list of categories and have it classify text (or other inputs) into one of those categories.
(Please consider supporting this as a feature: it would allow us to develop applications whose only LLM dependency is ollama, and end users would not be required to install a bunch of weird Python dependencies or other developer tools.) | feature request | low | Minor |
2,814,520,836 | godot | Rigidbody3d:_on_sleeping_state_change does not work when the rigidbody is instantiated, bug on godot 4.4 beta1 | ### Tested versions
Tested in 4.4 beta1,
### System information
Linux-Mint (6.8), 4.4.1, amd5600xt, intel 12th gen
### Issue description
When you instantiate a RigidBody3D whose script defines _on_sleeping_state_changed(), the instantiated object's _on_sleeping_state_changed() function never fires.
### Steps to reproduce
Create a parent node with a rigidbody child scene. Give that child box a script that implements _on_sleeping_state_changed(); I have it set up so that on _ready it gets an apply_central_impulse(throw_vector * 1). Have the parent scene node instantiate the rigidbody child scene. You will see that both children get the impulse, and in the inspector both will have their sleep checkbox toggle off and on, but only the non-instantiated node will have its _on_sleeping_state_changed function triggered.
I've attached a video of me doing this test
https://github.com/user-attachments/assets/1c549f5d-df1f-4bc5-b672-03f486dfe99a
To stress, because it kind of looks like I clicked the checkbox: I did not; I let the object's inertia slow it down until it stopped.
### Minimal reproduction project (MRP)
Godot 4.4.1 with just the standard assets was able to recreate this issue | topic:physics,needs testing | low | Critical |
2,814,555,143 | transformers | 4.48.1 breaks sliding window in eager attention for qwen2 | ### System Info
- `transformers` version: 4.48.1
- Platform: Linux-5.10.0-2.0.0.2-x86_64-with-glibc2.27
- Python version: 3.9.19
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.4.5
- Accelerate version: 0.34.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
None
### Expected behavior
I see v4.48.1 removes the sliding_window mask generation in `_prepare_4d_causal_attention_mask_with_cache_position` but leaves `sliding_window` arguments to `attention_interface`:
```python
attn_output, attn_weights = attention_interface(
    self,
    query_states,
    key_states,
    value_states,
    attention_mask,
    dropout=0.0 if not self.training else self.attention_dropout,
    scaling=self.scaling,
    sliding_window=sliding_window,  # main diff with Llama
    **kwargs,
)
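# Note (restating the report above, not verified against the source): the eager
# attention path consumes `attention_mask` but ignores `sliding_window`, so
# once the mask no longer encodes the window, eager silently drops it.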
```
This is fine under FA2, but what about eager attention? The eager attention path only accepts `attention_mask` and does not handle sliding windows itself. This looks like a bug? | bug | low | Critical |
2,814,555,158 | tauri | [feat] consider linting against duplicate dependencies in different versions | ### Describe the problem
Tauri is a big project and as a result, pulls in a fair few dependencies. Some of them are duplicates in different versions. For example, with the latest update to `tauri 2.2.5`, there are now two different versions of `dirs` being pulled in:
```
โฏ cargo tree -i -p [email protected]
dirs v6.0.0
โโโ tauri v2.2.5
โฏ cargo tree -i -p [email protected]
dirs v5.0.1
โโโ tauri-build v2.0.5
โ [build-dependencies]
โ โโโ tauri v2.2.5
โ โโโ tauri-plugin-dialog v2.2.0
โ โโโ tauri-plugin-fs v2.2.0
โ โ โโโ tauri-plugin-dialog v2.2.0 (*)
โ โโโ tauri-plugin-notification v2.2.0
โ โโโ tauri-plugin-opener v2.2.2
โ โโโ tauri-plugin-shell v2.2.0
โโโ tray-icon v0.19.1
โโโ tauri v2.2.5 (*)
```
This increases compile times for all downstream users of `tauri`.
### Describe the solution you'd like
Where possible, it would be nice if `tauri` could lint against duplicate dependencies (`cargo deny` does this quite well). I understand that this is difficult to do for plugins and the `tauri` CLI because they are in a different workspace. However, at least within the same workspace, it would be nice of e.g. `tauri 2.2.5` wouldn't pull in `dirs 5.0.1` via `tray-icon` and `dirs 6.0.0` itself.
Also, across the latest version of all plugins, it would be nice to enforce a single version of a particular dependency (where possible).
### Alternatives considered
_No response_
### Additional context
_No response_ | type: feature request | low | Minor |
2,814,606,420 | ui | [bug]: Search Results Text Appears Blurry When Scrolling | ### Describe the bug
When I search for something using the documentation search field on the [shadcn/ui website](https://ui.shadcn.com), the text in the search results appears blurry when scrolling down. This makes it difficult for the user to read the content.
### Affected component/components
Search Bar
### How to reproduce
1. Go to [shadcn/ui](https://ui.shadcn.com).
2. Use the search bar at the top right to search for any term
3. Scroll down to view the search results.
4. Observe that the text in the results appears blurry.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
OS: [ Windows 11]
Browser: [ Chrome, Brave]
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,814,669,000 | ollama | Deepseek 80% size reduction | New quants done by unsloth.ai:
| MoE Bits | Disk Size | Type | Quality | Link | Down_proj |
|-----------|-----------|----------|---------|------------------------------------------------------------------------------------------------------------------------------------------------|----------------|
| 1.58-bit | 131GB | IQ1_S | Fair | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_S) | 2.06/1.56bit |
| 1.73-bit | 158GB | IQ1_M | Good | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ1_M) | 2.06bit |
| 2.22-bit | 183GB | IQ2_XXS | Better | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-IQ2_XXS) | 2.5/2.06bit |
| 2.51-bit | 212GB | Q2_K_XL | Best | [Link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF/tree/main/DeepSeek-R1-UD-Q2_K_XL) | 3.5/2.5bit |
please consider adding them
https://unsloth.ai/blog/deepseekr1-dynamic
thanks! | model request | low | Minor |
2,814,669,370 | PowerToys | Unkown error when windows awake from power saving mode | ### Microsoft PowerToys version
0.87.1.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Awake
### Steps to reproduce
My Lenovo ThinkPad Gen 5 (AMD 8840U)
Step 1: Open the LCD panel
Step 2: The Lenovo logo is displayed
Step 3: Log on with fingerprint
Step 4: The error below is displayed
```
Version: 0.87.1.0
OS Version: Microsoft Windows NT 10.0.22631.0
IntPtr Length: 8
x64: True
Date: 2025-01-24 ์ค์  9:01:02
Exception:
System.Runtime.InteropServices.COMException (0xD0000701): 0xD0000701
   at Standard.NativeMethods.DwmExtendFrameIntoClientArea(IntPtr hwnd, MARGINS& pMarInset)
   at System.Windows.Appearance.WindowBackdropManager.UpdateGlassFrame(IntPtr hwnd, WindowBackdropType backdropType)
   at System.Windows.Appearance.WindowBackdropManager.ApplyBackdrop(IntPtr hwnd, WindowBackdropType backdropType)
   at System.Windows.ThemeManager.OnSystemThemeChanged()
   at System.Windows.SystemResources.SystemThemeFilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
   at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
   at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
```
### โ๏ธ Expected Behavior
_No response_
### โ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,814,671,496 | flutter | Default foreground color animation duration doesn't apply on icon of `Button` widgets | ### Steps to reproduce
1. Create a new Flutter project
2. Use the `ElevatedButton.icon()` widget and specify an `icon` and `label` widget
3. Override the `style` property to the following:
```dart
style: ButtonStyle(
  foregroundColor: WidgetStateProperty.resolveWith<Color?>(
    (states) => states.contains(WidgetState.hovered)
        ? Colors.white
        : Colors.white24,
  ),
),
```
4. Notice the different hover animation duration on the icon and the label text (the animation doesn't seem to apply at all on the `icon`).
### Expected results
Same default foreground color hover animation duration ([`kThemeChangeDuration`](https://api.flutter.dev/flutter/material/kThemeChangeDuration-constant.html)) on both the icon and the label.
### Actual results
Default foreground color animation duration doesn't apply on the icon at all, only applied on the label widget which make it look buggy (see the screen recording attached).
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() => runApp(const MyApp());

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      theme: ThemeData(useMaterial3: false),
      home: const HomePage(),
    );
  }
}

class HomePage extends StatelessWidget {
  const HomePage({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            SizedBox(
              child: ElevatedButton.icon(
                icon: Icon(Icons.star, size: 50),
                label: Text('Button', style: TextStyle(fontSize: 36)),
                onPressed: () {},
                style: ButtonStyle(
                  foregroundColor: WidgetStateProperty.resolveWith<Color?>(
                    (states) => states.contains(WidgetState.hovered)
                        ? Colors.white
                        : Colors.white24,
                  ),
                ),
              ),
            )
          ],
        ),
      ),
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/4a5d7ce8-7a5d-4fa6-acbb-2e8b45c47684
</details>
### Logs
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.3, on macOS 15.2 24C101 darwin-arm64, locale en-IN)
    • Flutter version 3.27.3 on channel stable at /Users/souvikbiswas/fvm/versions/stable
    • Upstream repository https://github.com/flutter/flutter.git
    • Framework revision c519ee916e (6 days ago), 2025-01-21 10:32:23 -0800
    • Engine revision e672b006cb
    • Dart version 3.6.1
    • DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
    • Android SDK at /Users/souvikbiswas/Library/Android/sdk
    • Platform android-34, build-tools 34.0.0
    • Java binary at:
      /Users/souvikbiswas/Library/Java/JavaVirtualMachines/jbr-17.0.12/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment JBR-17.0.12+1-1207.37-nomod (build 17.0.12+1-b1207.37)
    • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
    • Xcode at /Applications/Xcode.app/Contents/Developer
    • Build 16C5032a
    • CocoaPods version 1.16.2
[✓] Chrome - develop for the web
    • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
    • Android Studio at /Applications/Android Studio.app/Contents
    • Flutter plugin can be installed from:
      https://plugins.jetbrains.com/plugin/9212-flutter
    • Dart plugin can be installed from:
      https://plugins.jetbrains.com/plugin/6351-dart
    • Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.96.4)
    • VS Code at /Applications/Visual Studio Code.app/Contents
    • Flutter extension version 3.102.0
[✓] Connected device (6 available)
    • Pixel 9 Pro (mobile) • 192.168.68.55:5555 • android-arm64 • Android 15 (API 35)
    • Souvik's iPad Air (mobile) • 00008101-000E41A90ED0001E • ios • iOS 18.1.1 22B91
    • Manisha's iPhone (mobile) • 00008140-000A618C0CE3001C • ios • iOS 18.1.1 22B91
    • macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
    • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101 darwin-arm64
    • Chrome (web) • chrome • web-javascript • Google Chrome 132.0.6834.111
    ! Error: Browsing on the local area network for Souvik's Apple Watch. Ensure the
      device is unlocked and discoverable via Bluetooth. (code -27)
    ! Error: Browsing on the local area network for Souvik's iPhone. Ensure the device is
      unlocked and attached with a cable or associated with the same local area network
      as this Mac.
      The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
    • All expected network resources are available.
• No issues found!
</details>
| framework,a: animation,f: material design,has reproducible steps,team-design,found in release: 3.27,found in release: 3.29 | low | Critical |
2,814,671,784 | vscode | Adopt `getTitleBarStyle` to know if custom title is used | Probing the custom title setting is not enough, as other settings also affect whether the custom title is used. Instead, use `window.ts#getTitleBarStyle()`:
https://github.com/microsoft/vscode/blob/7a3d738bbb52a0222cbd97277b07d93113b79139/src/vs/platform/window/common/window.ts#L209
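A sketch of the adoption at each call site (assuming `getTitleBarStyle` takes the configuration service and `TitlebarStyle` is the returned enum, per the permalink above):

```typescript
import { IConfigurationService } from 'vs/platform/configuration/common/configuration';
import { getTitleBarStyle, TitlebarStyle } from 'vs/platform/window/common/window';

declare const configurationService: IConfigurationService; // injected in real code

// Consult the full decision instead of probing `window.titleBarStyle` directly:
const usesCustomTitleBar = getTitleBarStyle(configurationService) === TitlebarStyle.CUSTOM;
```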
* [ ] https://github.com/microsoft/vscode/blob/7a3d738bbb52a0222cbd97277b07d93113b79139/src/vs/platform/quickinput/browser/quickInputController.ts#L922
* [ ] https://github.com/microsoft/vscode/blob/7a3d738bbb52a0222cbd97277b07d93113b79139/src/vs/workbench/contrib/debug/browser/debugToolBar.ts#L83
| bug,debt | low | Critical |
2,814,679,723 | rust | Inaccurate DWARF call stacks with optimizations | NB: I'm not sure if this is actually a bug, or just a limitation of DWARF-based call graph profiling.
---
I'm experiencing an issue where, when profiling an optimized Rust binary using `perf` with DWARF call graphs, functions appear in incorrect places in the call graph.
**Steps to reproduce:**
1. Build with `rustc 1.86.0-nightly (f85c6de55 2025-01-26)`:
```rust
use std::collections::HashMap;
#[inline(never)]
#[no_mangle]
fn merge(lhs: &mut HashMap<i32, f32>, rhs: &HashMap<i32, f32>) {
    for (k, v) in rhs {
        lhs.insert(*k, *v);
    }
}

#[no_mangle]
#[inline(never)]
fn merge_all(inputs: Vec<(HashMap<i32, f32>, HashMap<i32, f32>)>) {
    for (mut lhs, rhs) in inputs {
        merge(&mut lhs, &rhs);
        std::mem::forget(core::hint::black_box((lhs, rhs)));
    }
}

pub fn main() {
    for _ in 0..1000 {
        let mut a = HashMap::<i32, f32>::default();
        for key in 0..100 {
            a.insert(key, key as f32);
        }
        let mut b = HashMap::default();
        b.insert(-1, -1.0);
        let inputs = std::iter::from_fn(|| Some((a.clone(), b.clone())))
            .take(1000)
            .collect::<Vec<_>>();
        merge_all(inputs);
    }
}
```
```toml
[package]
name = "test-stacks"
version = "0.1.0"
edition = "2021"
[profile.release]
debug = "full"
lto = false
```
Compile with:
```
RUSTFLAGS="-C force-frame-pointers=yes -Z merge-functions=disabled -C link-arg=-Wl,--icf=none" \
cargo +nightly -Z build-std build --release
```
2. Profile and open the report:
```
sudo perf record -g --call-graph dwarf,16384 -F 20000 -k mono -- target/release/test-stacks
sudo perf report -g --percent-limit 0
```
**Profiler output:**
```
- 92.91% 0.00% test-stacks test-stacks [.] main โ
main โ
std::rt::lang_start_internal โ
std::panic::catch_unwind (inlined) โ
std::panicking::try (inlined) โ
std::panicking::try::do_call (inlined) โ
- std::rt::lang_start_internal::_$u7b$$u7b$closure$u7d$$u7d$::h645042a4ac672685 (inlined) โ
- 92.91% std::panic::catch_unwind (inlined) โ
std::panicking::try (inlined) โ
std::panicking::try::do_call (inlined) โ
core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once (inlined) โ
std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h5cf5b7758a7d9323 โ
std::sys::backtrace::__rust_begin_short_backtrace โ
- core::ops::function::FnOnce::call_once (inlined) โ
- 86.83% test_stacks::main โ
- test_stacks::bench_map_large_merge (inlined) โ
+ 62.44% core::iter::traits::iterator::Iterator::collect (inlined) โ
do_bench ---> - 22.18% do_bench โ
+ 14.85% cfree@GLIBC_2.2.5 โ
+ 2.96% core::ptr::drop_in_place<std::collections::hash::map::HashMap<i32,f32>> (inlined) โ
- 1.90% merge โ
hashmap insert ---> - 1.58% std::collections::hash::map::HashMap<K,V,S>::insert (inlined) โ
0.52% _int_free โ
0.36% _int_free_merge_chunk โ
0.28% cfree@GLIBC_2.2.5 โ
0.15% _int_free_create_chunk โ
do_bench? ---------> - 0.12% do_bench โ
+ 0.11% core::ptr::drop_in_place<std::collections::hash::map::HashMap<i32,f32>> (inlined) โ
- 0.00% <alloc::vec::into_iter::IntoIter<T,A> as core::iter::traits::iterator::Iterator>::next (inlined) โ
0.00% <core::ptr::non_null::NonNull<T> as core::cmp::PartialEq>::eq (inlined) โ
- 0.00% core::ptr::non_null::NonNull<T>::read (inlined) โ
core::ptr::read (inlined) โ
0.00% core::hint::black_box (inlined) โ
- 0.00% <alloc::vec::Vec<T,A> as core::iter::traits::collect::IntoIterator>::into_iter (inlined) โ
core::mem::manually_drop::ManuallyDrop<T>::new (inlined)
```
- In the profiler output, `do_bench` appears as a child of `HashMap::insert`, which doesn't correspond to the actual call hierarchy in the source code.
- I've tried enabling frame pointers and disabling function merging, LTO, and linker-based ICF, none of which resolves the issue.
**Expected behavior:**
I expected the call stacks in the profiler to accurately reflect the function call hierarchy, with `do_bench` not appearing as a child of `HashMap::insert`.
| A-debuginfo,T-compiler,C-bug,S-needs-repro,E-needs-investigation | low | Critical |
2,814,698,214 | ollama | Individual quantized model download count | Hey,
I have been exploring the models on the site. It would be great to have a total download count for each quantized version (e.g., q8_0, q4_K_M) to show how many times they've been downloaded. This would help users gauge the popularity and reliability of different models. Having clear download statistics for each version would make it easier to choose the best one. Thank you!
 | feature request,ollama.com | low | Minor |