id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,791,073,206 | pytorch | `torch.export` for Yolo Pose fails | ### 🐛 Describe the bug
I get an error when I try to export the Yolo-Pose model with `strict=True`.
The error goes away with `strict=False`.
First install the dependency: `pip install ultralytics`
```python
from ultralytics import YOLO
import torch
from torch.export import export
pose_model = YOLO("yolo11n-pose.pt") # Load model
pose_model.model.eval()
inputs = torch.rand((1, 3, 640, 640))
exported_program: torch.export.ExportedProgram = export(pose_model.model, args=(inputs,))
```
Error logs:
```
Traceback (most recent call last):
File "/home/agunapal/export_games/pose/pose_export.py", line 7, in <module>
exported_program: torch.export.ExportedProgram= export(pose_model.model, args=(inputs,))
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/__init__.py", line 368, in export
return _export(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1031, in wrapper
raise e
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1004, in wrapper
ep = fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/exported_program.py", line 122, in wrapper
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1957, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1251, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 1279, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/export/_trace.py", line 660, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1539, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1740, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1395, in __call__
return self._torchdynamo_orig_callable(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 545, in __call__
return _compile(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1027, in _compile
raise InternalTorchDynamoError(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 977, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 706, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 741, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 229, in _fn
return fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 658, in transform
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2912, in run
super().run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 442, in call_function
return tx.inline_user_function_return(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1816, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 410, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1738, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 349, in call_function
return super().call_function(tx, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 125, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 973, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3127, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3255, in inline_call_
tracer.run()
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1120, in run
while self.step():
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1032, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 640, in wrapper
return inner_fn(self, inst)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1828, in CALL_FUNCTION_KW
self.call_function(fn, args, kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 967, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 953, in call_function
tensor_variable = wrap_fx_proxy(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2108, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2174, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2272, in _wrap_fx_proxy
return handle_traced_output(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 2291, in handle_traced_output
set_example_value(proxy.node, example_value)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1640, in set_example_value
if symbol_to_path := torch.fx.experimental.symbolic_shapes.compute_unbacked_bindings(
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 999, in compute_unbacked_bindings
raise PendingUnbackedSymbolNotFound(
torch._dynamo.exc.InternalTorchDynamoError: PendingUnbackedSymbolNotFound: Pending unbacked symbols {zuf0} not in returned outputs FakeTensor(..., size=(6400, 1)) ((1, 1), 0).
Did you accidentally call new_dynamic_size() or item() more times than you needed to in your fake implementation?
For more help, see https://docs.google.com/document/d/1RWrH-3wLEpzR9kCS6gGBNen_-Fs-8PVbWWFE5AcgeWE/edit
from user code:
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 112, in forward
return self.predict(x, *args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 130, in predict
return self._predict_once(x, profile, visualize, embed)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 151, in _predict_once
x = m(x) # run
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _call_impl
return forward_call(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/modules/head.py", line 240, in forward
x = Detect.forward(self, x)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/modules/head.py", line 72, in forward
y = self._inference(x)
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/nn/modules/head.py", line 105, in _inference
self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
File "/home/agunapal/anaconda3/envs/export/lib/python3.10/site-packages/ultralytics/utils/tal.py", line 314, in make_anchors
stride_tensor.append(torch.full((h * w, 1), stride, dtype=dtype, device=device))
```
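As noted above, exporting with `strict=False` avoids the failure. A minimal sketch of the workaround, reusing `pose_model` and `inputs` from the repro:
```python
from torch.export import export

# strict=False takes the non-strict tracing path instead of TorchDynamo,
# which avoids the PendingUnbackedSymbolNotFound error shown above.
exported_program = export(pose_model.model, args=(inputs,), strict=False)
print(exported_program)
```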
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241112+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.26.5
Libc version: glibc-2.34
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn.so.9.1.0
/usr/lib64/libcudnn_adv.so.9.1.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn.so.9.1.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_engines_precompiled.so.9.1.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib64/libcudnn_graph.so.9.1.0
/usr/lib64/libcudnn_heuristic.so.9.1.0
/usr/lib64/libcudnn_ops.so.9.1.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241112+cu121
[pip3] torchaudio==2.5.0.dev20241112+cu121
[pip3] torchvision==0.20.0.dev20241112+cu121
[conda] numpy 2.0.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241112+cu121 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241112+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241112+cu121 pypi_0 pypi
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,module: dynamic shapes,module: dynamo,oncall: export | low | Critical |
2,791,078,474 | yt-dlp | Unable to download a video; the error message reads "Unable to extract flashvars - Failed to parse JSON" | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
Unable to extract flashvars - Failed to parse JSON (caused by JSONDecodeError('Expecting \',\' delimiter in \'eywords=" + "current\': line 33 column 83 (char 6760)')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template.
Version:
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
Command:
```shell
yt-dlp -f bestvideo+bestaudio/best --merge-output-format mkv --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/109.0" --referer "https://www.youtube.com" --add-header "Accept-Language: en-US,en;q=0.9" --add-header "DNT: 1" --add-header "Connection: keep-alive" --no-check-certificate --cookies-from-browser firefox --extractor-args "generic:impersonate" --download-archive "C:\Users\rajesh\user0\OK\Scripts\Downloder\Logs\yt-dlp - logs.txt" --progress --console-title --ignore-errors --no-warnings -o "%(title)s.%(ext)s" --exec "echo {} >> C:\Users\rajesh\user0\OK\Scripts\Downloder\Logs\yt-dlp - logs.txt" --download-sections "*00:00-" "https://zbporn.com/videos/639682/hot-indian-girl-gives-a-nice-blowjob-to-a-big-dick/"
```
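The failure appears to stem from the page's flashvars block embedding JavaScript string concatenation (note the `eywords=" + "current` fragment in the error), which is valid JavaScript but not valid JSON. A minimal sketch of why such input fails to parse (hypothetical snippet, not the actual page content):
```python
import json

# Hypothetical flashvars fragment: the '+' concatenation is legal in
# JavaScript, but json.loads stops at the '+' token after the first string.
flashvars = '{"video_keywords": "keywords" + "current", "video_id": "639682"}'

try:
    json.loads(flashvars)
except json.JSONDecodeError as e:
    print(e)  # e.g. Expecting ',' delimiter: line 1 column ...
```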
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-f', 'bestvideo+bestaudio/best', '--merge-output-format', 'mkv', '--user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/109.0', '--referer', 'https://www.youtube.com', '--add-header', 'Accept-Language: en-US,en;q=0.9', '--add-header', 'DNT: 1', '--add-header', 'Connection: keep-alive', '--no-check-certificate', '--cookies-from-browser', 'firefox', '--extractor-args', 'generic:impersonate', '--download-archive', 'C:\\Users\\rajesh\\user0\\OK\\Scripts\\Downloder\\Logs\\yt-dlp - logs.txt', '--progress', '--console-title', '--ignore-errors', '--no-warnings', '-o', '%(title)s.%(ext)s', '--exec', 'echo {} >> C:\\Users\\rajesh\\user0\\OK\\Scripts\\Downloder\\Logs\\yt-dlp - logs.txt', '--download-sections', '*00:00-', 'https://zbporn.com/videos/639682/hot-indian-girl-gives-a-nice-blowjob-to-a-big-dick/', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [dade5e35c] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.26100-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg n7.1-152-gd72536008a-20250113 (setts), ffprobe n7.1-152-gd72536008a-20250113
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\rajesh\AppData\Roaming\Mozilla\Firefox\Profiles\k777fhsd.default-release-1731233125162\cookies.sqlite"
Extracted 848 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'C:\\Users\\rajesh\\user0\\OK\\Scripts\\Downloder\\Logs\\yt-dlp - logs.txt'
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://zbporn.com/videos/639682/hot-indian-girl-gives-a-nice-blowjob-to-a-big-dick/
[generic] hot-indian-girl-gives-a-nice-blowjob-to-a-big-dick: Downloading webpage
[generic] hot-indian-girl-gives-a-nice-blowjob-to-a-big-dick: Extracting information
[debug] Looking for embeds
[debug] Identified a KVS Player
ERROR: [generic] hot-indian-girl-gives-a-nice-blowjob-to-a-big-dick: Unable to extract flashvars - Failed to parse JSON (caused by JSONDecodeError('Expecting \',\' delimiter in \'eywords=" + "current\': line 33 column 83 (char 6760)')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\generic.py", line 2548, in _real_extract
File "yt_dlp\extractor\generic.py", line 2677, in _extract_embeds
File "yt_dlp\extractor\generic.py", line 2294, in _extract_kvs
File "yt_dlp\extractor\common.py", line 1371, in _search_json
File "yt_dlp\utils\_utils.py", line 564, in decode
File "json\decoder.py", line 353, in raw_decode
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 33 column 83 (char 6760)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 1091, in _parse_json
File "json\__init__.py", line 359, in loads
File "yt_dlp\utils\_utils.py", line 573, in decode
json.decoder.JSONDecodeError: Expecting ',' delimiter in 'eywords=" + "current': line 33 column 83 (char 6760)
```
| NSFW,site-bug,triage | low | Critical |
2,791,101,473 | PowerToys | FancyZones keeps swapping external monitors | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
I love FancyZones and rely on it daily. Lately I've noticed an issue where FancyZones swaps the layouts on my two external monitors. This swapping happens randomly on boot, on resuming from sleep, or when connecting to my dock. It doesn't happen every time, but it occurs multiple times per week. Unfortunately, this has happened with two different docks and multiple computers.
**Repro steps:**
1. Connect a Surface Dock v2 or Plugable Thunderbolt 4 dock to two 27” Dell UltraSharp monitors (U2717D)
2. Connect a Surface Laptop 5 to the Surface or Plugable docks, or an ASUS ProArt P16 to the Plugable dock
3. Set the PC to output the display to external monitors only
4. Restart the computer, wake it up from sleep, or disconnect/reconnect from the dock
### ✔️ Expected Behavior
FancyZones should remember which layout is on the left and right external displays
### ❌ Actual Behavior
FancyZones usually remembers which layout is on the left and right displays, but sometimes they are randomly swapped so the left layout is on the right monitor and the right layout is on the left monitor. This layout seems to persist across restarts until FancyZones swaps the layouts back. The only solution is to change my FancyZones layouts to be the ones I want.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,791,152,837 | PowerToys | Please prevent cmd PATH override | ### Description of the new feature / enhancement
When I type `>cmd` I expect it to open the normal `cmd`, with the `PATH` system variable I have meticulously extended myself.
Instead, PowerToys overrides `PATH`, which removes some of my custom paths and all paths to disks other than `C:`.
### Scenario when this would be used?
I have vscode and a bunch of `.code-workspace` files for it. They are spread accross folders so I wrote a small batch script in `C:\Scripts`.
I have setup `PATH` so when I open `cmd` and type `wsp myproject` it will run the batch file and look for a workspace with that name.
### Supporting information
I have extended both the system-wide and user-based `PATH`.
PowerToys overrides `PATH`; try running `echo %PATH%` in a normal command line and in one opened from the app to compare.
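A quick way to compare the two environments is to dump `PATH` one entry per line in both a normal `cmd` and one launched from PowerToys, then diff the output (a hypothetical helper, not part of PowerToys):
```python
import os

# Print each PATH entry on its own line so the output from the two
# shells can be compared side by side.
for entry in os.environ.get("PATH", "").split(os.pathsep):
    print(entry)
```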
P.S. I couldn't really call it a bug; maybe it's intentional, but it's certainly unexpected and annoying. | Needs-Triage | low | Critical |
2,791,154,632 | flutter | `Container` can lose its child's state | ### Use case
The `Container` widget changes the tree hierarchy when its arguments change. This causes its child to lose its state if it doesn't have a global key.
### Proposal
If possible, we should fix `Container` such that updating its configuration doesn't cause its child to lose state.
If not possible, we should update `Container`'s docs to explain this problem and how to use a global key to work around it. | framework,P2,team-framework,triaged-framework | low | Minor |
2,791,191,768 | vscode | Filter bar in Log file open in editor | I love this filter:
*(screenshot: the filter control in the Output panel)*
Can I also have it in the editor itself somewhere?
*(screenshot: a log file open in a regular editor tab)*
Use case: I want to filter down to only the events I care about, but compare an old log file to a new log file. | feature-request,output | low | Minor |
2,791,234,339 | pytorch | Interaction between torch._dynamo.disable and fullgraph=True | ### 🚀 The feature, motivation and pitch
We're encouraging people to use fullgraph=True to better identify graph breaks. At the same time, we're empowering users to use escape hatches like torch._dynamo.disable. These two work against each other, and the best workaround I can think of is to ask users to stop using `fullgraph=True` and to use some other tool to inspect their graph breaks, e.g. tlparse, graph count, or TORCH_LOGS.
We should consider special casing torch._dynamo.disable so that it does not raise errors with fullgraph=True. This could be controlled by a flag, but I think it can be the default enablement experience.
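A minimal sketch of the conflict (assuming current behavior, where calling a disabled function is a graph break and `fullgraph=True` turns graph breaks into hard errors):
```python
import torch

@torch._dynamo.disable
def helper(x):
    # Deliberately opted out of compilation via the escape hatch.
    return x.sin()

@torch.compile(fullgraph=True)
def f(x):
    # This call forces a graph break, which fullgraph=True escalates
    # into an error instead of falling back to eager execution.
    return helper(x) + 1

f(torch.randn(4))  # raises a graph-break error under fullgraph=True
```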
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,791,238,002 | PowerToys | FancyZones with Citrix Receivers is not working. | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
FancyZones
### Steps to reproduce
I am trying to utilize FancyZones to pin my applications in a 3x3 grid. The applications in question are Citrix environments, and they do not snap to the grid, nor do they resize. I saw some comments from 2021 and 2022 that suggested this could be fixed using the Citrix Workspace settings to adjust for high DPI scaling, but unfortunately that does not do anything for me. Are there any other suggestions for issues with Citrix? Thanks!
### ✔️ Expected Behavior
Citrix applications to snap to 3x3 grid in the same way that the web browser panes do.
### ❌ Actual Behavior
Citrix panes do not align in the grid where they are positioned, do not resize, and do not stay within their snapped grids (they aren't snapping in at all so I guess that's part of the reason?).
### Other Software
Citrix Workspace environments (Epic). | Issue-Bug,Needs-Triage | low | Minor |
2,791,281,160 | godot | Script reference is broken when script is moved in FileSystem dock | ### Tested versions
4.4 beta1
Didn't test other versions, but probably a regression.
### System information
W10
### Issue description
When you have a script opened and then move it to another directory, the script reference in the editor is broken. The internal path does not update, so re-focusing the editor causes errors about a missing file, and trying to save the file has no effect.
### Steps to reproduce
1. Open a script file in script editor
2. In FileSystem dock, move the file to another directory
3. Open the moved file in an external editor
4. In Godot, modify and save the still-opened script
5. See in the external editor that the changes are not saved
Alternatively:
3. Unfocus and focus the editor
4. Error in output
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,regression | low | Critical |
2,791,292,409 | pytorch | UserWarning: cuDNN SDPA backward got grad_output.strides() != output.strides() | ### 🐛 Describe the bug
I'm getting this warning when using Trainer and FSDP to pre-train Llama3.1-8b.
`UserWarning: cuDNN SDPA backward got grad_output.strides() != output.strides()`
This might introduce overhead in the training process.
I have tried to disable the backend with:
```python
import os
os.environ["TORCH_CUDNN_SDPA_ENABLED"] = "0"
from torch.nn.attention import SDPBackend
torch.backends.cuda.sdp_kernel = SDPBackend.FLASH_ATTENTION
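# NOTE (assumption): torch.backends.cuda.sdp_kernel is a context-manager
# function in recent PyTorch, not a writable setting, so this assignment
# rebinds the name without affecting kernel dispatch.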
```
However, the HF Trainer ignores these settings and continues using SDPA.
Here is the full script:
```python
import datasets
import torch
import time
from torch.utils.data import DataLoader, Dataset
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
TrainingArguments,
TrainerCallback,
Trainer,
set_seed,
DataCollatorWithPadding,
)
from transformers.integrations import TensorBoardCallback
import GPUtil, psutil
from torch.utils.tensorboard import SummaryWriter
# Explicitly disable cuDNN SDPA to avoid stride mismatch warnings
import os
os.environ["TORCH_CUDNN_SDPA_ENABLED"] = "0"
# Set Flash Attention as the preferred backend
from torch.nn.attention import SDPBackend
torch.backends.cuda.sdp_kernel = SDPBackend.FLASH_ATTENTION
# Model and dataset configuration
LLM_MODEL = "meta-llama/Meta-Llama-3.1-8B"
DATASET_PATH = "../data-prep/data_files/llama31_tokenized_docs_full_dataset.parquet"
OUTPUT_DIR = "./llama3_8b_ddp_pretraining"
set_seed(42)
# Load model and tokenizer
model_name = LLM_MODEL
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
trust_remote_code=True,
torch_dtype=torch.bfloat16,
)
model.config.use_cache = False
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Ensure pad token is set for the tokenizer
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
# Custom dataset class with contiguous tensors
class CustomDataset(Dataset):
def __init__(self, dataset_name, tokenizer, split="train", max_tokens=None, max_length=512):
self.dataset = datasets.load_dataset(
"parquet",
data_files=dataset_name,
split=split
)
if max_tokens is not None:
self.dataset = self.dataset.filter(lambda x: x["num_tokens"] <= max_tokens)
self.tokenizer = tokenizer
self.max_length = max_length
def __len__(self):
return len(self.dataset)
def __getitem__(self, idx):
input_ids = self.dataset[idx]["input_ids"]
if len(input_ids) > self.max_length:
input_ids = input_ids[:self.max_length]
attention_mask = [1] * len(input_ids)
padding_length = self.max_length - len(input_ids)
if padding_length > 0:
input_ids += [self.tokenizer.pad_token_id] * padding_length
attention_mask += [0] * padding_length
# Ensure tensors are contiguous
input_ids = torch.tensor(input_ids, dtype=torch.long).contiguous()
attention_mask = torch.tensor(attention_mask, dtype=torch.long).contiguous()
labels = input_ids.clone().contiguous()
return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}
# Initialize dataset and data collator
train_dataset = CustomDataset(
dataset_name=DATASET_PATH,
tokenizer=tokenizer,
split="train",
max_tokens=512,
max_length=512,
)
print(f"Training dataset size is: {len(train_dataset.dataset)} samples")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# Training arguments
training_args = TrainingArguments(
output_dir=OUTPUT_DIR,
optim="adamw_torch",
num_train_epochs=1,
per_device_train_batch_size=64,
gradient_accumulation_steps=8,
learning_rate=3e-5,
weight_decay=0.01,
warmup_steps=10,
lr_scheduler_type="cosine",
gradient_checkpointing=True,
dataloader_num_workers=8,
bf16=True,
logging_steps=10,
report_to="tensorboard",
save_strategy="epoch",
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=None,
data_collator=data_collator,
)
trainer.train()
```
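For reference, the supported way to pin an SDPA backend in recent PyTorch is the `torch.nn.attention.sdpa_kernel` context manager rather than attribute assignment. A minimal sketch (assuming PyTorch >= 2.3; note that because the HF Trainer drives the forward/backward passes internally, the context manager would have to wrap `trainer.train()` itself to take effect):
```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.bfloat16)

# Restrict scaled_dot_product_attention to the flash-attention kernel
# inside this block; the cuDNN SDPA backend is excluded here.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)
```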
### Versions
```
NVIDIA (PyTorch container) Release 24.12 (build 126674149)
Using CUDA 12.6 driver version 560.35.05 with kernel driver version 550.127.08
pytorch-triton 3.0.0+72734f086
torch 2.6.0a0+df5bbc09d1.nv24.12
torch-tb-profiler 0.4.3
torch_tensorrt 2.6.0a0
torchprofile 0.0.4
torchvision 0.20.0a0
transformers 4.48.0
accelerate 1.2.1
```
cc @csarofeen @ptrblck @xwang233 @eqy | module: cudnn,triaged,module: sdpa | low | Critical |
2,791,303,487 | flutter | Cocoon backfiller could take advantage of idle bots | Noticed while investigating [#161674](https://github.com/flutter/flutter/issues/161674#issuecomment-2594184332): The backfiller only runs one builder for each 'task'. If you look at some of the [bot lists](https://chromium-swarm.appspot.com/botlist?c=id&c=task&c=os&c=status&d=asc&f=pool%3Aluci.flutter.prod&k=pool&s=id); we should be utilizing them more - e.g. right now we have 364 idle. If I look at some of the bot utilizations, they appear to be <33% walltime.
| team-infra | low | Minor |
2,791,325,709 | PowerToys | [Settings] ImageResizer new preset does not use correct custom dimensions | ### Microsoft PowerToys version
0.87
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Settings
### Steps to reproduce
1. Open the Settings application and navigate to the ImageResizer page
2. Click the Add new size button
3. Note the dimensions of the new preset
### ✔️ Expected Behavior
The dimensions should match the custom size defaults from the `ImageresizerCustomSize` settings property: 1024 x 640.
(See the `ImageResizerProperties` class.)
### ❌ Actual Behavior
The dimensions repeat the existing Small preset width and height: 854 x 480.
### Other Software
_No response_ | Issue-Bug,Resolution-Fix Committed,Product-Image Resizer | low | Minor |
2,791,356,691 | ollama | Ollama not respecting structured outputs with some ordering of refs | ### What is the issue?
The following program fails, but should work.
Changing the `schema = ...` line to use `schema_works`, which is a slightly different schema, works. The two schemas should parse the same JSON - the only difference is the ordering in `"$defs"`.
I also have examples of pydantic-generated schemas which fail because of this bug.
It looks like the `json_schema_to_grammar` implementation has a "binding problem": it binds a def in `_refs` before visiting it, expecting a `"input#...` prefix, and consequently generates an incorrect grammar.
```python
from ollama import chat
import json
from jsonschema import validate, ValidationError
#
# Schema originally generated by pydantic. Depending
# on the alphabetical order of the names of the classes,
# both failing and working schemas can be generated.
# This is the smallest example I could find, and
# I'm using client version is 0.5.5.
#
# The *only* difference is the order of "C" and "A" in "$defs",
# This should not make any difference: both are supposed to be the
# same schema.
schema_fails = {
"$defs": {
"C": {
"properties": {"payload": {"$ref": "#/$defs/A"}},
"required": ["payload"],
"title": "C",
"type": "object",
},
"A": {
"properties": {"payload": {"$ref": "#/$defs/E"}},
"required": ["payload"],
"title": "A",
"type": "object",
},
"E": {"type": "boolean"},
},
"properties": {"payload": {"$ref": "#/$defs/C"}},
"required": ["payload"],
"title": "B",
"type": "object",
}
schema_works = {
"$defs": {
"A": {
"properties": {"payload": {"$ref": "#/$defs/E"}},
"required": ["payload"],
"title": "A",
"type": "object",
},
"C": {
"properties": {"payload": {"$ref": "#/$defs/A"}},
"required": ["payload"],
"title": "C",
"type": "object",
},
"E": {"type": "boolean"},
},
"properties": {"payload": {"$ref": "#/$defs/C"}},
"required": ["payload"],
"title": "B",
"type": "object",
}
schema = schema_fails # or schema_works
# schema_fails generates {"payload": {"payload": {"payload": {"payload": "true"}}}}
# schema_works generates {"payload": {"payload": {"payload": true}}}
response = chat(
model="mistral-nemo:latest",
messages=[
{
"role": "user",
"content": "I want to choose true, inside A, inside C, inside B",
}
],
format=schema,
options={"temperature": 0}, # Make responses more deterministic
)
valid_json = json.loads(response.message.content)
print(json.dumps(valid_json))
validate(instance=valid_json, schema=schema)
print("validates!")
```
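To confirm that the two schemas accept exactly the same documents (a quick check reusing `schema_fails` and `schema_works` from the repro above; `$defs` ordering has no effect on validation):
```python
doc = {"payload": {"payload": {"payload": True}}}
validate(instance=doc, schema=schema_works)  # passes
validate(instance=doc, schema=schema_fails)  # also passes: only the $defs order differs
print("both schemas accept the same document")
```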
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.5 | bug | low | Critical |
2,791,392,630 | pytorch | DISABLED test_distributed_checkpoint_state_dict_type1_cuda (__main__.TestDistributedCheckpointCUDA) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_distributed_checkpoint_state_dict_type1_cuda&suite=TestDistributedCheckpointCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35670700293).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_distributed_checkpoint_state_dict_type1_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 597, in wrapper
self._join_processes(fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 837, in _join_processes
self._check_return_codes(elapsed_time)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 886, in _check_return_codes
raise RuntimeError(error)
RuntimeError: Process 1 exited with error code 10 and exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 726, in run_test
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 599, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3128, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3128, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 485, in instantiated_test
raise rte
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 465, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 199, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/checkpoint_utils.py", line 44, in wrapper
func(self, *args, **kwargs)
File "/var/lib/jenkins/workspace/test/distributed/fsdp/test_distributed_checkpoint.py", line 67, in test_distributed_checkpoint
state_dict = model.state_dict()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2204, in state_dict
module.state_dict(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2204, in state_dict
module.state_dict(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2204, in state_dict
module.state_dict(
[Previous line repeated 1 more time]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2210, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 714, in _post_state_dict_hook
processed_state_dict = _post_state_dict_hook_fn[fsdp_state._state_dict_type](
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/fsdp/_state_dict_utils.py", line 432, in _local_post_state_dict_hook
sharded_tensor = init_from_local_shards(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py", line 407, in init_from_local_shards
return ShardedTensor._init_from_local_shards(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/_shard/sharded_tensor/api.py", line 753, in _init_from_local_shards
dist.all_gather_object(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 3037, in all_gather_object
input_tensor.resize_(max_object_size)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory.
Exception raised from allocate at /var/lib/jenkins/workspace/c10/cuda/CUDACachingAllocator.cpp:3623 (most recent call first):
C++ CapturedTraceback:
#4 std::_Function_handler<std::shared_ptr<c10::LazyValue<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > const> (), c10::SetStackTraceFetcher(std::function<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&) from Logging.cpp:0
#5 c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) from ??:0
#6 c10::cuda::CUDACachingAllocator::Native::NativeCachingAllocator::allocate(unsigned long) from :0
#7 at::native::resize_bytes_cuda(c10::StorageImpl*, unsigned long) from ??:0
#8 at::native::resize_cuda_(at::Tensor const&, c10::ArrayRef<long>, std::optional<c10::MemoryFormat>) from ??:0
#9 at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from ??:0
#10 torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from VariableTypeManual.cpp:0
#11 at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from ??:0
#12 torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from VariableTypeManual.cpp:0
#13 at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, std::optional<c10::MemoryFormat>) from ??:0
#14 torch::autograd::THPVariable_resize_(_object*, _object*, _object*) from python_variable_methods.cpp:0
#15 method_vectorcall_VARARGS_KEYWORDS from :0
#16 _PyEval_EvalFrameDefault from ??:0
#17 _PyFunction_Vectorcall from ??:0
#18 PyObject_Call from ??:0
#19 _PyEval_EvalFrameDefault from ??:0
#20 _PyFunction_Vectorcall from ??:0
#21 _PyEval_EvalFrameDefault from ??:0
#22 method_vectorcall from :0
#23 PyObject_Call from ??:0
#24 _PyEval_EvalFrameDefault from ??:0
#25 _PyFunction_Vectorcall from ??:0
#26 _PyEval_EvalFrameDefault from ??:0
#27 _PyFunction_Vectorcall from ??:0
#28 _PyEval_EvalFrameDefault from ??:0
#29 _PyFunction_Vectorcall from ??:0
#30 _PyEval_EvalFrameDefault from ??:0
#31 _PyFunction_Vectorcall from ??:0
#32 _PyEval_EvalFrameDefault from ??:0
#33 method_vectorcall from :0
#34 _PyEval_EvalFrameDefault from ??:0
#35 method_vectorcall from :0
#36 _PyEval_EvalFrameDefault from ??:0
#37 method_vectorcall from :0
#38 _PyEval_EvalFrameDefault from ??:0
#39 method_vectorcall from :0
#40 _PyEval_EvalFrameDefault from ??:0
#41 method_vectorcall from :0
#42 _PyEval_EvalFrameDefault from ??:0
#43 _PyFunction_Vectorcall from ??:0
#44 PyObject_Call from ??:0
#45 _PyEval_EvalFrameDefault from ??:0
#46 _PyFunction_Vectorcall from ??:0
#47 PyObject_Call from ??:0
#48 _PyEval_EvalFrameDefault from ??:0
#49 _PyFunction_Vectorcall from ??:0
#50 PyObject_Call from ??:0
#51 _PyEval_EvalFrameDefault from ??:0
#52 method_vectorcall from :0
#53 _PyEval_EvalFrameDefault from ??:0
#54 method_vectorcall from :0
#55 _PyEval_EvalFrameDefault from ??:0
#56 method_vectorcall from :0
#57 _PyEval_EvalFrameDefault from ??:0
#58 method_vectorcall from :0
#59 _PyEval_EvalFrameDefault from ??:0
#60 _PyFunction_Vectorcall from ??:0
#61 _PyEval_EvalFrameDefault from ??:0
#62 method_vectorcall from :0
#63 PyObject_Call from ??:0
#64 _PyEval_EvalFrameDefault from ??:0
#65 _PyFunction_Vectorcall from ??:0
#66 _PyEval_EvalFrameDefault from ??:0
#67 _PyFunction_Vectorcall from ??:0
#68 _PyEval_EvalFrameDefault from ??:0
#69 _PyFunction_Vectorcall from ??:0
#70 _PyEval_EvalFrameDefault from ??:0
#71 _PyFunction_Vectorcall from ??:0
#72 _PyEval_EvalFrameDefault from ??:0
#73 _PyEval_Vector from :0
#74 PyEval_EvalCode from ??:0
#75 run_eval_code_obj from :0
#76 run_mod from :0
#77 PyRun_StringFlags.localalias from :0
#78 PyRun_SimpleStringFlags.localalias from :0
#79 Py_RunMain.localalias from :0
#80 Py_BytesMain from ??:0
#81 __libc_start_main from ??:0
#82 _start from ??:0
To execute this test, run the following from the base repo dir:
python test/distributed/fsdp/test_distributed_checkpoint.py TestDistributedCheckpointCUDA.test_distributed_checkpoint_state_dict_type1_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `distributed/fsdp/test_distributed_checkpoint.py`
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 @wdvr | oncall: distributed,module: flaky-tests,skipped | low | Critical |
2,791,392,684 | pytorch | DISABLED test_aoti_eager_cache_hit_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_cache_hit_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35679665075).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_cache_hit_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1071, in test_aoti_eager_cache_hit
res_value = getattr(torch.ops.aten, op_name)(input_tensor)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: aot_compile_function.ptr() != nullptr && aot_compile_function.ptr() != Py_None INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/torch/csrc/inductor/aoti_eager/kernel_holder.cpp":507, please report a bug to PyTorch. Failed to import - torch._inductor.aoti_eager.aoti_compile_with_persistent_cache
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_aoti_eager_cache_hit_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,791,393,380 | pytorch | DISABLED test_repeat_graph_capture_cublas_workspace_memory (__main__.TestCuda) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_repeat_graph_capture_cublas_workspace_memory&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35670628070).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 6 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_repeat_graph_capture_cublas_workspace_memory`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_cuda.py", line 2048, in test_repeat_graph_capture_cublas_workspace_memory
self.assertFalse(used_gb_before + 0.1 < used_gb_after)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 681, in assertFalse
raise self.failureException(msg)
AssertionError: True is not false
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_cuda.py TestCuda.test_repeat_graph_capture_cublas_workspace_memory
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @eqy @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | module: cuda,module: rocm,triaged,module: flaky-tests,skipped | low | Critical |
2,791,393,381 | pytorch | DISABLED test_aoti_eager_support_str_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_support_str_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35679665765).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_support_str_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1023, in test_aoti_eager_support_str
res_value = getattr(torch.ops.aten, op_name)(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_aoti_eager_support_str_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,791,393,383 | pytorch | DISABLED test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35679666011).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 9 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 925, in test_aoti_eager_dtype_device_layout
res = torch.tril_indices(
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_aoti_eager_dtype_device_layout_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,791,400,689 | next.js | Next.js 15.1 not saving static content to Redis at build time with instrumentation using @neshca/cache-handler | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/priceless-hill-52c3m3
### To Reproduce
1. Update url from cache-handler with your Redis instance
2. yarn install
3. yarn build
4. Verify that no key exists for /index in Redis
5. yarn start
6. Verify that no key exists for /index in Redis, but /xpto in Redis exists
7. Go to the browser and open => http://localhost:3000/
8. Verify that /index key exists in Redis
### Current vs. Expected behavior
I was expecting to have both /index and /xpto keys after `yarn build`, but nothing is being saved.
I was also expecting to have /index after `yarn start`, but only /xpto is being saved.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Enterprise
Available memory (MB): 32401
Available CPU cores: 12
Binaries:
Node: 22.4.1
npm: 10.9.0
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: N/A
Next.js Config:
output: standalone
```
### Which area(s) are affected? (Select all that apply)
Instrumentation
### Which stage(s) are affected? (Select all that apply)
next build (local), next start (local)
### Additional context
I was expecting the instrumentation implementation to populate the Redis cache with pre-rendered pages at build time, but it only happens at start time. Besides that, I am unable to save the pre-rendered page if the page is at the root of the app, i.e. app/page.tsx.
This issue happens in both Next.js 15.1 and 14.2.17. | Instrumentation | low | Minor |
2,791,406,661 | flutter | use Gradle's Kotlin DSL in plugin templates | ### Use case
This request builds on #151166 to ask that templates for _plugins_ also create `build.gradle.kts` for Android.
Using Flutter 3.27.2, `flutter create --template plugin --platforms android` produces Android code with Groovy Gradle files.
See https://github.com/flutter/flutter/tree/3.27.2/packages/flutter_tools/templates/plugin/android-kotlin.tmpl .
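For illustration, a minimal sketch of what a Kotlin DSL plugin build file could look like; the namespace, SDK versions, and plugin IDs are placeholders, not the actual Flutter template contents:
```kotlin
// Illustrative sketch only: values are placeholders, not the real template.
plugins {
    id("com.android.library")
    id("org.jetbrains.kotlin.android")
}

android {
    namespace = "com.example.my_plugin" // placeholder
    compileSdk = 34

    defaultConfig {
        minSdk = 21
    }
}
```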
### Proposal
See above. | c: new feature,platform-android,tool,t: gradle,c: proposal,P2,a: plugins,team-android,triaged-android | low | Minor |
2,791,421,759 | PowerToys | make toped windows more obvious | ### Description of the new feature / enhancement
Please add a sign or pattern to "always on top" windows (maybe to the left of the minimize button?); the frame is too inconspicuous.
I do know the frame can be bolded, but some backgrounds or windows are dark, which means I can't see whether the window is on top.
### Scenario when this would be used?
window on top
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,791,465,075 | pytorch | Add API to detect if activation checkpointing is enabled in the current region or not | ### 🚀 The feature, motivation and pitch
I've been developing an experimental feature for torchao and doing a [PoC integration](https://github.com/pytorch/torchtitan/pull/778) in torchtitan. The implementation is based on custom Triton kernels, and we need to execute different kernels at different points in forward/backward depending on whether AC is enabled or not.
At a high level:
- If activation checkpointing is enabled, we may want to optimize for peak memory usage and not precompute + save certain tensors for backward.
- If activation checkpointing is not enabled, we may want to optimize for throughput and precompute some tensors for backward pass during the forward pass, if there is a way to do efficiently.
After searching for a way to do this online, and then checking with @soulitzer, I found that PyTorch currently provides no API to detect whether the current region is using activation checkpointing or not. This would be a very useful feature for use cases like the one above.
### Alternatives
As an alternative/workaround, I implemented an explicit flag in my prototype code to indicate if we should optimize for peak memory usage in this particular FP8 linear layer or not, and [execute kernels conditionally based on that flag](https://github.com/pytorch/ao/blob/5e59b510b97d5a1cd08da59b1f6b2df6a1d8cdfd/torchao/prototype/float8nocompile/float8nocompile_linear.py#L72).
However, this is somewhat of a hack and hurts composability with AC. It relies on the user remembering to set this flag if they are using AC in this layer, and requires the user to implement [helper functions](https://github.com/pytorch/torchtitan/pull/778/files#diff-7792012777a5a91b75304ed92ff6414b2f414e1a92a20c7ce9f64b54fb3c7d4bR112-R119) for more advanced AC strategies like selective per layer AC.
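For concreteness, a minimal sketch of this flag-based workaround; the class and attribute names are my own illustration, not the actual torchao prototype code:
```python
import torch
import torch.nn.functional as F

class Float8LinearSketch(torch.nn.Linear):
    """Illustrative only: a linear layer that branches on a user-set AC flag."""

    def __init__(self, *args, use_activation_checkpointing: bool = False, **kwargs):
        super().__init__(*args, **kwargs)
        # The user must keep this flag in sync with how the module is wrapped.
        self.use_activation_checkpointing = use_activation_checkpointing

    def forward(self, x):
        if not self.use_activation_checkpointing:
            # Throughput mode: precompute something the backward pass reuses.
            self.weight_t = self.weight.t().contiguous()
        # Memory mode skips the precompute; backward recomputes what it needs.
        return F.linear(x, self.weight, self.bias)
```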
### Additional context
_No response_
cc @soulitzer | module: checkpoint,triaged | low | Minor |
2,791,616,648 | vscode | GitHub - why is there an open GitHub menu? I do not deploy my repo to GitHub | Version: 1.97.0-insider
Commit: c799d209cd4846a2a822b55dbf2ca21893008faa
Date: 2025-01-15T23:09:30.246Z
Browser: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Code-Insiders/1.97.0-insider Chrome/128.0.6613.186 Electron/32.2.7 Safari/537.36
 | github | low | Minor |
2,791,574,555 | pytorch | Error loading "torch\lib\aoti_custom_ops.dll" or one of its dependencies, when importing Torch, when building from Source on Windows 11 with cuDNN. | ### 🐛 Describe the bug
Hi there, thanks for the great work.
When I build from source on Windows 11 with CUDA 12.6 and VS 2022, specifying cuDNN (either 9.5.1 or 9.6.0), I get the following error:
```
>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 999, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "C:\Users\User\Desktop\pytorch_compile\pytorch\Miniconda3\Lib\site-packages\torch\__init__.py", line 274, in <module>
_load_dll_libraries()
File "C:\Users\User\Desktop\pytorch_compile\pytorch\Miniconda3\Lib\site-packages\torch\__init__.py", line 270, in _load_dll_libraries
raise err
OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\Pancho\Desktop\pytorch_compile\pytorch\Miniconda3\Lib\site-packages\torch\lib\aoti_custom_ops.dll" or one of its dependencies.
```
Dependencies doesn't report any missing DLL

And procmon shows

I did have to set the following in the CMake file:
```
set(CUDNN_LIBRARY_PATH "C:/Program Files/NVIDIA/CUDNN/v9.6/lib/12.6/x64/cudnn64_9.lib")
set(CUDNN_INCLUDE_PATH "C:/Program Files/NVIDIA/CUDNN/v9.6/include/12.6")
```
Otherwise it wouldn't detect it, even with those environment variables set on the Path. Related: https://github.com/pytorch/pytorch/issues/114054
Paths are
[Paths.txt](https://github.com/user-attachments/files/18432859/Paths.txt)
Cmake config is
[CMakeCache.txt](https://github.com/user-attachments/files/18432788/CMakeCache.txt)
Without cuDNN set up, torch works, albeit very slowly, for image diffusion pipelines.
The commit used was 834086c, mostly following the `.\.ci\pytorch\win-test-helpers\build_pytorch.bat` file.
### Versions
Not applicable (can't import torch)
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | module: build,module: windows,triaged | low | Critical |
2,791,584,852 | pytorch | [inductor][cpu]amp fp16 llama dynamic shape cpp wrapper performance regression in 2025-01-07 nightly release | ### 🐛 Describe the bug
**amp fp16 dynamic shape cpp wrapper**

| suite | name | thread | batch_size_new | speed_up_new | inductor_new | eager_new | compilation_latency_new | batch_size_old | speed_up_old | inductor_old | eager_old | compilation_latency_old | Ratio Speedup(New/old) | Eager Ratio(old/new) | Inductor Ratio(old/new) | Compilation_latency_Ratio(old/new) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| torchbench | llama | multiple | 32 | 2.211985 | 0.021930711999999998 | 0.048510405983319994 | 39.177836 | 32 | 2.507979 | 0.018847622 | 0.04726944017593801 | 41.306366 | 0.88 | 0.97 | 0.86 | 1.05 |
| torchbench | llama | single | 1 | 3.950647 | 0.01318508 | 0.05208959674676 | 37.938252 | 1 | 4.542274 | 0.011483397 | 0.05216073562477799 | 40.390422 | 0.87 | 1.0 | 0.87 | 1.06 |
the last good commit: e88d06f54eeb80669a8a97322cf55c4da0519f08
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench llama amp_fp16 first dynamic cpp
Testing with dynamic shapes.
Testing with cpp wrapper.
Testing with inductor.
multi-threads testing....
loading model: 0it [00:00, ?it/s]
cpu eval llama
running benchmark: 100%|███████████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 14.90it/s]
2.818x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,llama,32,2.817930,19.087040,33.137240,0.947519,340.724531,359.596442,531,1,0,0,0,0,0
```
the bad commit: b5b419d6276e5f0a9df623b45e9fb478f93ecc4b
```
/workspace/pytorch# bash inductor_single_run.sh multiple inference performance torchbench llama amp_fp16 first dynamic cpp
running benchmark: 100%|███████████████████████████████████████████████████████████████████| 50/50 [00:03<00:00, 13.42it/s]
2.532x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,llama,32,2.532279,22.605699,37.389425,0.928050,340.260454,366.640333,531,1,0,0,0,0,0
```
### Versions
**SW info**

| name | target_branch | target_commit | refer_branch | refer_commit |
| --- | --- | --- | --- | --- |
| torchbench | main | 766a5e3a | main | 766a5e3a |
| torch | main | f2d6cfa6775601df5a038f7a4d0b37da75a53ed9 | main | cf0b72c4ab960a847758132cc501cf793926e070 |
| torchvision | main | 0.19.0a0+d23a6e1 | main | 0.19.0a0+d23a6e1 |
| torchtext | main | 0.16.0a0+b0ebddc | main | 0.16.0a0+b0ebddc |
| torchaudio | main | 2.6.0a0+b6d4675 | main | 2.6.0a0+b6d4675 |
| torchdata | main | 0.7.0a0+11bb5b8 | main | 0.7.0a0+11bb5b8 |
| dynamo_benchmarks | main | nightly | main | nightly |
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh multiple inference performance torchbench llama amp_fp16 first dynamic cpp
Suspected guilty commit: b5b419d6276e5f0a9df623b45e9fb478f93ecc4b
[torchbench-llama-inference-amp_fp16-dynamic-cpp-multiple-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/18432852/torchbench-llama-inference-amp_fp16-dynamic-cpp-multiple-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129 @CaoE | oncall: pt2,oncall: cpu inductor | low | Critical |
2,791,598,472 | ollama | Multiple goroutines writing to the same file at once likely corrupts downloads | ### What is the issue?
I'm getting a lot of digest errors on Windows when downloading new models. Models bigger than 10 GB often take multiple tries; for models > 15 GB it's nearly impossible to download them.
I think the culprit is likely to be this code:
https://github.com/ollama/ollama/blob/93a8daf285af45ed71544e79aae0cb15245e75f4/server/download.go#L271-L301
I don't think concurrent writes to the same file are safe (at least on Windows), even with offsets.
I think each goroutine should get its own part file, with the parts merged at the end; a sketch of this is below.
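A rough sketch of that approach in Go; the function name and the part-file naming scheme are my own illustration, not Ollama's actual code:
```go
// Sketch only: merge per-goroutine part files into the final blob.
package download

import (
	"fmt"
	"io"
	"os"
)

// mergeParts concatenates dst.part0 .. dst.part{numParts-1} into dst, in order.
func mergeParts(dst string, numParts int) error {
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	for i := 0; i < numParts; i++ {
		name := fmt.Sprintf("%s.part%d", dst, i)
		part, err := os.Open(name)
		if err != nil {
			return err
		}
		_, err = io.Copy(out, part)
		part.Close()
		if err != nil {
			return err
		}
		os.Remove(name) // best-effort cleanup of the part file
	}
	return nil
}
```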
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.5.6 | bug | low | Critical |
2,791,613,224 | node | Add CBOR support | ### What is the problem this feature will solve?
CBOR is an alternative to JSON.
Due to its binary format and rich data type support, it is ideally suited for machine-to-machine data interchange.
### What is the feature you are proposing to solve the problem?
I'm proposing a "CBOR" counterpart to the "JSON" object.
As with `JSON`, a single global object should suffice.
https://github.com/cyberphone/CBOR.js#cborjs
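To make the proposed shape concrete, a hypothetical sketch; `CBOR.encode` and `CBOR.decode` are assumed names mirroring `JSON.stringify`/`JSON.parse`, not an existing Node.js API:
```js
// Hypothetical global, mirroring JSON; not an existing Node.js API.
const value = { id: 42n, ok: true, raw: new Uint8Array([1, 2, 3]) };

const bytes = CBOR.encode(value);        // -> Uint8Array of binary CBOR
const roundTripped = CBOR.decode(bytes); // -> structure with types preserved
```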
### What alternatives have you considered?
_No response_ | feature request | low | Minor |
2,791,616,648 | node | `stdin` and `stdout`/`stderr` readable/writable roles mixed up in Child process spawn API options.stdio doc | page link: https://nodejs.org/api/child_process.html#optionsstdio
When describing the available values for Stream objects, there are some incorrect notes:
1. "While it is technically possible to pass stdin as a writable or stdout/stderr as readable, it is not recommended."
Maybe the actual meaning is "While it is technically possible to pass stdin as a readable or stdout/stderr as writable, it is not recommended."
2. "e.g., passing a readable stream where a writable stream is expected"
This is my screenshot:
 | child_process,doc | low | Minor |
2,791,622,922 | pytorch | DISABLED test_run_decompositions_map_handle_to_new_nodes (__main__.TestNumericDebugger) | Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_run_decompositions_map_handle_to_new_nodes&suite=TestNumericDebugger&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35684218376).
Over the past 3 hours, it has been determined flaky in 8 workflow(s) with 16 failures and 8 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_run_decompositions_map_handle_to_new_nodes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr @malfet @albanD | oncall: quantization,triaged,module: flaky-tests,module: macos,skipped | low | Critical |
2,791,622,966 | pytorch | DISABLED test_pt2_traceable_aot_eager_cpu_float8_e5m2 (__main__.TestFloat8DtypeCPUOnlyCPU) | Platforms: asan, linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pt2_traceable_aot_eager_cpu_float8_e5m2&suite=TestFloat8DtypeCPUOnlyCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35681075767).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 10 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_pt2_traceable_aot_eager_cpu_float8_e5m2`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr | oncall: quantization,module: flaky-tests,skipped | low | Critical |
2,791,623,013 | pytorch | DISABLED test_compile_forward_select_cuda_float32 (__main__.TestNestedTensorOpInfoCUDA) | Platforms: linux, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_select_cuda_float32&suite=TestNestedTensorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35686125481).
Over the past 3 hours, it has been determined flaky in 20 workflow(s) with 0 failures and 20 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_select_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: flaky-tests,module: nestedtensor,skipped | low | Critical |
2,791,623,115 | pytorch | DISABLED test_compile_forward_clone_cpu_float32 (__main__.TestNestedTensorOpInfoCPU) | Platforms: asan, linux, mac, macos, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compile_forward_clone_cpu_float32&suite=TestNestedTensorOpInfoCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35684033086).
Over the past 3 hours, it has been determined flaky in 43 workflow(s) with 0 failures and 43 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_compile_forward_clone_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_nestedtensor.py`
cc @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: flaky-tests,module: nestedtensor,skipped | low | Critical |
2,791,625,992 | pytorch | [inductor][cpu]float32 dynamic shape maml_omniglot performance regression in 2025-01-13 nightly release | ### 🐛 Describe the bug
**dynamic shape default wrapper**

| suite | name | thread | batch_size_new | speed_up_new | inductor_new | eager_new | compilation_latency_new | batch_size_old | speed_up_old | inductor_old | eager_old | compilation_latency_old | Ratio Speedup(New/old) | Eager Ratio(old/new) | Inductor Ratio(old/new) | Compilation_latency_Ratio(old/new) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| torchbench | maml_omniglot | single | 5 | 1.991409 | 0.0011490320000000001 | 0.0022881926660880004 | 9.557705 | 5 | 2.569708 | 0.000891765 | 0.00229157565462 | 9.459977 | 0.77 | 1.0 | 0.78 | 0.99 |

**dynamic shape cpp wrapper**

| suite | name | thread | batch_size_new | speed_up_new | inductor_new | eager_new | compilation_latency_new | batch_size_old | speed_up_old | inductor_old | eager_old | compilation_latency_old | Ratio Speedup(New/old) | Eager Ratio(old/new) | Inductor Ratio(old/new) | Compilation_latency_Ratio(old/new) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| torchbench | maml_omniglot | single | 5 | 2.090643 | 0.001101955 | 0.002303794507065 | 6.568006 | 5 | 2.732528 | 0.000844825 | 0.0023085079675999997 | 6.547288 | 0.77 | 1.0 | 0.77 | 1.0 |
the last good commit: f8fcb9e7d38b82844d72ae32c27d1592db27a8e2
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench maml_omniglot float32 first dynamic
Testing with dynamic shapes.
Testing with inductor.
single-thread testing....
loading model: 0it [00:00, ?it/s]
cpu eval maml_omniglot
running benchmark: 100%|██████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 249.55it/s]
1.778x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,maml_omniglot,5,1.777746,1.250640,27.412123,0.846500,47.260877,55.830938,14,1,0,0,0,0,1
```
the bad commit: 28b4992e7a60bb3fbb07c591099fa810557b4e57
```
/workspace/pytorch# bash inductor_single_run.sh single inference performance torchbench maml_omniglot float32 first dynamic
Testing with dynamic shapes.
Testing with inductor.
single-thread testing....
loading model: 0it [00:00, ?it/s]
cpu eval maml_omniglot
running benchmark: 100%|██████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 224.95it/s]
1.434x
WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
dev,name,batch_size,speedup,abs_latency,compilation_latency,compression_ratio,eager_peak_mem,dynamo_peak_mem,calls_captured,unique_graphs,graph_breaks,unique_graph_breaks,autograd_captures,autograd_compiles,cudagraph_skips
cpu,maml_omniglot,5,1.433554,1.590300,30.843010,0.770410,47.418573,61.549773,14,1,0,0,0,0,1
```
### Versions
**SW info**

| name | target_branch | target_commit | refer_branch | refer_commit |
| --- | --- | --- | --- | --- |
| torchbench | main | 766a5e3a | main | 766a5e3a |
| torch | main | e0f67405a154e7f9ce1ca9533cbc1d156fe075d7 | main | f2d6cfa6775601df5a038f7a4d0b37da75a53ed9 |
| torchvision | main | 0.19.0a0+d23a6e1 | main | 0.19.0a0+d23a6e1 |
| torchtext | main | 0.16.0a0+b0ebddc | main | 0.16.0a0+b0ebddc |
| torchaudio | main | 2.6.0a0+b6d4675 | main | 2.6.0a0+b6d4675 |
| torchdata | main | 0.7.1a0+0790338 | main | 0.7.1a0+0790338 |
| dynamo_benchmarks | main | nightly | main | nightly |
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob//main/scripts/modelbench/inductor_single_run.sh)
bash inductor_single_run.sh single inference performance torchbench maml_omniglot float32 first dynamic
Suspected guilty commit: 28b4992e7a60bb3fbb07c591099fa810557b4e57
[torchbench-maml_omniglot-inference-float32-dynamic-default-single-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/18433106/torchbench-maml_omniglot-inference-float32-dynamic-default-single-performance-drop_guilty_commit.log)
cc @chauhang @penguinwu @chuanqi129 | oncall: pt2,oncall: cpu inductor | low | Critical |
2,791,632,725 | flutter | [Proposal] Add official resource hash and js sharding for Flutter web, and reduce application size | ### Use case
Flutter web products are generally larger than native web products, which results in a slow opening speed of the website. Although CDN acceleration is enabled, it is still not ideal.
### Proposal
Add official resource hash and js sharding for Flutter web, and reduce application size | c: new feature,a: size,platform-web,c: proposal,team-web | low | Major |
2,791,657,081 | deno | Wrong path when using HTTP proxy in Node API `http` | Version: Deno 2.1.5
TLDR: when `http.request`'s options include proxy settings and `options.path` is a full URL (e.g., `http://httpbin.org/200`), the URL the proxy server requests should be `http://httpbin.org/200`, but in Deno it is `http://httpbin.org/http://httpbin.org/200`.
POC:
```js
import axios from "axios";
const instance = axios.create({
baseURL: "http://httpbin.org/",
proxy: {
protocol: "http",
host: "127.0.0.1",
port: 8892,
},
});
try {
const resp = await instance.get("/get");
console.log("success", resp.config);
} catch (e) {
console.log(e);
}
```
For convenience, I use Axios as a demo; the related code is [here](https://github.com/axios/axios/blob/bad6d8b97b52c0c15311c92dd596fc0bff122651/lib/adapters/http.js#L464C11-L464C28). When I add a patch `options.path = new URL(options.path).pathname;`, the behavior becomes correct.
This demo requires an HTTP proxy running at port 8892 (such as Fiddler or mitmproxy).
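A more minimal reproduction without axios, using the standard plain-HTTP-proxy convention of connecting to the proxy host/port and putting the absolute target URL in `path` (a sketch I would expect to behave the same way; not taken from the original report):
```js
import http from "node:http";

const req = http.request({
  host: "127.0.0.1",
  port: 8892,
  path: "http://httpbin.org/get", // absolute-form request target for the proxy
  headers: { Host: "httpbin.org" },
}, (res) => {
  console.log(res.statusCode); // Node: 200; Deno 2.1.5: 404 per this report
  res.resume(); // drain the body so the process can exit
});
req.end();
```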
In Node.js, it can get `http://httpbin.org/get` correctly. In Deno, it gets a 404, while Fiddler shows it is requesting `http://httpbin.org/http://httpbin.org/get` | node compat | low | Minor |
2,791,680,095 | electron | When using BrowserWindow to load a URL in Electron 33.2.1 and performing video encoding/decoding with WebCodec on the page, I encountered an issue where the GPU memory usage is unstable. The issue manifests differently on different GPU configurations: | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.1
### What operating system(s) are you using?
Windows
### Operating System Version
window11
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
When using BrowserWindow to load a URL in Electron 33.2.1 and performing video encoding/decoding with WebCodec on the page, I encountered an issue where the GPU memory usage is unstable. The issue manifests differently on different GPU configurations:
Intel(R) Iris(R) Xe Graphics:
In Windows Task Manager, the GPU memory usage fluctuates significantly and eventually fills up. This results in video stuttering, black screens, and other issues on the page.

NVIDIA GeForce GTX 1050 Ti:
In Windows Task Manager, the GPU memory usage remains stable, with minimal fluctuations. The video plays smoothly on the page without stuttering or black screens.
I have already added some parameters when starting Electron, but there has been no noticeable improvement.

```
app.commandLine.appendSwitch('use-gl', 'angle');
app.commandLine.appendSwitch('use-angle', 'gl-egl');
app.commandLine.appendSwitch('enable-features', 'VaapiVideoDecoder');
app.commandLine.appendSwitch('enable-features', 'PlatformAudioEncoder');
```
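One hedged observation (worth verifying, not a confirmed diagnosis): as far as I know, appending the same Chromium switch twice leaves only the last value in effect, so the second `enable-features` call above may be overriding the first. Combining both features into a single comma-separated value avoids that:
```js
// Single call so both features stay enabled together.
app.commandLine.appendSwitch(
  'enable-features',
  'VaapiVideoDecoder,PlatformAudioEncoder'
);
```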
### Actual Behavior
Is it possible to add configuration or startup parameters that can stabilize the GPU memory usage on Intel(R) Iris(R) Xe Graphics, similar to how it behaves on the NVIDIA GeForce GTX 1050 Ti, to prevent video playback issues such as stuttering and black screens?
### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/windows,bug :beetle:,blocked/need-repro,33-x-y | low | Critical |
2,791,685,474 | vscode | Need case sensitive instance window switching | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96.3
- OS Version: macOS 15.2
Steps to Reproduce:
1. Remote SSH to a linux server which supports case-sensitive file paths
2. Create and open a remote folder named `~/Workspaces/Project1`
3. Create and open a remote folder named `~/Workspaces/project1`; VS Code then switches to the existing `Project1` window instead of opening a new window for `project1`. In this case, I think it should open a new window for `project1` rather than treating it as the same folder as `Project1`.
I'm not sure if this happens locally on a Linux system. | feature-request,remote,ssh | low | Critical |
2,791,731,891 | pytorch | [RFC] Add CPP INT8 SDPA Template for Inductor CPU | ### 🚀 The feature, motivation and pitch
## Motivation
Templates are now a common, flexible way to implement target kernels in PyTorch. We are considering implementing the Int8 SDPA CPU kernel using a template in PyTorch. With this method, the kernel code is generated from the corresponding template at compile time, and no explicit new op needs to be added. In the future, the template would also make it easier to tune the optimized kernel with different parallel strategies or block sizes through benchmarking. This RFC proposes an approach to implement the template-based method.
## Approaches
We propose a template-based method to implement the Int8 SDPA CPU kernel. Here is the design of the main components.
### Pattern Match
During the post-grad fusion pass, we register a lowering pattern for Int8 SDPA. If the corresponding pattern hits, it is replaced by the `int8_sdpa_lowering` function, which then lowers further into the template.
### CPP INT8 SDPA Template
We create a CPP Int8 SDPA template `CppInt8SdpaTemplate` by inheriting from the CPP flex attention template `CppFlexAttentionTemplate`, reusing the common parts of the flex attention template as much as possible. Note that the CPP Int8 SDPA template does not need the modification-related inputs or member functions, as Int8 SDPA only needs the default behavior (simply adding the attention mask) for now. A skeleton sketch is given below.
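A rough Python skeleton of the class described in this RFC; the import path and method signatures are assumptions for illustration, not the actual inductor code:
```python
# Sketch only: the import path and signatures below are assumptions.
from torch._inductor.codegen.cpp_flex_attention_template import (
    CppFlexAttentionTemplate,
)

class CppInt8SdpaTemplate(CppFlexAttentionTemplate):
    @staticmethod
    def add_choices(choices, input_nodes, layout, scale,
                    q_scale, q_zp, k_scale, k_zp,
                    v_scale, v_zp, o_scale, o_zp):
        # Register the int8 template kernel(s) as lowering choices.
        ...

    def select_strategy(self, num_threads, q_len, kv_len, head_dim):
        # Heuristically pick a parallel loop strategy and block sizes
        # from the device info and input shapes.
        ...

    def render(self, kernel, template_buffer_node=None, epilogue_nodes=None):
        # Emit the C++ kernel, adding int8-specific pieces such as
        # zero-point compensation on top of the shared template code.
        ...
```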
#### Inputs
- Besides the SDPA typical inputs like query/key/value, extra zero points and scales need to be added for the quantization case.
- The `score_mod` and `mask_mod` are not needed.
#### Member functions
- The functions `add_choices` and `render` are overridden to support the int8 specific case.
- A new function `select_strategy` is added to generate the kernel with various parallel loop strategies or block sizes, according to a heuristic based on the device info and input shapes.
- The modification-related functions like `apply_score_mod` are not needed.
#### Template codes
- Reuse the common code from the flex attention template.
- Add int8-specific functions, such as compensation functions.
### Alternatives
_No response_
### Additional context
_No response_
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | triaged,oncall: pt2,module: inductor | low | Minor |
2,791,764,232 | react-native | Dynamic styles with StyleSheet get a type error? | ### Description
It works with this approach:
```js
import React from 'react';
import {
StyleSheet,
Text,
View,
} from 'react-native';
function App(): React.JSX.Element {
return (
<View
style={styles.dynamicStyle(100)}
>
<Text>Hello</Text>
</View>
);
}
const styles = StyleSheet.create({
dynamicStyle: (value: number) => ({
height: value,
}),
});
export default App;
```
I got this TypeScript error
```
This expression is not callable.
No constituent of type 'ViewStyle | TextStyle | ImageStyle' is callable
```
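A workaround I'd suggest (my own sketch, not an official API): keep the dynamic style outside `StyleSheet.create` and type it as a function returning `ViewStyle`:
```tsx
import { StyleSheet, ViewStyle } from 'react-native';

// Typed as a callable, so TypeScript no longer expects
// ViewStyle | TextStyle | ImageStyle here.
const dynamicStyle = (value: number): ViewStyle => ({ height: value });

const styles = StyleSheet.create({
  container: { flex: 1 },
});

// usage: <View style={[styles.container, dynamicStyle(100)]} />
```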
### Steps to reproduce
1. Git clone https://github.com/tiavina-mika/dynamic-styling-types-issue
2. run `npm install`
3. Open `App.tsx` file
4. See the error in the editor
### React Native Version
0.76.6
### Affected Platforms
Runtime - Web, Runtime - Android, Runtime - iOS, Runtime - Desktop, Build - MacOS, Build - Windows, Build - Linux
### Output of `npx react-native info`
```text
System:
OS: Windows 10 10.0.19045
CPU: (8) x64 Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz
Memory: 6.80 GB / 15.95 GB
Binaries:
Node:
version: 20.16.0
path: C:\Program Files\nodejs\node.EXE
Yarn:
version: 1.22.22
path: C:\Program Files\nodejs\yarn.CMD
npm:
version: 10.8.1
path: C:\Program Files\nodejs\npm.CMD
Watchman:
version: 20210110.135312.0
path: C:\ProgramData\chocolatey\bin\watchman.EXE
SDKs:
Android SDK: Not Found
Windows SDK: Not Found
IDEs:
Android Studio: Version 2020.3.0.0 AI-203.7717.56.2031.7935034
Visual Studio: Not Found
Languages:
Java:
version: 1.8.0-262
path: /c/Program Files/OpenJDK/jdk-8.0.262.10-hotspot/bin/javac
Ruby: Not Found
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.6
wanted: 0.76.6
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
new
```
### Stacktrace or Logs
```text
[{
"resource": "/E:Demos/dynamic-styling-types-issue/ReproducerApp/App.tsx",
"owner": "typescript",
"code": "2349",
"severity": 8,
"message": "This expression is not callable.\n No constituent of type 'ViewStyle | TextStyle | ImageStyle' is callable.",
"source": "ts",
"startLineNumber": 11,
"startColumn": 23,
"endLineNumber": 11,
"endColumn": 35
},{
"resource": "/E:/Demos/dynamic-styling-types-issue/ReproducerApp/App.tsx",
"owner": "typescript",
"code": "2322",
"severity": 8,
"message": "Type '(value: number) => { height: number; }' is not assignable to type 'ViewStyle | TextStyle | ImageStyle'.",
"source": "ts",
"startLineNumber": 19,
"startColumn": 17,
"endLineNumber": 21,
"endColumn": 5,
"relatedInformation": [
{
"startLineNumber": 19,
"startColumn": 17,
"endLineNumber": 21,
"endColumn": 5,
"message": "Did you mean to call this expression?",
"resource": "/E:dynamic-styling-types-issue/ReproducerApp/App.tsx"
}
]
}]
```
### Reproducer
https://github.com/tiavina-mika/dynamic-styling-types-issue
### Screenshots and Videos
 | Needs: Author Feedback | low | Critical |
2,791,764,759 | ant-design | Tabs component: dragging is laggy when `type` is not the default value `line` | ### Reproduction link
[https://ant-design.antgroup.com/components/tabs-cn#tabs-demo-custom-tab-bar-node](https://ant-design.antgroup.com/components/tabs-cn#tabs-demo-custom-tab-bar-node)
### Steps to reproduce
https://ant-design.antgroup.com/components/tabs-cn#tabs-demo-custom-tab-bar-node is the official drag-and-drop demo; just set `type` to anything other than `line` to reproduce.
### What is expected?
Dragging has no perceptible lag
### What is actually happening?
Dragging feels laggy
| Environment | Info |
| --- | --- |
| antd | 5.23.1 |
| React | react |
| System | mac |
| Browser | Google Chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | unconfirmed | low | Minor |
2,791,768,149 | ui | [feat]: Date Picker is not as flexible as the HTML date picker | ### Feature description
Users are unable to switch directly to a different year or month, which makes navigation cumbersome; this feature must be added.
### Affected component/components
Date Picker
### Additional Context
N/A
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,791,768,149 | deno | Bug(deno): astro build error - Buffer is not defined | Version: Deno 2.1.4
---
I'm using the **deno** runtime with starlight, and _after upgrading_ to **astro 5** I've started getting build errors when using **starlight-openapi**.
```
16:26:50 ▶ starlight-openapi/components/Route.astro
16:26:50 ├─ /api/pokeapi/index.htmlBuffer is not defined
```
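A workaround that tends to help with this class of error in Deno (an untested suggestion for this specific reproduction, not a verified fix): expose Node's `Buffer` globally before the offending code runs, e.g. from a file loaded early in the build:
```js
// Untested suggestion: provide a global Buffer for code that expects Node's.
import { Buffer } from "node:buffer";
globalThis.Buffer ??= Buffer;
```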
### To Reproduce
1. clone https://github.com/Indyandie/lucero/tree/build-bug-2025-01-03
1. run `deno install --allow-scripts`
1. run `deno task build`
### System Info
- NixOS
- deno 2.1.4
- astro 5.1.7
Related: https://github.com/HiDeoo/starlight-openapi/issues/61 | needs investigation,node compat | low | Critical |
2,791,823,016 | react | Issue while migrating from 18.2.0 to 19.0.0 |
I am migrating from React 18.2.0 to 19.0.0. I first updated to 18.3.1, as recommended, and have faced no UI issues so far.
I updated all the required dependencies and installed react and react-dom 19.0.0.
But when I try to proceed with the migration recipe, I am facing the error below.
My proxies are set correctly, as I am able to install all other packages from npm.
```
$ npx codemod@latest react/19/migration-recipe
 ✖ Fetching "react/19/migration-recipe"...
Error while fetching codemod react/19/migration-recipe: AxiosError: Request failed with status code 407
```
How can I fix this?
| React 19 | medium | Critical |
2,791,837,465 | pytorch | DISABLED test_aoti_eager_support_out_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_support_out_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35694427297).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_support_out_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 975, in test_aoti_eager_support_out
res_tensor = torch.clamp(
RuntimeError: create_func_( &container_handle_, num_models, device_str.c_str(), cubin_dir.empty() ? nullptr : cubin_dir.c_str()) API call failed at /var/lib/jenkins/workspace/torch/csrc/inductor/aoti_runner/model_container_runner.cpp, line 81
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_aoti_eager_support_out_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,791,837,550 | pytorch | DISABLED test_dropout_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_dropout_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35694427782).
Over the past 3 hours, it has been determined flaky in 7 workflow(s) with 7 failures and 7 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_dropout_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 8288, in test_dropout
result1 = fn1(x)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 8283, in fn1
@torch.compile(backend="inductor")
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1211, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 322, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 671, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 489, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1228, in run
return compiled_fn(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 397, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 427, in cudagraphify
return manager.add_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2255, in add_function
return fn, fn(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1949, in run
out = self._run(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2057, in _run
out = self.run_eager(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2221, in run_eager
return node.run(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 635, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, refs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1754, in check_memory_pool
if torch._C._cuda_checkPoolLiveAllocations(device, pool_id, unique_storages):
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_dropout_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,791,837,624 | pytorch | DISABLED test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda (__main__.DynamicShapesGPUTests) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda&suite=DynamicShapesGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35693621761).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 11166, in test_config_option_dont_assume_alignment_cudagraphs
res = fn_c(inp)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn
return fn(*args, **kwargs)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 11143, in fn
def fn(x):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 755, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1211, in forward
return compiled_fn(full_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 309, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 100, in g
return f(*args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1823, in forward
fw_outs = call_func_at_runtime_with_args(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 489, in wrapper
return compiled_fn(runtime_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 671, in inner_fn
outs = compiled_fn(args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/output_code.py", line 464, in __call__
return self.current_callable(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1228, in run
return compiled_fn(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 397, in deferred_cudagraphify
fn, out = cudagraphify(model, inputs, new_static_input_idxs, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 427, in cudagraphify
return manager.add_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2255, in add_function
return fn, fn(inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1949, in run
out = self._run(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2057, in _run
out = self.run_eager(new_inputs, function_id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 2221, in run_eager
return node.run(new_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 635, in run
check_memory_pool(self.device_index, self.cuda_graphs_pool, refs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1754, in check_memory_pool
if torch._C._cuda_checkPoolLiveAllocations(device, pool_id, unique_storages):
MemoryError: std::bad_alloc
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_dynamic_shapes.py DynamicShapesGPUTests.test_config_option_dont_assume_alignment_cudagraphs_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,791,840,158 | PowerToys | Failed to initialize plugin: Everything | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
[PowerToysReport_2025-01-16-14-24-13.zip](https://github.com/user-attachments/files/18434511/PowerToysReport_2025-01-16-14-24-13.zip)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,791,878,408 | flutter | onPopInvokedWithResult is not working as expected. | ### Steps to reproduce
onPopInvokedWithResult behaves differently on Android and iOS.
After closing a dialog, it does not work the same way it does on Android.
### Expected results
Unexpectedly, a pop is performed on iOS after closing the dialog. Flutter 3.24.0.
### Actual results
onPopInvokedWithResult should work the same as on Android.
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
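Since the code sample above is empty, here is a minimal sketch of the scenario I mean (my assumption, since I did not paste my exact code): a `PopScope` whose `onPopInvokedWithResult` shows a confirmation dialog before popping.

```dart
import 'package:flutter/material.dart';

// Minimal sketch (assumption): a PopScope that shows a confirmation dialog
// from onPopInvokedWithResult. On Android this behaves as expected; on iOS
// the pop happens unexpectedly after the dialog closes.
class ExitGuard extends StatelessWidget {
  const ExitGuard({super.key, required this.child});
  final Widget child;

  @override
  Widget build(BuildContext context) {
    return PopScope<Object?>(
      canPop: false,
      onPopInvokedWithResult: (bool didPop, Object? result) async {
        if (didPop) return;
        final bool? confirmed = await showDialog<bool>(
          context: context,
          builder: (context) => AlertDialog(
            title: const Text('Exit?'),
            actions: [
              TextButton(
                onPressed: () => Navigator.pop(context, false),
                child: const Text('No'),
              ),
              TextButton(
                onPressed: () => Navigator.pop(context, true),
                child: const Text('Yes'),
              ),
            ],
          ),
        );
        if (confirmed == true && context.mounted) {
          Navigator.of(context).pop();
        }
      },
      child: child,
    );
  }
}
```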
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| waiting for customer response,in triage | low | Minor |
2,791,900,647 | rust | ICE: `InvalidProgram(Layout(SizeOverflow` | <!--
[31mICE[0m: Rustc ./a.rs '-Zmir-opt-level=5 -Zvalidate-mir -ooutputfile -Zdump-mir-dir=dir' 'thread 'rustc' panicked at compiler/rustc_const_eval/src/const_eval/valtrees.rs:375:77: 'called `Result::unwrap()` on an `Err` value: InvalidProgram(Layout(SizeOverflow([u8; 13554212585355425205_usize])))'', 'thread 'rustc' panicked at compiler/rustc_const_eval/src/const_eval/valtrees.rs:375:77: 'called `Result::unwrap()` on an `Err` value: InvalidProgram(Layout(SizeOverflow([u8; 13554212585355425205_usize])))''
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
//@compile-flags: -Zmir-opt-level=5 -Zvalidate-mir
fn function_with_bytes<const BYTES: &'static [u8; 0xc7b889180b67b07d_bc1a3c88783d35b5_u128]>(
) -> &'static [u8] {
BYTES
}
fn main() {
function_with_bytes::<b"aa">() == &[];
}
````
original:
````rust
fn function_with_bytes<const BYTES: &'static [u8; 0xc7b889180b67b07d_bc1a3c88783d35b5_u128]>() -> &'static [u8] {
BYTES
}
fn main() {
function_with_bytes::<b"aa">() == &[];
}
````
Version information
````
rustc 1.86.0-nightly (5cd16b7f2 2025-01-16)
binary: rustc
commit-hash: 5cd16b7f2bc3624f2d658aa87151279878d2652a
commit-date: 2025-01-16
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/5cd16b7f2bc3624f2d658aa87151279878d2652a/compiler/rustc_const_eval/src/const_eval/valtrees.rs#L369-L381
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc -Zmir-opt-level=5 -Zvalidate-mir`
<details><summary><strong>Program output</strong></summary>
<p>
```
error: `&'static [u8; 13554212585355425205]` is forbidden as the type of a const generic parameter
--> /tmp/icemaker_global_tempdir.FIw1XfCIbDrc/rustc_testrunner_tmpdir_reporting.ijfwCzlkhntL/mvce.rs:1:37
|
1 | fn function_with_bytes<const BYTES: &'static [u8; 0xc7b889180b67b07d_bc1a3c88783d35b5_u128]>(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: the only supported types are integers, `bool`, and `char`
help: add `#![feature(adt_const_params)]` to the crate attributes to enable more complex and user defined types
|
1 + #![feature(adt_const_params)]
|
help: add `#![feature(unsized_const_params)]` to the crate attributes to enable references to implement the `ConstParamTy` trait
|
1 + #![feature(unsized_const_params)]
|
error[E0308]: mismatched types
--> /tmp/icemaker_global_tempdir.FIw1XfCIbDrc/rustc_testrunner_tmpdir_reporting.ijfwCzlkhntL/mvce.rs:1:51
|
1 | fn function_with_bytes<const BYTES: &'static [u8; 0xc7b889180b67b07d_bc1a3c88783d35b5_u128]>(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `usize`, found `u128`
|
help: change the type of the numeric literal from `u128` to `usize`
|
1 | fn function_with_bytes<const BYTES: &'static [u8; 0xc7b889180b67b07d_bc1a3c88783d35b5_usize]>(
| ~~~~~
error[E0308]: mismatched types
--> /tmp/icemaker_global_tempdir.FIw1XfCIbDrc/rustc_testrunner_tmpdir_reporting.ijfwCzlkhntL/mvce.rs:7:27
|
7 | function_with_bytes::<b"aa">() == &[];
| ^^^^^ expected an array with a size of 13554212585355425205, found one with a size of 2
thread 'rustc' panicked at compiler/rustc_const_eval/src/const_eval/valtrees.rs:375:77:
called `Result::unwrap()` on an `Err` value: InvalidProgram(Layout(SizeOverflow([u8; 13554212585355425205_usize])))
stack backtrace:
0: 0x7e0bedef66aa - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h0e5f1585bfffb19f
1: 0x7e0bee612da6 - core::fmt::write::hb4406e0cc18cab0a
2: 0x7e0bef57f451 - std::io::Write::write_fmt::hbfb92718103b7507
3: 0x7e0bedef6502 - std::sys::backtrace::BacktraceLock::print::he782f6d80c255a43
4: 0x7e0bedef8982 - std::panicking::default_hook::{{closure}}::h7927be4c4a7836a0
5: 0x7e0bedef880a - std::panicking::default_hook::hce8a4e7a77e5c861
6: 0x7e0bed059a3b - std[5eed3342ae415129]::panicking::update_hook::<alloc[d9cc840343b62059]::boxed::Box<rustc_driver_impl[d6b89c31630ac8e2]::install_ice_hook::{closure#1}>>::{closure#0}
7: 0x7e0bedef9503 - std::panicking::rust_panic_with_hook::hc6e72cdac3b94dca
8: 0x7e0bedef91fa - std::panicking::begin_panic_handler::{{closure}}::h7f7a407352c9fced
9: 0x7e0bedef6b89 - std::sys::backtrace::__rust_end_short_backtrace::h7f36a8d1fa9d1d9e
10: 0x7e0bedef8ebd - rust_begin_unwind
11: 0x7e0beaba6950 - core::panicking::panic_fmt::hca9dd5375a399d1d
12: 0x7e0beb0ce966 - core::result::unwrap_failed::hb1947dc54d635233
13: 0x7e0beee8b97f - rustc_const_eval[d8aeece37abbbffc]::const_eval::valtrees::valtree_to_ref
14: 0x7e0bef1630f9 - rustc_const_eval[d8aeece37abbbffc]::const_eval::valtrees::valtree_to_const_value
15: 0x7e0bef162eb6 - <rustc_const_eval[d8aeece37abbbffc]::provide::{closure#1} as core[ced015e6fc2a4da0]::ops::function::FnOnce<(rustc_middle[7623bb75c8b82ad6]::ty::context::TyCtxt, (rustc_middle[7623bb75c8b82ad6]::ty::Ty, rustc_middle[7623bb75c8b82ad6]::ty::consts::valtree::ValTree))>>::call_once
16: 0x7e0bef162e72 - rustc_query_impl[22732cf2fa73812a]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[22732cf2fa73812a]::query_impl::valtree_to_const_val::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7623bb75c8b82ad6]::query::erase::Erased<[u8; 24usize]>>
17: 0x7e0bef162e3b - <rustc_query_impl[22732cf2fa73812a]::query_impl::valtree_to_const_val::dynamic_query::{closure#2} as core[ced015e6fc2a4da0]::ops::function::FnOnce<(rustc_middle[7623bb75c8b82ad6]::ty::context::TyCtxt, (rustc_middle[7623bb75c8b82ad6]::ty::Ty, rustc_middle[7623bb75c8b82ad6]::ty::consts::valtree::ValTree))>>::call_once
18: 0x7e0bef162009 - rustc_query_system[ce3a0679b26f255f]::query::plumbing::try_execute_query::<rustc_query_impl[22732cf2fa73812a]::DynamicConfig<rustc_query_system[ce3a0679b26f255f]::query::caches::DefaultCache<(rustc_middle[7623bb75c8b82ad6]::ty::Ty, rustc_middle[7623bb75c8b82ad6]::ty::consts::valtree::ValTree), rustc_middle[7623bb75c8b82ad6]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[22732cf2fa73812a]::plumbing::QueryCtxt, false>
19: 0x7e0bef161d45 - rustc_query_impl[22732cf2fa73812a]::query_impl::valtree_to_const_val::get_query_non_incr::__rust_end_short_backtrace
20: 0x7e0bef127767 - <rustc_mir_transform[1ba400a67d95abf]::gvn::VnState>::insert
21: 0x7e0bef11e34b - <rustc_mir_transform[1ba400a67d95abf]::gvn::VnState>::simplify_operand
22: 0x7e0bef11fe90 - <rustc_mir_transform[1ba400a67d95abf]::gvn::VnState>::simplify_rvalue
23: 0x7e0bec203dd6 - <rustc_mir_transform[1ba400a67d95abf]::gvn::GVN as rustc_mir_transform[1ba400a67d95abf]::pass_manager::MirPass>::run_pass
24: 0x7e0bee6046f3 - rustc_mir_transform[1ba400a67d95abf]::pass_manager::run_passes_inner
25: 0x7e0bee731b74 - rustc_mir_transform[1ba400a67d95abf]::optimized_mir
26: 0x7e0bee73141d - rustc_query_impl[22732cf2fa73812a]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[22732cf2fa73812a]::query_impl::optimized_mir::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7623bb75c8b82ad6]::query::erase::Erased<[u8; 8usize]>>
27: 0x7e0bee8d30df - rustc_query_system[ce3a0679b26f255f]::query::plumbing::try_execute_query::<rustc_query_impl[22732cf2fa73812a]::DynamicConfig<rustc_query_system[ce3a0679b26f255f]::query::caches::DefIdCache<rustc_middle[7623bb75c8b82ad6]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[22732cf2fa73812a]::plumbing::QueryCtxt, false>
28: 0x7e0bee8d24f3 - rustc_query_impl[22732cf2fa73812a]::query_impl::optimized_mir::get_query_non_incr::__rust_end_short_backtrace
29: 0x7e0beb71a024 - <rustc_middle[7623bb75c8b82ad6]::ty::context::TyCtxt>::instance_mir
30: 0x7e0bee917792 - rustc_interface[8d12bef601c5487]::passes::run_required_analyses
31: 0x7e0bef57ac5e - rustc_interface[8d12bef601c5487]::passes::analysis
32: 0x7e0bef57ac2f - rustc_query_impl[22732cf2fa73812a]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[22732cf2fa73812a]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[7623bb75c8b82ad6]::query::erase::Erased<[u8; 0usize]>>
33: 0x7e0bef5dc2d5 - rustc_query_system[ce3a0679b26f255f]::query::plumbing::try_execute_query::<rustc_query_impl[22732cf2fa73812a]::DynamicConfig<rustc_query_system[ce3a0679b26f255f]::query::caches::SingleCache<rustc_middle[7623bb75c8b82ad6]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[22732cf2fa73812a]::plumbing::QueryCtxt, false>
34: 0x7e0bef5dc00e - rustc_query_impl[22732cf2fa73812a]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
35: 0x7e0bef637fa9 - rustc_interface[8d12bef601c5487]::passes::create_and_enter_global_ctxt::<core[ced015e6fc2a4da0]::option::Option<rustc_interface[8d12bef601c5487]::queries::Linker>, rustc_driver_impl[d6b89c31630ac8e2]::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
36: 0x7e0bef62b1d6 - rustc_interface[8d12bef601c5487]::interface::run_compiler::<(), rustc_driver_impl[d6b89c31630ac8e2]::run_compiler::{closure#0}>::{closure#1}
37: 0x7e0bef479ec7 - std[5eed3342ae415129]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8d12bef601c5487]::util::run_in_thread_with_globals<rustc_interface[8d12bef601c5487]::util::run_in_thread_pool_with_globals<rustc_interface[8d12bef601c5487]::interface::run_compiler<(), rustc_driver_impl[d6b89c31630ac8e2]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
38: 0x7e0bef479b99 - <<std[5eed3342ae415129]::thread::Builder>::spawn_unchecked_<rustc_interface[8d12bef601c5487]::util::run_in_thread_with_globals<rustc_interface[8d12bef601c5487]::util::run_in_thread_pool_with_globals<rustc_interface[8d12bef601c5487]::interface::run_compiler<(), rustc_driver_impl[d6b89c31630ac8e2]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[ced015e6fc2a4da0]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
39: 0x7e0bef47932f - std::sys::pal::unix::thread::Thread::new::thread_start::h6a23afa4b51367f7
40: 0x7e0be98a339d - <unknown>
41: 0x7e0be992849c - <unknown>
42: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.86.0-nightly (5cd16b7f2 2025-01-16) running on x86_64-unknown-linux-gnu
note: compiler flags: -Z mir-opt-level=5 -Z validate-mir -Z dump-mir-dir=dir
query stack during panic:
#0 [valtree_to_const_val] converting type-level constant value to mir constant value
#1 [optimized_mir] optimizing MIR for `main`
#2 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0308`.
```
</p>
</details>
<!--
query stack:
#0 [valtree_to_const_val] converting type-level constant value to mir constant value
#1 [optimized_mir] optimizing MIR for `main`
#2 [analysis] running analysis passes on this crate
-->
| I-ICE,T-compiler,C-bug,needs-triage | low | Critical |
2,791,904,332 | vscode | Missing padding to the last element in the installed extensions | Missing padding to the last element in the installed extensions
see the highlighted item
The scroll is at the bottom, but the settings and trust icons slightly overlap the border of the element; it looks like the last element is missing its padding.

//edit
I think the item is being overlapped by the 'Recommended' submenu


cc @sandy081
Version: 1.96.3 (user setup)
Commit: 91fbdddc47bc9c09064bf7acf133d22631cbf083
Date: 2025-01-09T18:14:09.060Z
Electron: 32.2.6
ElectronBuildId: 10629634
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.22621 | polish,extensions | low | Minor |
2,791,909,989 | PowerToys | Powertoys setting did not show after installation | ### Microsoft PowerToys version
v0.87.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
I tried installing and reinstalling PowerToys via GitHub and the Store, and also ran it as administrator. The icon shows in the icon tray, and I can hover on it to see its version, but left click, double click, drag, and right click do not work. I also tried restarting my PC, but that did not help either. Another issue: when I try to open the PowerToys app via the Start menu, nothing happens and my keyboard input becomes laggy in all apps (3 s delay) for about a minute.
### ✔️ Expected Behavior
The PowerToys app shows up and I can modify its settings.
### ❌ Actual Behavior
Nothing showed up, and it made my keyboard input laggy.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Major |
2,791,925,497 | tensorflow | tf.config.LogicalDeviceConfiguration() not able to set the memory limit but tf.config.experimental.VirtualDeviceConfiguration() is able to | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
Ubuntu 22.04.4 LTS
### Mobile device
_No response_
### Python version
3.10.12
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
12.2
### GPU model and memory
RTX A5000 24Gb
### Current behavior?
I was trying to set a memory limit of 10 GB on the virtual device using tf.config.LogicalDeviceConfiguration(), but when I trained the model it used far more than 10 GB of memory. Eventually I was able to set the memory limit using tf.config.experimental.VirtualDeviceConfiguration(), but I'm not sure why that works when the non-experimental API doesn't.
### Standalone code to reproduce the issue
```python
# this was not able to set the memory limit
gpus = tf.config.list_physical_devices('GPU')
if gpus:
try:
tf.config.set_visible_devices(gpus[0], 'GPU')
tf.config.set_logical_device_configuration(gpus[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=10*1024)])
logical_gpus = tf.config.list_logical_devices('GPU')
except RuntimeError as e:
print(e)
# this was able to set the memory limit
gpus = tf.config.list_physical_devices('GPU')
if gpus:
try:
tf.config.experimental.set_virtual_device_configuration(
gpus[0],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=10*1024)]
)
except RuntimeError as e:
print(e)
```
### Relevant log output
```shell
``` | stat:awaiting response,type:bug,2.17 | medium | Critical |
2,791,928,844 | flutter | error | ### Steps to reproduce
The following assertion was thrown during a platform message callback:
```
A KeyDownEvent is dispatched, but the state shows that the physical key is already pressed. If this occurs in real application, please report this bug to Flutter. If this occurs in unit tests, please ensure that simulated events follow Flutter's event model as documented in `HardwareKeyboard`. This was the event: KeyDownEvent#25b14(physicalKey: PhysicalKeyboardKey#ea6e1(usbHidUsage: "0x000700e0", debugName: "Control Left"), logicalKey: LogicalKeyboardKey#30261(keyId: "0x1100000000", keyLabel: "", debugName: "Key with ID 0x01100000000"), character: null, timeStamp: 0:03:15.533244)
'package:flutter/src/services/hardware_keyboard.dart':
Failed assertion: line 505 pos 16: '!_pressedKeys.containsKey(event.physicalKey)'
```
This error log appears whenever I use flutter_pickers. Please help me resolve this bug.
### Expected results
The log should be normal and this error should never be shown.
### Actual results
The assertion error shown above is thrown.
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
void _selectBirthday(BuildContext context) {
// FocusScope.of(context).unfocus();
Pickers.showDatePicker(
context,
onConfirm: (PDuration p) {
DateTime selectedDate = DateTime(p.year!, p.month!, p.day!);
setState(() {
_selectedBirthday = selectedDate;
_birthdayString = formatDate(_selectedBirthday);
});
},
mode: DateMode.YMD,
suffix: Suffix(years: '年', month: '月', days: '日'),
selectDate: PDuration(year: _selectedBirthday.year, month: _selectedBirthday.month, day: _selectedBirthday.day),
pickerStyle: DefaultPickerStyle(),
);
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
Either the assertion indicates an error in the framework itself, or we should provide substantially more information in this error message to help you determine and fix the underlying cause.
In either case, please report this assertion by filing a bug on GitHub:
https://github.com/flutter/flutter/issues/new?template=2_bug.yml
When the exception was thrown, this was the stack:
#2 HardwareKeyboard._assertEventIsRegular.<anonymous closure> (package:flutter/src/services/hardware_keyboard.dart:505:16)
#3 HardwareKeyboard._assertEventIsRegular (package:flutter/src/services/hardware_keyboard.dart:520:6)
#4 HardwareKeyboard.handleKeyEvent (package:flutter/src/services/hardware_keyboard.dart:643:5)
#5 KeyEventManager.handleRawKeyMessage (package:flutter/src/services/hardware_keyboard.dart:1164:37)
#6 BasicMessageChannel.setMessageHandler.<anonymous closure> (package:flutter/src/services/platform_channel.dart:235:49)
#7 _DefaultBinaryMessenger.setMessageHandler.<anonymous closure> (package:flutter/src/services/binding.dart:581:35)
#8 _invoke2 (dart:ui/hooks.dart:344:13)
#9 _ChannelCallbackRecord.invoke (dart:ui/channel_buffers.dart:45:5)
#10 _Channel.push (dart:ui/channel_buffers.dart:135:31)
#11 ChannelBuffers.push (dart:ui/channel_buffers.dart:343:17)
#12 PlatformDispatcher._dispatchPlatformMessage (dart:ui/platform_dispatcher.dart:750:22)
#13 _dispatchPlatformMessage (dart:ui/hooks.dart:257:31)
(elided 2 frames from class _AssertionError)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| waiting for customer response,in triage | low | Critical |
2,791,976,002 | PowerToys | Mouse Without Borders no longer works since a Windows update | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
After installing the updates below, the "Mouse Without Borders" function can no longer establish the connection between two computers. I generated a new key, without success.
Windows 11 Enterprise 23H2:
2025-01 Cumulative Update for Windows 11 Version 23H2 for x64-based Systems (KB5050021)
2025-01 Cumulative Update for .NET Framework 3.5 and 4.8.1 for Windows 11, version 23H2 for x64 (KB5049624)
Windows Configuration Update (KB5035942)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,791,979,110 | flutter | When running the official Flutter demo on Windows, no components are displayed. However, it can be displayed normally on Chrome. | ### Steps to reproduce
### Actual results
When running the official Flutter demo on Windows, no components are displayed. However, it displays normally in Chrome.
### Logs
<details open>
<summary>Logs</summary>
```console
<!-- Paste your logs here -->
```
</details>
### Flutter Doctor output
<details open>
<summary>Doctor output</summary>
```console
<!-- Paste your output here -->
```
</details>
| waiting for customer response,in triage | low | Minor |
2,791,982,267 | ant-design | Tabs show an inexplicable blue outline | ### Reproduction link
[https://ant-design.antgroup.com/components/tabs-cn](https://ant-design.antgroup.com/components/tabs-cn)
### Steps to reproduce
On the Tabs page of the official docs, click a few tabs to switch between them, then switch the browser directly to another page (switching, not navigating); the blue outline then appears.
### What is expected?
No blue outline.
### What is actually happening?
A blue outline appears.
| Environment | Info |
| --- | --- |
| antd | 5.23.1 |
| React | 18.3.1 |
| System | Windows |
| Browser | edge |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | ⌨️ Accessibility | low | Major |
2,792,016,034 | tauri | [bug] Backend Crashes when inside of GitHub Codespace | ### Describe the bug
When trying to run an application inside a GitHub Codespace, the front end works fine with the port forward; however, because the environment is headless, GTK/Tao crashes. I was wondering if there is a way to fix this?
### Reproduction
1. Open a repository in the GitHub Codespace
2. Run either `cargo tauri dev` or `npm run tauri dev` and see the backend crash
### Expected behavior
To leave the front end (Vite) working as usual, and defer the backend GTK initialisation until a display is found.
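Something like this sketch is what I have in mind (assumption: detecting a display via the `DISPLAY`/`WAYLAND_DISPLAY` environment variables; I don't know how Tauri/Tao would actually want to expose this):

```rust
// Hypothetical sketch, not actual Tauri API: only initialize the GTK-backed
// window when a display server is reachable; otherwise stay headless.
fn has_display() -> bool {
    std::env::var_os("DISPLAY").is_some() || std::env::var_os("WAYLAND_DISPLAY").is_some()
}

fn main() {
    if !has_display() {
        eprintln!("No display found (headless environment); skipping window creation.");
        return;
    }
    tauri::Builder::default()
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```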
### Full `tauri info` output
```text
[✔] Environment
- OS: Ubuntu 22.4.0 x86_64 (X64) (Unknown DE on Unknown Session)
✔ webkit2gtk-4.1: 2.46.5
✔ rsvg2: 2.52.5
✔ rustc: 1.84.0 (9fc6b4312 2025-01-07)
✔ cargo: 1.84.0 (66221abde 2024-11-19)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (environment override by RUSTUP_TOOLCHAIN)
- node: 23.6.0
- npm: 10.9.2
[-] Packages
- tauri 🦀: 2.2.2
- tauri-build 🦀: 2.0.5
- wry 🦀: 0.48.1
- tao 🦀: 0.31.1
- tauri-cli 🦀: 2.2.4
- @tauri-apps/api : 2.2.0
- @tauri-apps/cli : 2.2.4
[-] Plugins
- tauri-plugin-opener 🦀: 2.2.4
- @tauri-apps/plugin-opener : 2.2.4
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
```text
thread 'main' panicked at /home/vscode/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tao-0.31.1/src/platform_impl/linux/event_loop.rs:212:53:
Failed to initialize gtk backend!: BoolError { message: "Failed to initialize GTK", filename: "/home/vscode/.cargo/registry/src/index.crates.io-6f17d22bba15001f/gtk-0.18.2/src/rt.rs", function: "gtk::rt::init", line: 141 }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
### Additional context
Just so you know, I'm not quite sure if this is a Tauri issue or a Tao issue for the window initialisation. | type: bug,status: needs triage | low | Critical |
2,792,017,531 | godot | Mutable properties of `Resource` reference declared as `const` are not read correctly in 4.3 and later | ### Tested versions
- Reproducible in: v4.4.dev7.mono.official [46c8f8c5c], v4.3.stable.mono.official [77dcf97d8]
- Not reproducible in: v4.2.2.stable.mono.official [15073afe3]
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4080 (NVIDIA; 32.0.15.6636) - AMD Ryzen 9 7950X3D 16-Core Processor (32 Threads)
### Issue description
I have a custom resource like this:
```gdscript
class_name TestResource
extends Resource
var _value:Vector3
var value:Vector3:
get: return _value
func update(value:Vector3):
_value = value
```
Now I create such a resource using _Create_ > _New Resource ..._ > _TestResource_ and I name the file `my_test_resource.tres`. Now in a script I load this resource and call the `update` function:
```gdscript
extends Node
var _test_resource:TestResource
func _ready():
_test_resource = load("res://my_test_resource.tres")
func _process(delta):
# call the update function every frame
_test_resource.update(Vector3(Engine.get_process_frames(), 0, 0))
```
Finally in a second script, I also load this resource and try to read back the updates:
```gdscript
extends Node2D
# load as a constant
const my_resource_const:TestResource = preload("res://my_test_resource.tres")
# load the same resource as a variable
var my_resource_var:TestResource = preload("res://my_test_resource.tres")
func _process(delta: float) -> void:
print("Const: ", my_resource_const.value, " Var: ", my_resource_var.value)
```
I would expect the print to show the same values for `my_resource_const.value` and `my_resource_var.value` because both references point to the exact same resource object (and they in fact do, they have the same object ID). In Godot 4.2 this works fine.

However in Godot 4.3 or later the `my_resource_const.value` always prints `(0,0,0)` while the `my_resource_var.value` prints the up-to-date value:

My hypothesis is that this is caused by some kind of optimization that incorrectly caches the result of the call to the `value` property when the reference is declared as `const` ([here maybe?](https://github.com/godotengine/godot/blob/0726d3c7d5125d1a72ec318a2ec4ff11f9f7f8bb/modules/gdscript/gdscript_analyzer.cpp#L1996)). It is the same underlying object in both cases and the property value is actually changed (otherwise the reference declared as `var` wouldn't update either).
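A workaround sketch that avoids the issue (assumption: simply not using `const` for resources whose properties mutate at runtime):

```gdscript
# Workaround sketch: declare the shared resource reference as `var` instead
# of `const`, so property reads are not folded/cached at analysis time.
var my_resource: TestResource = preload("res://my_test_resource.tres")

func _process(delta: float) -> void:
	print("Var: ", my_resource.value)  # reads the up-to-date value
```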
### Steps to reproduce
Open the attached example project in Godot 4.2, open `const_vs_var.tscn` and run the scene. You should see the expected output. Now open the same project in Godot 4.3 or 4.4dev7 and run the scene again. You should see that in these version the `value` property of the variable declared as `const` does not read the updated value correctly.
### Minimal reproduction project (MRP)
[const_vs_var.zip](https://github.com/user-attachments/files/18435803/const_vs_var.zip) | bug,topic:gdscript | low | Minor |
2,792,085,879 | flutter | _debugRelayoutBoundaryAlreadyMarkedNeedsLayout() is not true with StatefulShellRoute | ### Steps to reproduce
1. Launch the code sample in debug mode.
2. Tap "Go to Screen A" right away.
There is a 3-second timer in the background; the button has to be tapped during this time frame.
3. The timer triggers rebuild of ScreenB while it exists in the background StatefulShellBranch.
4. ScreenB gets a new child widget after the build.
### Expected results
The app keeps running normally.
### Actual results
The app crashes.
### Code sample
Flutter 3.27.2
go_router 14.6.3
<details open><summary>Code sample</summary>
```dart
import 'dart:async';
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: GoRouter(
initialLocation: '/b/2',
routes: [
StatefulShellRoute.indexedStack(
branches: [
StatefulShellBranch(routes: [
GoRoute(
path: '/a',
builder: (context, state) => const ScreenA(),
),
]),
StatefulShellBranch(routes: [
GoRoute(
path: '/b',
builder: (context, state) => const ScreenB(),
routes: [
GoRoute(
path: '2',
builder: (context, state) => const ScreenB2(),
),
],
),
]),
],
builder: (context, state, navigationShell) => navigationShell,
),
],
),
);
}
}
class ScreenA extends StatelessWidget {
const ScreenA({super.key});
@override
Widget build(BuildContext context) {
return const Scaffold(body: Text("Screen A"));
}
}
class ScreenB extends StatefulWidget {
const ScreenB({super.key});
@override
State<ScreenB> createState() => _ScreenBState();
}
class _ScreenBState extends State<ScreenB> {
Timer? _timer;
bool _showNewChild = false;
@override
void initState() {
super.initState();
// Use a timer to trigger a rebuild.
_timer = Timer(const Duration(seconds: 3), () {
print("Trigger");
setState(() {
_showNewChild = true;
});
});
}
@override
void dispose() {
_timer?.cancel();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Column(
children: [
const Text("Screen B"),
// Appearance of this widget triggers the crash.
if (_showNewChild) const Text("New Child"),
],
),
);
}
}
class ScreenB2 extends StatelessWidget {
const ScreenB2({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
body: Column(
children: [
const Text("Screen B/2"),
FilledButton(
onPressed: () {
context.go("/a");
},
child: const Text("Go to Screen A"),
),
],
),
);
}
}
```
</details>
### Screenshots or Video
### Logs
<details open><summary>Logs</summary>
```console
Trigger
══╡ EXCEPTION CAUGHT BY WIDGETS LIBRARY ╞═══════════════════════════════════════════════════════════
The following assertion was thrown building Text("New Child", dependencies: [DefaultSelectionStyle,
DefaultTextStyle, MediaQuery]):
Assertion failed:
file:///home/oleg/flutter/flutter/packages/flutter/lib/src/rendering/object.dart:2329:14
_debugRelayoutBoundaryAlreadyMarkedNeedsLayout()
is not true
The relevant error-causing widget was:
Text Text:file:///home/oleg/Projects/test_flutter_crash/lib/main.dart:92:36
When the exception was thrown, this was the stack:
dart-sdk/lib/_internal/js_dev_runtime/private/ddc_runtime/errors.dart 288:3 throw_
dart-sdk/lib/_internal/js_dev_runtime/private/profile.dart 110:39 assertFailed
packages/flutter/src/rendering/object.dart 2329:14 markNeedsLayout
packages/flutter/src/rendering/box.dart 2669:11 markNeedsLayout
packages/flutter/src/rendering/object.dart 1855:5 adoptChild
packages/flutter/src/rendering/object.dart 4355:5 insert
packages/flutter/src/widgets/framework.dart 6988:17 insertRenderObjectChild
packages/flutter/src/widgets/framework.dart 6746:35 attachRenderObject
packages/flutter/src/widgets/framework.dart 6611:5 mount
packages/flutter/src/widgets/framework.dart 7056:11 mount
packages/flutter/src/widgets/framework.dart 4480:15 inflateWidget
packages/flutter/src/widgets/framework.dart 3963:18 updateChild
packages/flutter/src/widgets/framework.dart 5656:16 performRebuild
packages/flutter/src/widgets/framework.dart 5347:7 rebuild
packages/flutter/src/widgets/framework.dart 5613:5 [_firstBuild]
packages/flutter/src/widgets/framework.dart 5607:5 mount
packages/flutter/src/widgets/framework.dart 4480:15 inflateWidget
packages/flutter/src/widgets/framework.dart 7049:36 inflateWidget
packages/flutter/src/widgets/framework.dart 3963:18 updateChild
packages/flutter/src/widgets/framework.dart 4150:32 updateChildren
packages/flutter/src/widgets/framework.dart 7074:17 update
packages/flutter/src/widgets/framework.dart 3941:14 updateChild
packages/flutter/src/widgets/framework.dart 5656:16 performRebuild
packages/flutter/src/widgets/framework.dart 5347:7 rebuild
packages/flutter/src/widgets/framework.dart 5707:5 update
<...>
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.2, on Ubuntu 22.04.2 LTS 5.19.0-41-generic, locale ru_RU.UTF-8)
• Flutter version 3.27.2 on channel stable at /home/oleg/flutter/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (3 дня назад), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /home/oleg/Android/Sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /usr/lib/jvm/java-18-openjdk-amd64/bin/java
• Java version OpenJDK Runtime Environment (build 18.0.2-ea+9-Ubuntu-222.04)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at google-chrome
[✗] Linux toolchain - develop for Linux desktop
• Ubuntu clang version 14.0.0-1ubuntu1.1
• cmake version 3.22.1
• ninja version 1.10.1
• pkg-config version 0.29.2
✗ GTK 3.0 development libraries are required for Linux development.
They are likely available from your distribution (e.g.: apt install libgtk-3-dev)
[✓] Android Studio (version 2024.2)
• Android Studio at /home/oleg/Apps/android-studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[✓] VS Code (version 1.93.1)
• VS Code at /usr/share/code
• Flutter extension version 3.102.0
[✓] Connected device (2 available)
• Linux (desktop) • linux • linux-x64 • Ubuntu 22.04.2 LTS 5.19.0-41-generic
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.70
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| package,a: error message,has reproducible steps,p: go_router,team-framework,found in release: 3.27,found in release: 3.28 | low | Critical |
2,792,087,118 | vscode | Add a removal option to Compound Log menu |
Type: <b>Feature Request</b>
Trying out the new Compound Log feature, I added one and then looked for a removal option:

Found it on Command Palette, but maybe it also belongs on the log's menu.

VS Code version: Code - Insiders 1.97.0-insider (31188fed068c5c724d73a1956c846401d4d7b01d, 2025-01-16T05:07:10.789Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<!-- generated by issue reporter --> | feature-request,output | low | Minor |
2,792,091,978 | opencv | Cannot build on macOS | ### System Information
OpenCV Version: 4.10.0
Platform: MacOS 14
Compiler: Xcode 15.4
Python Version: 3.11
### Detailed description
When building OpenCV from source with the given setup the build will fail with the following error:
```
/Users/runner/work/Proxy-PDF-Maker/Proxy-PDF-Maker/.conan_home/p/b/opencc537d20d0094d/b/src/modules/gapi/src/compiler/gislandmodel.hpp:166:24: error: field has incomplete type 'std::exception_ptr'
std::exception_ptr eptr;
^
/Applications/Xcode_15.4.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.5.sdk/usr/include/c++/v1/__exception/operations.h:36:33: note: forward declaration of 'std::exception_ptr'
class _LIBCPP_EXPORTED_FROM_ABI exception_ptr;
^
In file included from /Users/runner/work/Proxy-PDF-Maker/Proxy-PDF-Maker/.conan_home/p/b/opencc537d20d0094d/b/src/modules/gapi/src/api/gbackend.cpp:14:
In file included from /Users/runner/work/Proxy-PDF-Maker/Proxy-PDF-Maker/.conan_home/p/b/opencc537d20d0094d/b/src/modules/gapi/src/api/gbackend_priv.hpp:21:
In file included from /Users/runner/work/Proxy-PDF-Maker/Proxy-PDF-Maker/.conan_home/p/b/opencc537d20d0094d/b/src/modules/gapi/src/compiler/gmodel.hpp:32:
/Users/runner/work/Proxy-PDF-Maker/Proxy-PDF-Maker/.conan_home/p/b/opencc537d20d0094d/b/src/modules/gapi/src/compiler/gislandmodel.hpp:178:61: error: initialization of incomplete type 'const std::exception_ptr'
virtual void post(GRunArgP&&, const std::exception_ptr& = {}) = 0; // Release the object back to the framework (mark available)
^
/Applications/Xcode_15.4.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.5.sdk/usr/include/c++/v1/__exception/operations.h:36:33: note: forward declaration of 'std::exception_ptr'
class _LIBCPP_EXPORTED_FROM_ABI exception_ptr;
^
/Users/runner/work/Proxy-PDF-Maker/Proxy-PDF-Maker/.conan_home/p/b/opencc537d20d0094d/b/src/modules/gapi/src/compiler/gislandmodel.hpp:178:61: note: passing argument to parameter here
virtual void post(GRunArgP&&, const std::exception_ptr& = {}) = 0; // Release the object back to the framework (mark available)
```
The highlight here is
```
error: field has incomplete type 'std::exception_ptr'
```
which appears due to a missing `#include <exception>`.
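A minimal sketch of the fix I would expect (assumption: the surrounding class is abbreviated and the struct name below is illustrative; only the added include matters):

```cpp
// modules/gapi/src/compiler/gislandmodel.hpp (sketch)
#include <exception>  // complete definition of std::exception_ptr

// ... existing includes and declarations, abbreviated ...
struct IslandMsg {            // illustrative name, not necessarily the real one
    std::exception_ptr eptr;  // compiles once <exception> is included
};
```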
### Steps to reproduce
Run the following commands to build OpenCV:
```sh
cmake .. -G "Ninja" -DCMAKE_C_COMPILER=cc -DCMAKE_CXX_COMPILER=c++ -DCMAKE_BUILD_TYPE="Release"
cmake --build .
```
See also a build in Github Actions CI failing here: https://github.com/Malacath-92/Proxy-PDF-Maker/actions/runs/12786776491/job/35644533639
This build uses conan as a package manager, but it essentially executes the above commands during resolution of dependencies.
### Issue submission checklist
- [x] I report the issue, it's not a question
- [x] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [x] I updated to the latest OpenCV version and the issue is still there
- [x] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: build/install,platform: ios/osx | low | Critical |
2,792,105,255 | ollama | support ReaderLM-v2 | https://huggingface.co/jinaai/ReaderLM-v2
ReaderLM-v2 is specialized for tasks involving HTML parsing, transformation, and text extraction. | model request | low | Major |
2,792,111,527 | next.js | Extended Set Methods (like union()) Not Included in Default Polyfills Despite Documentation | ### Link to the code that reproduces this issue
https://github.com/nikolay-gipp-sibe/next-set-polyfill-issue
### To Reproduce
1. Create a new Next.js project using create-next-app
```
npx create-next-app@latest
```
2. Create a new component file (e.g. `app/test/page.tsx`) with the following content:
```typescript
'use client';
import { ReactElement } from 'react';
// Without this import, union() doesn't work
// import 'core-js/features/set'
export function TestSetUnion(): ReactElement {
const set: Set<unknown> = new Set([1, 2, 3]).union(new Set([1, 2]));
return <div>Test Set Union {set.size}</div>;
}
```
3. Open the page in **Chrome 103**
4. Observe the error: `TypeError: a.union is not a function`
5. Add the polyfill import manually:
```typescript
import 'core-js/features/set'
```
6. Refresh the page - now it works correctly
### Current vs. Expected behavior
### Current Behavior
When using Set's extended methods like `union()` without manual polyfill import, the application throws `TypeError: a.union is not a function` in Chrome 103, despite Next.js documentation stating that Set polyfills are included by default.
### Expected Behavior
Since Next.js documentation and source code (https://github.com/vercel/next.js/blob/canary/packages/next-polyfill-nomodule/src/index.js) include `import 'core-js/features/set'`, all Set methods including extended ones like `union()` should work without requiring manual polyfill imports.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 65389
Available CPU cores: 16
Binaries:
Node: 22.13.0
npm: 10.8.2
Yarn: 1.22.19
pnpm: 9.15.4
Relevant Packages:
next: 15.1.4 // Latest available version is detected (15.1.4).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.4.5
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Documentation, Developer Experience, Runtime, Performance
### Which stage(s) are affected? (Select all that apply)
next dev (local), Vercel (Deployed), Other (Deployed), next start (local)
### Additional context
### Additional Context
This issue affects older browsers like Chrome 103.
From [Next.js documentation](https://nextjs.org/docs/architecture/supported-browsers#polyfills):
> We inject widely used polyfills, including:
> If any of your dependencies include these polyfills, they'll be eliminated automatically from the production build to avoid duplication.
And in [polyfill source code](https://github.com/vercel/next.js/blob/canary/packages/next-polyfill-nomodule/src/index.js):
```javascript
import 'core-js/features/set'
```
The key points:
- The extended Set methods like `union()` work fine in server components
- But in client components, you get a TypeError unless you manually add import 'core-js/features/set' to the client component
- This creates confusing behavior when moving components from server to client rendering
| Performance,Runtime | low | Critical |
2,792,117,280 | pytorch | [torch.export] _insert_copy_for_mutations can't generate proper copy nodes for pure inplace ops | ### 🐛 Describe the bug
I am using the simplest `nn.ReLU(inplace=True)` with torch.export, and the following error occurs:
```
RuntimeError: Could not find input in either buffer or input nodes
```
My test code is as follows:
```
def ori_test():
x = torch.rand(2,3)
m = torch.nn.ReLU(inplace=True).eval()
m = torch.export.export(m, (x,))
mm = m.module() # error occurs
```
After debugging, I realised that the root cause was that `torch.export.export` modifies the graph in a way that doesn't strictly correspond to the rules in `class Graph`.
The generated graph is as follows:
```
graph():
%arg0_1 : [num_users=1] = placeholder[target=arg0_1]
%relu : [num_users=1] = call_function[target=torch.ops.aten.relu.default](args = (%arg0_1,), kwargs = {})
return (relu, relu)
```
This is fine. However, in `placeholder_naming_pass`, it picks up the original arg name "input" from aten and modifies the structure of the graph into this:
```
graph():
%input : [num_users=1] = placeholder[target=input]
%relu : [num_users=1] = call_function[target=torch.ops.aten.relu.default](args = (%input,), kwargs = {})
return (relu, relu)
```
**str "input" is confilicted with `builtins.__dict__`**. Thus, when it comes into `_unlift_exported_program_lifted_states`, it calls "copy.deepcopy", which is a method of `Graph`. In the process of copying, `_is_illegal_name` checks for naming conflicts, resulting in the diagram being modified as follows:
```
graph():
%input_1 : [num_users=1] = placeholder[target=input]
%relu : [num_users=1] = call_function[target=torch.ops.aten.relu.default](args = (%input_1,), kwargs = {})
return (relu, relu)
```
**This ultimately causes `_insert_copy_for_mutations` to fail to insert the copy node properly due to `input_name_to_node` mismatch.**
If possible, I think the same appropriate check should be added to `placeholder_naming_pass` to avoid this, although it may not be fully consistent with the naming in the original function.
Can any of the team members give some advice?
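In the meantime, a user-side workaround sketch that avoids the collision (assumption: wrapping the module so the traced forward argument is not named `input`, which is what triggers `_is_illegal_name` per the analysis above):

```python
import torch

# Hypothetical workaround: give the forward argument a name that does not
# collide with builtins ("input" does), so the placeholder is never renamed
# during deepcopy and input_name_to_node stays consistent.
class ReLUWrapper(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.relu = torch.nn.ReLU(inplace=True)

    def forward(self, x):  # "x" instead of "input"
        return self.relu(x)

x = torch.rand(2, 3)
ep = torch.export.export(ReLUWrapper().eval(), (x,))
mm = ep.module()  # expected to avoid the mismatch described above
```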
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.28.4
Libc version: glibc-2.31
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8369HB CPU @ 3.30GHz
Stepping: 11
CPU MHz: 3800.073
CPU max MHz: 4200.0000
CPU min MHz: 1200.0000
BogoMIPS: 6600.06
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 66 MiB
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 ida arat avx512_vnni
Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.5.0
[pip3] numpy==1.26.3
[pip3] torch==2.5.1+cpu
[pip3] torchaudio==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[conda] intel-extension-for-pytorch 2.5.0 pypi_0 pypi
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 1.26.3 pypi_0 pypi
[conda] torch 2.5.1+cpu pypi_0 pypi
[conda] torchaudio 2.5.1+cpu pypi_0 pypi
[conda] torchvision 0.20.1+cpu pypi_0 pypi
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,oncall: export | low | Critical |
2,792,148,404 | vscode | Copilot |
Type: <b>Bug</b>
I am unable to sign in to Copilot through GitHub. After clicking "Sign in to use Copilot for free," it does not redirect to the browser for signing in. I reinstalled the program, but I am facing the same issue on both my laptop and desktop.
VS Code version: Code 1.96.3 (91fbdddc47bc9c09064bf7acf133d22631cbf083, 2025-01-09T18:14:09.060Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i5-12400F (12 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.84GB (6.34GB free)|
|Process Argv|--enable-proposed-api genuitecllc.codetogether --crash-reporter-id 66af26af-5f43-4771-934f-eb9230d74b88|
|Screen Reader|no|
|VM|0%|
</details>Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,792,151,584 | langchain | OBSFileLoader.load() doesn't separate file content as expected | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from obs import ObsClient
from langchain_community.document_loaders import OBSFileLoader
client = ObsClient(access_key_id=ak, secret_access_key=sk, server=server)
obs = OBSFileLoader(bucket,key_name,client,endpoint)
obs.load()
# [Document(metadata={'source': ...}, page_content='whole content of the `key_name` file')]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* I expect to see `[Document(page_content='segment 1', ...), Document(page_content='segment 2', ...), ...]`
* Instead, I get `[Document(page_content='whole content of file', ...)]` (see the sketch below for my current workaround)
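For context, this sketch is how I currently work around it (assumption: the chunking parameters are illustrative):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = obs.load()  # single Document containing the whole file
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
segments = splitter.split_documents(docs)
# [Document(page_content='segment 1', ...), Document(page_content='segment 2', ...), ...]
```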
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:06:23) [MSC v.1942 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_huggingface: 0.1.2
> langchain_milvus: 0.1.8
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> huggingface-hub: 0.27.1
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.7
> orjson: 3.10.13
> packaging: 24.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.1
> pymilvus: 2.5.3
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.3.1
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> tokenizers: 0.21.0
> transformers: 4.47.1
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available. | 🤖:bug | low | Critical |
2,792,165,905 | flutter | flutter: PlatformException(channel-error, Unable to establish connection on channel: "dev.flutter.pigeon.shared_preferences_foundation.UserDefaultsApi.set"., null, null) | ### Steps to reproduce
When I save data from background mode, it does not work correctly.
### Expected results
The data should be saved.
### Actual results
flutter: PlatformException(channel-error, Unable to establish connection on channel: "dev.flutter.pigeon.shared_preferences_foundation.UserDefaultsApi.set"., null, null)
### Code sample
<details open><summary>Code sample</summary>
```dart
@pragma('vm:entry-point')
void callbackDispatcher() async {
....
StorageService.setStringList(key, data);
.....
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
flutter: PlatformException(channel-error, Unable to establish connection on channel: "dev.flutter.pigeon.shared_preferences_foundation.UserDefaultsApi.set"., null, null)
[zoneID] l:37.39195948,-122.16790371
[triggerTime] 2025-01-16 14:23:27.371285
[triggerType] dwell
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.0.1 24A348 darwin-arm64, locale en-UZ)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] Connected device (5 available)
[✓] Network resources
• No issues found!
```
</details>
| waiting for customer response,in triage | low | Critical |
2,792,177,857 | vscode | Git - Source control icon blinks every few seconds if "Autofetch period" | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.97.0-insider (user setup)
Commit: 31188fed068c5c724d73a1956c846401d4d7b01d
Date: 2025-01-16T05:07:10.789Z
Electron: 32.2.7
ElectronBuildId: 10660205
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.26100
- OS Version: Windows 11
Steps to Reproduce:
1. Set `Git: Autofetch` to `true`
2. Set `Git: Autofetch period` to `1`
3. Click on `Source Control` icon
4. Click on any entry on Source Control Graph
5. Look at the Source control icon
It blinks every few seconds.
It doesn't blink until an entry on the Source Control Graph is clicked.
https://github.com/user-attachments/assets/2206b7f0-4adf-451d-9cce-edda0a0f9bc3
It's similar to https://github.com/microsoft/vscode/issues/219877 but with additional steps to reproduce. | bug,git | low | Critical |
2,792,186,646 | pytorch | ModuleNotFoundError: No module named 'torch.privateuseone' | ### 🐛 Describe the bug
When I add `Backend::PrivateUse1`, it throws `ModuleNotFoundError: No module named 'torch.privateuseone'`:
```python
import torch
a = torch.ones((3, 3), device="privateuseone")
```
The backend was added to the legacy type list like this:
```cpp
std::vector<std::pair<Backend, ScalarType>> all_declared_types() {
  std::vector<std::pair<Backend, ScalarType>> ret;
  // NOTE: Do not add more types here. This list controls the creation
  // of legacy tensor types e.g. torch.cuda.FloatTensor which are
  // maintained for backwards-compatibility only.
  auto backends = {
      Backend::PrivateUse1, Backend::CPU, Backend::CUDA, Backend::SparseCPU, Backend::SparseCUDA};
  // ...
```

### Versions
PyTorch version: 2.5.0a0+gita8d6afb
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] optree==0.13.1
[pip3] torch==2.5.0a0+gita8d6afb
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2025.0.1 pypi_0 pypi
[conda] mkl-static 2025.0.1 pypi_0 pypi
[conda] numpy 2.2.1 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] torch 2.5.0a0+gita8d6afb dev_0 <develop>
cc @jbschlosser @NmomoN @mengpenghui @fwenguang @cdzhan @1274085042 @PHLens | module: cpp,triaged,module: PrivateUse1 | medium | Critical |
2,792,216,095 | tauri | [feat] frontend: dynamic resize, based on content | ### Describe the problem
If you want a window that is dynamically sized based on its content, it's quite difficult to achieve. It would be nice if there were an easy way to make this possible.
### Describe the solution you'd like
It would be nice if you could build windows with a setting like `.fit_size_to_content()`
```
tauri::WebviewWindowBuilder::new(
app,
WINDOW_LABEL,
tauri::WebviewUrl::App("/dashboard".into()),
)
.title("Motion Minute - Actionbar")
.center()
.fit_size_to_content()
.build()?;
```
### Alternatives considered
My current workaround for my Svelte project is to have an AutoSize component:
```
<script lang="ts">
import {onMount, onDestroy, tick} from 'svelte';
import {getCurrentWindow, PhysicalSize} from '@tauri-apps/api/window';
import {type} from "@tauri-apps/plugin-os"
import type { Snippet } from 'svelte';
import {debug} from "@tauri-apps/plugin-log";
interface Props {
ready: boolean;
children: Snippet;
[key: string]: unknown;
}
let {ready = true, children, ...rest}: Props = $props();
let container: HTMLDivElement | null = $state(null);
async function resizeWindow() {
const currentWindow = getCurrentWindow();
await debug("resizeWindow called");
if (container && ready) {
await tick();
let rect = container.getBoundingClientRect()
const factor = window.devicePixelRatio;
const width: number = Math.ceil(rect.width * factor);
const height: number = Math.ceil(rect.height * factor);
let topPadding = await currentWindow.isDecorated() && type() === 'macos' ? 55 : 0
let size = new PhysicalSize(width, height + topPadding);
let current = await currentWindow.outerSize()
await debug(`size before ${current.width}x${current.height}`)
await debug(`size after ${width}x${height + topPadding}`)
await currentWindow.setSize(size);
await tick();
await currentWindow.center();
await currentWindow.show();
await currentWindow.setFocus();
}
}
let observer: ResizeObserver;
onMount(async () => {
observer = new ResizeObserver(async () => {
await resizeWindow();
});
if (container) observer.observe(container);
});
onDestroy(() => {
debug("unmount observer");
if (observer) observer.disconnect();
});
</script>
<div {...rest} bind:this={container}>
{@render children?.()}
</div>
```
This seems to work, but not always. Maybe there exists a better workaround for this.
### Additional context
_No response_ | type: feature request | low | Critical |
2,792,222,252 | vscode | Extensions are not able to install and the settings editor is also not opening |
Type: <b>Performance Issue</b>
The editor could not be opened due to an unexpected error: "Expected ',' or ']' after array element in JSON at position 658 (line 1 column 659)". This error keeps appearing and extensions cannot be installed.
VS Code version: Code 1.96.3 (91fbdddc47bc9c09064bf7acf133d22631cbf083, 2025-01-09T18:14:09.060Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (8 x 2803)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.70GB (4.05GB free)|
|Process Argv|--crash-reporter-id ec32a57b-c3c3-4165-946c-f2776b93e079|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
0 125 3300 code main
0 46 15820 utility-network-service
0 167 20592 window [1] (Untitled-1 - Visual Studio Code)
0 153 22220 gpu-process
0 110 22468 shared-process
0 87 23788 fileWatcher [1]
0 32 24100 crashpad-handler
0 120 24208 extensionHost [1]
```
</details>
<details>
<summary>Workspace Info</summary>
```
;
```
</details>
Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,792,236,879 | kubernetes | [Flaking Test] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer | ### Which jobs are flaking?
master-informing:
- capz-windows-master
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
### Since when has it been flaking?
[16/01/2025, 07:01:18](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-capz-master-windows/1879619842433093632)
[16/01/2025, 05:56:18](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-capz-master-windows-2025/1879603486019031040)
[15/01/2025, 18:05:20](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/e2e-kops-aws-cni-kindnet/1879424553755611136)
[15/01/2025, 10:05:20](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/e2e-kops-aws-cni-kindnet/1879303756760223744)
### Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#capz-windows-master
### Reason for failure (if possible)
```
{ failed [FAILED] Told to stop trying after 12.737s.
pod "oidc-discovery-validator" failed with status:
<v1.PodStatus>:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-01-15T20:20:07Z"
status: "False"
type: PodReadyToStartContainers
- lastProbeTime: null
lastTransitionTime: "2025-01-15T20:19:56Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2025-01-15T20:20:05Z"
reason: PodFailed
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2025-01-15T20:20:05Z"
reason: PodFailed
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2025-01-15T20:19:56Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://b0b674fd3e410884b4a9353dc6d3f9f2b91057586cdefc335e9cc1394a855d31
image: registry.k8s.io/e2e-test-images/agnhost:2.53
imageID: registry.k8s.io/e2e-test-images/agnhost@sha256:99c6b4bb4a1e1df3f0b3752168c89358794d02258ebebc26bf21c29399011a85
lastState: {}
```
### Anything else we need to know?
N/A
### Relevant SIG(s)
/sig auth | kind/flake,sig/auth,sig/windows,needs-triage | low | Critical |
2,792,285,764 | vscode | Color of tab name for modified file reverts to default color if a problem is detected | - VS Code Version: 1.96.3
- OS Version: Windows 11
Steps to Reproduce:
1. With VSCode and GitLens enabled, open a project linked to a git repository
2. Modify a file locally. The tab name should appear in orange:

3. Generate a warning in the file, such as an unused function. The tab name reverts to default color:

Expected behavior:
The tab name should stay in orange as long as it's modified locally. | bug,workbench-tabs | low | Minor |
2,792,306,559 | ollama | ollama create fails for GGUF files with unaligned tensors | ### What is the issue?
```
$ ollama show --modelfile minicpm-v > Modelfile
$ ollama create minicpm-v:test
gathering model components
copying file sha256:262843d4806aeb402336980badd414a72576b20b1e5d537647da15f16c4a4df0 100%
copying file sha256:f8a805e9e62085805c69c427287acefc284932eb4abfe6e1b1ce431d27e2f4e0 100%
parsing GGUF
Error: invalid file magic
```
The tensors in some GGUF files are not aligned with `general.alignment`, and when imported, the alignment bytes at the end of the file are treated as the start of a new GGUF, resulting in a failed `file magic` match.
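For reference, GGUF readers pad tensor data so that each tensor starts at a multiple of `general.alignment` (default 32); a rough sketch of the offset arithmetic, with names that are illustrative rather than Ollama's actual code:
```python
def align_up(offset: int, alignment: int = 32) -> int:
    # Round an offset up to the next multiple of general.alignment.
    return (offset + alignment - 1) // alignment * alignment

# If a writer skips this padding, a reader that advances by aligned
# offsets ends up with trailing bytes after the last tensor, which can
# then be misread as the magic of a second GGUF header.
```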
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.6 | bug | low | Critical |
2,792,311,648 | tensorflow | Issues on trying to compile TensorFlow C API for JETSON AGX Xavier using Bazel | On my JETSON AGX Xavier, with:
cuda: 11.4.315
cuDNN: 8.6.0
tensorrt: 8.5.2.2
jetpack: 5.1.3
python3 -c "import tensorflow as tf; print('TensorFlow version:', tf.__version__)"
TensorFlow version: 2.11.0
I can't compile TF with Bazel (bazel --version: bazel 5.3.0), error:
~/tensorflow$ bazel build --config=opt --config=cuda //tensorflow:libtensorflow.so
Starting local Bazel server and connecting to it…
WARNING: The following configs were expanded more than once: [cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
INFO: Reading ‘startup’ options from /home/redans/tensorflow/.bazelrc: --windows_enable_symlinks
INFO: Options provided by the client:
Inherited ‘common’ options: --isatty=1 --terminal_columns=237
INFO: Reading rc options for ‘build’ from /home/redans/tensorflow/.bazelrc:
Inherited ‘common’ options: --experimental_repo_remote_exec
INFO: Reading rc options for ‘build’ from /home/redans/tensorflow/.bazelrc:
‘build’ options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --features=-force_no_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Reading rc options for ‘build’ from /home/redans/tensorflow/.tf_configure.bazelrc:
‘build’ options: --action_env PYTHON_BIN_PATH=/usr/bin/python3.9 --action_env PYTHON_LIB_PATH=/usr/local/lib/python3.9/dist-packages --python_path=/usr/bin/python3.9 --action_env PYTHONPATH=/usr/local/lib/python3.9/dist-packages:/usr/local/lib/python3.9/dist-packages:/home/redans/ros2_ws/install/yolov8_ros/lib/python3.9/site-packages:/home/redans/ros2_ws/install/yolov8_msgs/lib/python3.9/site-packages:/home/redans/ros2_ws/install/realsense2_camera_msgs/lib/python3.9/site-packages:/opt/ros/humble/lib/python3.9/site-packages --action_env LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:/home/redans/local/lib/python3.8/dist-packages/tensorflow:/home/redans/ros2_ws/install/yolov8_msgs/lib:/home/redans/ros2_ws/install/realsense2_camera/lib:/home/redans/ros2_ws/install/realsense2_camera_msgs/lib:/opt/ros/humble/opt/rviz_ogre_vendor/lib:/opt/ros/humble/lib/aarch64-linux-gnu:/opt/ros/humble/lib:/usr/local/cuda-11.4/lib64: --action_env GCC_HOST_COMPILER_PATH=/usr/bin/aarch64-linux-gnu-gcc-9 --config=cuda
INFO: Found applicable config definition build:short_logs in file /home/redans/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/redans/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:cuda in file /home/redans/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda --repo_env=HERMETIC_CUDA_VERSION=12.5.1 --repo_env=HERMETIC_CUDNN_VERSION=9.3.0 --@local_config_cuda//cuda:include_cuda_libs=true
INFO: Found applicable config definition build:cuda in file /home/redans/tensorflow/.tf_configure.bazelrc: --repo_env HERMETIC_CUDA_COMPUTE_CAPABILITIES=7.2
INFO: Found applicable config definition build:opt in file /home/redans/tensorflow/.tf_configure.bazelrc: --copt=-Wno-sign-compare --host_copt=-Wno-sign-compare
INFO: Found applicable config definition build:cuda in file /home/redans/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda --repo_env=HERMETIC_CUDA_VERSION=12.5.1 --repo_env=HERMETIC_CUDNN_VERSION=9.3.0 --@local_config_cuda//cuda:include_cuda_libs=true
INFO: Found applicable config definition build:cuda in file /home/redans/tensorflow/.tf_configure.bazelrc: --repo_env HERMETIC_CUDA_COMPUTE_CAPABILITIES=7.2
INFO: Found applicable config definition build:linux in file /home/redans/tensorflow/.bazelrc: --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/redans/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
ERROR: Traceback (most recent call last):
File "/home/redans/.cache/bazel/_bazel_redans/e3bb405f92452fe8b27464d0b3fdd1a7/external/rules_python/python/versions.bzl", line 734, column 32, in <toplevel>
PLATFORMS = _generate_platforms()
File "/home/redans/.cache/bazel/_bazel_redans/e3bb405f92452fe8b27464d0b3fdd1a7/external/rules_python/python/versions.bzl", line 723, column 15, in _generate_platforms
} | v.flag_values,
Error: unsupported binary operation: dict | dict
INFO: Found applicable config definition build:cuda in file /home/redans/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda --repo_env=HERMETIC_CUDA_VERSION=12.5.1 --repo_env=HERMETIC_CUDNN_VERSION=9.3.0 --@local_config_cuda//cuda:include_cuda_libs=true
INFO: Found applicable config definition build:cuda in file /home/redans/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda --repo_env=HERMETIC_CUDA_VERSION=12.5.1 --repo_env=HERMETIC_CUDNN_VERSION=9.3.0 --@local_config_cuda//cuda:include_cuda_libs=true
WARNING: The following configs were expanded more than once: [cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
ERROR: @local_config_cuda//:enable_cuda :: Error loading option @local_config_cuda//:enable_cuda: error loading package ‘’: at /home/redans/.cache/bazel/_bazel_redans/e3bb405f92452fe8b27464d0b3fdd1a7/external/local_tsl/third_party/py/python_init_repositories.bzl:3:6: at /home/redans/.cache/bazel/_bazel_redans/e3bb405f92452fe8b27464d0b3fdd1a7/external/rules_python/python/repositories.bzl:24:6: at /home/redans/.cache/bazel/_bazel_redans/e3bb405f92452fe8b27464d0b3fdd1a7/external/rules_python/python/private/python_register_multi_toolchains.bzl:22:6: initialization of module ‘python/versions.bzl’ failed
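The failing expression is the `} | v.flag_values` dict union in rules_python's `versions.bzl`; Starlark only supports `|` on dicts in newer Bazel releases, so Bazel 5.3.0 cannot load the file (upgrading Bazel is the usual fix). A minimal sketch of the incompatibility, and of an equivalent merge, in plain Python/Starlark terms:
```python
a = {"x": 1}
b = {"y": 2}

# merged = a | b  # fails on Bazel 5.3 Starlark:
#                 # "unsupported binary operation: dict | dict"

# Equivalent merge that older Starlark also accepts:
merged = dict(a)
merged.update(b)
```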
Do you have any suggestions? | stat:awaiting response,type:build/install,subtype:bazel,TF 2.11 | medium | Critical |
2,792,324,297 | flutter | Text cut off when wrapping text with specific font size in Opacity or ShaderMask on Android | ### Steps to reproduce
Run the code sample on Android Simulator Pixel 8 API 34.
### Expected results
Text wrapped in `Opacity` and `ShaderMask` should not be cut off at the top and bottom.
### Actual results
Text wrapped in `Opacity` and `ShaderMask` gets cut off on top and bottom.
Same results when running natively on Pixel 7 & other Android devices.
The cut-off is gone, when font size gets set to 48 or bigger.
Noticed similar effects in a production app on Android and also Web, when using the Inter variable font (https://rsms.me/inter/).
Similar to: https://github.com/flutter/flutter/issues/96322
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
final style = TextStyle(fontSize: 47);
return MaterialApp(
home: Scaffold(
body: Center(
child: Row(
mainAxisAlignment: MainAxisAlignment.center,
crossAxisAlignment: CrossAxisAlignment.center,
children: [
ShaderMask(
shaderCallback: (bounds) {
return LinearGradient(
colors: [
Colors.black,
Colors.red,
],
).createShader(bounds);
},
blendMode: BlendMode.srcIn,
child: Text("g", style: style),
),
Opacity(
opacity: 0.9,
child: Text("g", style: style),
),
Text("g", style: style)
],
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="142" alt="Image" src="https://github.com/user-attachments/assets/696856b0-09bf-4e63-b7a6-4aecec8060a0" />
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel stable, 3.27.2, on macOS 15.1.1 24B91 darwin-arm64, locale en-DE)
• Flutter version 3.27.2 on channel stable at /Users/tobias/dev/tools/flutter
! Warning: `flutter` on your path resolves to /Users/tobias/Dev/tools/flutter/bin/flutter, which is not inside your current Flutter SDK checkout at /Users/tobias/dev/tools/flutter. Consider adding
/Users/tobias/dev/tools/flutter/bin to the front of your path.
! Warning: `dart` on your path resolves to /opt/homebrew/Cellar/dart/3.5.4/libexec/bin/dart, which is not inside your current Flutter SDK checkout at /Users/tobias/dev/tools/flutter. Consider adding
/Users/tobias/dev/tools/flutter/bin to the front of your path.
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (3 days ago), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.1)
• Android SDK at /Users/tobias/Library/Android/sdk
• Platform android-35, build-tools 35.0.1
• ANDROID_HOME = /Users/tobias/Library/Android/sdk
• Java binary at: /Users/tobias/Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.14.3
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Users/tobias/Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3.1.1)
• IntelliJ at /Users/tobias/Applications/IntelliJ IDEA Ultimate.app
• Flutter plugin version 83.0.4
• Dart plugin version 243.23177
[✓] VS Code (version 1.96.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (5 available)
• CUBOT X18 Plus (mobile) • S590W8040202841 • android-arm64 • Android 8.0.0 (API 26)
• sdk gphone64 arm64 (mobile) • emulator-5554 • android-arm64 • Android 14 (API 34) (emulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B91 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B91 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
! Error: Browsing on the local area network for iPad von Florian. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
! Error: Browsing on the local area network for MamaHandy. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| a: quality,a: typography,has reproducible steps,P2,e: impeller,team-engine,triaged-engine,found in release: 3.27,found in release: 3.28 | low | Critical |
2,792,333,404 | react-native | React Native - LayoutAnimation - border radius issue (black background on IOS) | ### Description
A parent view has style={{ overflow: 'hidden', borderTopRightRadius: 25 }}
Inside the parent, a child view is rendered with a LayoutAnimation. The surface hidden behind the rounded corners has a black background color for a second during the LayoutAnimation. (I'm using react-native version 0.72.1.)
### Steps to reproduce
Trigger a LayoutAnimation inside a parent view with `overflow: 'hidden'` and a border radius.
### React Native Version
0.72.1
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
none
```
### Stacktrace or Logs
```text
none
```
### Reproducer
none
### Screenshots and Videos
_No response_ | Platform: iOS,API: LayoutAnimation,Needs: Author Feedback,Needs: Repro,Type: Unsupported Version | low | Major |
2,792,336,253 | ollama | model wanted in ollama please:Qwen2.5-Math-PRM-7B | model wanted in ollama please:Qwen2.5-Math-PRM-7B | model request | low | Minor |
2,792,346,592 | vscode | Save before format on save | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
## Problem
The "format on save" feature (`"editor.formatOnSave": true`) waits until the formatter has formatted the file before saving it. If the formatter is slow, this can take quite some time. This creates multiple problems:
* There is a delay before the `Saving '[...]': Running Code Actions and Formatters...` message is displayed in the status bar and the `Running '[...]' Formatter` notification is shown, presumably to avoid cluttering the UI in case of a slow computer or disk, which is a valid goal. However, it feels unresponsive because I'm never sure when hitting <kbd>Ctrl</kbd>+<kbd>S</kbd> whether it registered my keystroke or whether it's just waiting for the formatter. Often, I find myself hitting <kbd>Ctrl</kbd>+<kbd>S</kbd> multiple times just to be sure.
* The formatting/saving seems to be aborted when I do something in VS Code after hitting <kbd>Ctrl</kbd>+<kbd>S</kbd>. Even the smallest actions abort it:
* Clicking somewhere in the active editor.
* Pressing an arrow key.
* Changing the active editor tab.
* You can easily overlook the indicators (e.g., in the editor tab bar) that the file hasn't been saved yet. I regularly run commands in a terminal that depend on the file to be saved (e.g., running tests), and often I get errors because it hasn't been saved.
The second and third point imply that I as the user have to wait and can't do anything, not even in VS Code (because it would abort the process), until formatting is complete. My formatter takes more than 10 seconds.
## Proposal
Saving and formatting should not be aborted when doing virtually anything in the editor after triggering the save.
## Additional Information
* The third point of the problem section and the proposal is the same as #112585, but that issue has been closed without explanation.
* Fixing the second point of the problem section has originally been part of this issue and is now #238052. | feature-request,formatting | low | Critical |
2,792,347,420 | ui | [bug]: | ### Describe the bug
The Date Picker component has a latency or state-synchronization issue.
### Affected component/components
Date Picker
### How to reproduce
1. Create a Date Picker
2. Then pass it the `date` prop
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
Browsers
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues | bug | low | Critical |
2,792,348,737 | PowerToys | Version: 0.87.1.0 OS Version: Microsoft Windows NT 10.0.22631.0 IntPtr Length: 8 x64: True Date: 16-01-2025 16:05:24 | ### Microsoft PowerToys version
0.87.1.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
[2025-01-16.txt](https://github.com/user-attachments/files/18437724/2025-01-16.txt)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,792,363,148 | electron | Crash when navigating in will-frame-navigate | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
34.0.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
No crash
### Actual Behavior
crash (see gist)
### Testcase Gist URL
https://gist.github.com/t57ser/0317dfb8c3d0ce3b0c7c1a81048f4f61
### Additional Information
_No response_ | platform/windows,bug :beetle:,status/confirmed,has-repro-gist,34-x-y | low | Critical |
2,792,369,041 | svelte | `$state` mutation callback | ## TL;DR
Reacting to deep state changes in external (`.svelte.js`) modules is complicated - the function cannot know if it's running from a component context (can use `$effect`) or not, for example, to create global state (has to create its own `$effect.root`, and worry about the cleanup). Additionally, effects are not synchronous, making certain patterns impossible. The alternative is custom set/update functions, which can become complicated with deep/nested properties, and cannot take advantage of all of the goodies `$state` proxies offer, like reactive array pushes.
The proposal is to allow being notified of changes to a single owned `$state` using a callback:
```js
const myState = $state({ ... }, {
// function definition is of course up to discussion
onchange(newValue) {
// do stuff with newValue synchronously
}
})
```
## Introduction
To explain this problem, I will use the example of implementing a local storage-synced state/store. You can also see the full discussion that led to this in #14978; in fact, this whole proposal is "borrowed" from Rich's [idea](https://github.com/sveltejs/svelte/issues/14978#issuecomment-2591177993).
Replacing stores with state generally works great, except when working with functions that are meant to work both globally and scoped to components. For the local storage-synced store, let's consider a few simple goals:
1) **Synchronous** - `$store = "new value"; expect(localStorage.getItem(key)).toEqual("new value")`
2) **Easy to use/no API for nested properties** - `$store.nested.prop = ...` is detected
3) **React to cross-tab changes**, but only when subscribed to
Goals (1) and (2) are built into stores, which is why many Svelte 4 users have come to take them for granted. More specifically, doing `$store.a.b = 123` calls `store.update((value) => { value.a.b = 123; return value })`. This removes the need to manually call the update function, although at a cost: this is very unintuitive for new users, and doesn't work with functions like `Array.push`. Goal (3) can be achieved with the `StartStopNotifier` interface, and an implementation might look something like this:
```js
function localStorageStore(key, initialValue) {
const store = writable(initialValue, (set) => {
const callback = (event) => set(...)
window.addEventListener("storage", callback)
return () => window.removeEventListener("storage", callback)
});
return {
subscribe: store.subscribe,
set(value) {
localStorage
store.setItem(key, value)
},
// update() omitted
}
}
```
Not very complicated once you understand the store contract, and works both globally and in components, since there the code is plain JS.
## The problem
Now, let's try to achieve the same with the new state/runes. Once again, we want the solution to work both globally (i.e. we can declare a global top-level `localStorageState` instance), but also use it in components, and we want to achieve our three existing goals. To achieve goal (1), we simply use a set function, or a setter for a `current` property. To achieve goal (2), the compiler doesn't "help" us anymore, instead we could use an effect. However, as effects are not synchronous, this conflicts with goal (1). Goal (3) is a little trickier to implement, as we need to use both `createSubscriber` and `$effect.tracking`, but achievable nonetheless. Let's consider a simple implementation that does not implement goal (2):
```js
function localStorageState(key, initialValue) {
let state = $state(initialValue)
const subscribe = createSubscriber(() => {
const callback = (event) => (state = ...)
window.addEventListener("storage", callback)
return () => window.removeEventListener("storage", callback)
});
return {
get current() {
if ($effect.tracking()) {
subscribe()
return state
} else {
// if there are no subscribers, state might be out of sync with localStorage
return localStorage.getItem(key)
}
},
set current(value) {
localStorage.setItem(key, value)
state = value
}
}
}
```
This _works_, but it is rather cumbersome to use with nested props, as `state.current.nested = 123` will not trigger our custom getter, while it will trigger reactive updates due to the `$state` proxy. Instead, we can use `$state.raw` to discourage this, and then do `state.current = { ...state.current, nested: 123 }` to perform an update. This can become more complicated for more nested properties, and neat new stuff that runes allow, like arrays and custom classes, possibly even requiring an external package like `deepmerge` to handle the update. We can give up goal (1) to try and fix this:
```js
$effect(() => {
localStorage.setItem(key, JSON.stringify(value))
})
```
...however this will fail with `effect_orphan` when used globally. We can fix this by wrapping the effect in an `$effect.root`, but then we'd have to worry about the clean-up.
All in all, implementing certain complex patterns with state that involve reacting to deep state changes requires convoluted `$effect.root`s or asynchronous updates.
## The solution
`$state` knows best - it creates a deep proxy that does a lot of stuff - proxifying new values when they are added, remembering who owns what... and by doing all that it also knows _when_ the state itself is changed, even by nested properties. Why wouldn't it share that information with us?
```js
$state(value, { onchange() { ... } })
```
This would solve every single problem in the example above: synchronously setting the local storage, not having to worry about `$effect.root` and its cleanup, and so it would work identically globally or in components:
```js
function localStorageState(key, initialValue) {
const state = $state(initialValue, {
onchange(newValue) {
localStorage.setItem(key, newValue)
}
})
const subscribe = createSubscriber(() => {
const callback = (event) => (state = ...)
window.addEventListener("storage", callback)
return () => window.removeEventListener("storage", callback)
});
return {
get current() {
if ($effect.tracking()) {
subscribe()
return state
} else {
return localStorage.getItem(key)
}
},
set current(value) {
state = value
}
}
}
```
## Importance
would make my life easier | feature request,runes | low | Minor |
2,792,380,355 | flutter | [url_launcher]: launchUrl with custom scheme doesn't trigger the respective external application on non-Safari browsers | Edit: Happens on iOS. Can't reproduce on Android
### Steps to reproduce
Set up a simple Flutter web app with a CTA that triggers launchUrl with a custom scheme.
The app that uses the custom scheme is launched when the same action is done through Safari, but not in other browsers such as Chrome or Edge.
### Expected results
Launch the Application
### Actual results
Nothing happens: the browser tab shows a launching animation, but nothing actually happens.
### Code sample
```dart
import 'package:flutter/gestures.dart';
import 'package:flutter/material.dart';
import 'package:url_launcher/url_launcher.dart';
void main() {
runApp(UnSupportedApp());
}
class UnSupportedApp extends StatelessWidget {
const UnSupportedApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: Center(
child: Text.rich(
TextSpan(
text: 'If you have already downloaded the app, ',
children: [
TextSpan(
text: 'click here to redirect to app',
recognizer: TapGestureRecognizer()
..onTap = () {
final customUri = Uri(
// Replace with your schema and host
scheme: "****",
host: "****",
path: "/",
);
print(customUri.toString());
launchUrl(
customUri,
mode: LaunchMode.externalApplication,
);
},
style: Theme.of(context)
.textTheme
.bodyMedium
?.copyWith(color: Colors.blue),
),
],
style: Theme.of(context).textTheme.bodyMedium,
),
),
)),
);
}
}
```
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://private-user-images.githubusercontent.com/45146774/374948689-0fe00bcc-57e7-4f46-b922-027c1228f546.MP4?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzcwMjQ4MjcsIm5iZiI6MTczNzAyNDUyNywicGF0aCI6Ii80NTE0Njc3NC8zNzQ5NDg2ODktMGZlMDBiY2MtNTdlNy00ZjQ2LWI5MjItMDI3YzEyMjhmNTQ2Lk1QND9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAxMTYlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMTE2VDEwNDg0N1omWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWJiZDNlN2I4YmU5OWZlN2VkODUwZTRiNTUwMmRmNjY2MWEwYjE0MjQ1MzQwZmUzNmQwNGMzZTE0NGQ1OTc2YTgmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.v24U-Fkkjn8B9vgwUojbOo14JF_5uDuig-0xND9Oj8k
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
[✓] Flutter (Channel stable, 3.22.1, on macOS 15.0 24A335 darwin-arm64, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.2)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.1.3)
[✓] VS Code (version 1.93.1) | in triage | low | Major |
2,792,381,979 | excalidraw | Rendering issue on Excalidraw Plus on Windows | I had no problem before leaving on vacation, and when I came back I lost my texts :

When I focus on the text I can see it correctly :

If I try to copy text inside it, then go to VS Code and paste, it's just standard text. If I copy it from VS Code and paste it back into Excalidraw, it's still broken.
Any idea how I can fix this? It's pretty problematic since I designed my whole DB inside it 😅
EDIT: When I zoom in, the rendering tends to change, until at a high enough zoom I can see things correctly. Strange.
EDIT 2: It seems to happen on Zen Browser (Firefox-based), but works correctly on Chrome | bug,firefox,Text rendering | low | Critical |
2,792,398,271 | vscode | NativeEditContext advertises focus for off-dom editor | I have a situation in which I have a blinking cursor in an editor but it doesn't accept backspace, e.g. the `deleteLeft` command isn't executed. I was able to debug this and I can see that the wrong editor is selected, and that happens because an off-dom editor is saying it has focus.
<img width="1484" alt="Screenshot 2025-01-16 at 11 57 53" src="https://github.com/user-attachments/assets/dda0e8dc-25a5-4b26-8d80-605d8fcaaba7" />
| important | low | Critical |
2,792,400,759 | flutter | [Web][Canvaskit] APNG does not play on Flutter 3.27+ | ### Steps to reproduce
The animation does not play when using CachedNetworkImage to display APNG, as shown in the following sample code.
### Expected results
APNG images play when displaying them with CachedNetworkImage.
### Actual results
Cannot play APNG on Web in Flutter 3.27.2.
As far as I have researched, it plays under these conditions:
- Using Image.network instead of CachedNetworkImage.
- On platforms other than the web (e.g., macOS).
- Building with --web-renderer html.
- On a previous Flutter SDK version (3.24.5).
### Code sample
<details open><summary>Code sample</summary>
```dart
...
CachedNetworkImage(
imageUrl: 'some_apng_url.png'
)
...
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.2, on macOS 15.1.1 24B91 darwin-arm64, locale ja-JP)
• Flutter version 3.27.2 on channel stable at /Users/harukaoki/development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (3 days ago), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.1)
• Android SDK at /Users/harukaoki/Library/Android/sdk
• Platform android-35, build-tools 33.0.1
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.96.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• iPhone (8) (mobile) • 00008020-001124CE3CE9002E • ios • iOS 18.1.1 22B91
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1.1 24B91 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1.1 24B91 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 132.0.6834.83
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: regression,engine,platform-web,a: images,e: web_canvaskit,has reproducible steps,team-web,found in release: 3.27,found in release: 3.28 | low | Minor |
2,792,411,383 | vscode | Right click on problems tab: "Fix with AI Chat" | <!-- Please search existing issues to avoid creating duplicates -->
<!-- Please test using the latest insiders build to make sure your issue has not already been implemented: https://code.visualstudio.com/insiders/ -->
<!-- Describe the feature you'd like. -->
https://x.com/waderyan_/status/1864388066988970224

| feature-request,error-list | low | Minor |
2,792,415,398 | pytorch | Memory-efficient attention is not selected if inputs's ndim != 4 | ### 🐛 Describe the bug
```python
from contextlib import nullcontext
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel
# shape used in FLUX's VAE
seq_len = 128 * 128
head_dim = 512
# ctx = sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION])
ctx = nullcontext()
torch.cuda.reset_peak_memory_stats()
shape = (1, 1, 1, seq_len, head_dim)
q, k, v = [torch.randn(shape, dtype=torch.bfloat16, device="cuda") for _ in range(3)]
with ctx:
F.scaled_dot_product_attention(q, k, v)
print(f"{shape}: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
torch.cuda.reset_peak_memory_stats()
shape = (1, 1, seq_len, head_dim)
q, k, v = [torch.randn(shape, dtype=torch.bfloat16, device="cuda") for _ in range(3)]
with ctx:
F.scaled_dot_product_attention(q, k, v)
print(f"{shape}: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
torch.cuda.reset_peak_memory_stats()
shape = (1, seq_len, head_dim)
q, k, v = [torch.randn(shape, dtype=torch.bfloat16, device="cuda") for _ in range(3)]
with ctx:
F.scaled_dot_product_attention(q, k, v)
print(f"{shape}: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
torch.cuda.reset_peak_memory_stats()
shape = (seq_len, head_dim)
q, k, v = [torch.randn(shape, dtype=torch.bfloat16, device="cuda") for _ in range(3)]
with ctx:
F.scaled_dot_product_attention(q, k, v)
print(f"{shape}: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
```
```
(1, 1, 1, 16384, 512): 2.61 GB
(1, 1, 16384, 512): 0.11 GB
(1, 16384, 512): 2.61 GB
(16384, 512): 2.61 GB
```
If I use `ctx = sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION])`, only the ones where `ndim == 4` will not error out.
**Expected behavior**: memory-efficient attention should be selected.
Possibly related: #127523 (but I don't use attention mask here)
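A workaround, assuming the extra leading dimensions are singleton as in this repro, is to collapse the inputs to the canonical 4-D `(batch, heads, seq_len, head_dim)` layout before the call; a sketch using the names from the snippet above:
```python
# Sketch: make the inputs 4-D so the memory-efficient kernel is eligible,
# then restore the original shape afterwards.
orig_shape = q.shape
q4, k4, v4 = (t.reshape(1, 1, seq_len, head_dim) for t in (q, k, v))
out = F.scaled_dot_product_attention(q4, k4, v4).reshape(orig_shape)
```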
cc: @drisspg
### Versions
torch==2.7.0.dev20250105+cu126
cc @chauhang @penguinwu @zou3519 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | triaged,module: sdpa | low | Critical |
2,792,418,084 | flutter | [go_router] Implement the method `goBranchPath` for StatefulNavigationShell similar to `go` | ### Use case
Our application can create a bottom navigation bar dynamically, so we need to declare all possible routes in the `GoRouter` config. As a result, the sequence of navigation destinations inside the `NavigationBar` can be different from the one declared in the `branches` of `StatefulShellRoute`. But the `goBranch` method only allows switching to a route by the `index` of the initially declared routes.
Below is a sample where four routes are declared in the router config, but the destination sequence is not `1,2,3,4` but `1,2,4`. So when the user taps on `page4`, they are redirected to `/path3`, which is wrong for our business logic. I tried to work around this using the `context.go` method, but then I lose the stateful functionality.
<details>
<summary>Sampe</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: _router,
);
}
}
final branch1 = GlobalKey<NavigatorState>();
final branch2 = GlobalKey<NavigatorState>();
final branch3 = GlobalKey<NavigatorState>();
final branch4 = GlobalKey<NavigatorState>();
final _router = GoRouter(
debugLogDiagnostics: true,
initialLocation: '/page1',
routes: [
StatefulShellRoute.indexedStack(
branches: [
StatefulShellBranch(
navigatorKey: branch1,
routes: [
GoRoute(
path: '/page1',
builder: (context, state) => const Text('page 1'),
),
],
),
StatefulShellBranch(
navigatorKey: branch2,
routes: [
GoRoute(
path: '/page2',
builder: (context, state) => const Text('page 2'),
),
],
),
StatefulShellBranch(
navigatorKey: branch3,
routes: [
GoRoute(
path: '/page3',
builder: (context, state) => const Text('page 3'),
),
],
),
StatefulShellBranch(
navigatorKey: branch4,
routes: [
GoRoute(
path: '/page4',
builder: (context, state) => const Text('page 4'),
),
],
),
],
builder: (context, state, navigationShell) {
return Home(child: navigationShell);
},
),
],
);
class Home extends StatelessWidget {
const Home({super.key, required this.child});
final StatefulNavigationShell child;
@override
Widget build(BuildContext context) {
return Scaffold(
body: SafeArea(
child: Center(
child: child,
),
),
bottomNavigationBar: NavigationBar(
destinations: const [
NavigationDestination(
icon: Icon(Icons.access_alarm),
label: 'page1',
),
NavigationDestination(
icon: Icon(Icons.access_alarm),
label: 'page2',
),
NavigationDestination(
icon: Icon(Icons.access_alarm),
label: 'page4',
),
],
selectedIndex: child.currentIndex,
onDestinationSelected: (index) {
child.goBranch(index, initialLocation: index == child.currentIndex);
},
),
);
}
}
```
</details>
### Proposal
The proposal is to create a method like `goBranchPath`.
```dart
void goBranchPath(String location, {bool initialLocation = false}) {
// find the index of the branch which uses [location]
// use the default functionality
}
``` | c: new feature,package,c: proposal,P3,p: go_router,team-go_router,triaged-go_router | low | Critical |
2,792,423,432 | rust | Sendable type cannot be sent between threads safely | I tried this code:
```rust
use std::rc::Rc;
struct Unsendable(Rc<()>);
struct Sendable<T>(T);
unsafe impl<T> Send for Sendable<T> {}
impl<T> Sendable<T> {
#[inline(always)]
fn into_inner(self) -> T {
self.0
}
}
fn main() {
tokio::task::spawn({
let foo = Unsendable(Rc::new(()));
let foo = Sendable(foo);
async move {
let Sendable(foo) = foo;
// let foo = foo.into_inner();
}
});
}
```
If I replace line `let Sendable(foo) = foo;` with `let foo = foo.into_inner();` the error disappears.
I created a bigger example without tokio: [Rust Playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=f13884acdd320f8b218853a7249d28f2)
I expected to see a successful compilation.
Instead I got `error[E0277]: Rc<()> cannot be sent between threads safely` | C-enhancement,D-confusing | low | Critical |
2,792,448,520 | flutter | Add Kurdish Sorani (ckb) as an Officially Supported Language for Flutter Localizations | ### Use case
Kurdish Sorani (ckb) is widely spoken by millions of people, and adding it as a supported language in Flutter’s localization system would enable better accessibility for developers and end-users in this region. This would provide native language support for apps built with Flutter, ensuring inclusivity and a more localized experience.
My previous pull request ([PR #160925](https://github.com/flutter/flutter/pull/160925)) included full Kurdish Sorani translations for Material, Cupertino, and Widgets frameworks, along with support for RTL text and Kurdish numerals. However, it was recommended that Kurdish Sorani translations should be managed through the localization console and vendor team.
### Proposal
I suggest adding Kurdish Sorani (ckb) to Flutter’s officially supported languages in the localization console. The following steps could help with this integration:
1. Include Kurdish Sorani in the list of supported languages for the localization system.
2. Use the translations I provided in [PR #160925](https://github.com/flutter/flutter/pull/160925) as the starting point.
3. Our community of Kurdish Flutter developers can work with the vendor team to ensure accurate and complete integration of the translations.
### Benefits:
• Expand Flutter’s reach to a new user base.
• Improves app usability and accessibility for Kurdish-speaking users.
• Strengthens Flutter’s commitment to inclusivity and localization. | c: new feature,framework,a: internationalization,c: proposal,team-framework | low | Minor |
2,792,450,524 | neovim | Avoid external processing of a certain buffer/window | ### Problem
It is currently not possible to ignore the `TextChanged`, `BufModified`, `CursorMoved`, `WinResized` and `WinScrolled` events for a certain buffer/window, even for the "owner" of that object. These events are processed once per event loop iteration, rather than when a change happens, so the usual `:set ei+=all -> do something -> :set ei-=all` pattern doesn't work. That pattern does work for other events and processing done by the owner, but it may be desirable to be able to ignore autocommands triggered by external processing as well.
- An option-focused solution would be a local `'eventignore'`, with the caveat that it applies to the window itself and its buffer. A global + buffer-local + window-local (+ tab-local) `'eventignore'` seems more appropriate, but we currently lack support for such an option (could be resolved by https://github.com/neovim/neovim/issues/29314).
- A namespace-focused solution could be the ability to set a namespace for a buffer/window, in which case we would only process autocommands belonging to that namespace (currently still augroups). (This concept could also be used to prevent deletion of windows/buffers when not inside a ui_attach callback for that namespace; currently the namespace owner always needs to check that its buffers/windows still exist.)
- A more window-focused solution could be some API to mark a window (and its buffer) as unaffected by autocommands (an nvim_open_win() flag; we already have a boolean `noautocmd` for opening the window and could promote it to also accept a string `"always"` to ignore events persistently, or even the same format as `'eventignore'`, though at that point I think we should just go with the window-local option route).
Something that should also be considered is [command-preview](https://github.com/neovim/neovim/pull/28856), e.g.:
> In the spirit of playing nice with external UIs, I think there needs to be a way to mark windows as ignored by command preview (if there isn't already), or maybe just have a class of window/buffer option combinations that should be ignored by command preview (e.g. nomodifiable+bufhidden=hide+buftype=nofile+is a floating window+...)?
Could probably recognize `"cmdpreview"` as a pseudo event to ignore in a window-local `'eventignore'`.
Related: https://github.com/neovim/neovim/pull/27855#issuecomment-2585221113
Any thoughts @justinmk? | enhancement,ui-extensibility,events,options | low | Minor |
2,792,505,437 | godot | 2D Tilemap coordinates disappear when on "Terrains" Mode | ### Tested versions
Tested Version: v4.3.stable.official [77dcf97d8]
### System information
Windows 10
### Issue description
In the editor, when using the tilemap in "Tiles" mode, you can see your tile coords in the lower left corner. If you switch to "Terrains" mode, you no longer see the tilemap coordinates where your cursor is.

### Steps to reproduce
Create a new 2D TileMapLayer. Hover over the field; you can see your coords in the lower left of the game field. Switch to Terrains and the coords disappear.
### Minimal reproduction project (MRP)
N/A | enhancement,topic:editor,topic:2d | low | Minor |
2,792,524,580 | godot | Vulkan graphics pipelines use excessive amount of memory on Galaxy S23 | ### Tested versions
- Reproducible in: 4.4.dev7, master starting from 98deb2a0005cf654e667679cd72904d9b5d4c734
- Not reproducible in: 4.3.stable
### System information
Samsung Galaxy S23 Ultra, Android 14, Vulkan (Mobile), Adreno 740
### Issue description
After trying to update our project to Godot 4.4 we found that it crashes while loading on my Android phone.
The profiler showed graphics memory exceeding 4 GB before the app closes, while the game running on Godot 4.3 only uses around 800 MB of graphics memory.
I have bisected everything between 4.3 and 4.4 and traced the problem to https://github.com/godotengine/godot/pull/90400 getting merged.
Here is a memory report generated by RenderingDevice.get_driver_and_device_memory_report from a version of our game with lots of content removed so it starts at all:
```
Total Driver Memory:76.373 MB
Total Driver Num Allocations: 51018
Total Device Memory:3755.699 MB
Total Device Num Allocations: 854
Memory use by object type (CSV format):
Category; Driver memory in MB; Driver Allocation Count; Device memory in MB; Device Allocation Count
UNKNOWN;0.0;0;0.0;0
INSTANCE;19.86637;1520;0.0;0
PHYSICAL_DEVICE;0.0;0;0.0;0
DEVICE;0.136612;580;0.247704;42
QUEUE;0.0;0;27.88282;11
SEMAPHORE;0.00267;10;0.0;0
COMMAND_BUFFER;0.0;0;1.484375;101
FENCE;0.001068;4;0.0;0
DEVICE_MEMORY;0.0;0;1056.0;8
BUFFER;10.4068;25260;0.0;0
IMAGE;1.001448;1756;0.181641;1
EVENT;0.0;0;0.0;0
QUERY_POOL;0.001633;6;0.007828;2
BUFFER_VIEW;0.000992;1;0.0;0
IMAGE_VIEW;0.836527;857;0.0;0
SHADER_MODULE;29.46983;1736;0.0;0
PIPELINE_CACHE;4.005884;286;0.0;0
PIPELINE_LAYOUT;1.361336;936;0.0;0
RENDER_PASS;0.018356;189;0.0;0
PIPELINE;2.593163;3315;2668.945;607
DESCRIPTOR_SET_LAYOUT;3.279724;8207;0.0;0
SAMPLER;0.01339;39;0.0;0
DESCRIPTOR_POOL;1.765778;1800;0.800781;46
DESCRIPTOR_SET;0.0;0;0.0;0
FRAMEBUFFER;1.206264;538;0.1492;36
COMMAND_POOL;0.403104;3976;0.0;0
DESCRIPTOR_UPDATE_TEMPLATE_KHR;0.0;0;0.0;0
SURFACE_KHR;0.000031;1;0.0;0
SWAPCHAIN_KHR;0.002022;1;0.0;0
DEBUG_UTILS_MESSENGER_EXT;0.0;0;0.0;0
DEBUG_REPORT_CALLBACK_EXT;0.0;0;0.0;0
ACCELERATION_STRUCTURE;0.0;0;0.0;0
VMA_BUFFER_OR_IMAGE;0.0;0;0.0;0
```
You can see that device memory for pipelines is 2668.945 MB. The same line in Godot 4.3 (and 4.4 before https://github.com/godotengine/godot/pull/90400 got merged) shows a little over 1 MB.
I modified my engine build to log every allocation related to pipeline objects and found that it allocates a block of 24 MB for some of the pipelines. Adding up those 24 MB allocations accounts almost exactly for the excess memory use compared to builds without that PR.
I suspect that it is related to the Ubershader that is used while the optimized pipeline is compiled.
A hacky attempt to disable the feature, by preventing the `UBERSHADER` define in the shader from being set, made the weird allocations disappear.
But I am not that familiar with the code yet, and I am also running out of time to invest in this problem, so hopefully someone here can find a proper workaround.
The attached MRP contains just a camera looking at a single cube and a script to print the memory report.
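The script in the MRP is roughly the following sketch (the one-frame await before querying is my choice, and depending on the build type the counters may be empty):
```gdscript
extends Node

func _ready() -> void:
    # Let the renderer create its initial pipelines before querying.
    await get_tree().process_frame
    var rd := RenderingServer.get_rendering_device()
    print(rd.get_driver_and_device_memory_report())
```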
On my device this already uses 72 MB for pipelines. Strangely, on a OnePlus 6 it was only 6.6 MB and on a OnePlus 8 it uses 12.6 MB for pipelines, which still seems excessive compared to 1 MB, but apparently this depends heavily on the hardware or driver version.
I could not test yet how this scales with the real project on those other devices.
I understand that this is likely related to a driver issue that we can't fix, but maybe it can be mitigated somehow? If not, maybe ubershaders could be deactivated depending on hardware or a project setting?
### Steps to reproduce
1. Open the attached project
2. Enable Deploy with Remote Debug
3. Deploy to Android device
4. Observe the logged memory report
### Minimal reproduction project (MRP)
[pipeline_memory_mrp.zip](https://github.com/user-attachments/files/18437403/pipeline_memory_mrp.zip) | bug,platform:android,topic:rendering,topic:porting,crash,regression | low | Critical |
2,792,530,396 | react-native | Image nested In Text with lineHeight specified overflows container | ### Description
An `<Image>` nested within `<Text>` with a `lineHeight` specified causes the image to overflow the parent container.
Workarounds attempted:
- Specifying a fixed height on the image
- Wrapping the image in a `<View>` with a fixed `height` and/or `lineHeight`
- Wrapping the Image in another `<Text>` with a fixed `height` and/or `lineHeight`
- Wrapping the image in another `<Text>` with lineHeight set to `undefined` or `null`
Expected behaviour:
To be able to nest images in Text with a non-default lineHeight.
(Please let me know if any more details are required for this issue. Thanks!)
### Steps to reproduce
Please see [expo snack](https://snack.expo.dev/@tomkelsey/image-nested-in-text-lineheight-bug).
Code below:
```jsx
import React from 'react';
import { Image, StyleSheet, Text, View } from 'react-native';

export default function App() {
return (
<View style={styles.container}>
<View style={styles.box}>
<Text style={styles.text}>
Hello
<Image style={styles.image} source={require('./assets/snack-icon.png')} />
</Text>
</View>
</View>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
backgroundColor: '#ecf0f1',
padding: 8,
},
box: {
backgroundColor: 'red',
},
text: {
backgroundColor: 'blue',
// lineHeight: 20,
},
image: {
height: 300,
}
});
```
### React Native Version
0.76.0
### Affected Platforms
Runtime - iOS, Runtime - Android
### Output of `npx react-native info`
```text
Please see expo snack.
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
https://snack.expo.dev/@tomkelsey/image-nested-in-text-lineheight-bug
Alternative reproducer:
https://github.com/OzymandiasTheGreat/rn-view-in-text-bug
### Screenshots and Videos
**Without lineHeight specified:**
<img width="421" alt="Image" src="https://github.com/user-attachments/assets/886f3be0-5b50-4444-8964-71fdc32544f6" />
**With lineHeight specified:**
<img width="411" alt="Image" src="https://github.com/user-attachments/assets/07670c1b-dd28-464a-965d-d1b432669b5a" /> | Issue: Author Provided Repro,Component: View,Component: Image | low | Critical |
2,792,537,803 | pytorch | CUDAGraph outputs will be overwritten by a subsequent run? | ### 🐛 Describe the bug
Hello, I have some doubts about the following cudagraph case.
I submitted another issue, #144386
```
import torch
def test_cuda_graph_output_overwritten():
class MLP(torch.nn.Module):
def __init__(self):
super().__init__()
self.ln = torch.nn.LayerNorm(6)
def forward(self, input):
ln = self.ln(input)
return ln
model = MLP().cuda()
compiled_model = torch.compile(mode="reduce-overhead")(model)
compiled_model(torch.randn([2, 6], device="cuda"))
@torch.compile(mode="reduce-overhead")
def my_model(x):
y = torch.matmul(x, x)
return y
x = torch.randn(10, 10, device="cuda")
y1 = my_model(x)
y2 = my_model(x)
print(y1)
# RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run.
test_cuda_graph_output_overwritten()
```
It was updated just the other day by the following PR:
```
https://github.com/pytorch/pytorch/pull/144793/files
```
The case now succeeds, and the error shown in the doc cannot be reproduced.
What I want to know is whether CUDAGraph outputs will be overwritten by subsequent runs. I found that the doc did not match my actual test results, and I don't know whether the doc is wrong or the test case is designed incorrectly.
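For reference, here is how I read the doc's intended pattern, as a minimal sketch: it assumes the compiled output is a static graph buffer that later replays reuse, and the `.clone()` is my addition, not from the doc:
```
import torch

@torch.compile(mode="reduce-overhead")
def my_model(x):
    return torch.matmul(x, x)

x = torch.randn(10, 10, device="cuda")
y1 = my_model(x).clone()  # snapshot the static output buffer
y2 = my_model(x)
print(y1)  # fine; per the doc, without the clone this access should raise
```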
cc @ptrblck @msaroufim @eqy @mcarilli @ezyang @eellison @penguinwu @BoyuanFeng @chauhang @Edenzzzz
### Versions
torch 2.4.1
NVIDIA-SMI 560.35.05 Driver Version: 560.35.05 CUDA Version: 12.6 | module: cuda,triaged,module: cuda graphs,oncall: pt2 | low | Critical |
2,792,549,703 | opencv | SimpleBlobDetector option to get thresholds included in final location | ### Describe the feature and motivation
Let's say the blob detector had a minimum threshold of 10 and a maximum of 200: which threshold values produced a valid blob that contributed to the final keypoint location?
### Additional context
_No response_ | feature | low | Minor |
2,792,582,833 | flutter | [ios][add2app] `Flutter.podspec` is missing after running `flutter build ios-framework` | ### Steps to reproduce
I'm following the Add-to-app guideline for iOS, the last option: [Use frameworks and CocoaPods](https://docs.flutter.dev/add-to-app/ios/project-setup#89-tab-panel)
After I run `flutter build ios-framework`, all frameworks are exported successfully, but in the step `Add Flutter engine to your Podfile`, the docs state that I need to update the Podfile with:
```
pod 'Flutter', :podspec => '/path/to/MyApp/Flutter/[build mode]/Flutter.podspec'
```
I looked for `Flutter.podspec` in all 3 build modes (Debug, Profile, and Release) but could not find it.
This seems to have happened in the past as well, in https://github.com/flutter/flutter/issues/55095?
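For what it's worth, the podspec may only be generated when passing the `--cocoapods` flag (the output path below is a placeholder):
```console
flutter build ios-framework --cocoapods --output=some/path/MyApp/Flutter/
```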
### Expected results
`Flutter.podspec` should be exported as mentioned in the docs
### Actual results
`Flutter.podspec` is missing
### Code sample
https://github.com/huycozy/reproduce_issue_ios_native_addtoapp_optionC
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open>
<summary>flutter doctor -v (stable and master)</summary>
```console
[✓] Flutter (Channel stable, 3.27.2, on macOS 15.2 24C101 darwin-x64, locale en-VN)
• Flutter version 3.27.2 on channel stable at /Users/huynq/Documents/GitHub/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (9 hours ago), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/huynq/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_SDK_ROOT = /Users/huynq/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• android-studio-dir = /Applications/Android Studio.app
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
[✓] IntelliJ IDEA Community Edition (version 2024.2.3)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 81.1.3
• Dart plugin version 242.22855.32
[✓] VS Code (version 1.96.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (3 available)
• Pixel 7 (mobile) • 2B171FDH20084L • android-arm64 • Android 15 (API 35)
• macOS (desktop) • macos • darwin-x64 • macOS 15.2 24C101 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
```console
[!] Flutter (Channel master, 3.28.0-2.0.pre.38724, on macOS 15.2 24C101 darwin-x64, locale en-VN) [4.2s]
• Flutter version 3.28.0-2.0.pre.38724 on channel master at /Users/huynq/Documents/GitHub/flutter_master
! Warning: `flutter` on your path resolves to /Users/huynq/Documents/GitHub/flutter/bin/flutter, which is not inside your current Flutter SDK checkout at /Users/huynq/Documents/GitHub/flutter_master. Consider adding /Users/huynq/Documents/GitHub/flutter_master/bin to the front of your path.
! Warning: `dart` on your path resolves to /Users/huynq/Documents/GitHub/flutter/bin/dart, which is not inside your current Flutter SDK checkout at /Users/huynq/Documents/GitHub/flutter_master. Consider adding /Users/huynq/Documents/GitHub/flutter_master/bin to the front of your path.
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 1b0441c18a (2 hours ago), 2025-01-13 17:29:08 -0800
• Engine revision 1b0441c18a
• Dart version 3.7.0 (build 3.7.0-312.0.dev)
• DevTools version 2.42.0
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0) [5.7s]
• Android SDK at /Users/huynq/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_SDK_ROOT = /Users/huynq/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
This JDK is specified in your Flutter configuration.
To change the current JDK, run: `flutter config --jdk-dir="path/to/jdk"`.
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2) [2.6s]
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web [259ms]
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2) [257ms]
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• android-studio-dir = /Applications/Android Studio.app
• Java version OpenJDK Runtime Environment (build 21.0.4+-12422083-b607.1)
[✓] IntelliJ IDEA Community Edition (version 2024.2.3) [248ms]
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin version 81.1.3
• Dart plugin version 242.22855.32
[✓] VS Code (version 1.96.2) [30ms]
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (3 available) [7.7s]
• Pixel 7 (mobile) • 2B171FDH20084L • android-arm64 • Android 15 (API 35)
• macOS (desktop) • macos • darwin-x64 • macOS 15.2 24C101 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
[✓] Network resources [509ms]
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details> | platform-ios,tool,a: existing-apps,t: xcode,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.27,found in release: 3.28 | low | Critical |
2,792,663,145 | pytorch | DISABLED test_run_decompositions_same_handle_id (__main__.TestNumericDebugger) | Platforms: mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_run_decompositions_same_handle_id&suite=TestNumericDebugger&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35698516196).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_run_decompositions_same_handle_id`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_quantization.py`
ConnectionTimeoutError: Connect timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/test_quantization.py -2 (connected: false, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
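To reproduce locally from a source checkout (my assumption; `-k` is unittest's filter flag):
```
python test/test_quantization.py -k test_run_decompositions_same_handle_id
```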
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @clee2000 @wdvr @malfet @albanD | oncall: quantization,triaged,module: flaky-tests,module: macos,skipped | low | Critical |