id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---
2,779,619,116 | pytorch | dynamically set the number of SMs in torch.distributed.all_reduce | ### 🚀 The feature, motivation and pitch
I want to dynamically set the number of SMs used by torch.distributed.all_reduce. NCCL supports capping this via the NCCL_MAX_NCHANNELS environment variable, but that cannot be changed dynamically from within the program. It is mentioned [here](https://github.com/NVIDIA/nccl/issues/1572) that ncclCommInitRankConfig can be used to set it programmatically, but the corresponding setting is not exposed in torch. Can this capability be supported? This is useful in inference optimization scenarios.
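For reference, a minimal sketch of the current environment-variable workaround (static, not dynamic; it assumes the variable is set before any NCCL communicator is created and that the script is launched with torchrun):
```python
import os

# NCCL reads NCCL_MAX_NCHANNELS when the communicator is created, so it must
# be set before init_process_group / the first collective (assumption: no
# communicator exists yet at this point).
os.environ["NCCL_MAX_NCHANNELS"] = "4"

import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")  # rank/world size come from torchrun
t = torch.ones(1024, device="cuda")
# Every all_reduce in this process is now capped at 4 NCCL channels, which
# indirectly limits the SMs used -- but the cap cannot be changed again later.
dist.all_reduce(t)
```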
### Alternatives
_No response_
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Minor |
2,779,636,045 | electron | [webRequest.onHeadersReceived] Not all requests show modified headers | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
34.0.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 10
### What arch are you using?
x64
### Last Known Working Electron version
14.2.9
### Expected Behavior
All requests that are handled with `onHeadersReceived` should show modified headers (if they are being modified in the handler)
### Actual Behavior
Some requests do not show the modified headers.
Example:
```js
electron.session.defaultSession.webRequest.onHeadersReceived((details, callback) => {
  console.log(details.url, details.responseHeaders);
  if (details.responseHeaders) {
    details.responseHeaders["test"] = ["1"];
  }
  callback({cancel: false, responseHeaders: details.responseHeaders});
});
```
I am setting a "test" header to each request
Navigating to google.com does not show the header in the first request:

It does appear in the second request though
### Testcase Gist URL
https://gist.github.com/t57ser/b97184b3c8772a8ccf7cb600ceac1202
### Additional Information
_No response_ | platform/windows,bug :beetle:,has-repro-gist,component/webRequest,33-x-y,34-x-y | low | Critical |
2,779,640,992 | kubernetes | Packaging - kubernetes >=1.30 - yum autoremove proposing to uninstall kubelet | Since kubernetes 1.30, I've noticed that when launching a ```yum autoremove``` (on a RHEL8 server ... certainly the same on other distributions), yum is proposing to uninstall kubelet !? (it was not the case with kubernetes <=1.29, I don't know for 1.32)
For example:
# From a RHEL8 server still running kubernetes 1.28:
```
$ sudo rpm -qa kubelet kubeadm
kubeadm-1.28.10-150500.1.1.x86_64
kubelet-1.28.10-150500.1.1.x86_64
$ sudo yum autoremove
Updating Subscription Management repositories.
Dependencies resolved.
Nothing to do.
Complete!
$ sudo rpm -q --whatrequires kubelet
kubeadm-1.28.10-150500.1.1.x86_64
```
kubelet is a dependency of kubeadm, so no uninstall is proposed.
# From a RHEL8 server running kubernetes 1.31:
```
$ sudo rpm -qa kubelet kubeadm
kubeadm-1.31.4-150500.1.1.x86_64
kubelet-1.31.4-150500.1.1.x86_64
$ sudo yum autoremove
Updating Subscription Management repositories.
Dependencies resolved.
==========================================================================================================================================================================================================
Package Architecture Version Repository Size
==========================================================================================================================================================================================================
Removing:
conntrack-tools x86_64 1.4.4-11.el8 @rhel-8-for-x86_64-baseos-rpms 576 k
kubelet x86_64 1.31.4-150500.1.1 @kubernetes 73 M
libnetfilter_cthelper x86_64 1.0.0-15.el8 @rhel-8-for-x86_64-baseos-rpms 38 k
libnetfilter_cttimeout x86_64 1.0.0-11.el8 @rhel-8-for-x86_64-baseos-rpms 39 k
libnetfilter_queue x86_64 1.0.4-3.el8 @rhel-8-for-x86_64-baseos-rpms 50 k
Transaction Summary
==========================================================================================================================================================================================================
Remove 5 Packages
Freed space: 74 M
Is this ok [y/N]: n
Operation aborted.
$ sudo rpm -q --whatrequires kubelet
no package requires kubelet
```
kubelet is no longer a dependency of kubeadm, so the uninstall is proposed.
Thank you
Adrien | kind/support,sig/release,needs-triage | low | Major |
2,779,650,676 | pytorch | `torch.index_put` raise error when `accumulate=True` | ### 🐛 Describe the bug
I run the following code
```python
import torch

torch.set_default_device('cuda')
x = torch.arange(1, 61).reshape(5, 4, 3)
indices = [
    # torch.tensor([1, 2, 0]),
    torch.tensor([[0, 2], [1, 3]]),
    # torch.tensor([0, 1, 2]),
    # torch.tensor([0, 1, 2]),
]
values = torch.tensor([100, 200, 300])
out2 = torch.index_put(x, indices, values, accumulate=True)
print(out2)
```
When `accumulate=False`, this runs correctly, but when `accumulate=True`, it raises an error:
```
RuntimeError: The expanded size of the tensor (12) must match the existing size (3) at non-singleton dimension 2. Target sizes: [2, 3, 12]. Tensor sizes: [3]
```
Is this a bug in index_put?
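For context, a small CPU-side sketch of what this indexing selects and why I expect the (3,)-element values tensor to broadcast (standard advanced-indexing semantics):
```python
import torch

x = torch.arange(1, 61).reshape(5, 4, 3)
idx = torch.tensor([[0, 2], [1, 3]])

# Indexing dim 0 with a (2, 2) index tensor selects a (2, 2, 4, 3) sub-tensor;
# a (3,) values tensor broadcasts over its last dimension.
print(x[idx].shape)  # torch.Size([2, 2, 4, 3])

# The same broadcast is accepted by index_put with accumulate=False,
# so I would expect accumulate=True to accept it as well.
out = torch.index_put(x, (idx,), torch.tensor([100, 200, 300]), accumulate=False)
print(out.shape)  # torch.Size([5, 4, 3])
```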
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.35
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 470.129.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.72
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] numpy 2.2.1 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @ptrblck @msaroufim @eqy | module: cuda,triaged,module: advanced indexing | low | Critical |
2,779,653,614 | opencv | [CMake] : Is compilation with external zlib really working ? | ### System Information
OpenCV 4.11.0
Visual Studio 2019
### Detailed description
[context, talking about TIFF first]
When CMake-configuring OpenCV, it is possible to opt out of "BUILD_TIFF" and opt in to "WITH_TIFF" in order to provide external libs through "Tiff_Dir", "TIFF_LIBRARY_DEBUG" and "TIFF_LIBRARY_RELEASE".
In the generated opencv_imgcodecs Visual Studio project:
- a lib dependency to `<my_provided_external_path_to\tiff[d].lib>` is indeed added
- in the Include paths, there is no custom `<my_provided_external_path_related_to_include>`, certainly because compatibility can only be ensured if the built-in OpenCV/3rdParty/tiff is API compatible.
[What is different with zlib]
In the case of zlib, it is possible to disable "BUILD_ZLIB" and to set "ZLIB_LIBRARY_DEBUG" and "ZLIB_LIBRARY_RELEASE", but there is neither "WITH_ZLIB" nor "Zlib_Dir".
When setting a custom zlib.lib, then in the generated opencv_core Visual Studio project:
- there is a lib dependency to `..\..\3rdparty\lib\[Debug|Release]\zlib[d].lib`
- that file `..\..\3rdparty\lib\[Debug|Release]\zlib[d].lib` is indeed built by OpenCV (while BUILD_ZLIB is disabled!)
- considering its size, `..\..\3rdparty\lib\[Debug|Release]\zlib[d].lib` seems to be a static library
- I can't find a dependency to `<my_provided_external_path_to\zlib[d].lib>`
This suggests that zlib.lib is still compiled from the OpenCV\3rdparty source, as a static library (and won't use my custom zlib.dll).
Even if it were compiled as a shared lib, it would imply that the external custom zlib.dll must be not only API compatible but also ABI compatible, which is a big issue when dealing with compiler versions.
Did I miss something about the CMake zlib configuration?
### Steps to reproduce
Try to use a custom zlib in OpenCV CMake configuration
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: build/install,incomplete | low | Critical |
2,779,675,215 | go | crypto/tls: ClientHelloOuter should hide the actual ALPN list | ### Go version
go version 1.24rc1
### Output of `go env` in your module/workspace:
```shell
AR='ar'
CC='gcc'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_ENABLED='1'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
CXX='g++'
GCCGO='gccgo'
GO111MODULE=''
GOAMD64='v1'
GOARCH='amd64'
GOAUTH='netrc'
GOBIN=''
GOCACHE='/home/robin/.cache/go-build'
GOCACHEPROG=''
GODEBUG='http2xconnect=0'
GOENV='/home/robin/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFIPS140='off'
GOFLAGS=''
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build784986521=/tmp/go-build -gno-record-gcc-switches'
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMOD='/home/robin/src/ech-alpn/go.mod'
GOMODCACHE='/home/robin/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/robin/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/robin/src/goroot'
GOSUMDB='sum.golang.org'
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/robin/.config/go/telemetry'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/robin/src/goroot/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='devel go1.24-c7c4420ae4 Thu Jan 9 12:24:58 2025 -0800'
GOWORK=''
PKG_CONFIG='pkg-config'
```
### What did you do?
While experimenting with ECH in go 1.24rc1, I noticed that the ALPN extension in ClientHelloOuter is the same as in ClientHelloInner.
While the draft RFC doesn't specify exactly what should be in ClientHelloOuter, it does call out the alpn list as a potentially sensitive field.
https://datatracker.ietf.org/doc/html/draft-ietf-tls-esni/#name-introduction
> This document specifies a new TLS extension, called Encrypted Client Hello (ECH), that allows clients to encrypt their ClientHello to such a deployment. This protects the SNI and other potentially sensitive fields, such as the ALPN list [[RFC7301](https://www.rfc-editor.org/rfc/rfc7301)].
This code shows the ALPN list and SNI in ClientHelloOuter: https://go.dev/play/p/UwIw0DLxH7U (although it doesn't run in the playground)
```
$ go run .
ALPNProtos: ["foo"]
ServerName: "public.example.com"
```
My suggestion would be to create the ClientHelloOuter as if the tls Config were pretty much empty, e.g.
```go
&Config{
    ServerName: "The PublicName from the chosen ECH Config",
    MinVersion: VersionTLS13,
    NextProtos: []string{"h2", "http/1.1"}, // Replace with "h3" with quic?
}
```
### What did you see happen?
The client's `NextProtos` are visible in ClientHelloOuter.
### What did you expect to see?
The client's actual `NextProtos` should only be in ClientHelloInner when ECH is used. | NeedsInvestigation,BugReport | low | Critical |
2,779,689,629 | node | TextDecoder incorrectly decodes 0x92 for Windows-1252 | ### Version
v23.6.0, v22.13.0
### Platform
```text
Darwin xxx 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:00:32 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6030 arm64
(but no problems reproducing it on Linux amd64 or arm64)
```
### Subsystem
_No response_
### What steps will reproduce the bug?
```
const decoded = new TextDecoder("Windows-1252").decode(new Uint8Array([146])).charCodeAt(0);
console[decoded === 8217 ? 'error' : 'log'](`Expected 8217 got ${decoded}`);
```
146 is now decoded as 146, not 8217. This still worked in v23.3.0 and v22.12.0 - might be related to the fix for https://github.com/nodejs/node/issues/56219 ?
### How often does it reproduce? Is there a required condition?
It always fails
### What is the expected behavior? Why is that the expected behavior?
https://web.archive.org/web/20151027124421/https://msdn.microsoft.com/en-us/library/cc195054.aspx shows 0x92 should indeed be 0x2019
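For reference, the same mapping checked with Python's cp1252 codec (an independent confirmation of the expected code point, not a Node repro):
```python
# Windows-1252 maps byte 0x92 to U+2019 (RIGHT SINGLE QUOTATION MARK).
decoded = bytes([0x92]).decode("cp1252")
assert decoded == "\u2019"
assert ord(decoded) == 8217
```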
### What do you see instead?
146 is returned instead of 8217.
### Additional information
_No response_ | confirmed-bug,good first issue,encoding,v22.x,v23.x | low | Critical |
2,779,711,473 | pytorch | torch.compile does not work with Flash attention 3 | ### 🐛 Describe the bug
torch.compile cannot compile models that use FA-3 (FlashAttention-3) kernels; it hits a graph break on the FA3 pybind function.
### Error logs
```
FA3 not working with torch.compile
[rank7]: torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin flash_attn_3_cuda.PyCapsule.fwd. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
```
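A possible workaround, following the suggestion in the error message, is to wrap the FA3 kernel in a PyTorch custom op so Dynamo does not try to trace into the pybind function. This is only a hedged sketch: the `flash_attn_interface` import, the `flash_attn_func` name/signature, and its return value are assumptions about the FA3 Python package and may need adjusting:
```python
import torch
import flash_attn_interface  # assumed FA3 Python entry point

@torch.library.custom_op("mylib::fa3_attention", mutates_args=())
def fa3_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                  causal: bool = False) -> torch.Tensor:
    res = flash_attn_interface.flash_attn_func(q, k, v, causal=causal)
    # FA3 may return (out, softmax_lse); keep only the attention output.
    return res[0] if isinstance(res, tuple) else res

@fa3_attention.register_fake
def _(q, k, v, causal=False):
    # The output has the same shape/dtype as q for standard attention.
    return torch.empty_like(q)

# Call fa3_attention(...) in the model instead of the raw pybind function;
# torch.compile then treats it as an opaque custom op without a graph break.
```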
### Versions
PyTorch 2.7 nightly, FlashAttention 3.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bdhirsh @yf225 | high priority,triaged,module: custom-operators,oncall: pt2,module: pt2-dispatcher | low | Critical |
2,779,725,058 | PowerToys | svgl in Powertoys Run | ### Description of the new feature / enhancement
This would be a new plugin for Powertoys Run allowing users to search logos using svgl.
### Scenario when this would be used?
It could be useful, for example, in graphic design work when an SVG logo is needed: the logo could be copied instantly without having to search for it on the web.
### Supporting information
Raycast has a plugin for this, but it is mac-only. Here is the link to it: https://www.raycast.com/1weiho/svgl | Idea-Enhancement,Run-Plugin | low | Minor |
2,779,775,303 | flutter | Flutter Integration tests - Text is not entered while executing scripts on iOS Device in test lab. Getting error as 'Can't find keyplane that supports type 4 for keyboard iPhone-PortraitChoco-NumberPad;' | ### Steps to reproduce
1. Create a sample app with two fields.
2. The first as a TextField with keyboardType set to TextInputType.number, TextInputType.text or TextInputType.numberWithOptions().
3. The second as a PIN field implemented as a custom pin-code field.
4. Create an integration script to enter a number in the first text field and a PIN in the second.
5. The details are not entered, but focus does move from one field to the other.
6. The logs show the error as '**_Can't find keyplane that supports type 4 for keyboard iPhone-PortraitChoco-NumberPad; using 27303_PortraitChoco_iPhone-Simple-Pad_Default'_**
### Expected results
The scripts should enter the number and PIN and proceed further.
### Actual results
The scripts do not enter the details into the text fields.
### Code sample
```dart
// Define a custom Form widget.
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
class MyCustomForm extends StatefulWidget {
const MyCustomForm({super.key});
@override
State<MyCustomForm> createState() => _MyCustomFormState();
}
// Define a corresponding State class.
// This class holds data related to the Form.
class _MyCustomFormState extends State<MyCustomForm> {
// Define the focus node. To manage the lifecycle, create the FocusNode in
// the initState method, and clean it up in the dispose method.
late FocusNode focusNodePassword, focusNodeLogin;
var currentindex = 0;
void buttonClicked() {
setState(() {
// setstate using for the rerender the screen
// if we not use than it not show the sceond text
currentindex = currentindex + 1;
});
}
// Create a text controller and use it to retrieve the current value
// of the TextField.
final myController = TextEditingController();
@override
void initState() {
super.initState();
focusNodePassword = FocusNode();
focusNodeLogin = FocusNode();
// Start listening to changes.
myController.addListener(_printLatestValue);
}
@override
void dispose() {
// Clean up the focus node when the Form is disposed.
focusNodePassword.dispose();
focusNodeLogin.dispose();
// Clean up the focus node when the Form is disposed.
myController.dispose();
// Clean up the controller when the widget is removed from the widget tree.
// This also removes the _printLatestValue listener.
super.dispose();
}
void _printLatestValue() {
final text = myController.text;
if (kDebugMode) {
print('PinCode field: $text (${text.characters.length})');
}
}
@override
Widget build(BuildContext context) {
var questions = [
// list of text which the text get form here
"Enter your Credentials",
"Credentials Entered!",
];
return Scaffold(
backgroundColor: const Color.fromARGB(199, 7, 7, 7),
appBar: AppBar(
title: const Text('KPN App',
style: TextStyle(
fontSize: 40,
fontWeight: FontWeight.bold,
color: Colors.white)),
backgroundColor: Colors.orangeAccent,
),
body: Padding(
padding: const EdgeInsets.all(16),
child: Column(
children: [
TextField(
key: const ValueKey('Subscription'),
keyboardType: TextInputType.numberWithOptions(),
onChanged: (text) {
if (kDebugMode) {
print(
'Subscription number field: $text (${text.characters.length})');
}
if (text.characters.length == 12) {
focusNodePassword.requestFocus();
}
},
decoration: const InputDecoration(
hintText: 'Subscription Number',
hintStyle: TextStyle(color: Colors.black, fontSize: 15),
prefixIcon: Icon(Icons.lock),
prefixIconColor: Colors.black,
filled: true,
fillColor: Colors.white,
focusedBorder: OutlineInputBorder()),
),
Container(
margin: const EdgeInsets.only(top: 24),
child: TextField(
key: const ValueKey('PinCode'),
obscureText: true,
onChanged: (text) {
if (kDebugMode) {
print('PinCode field: $text (${text.characters.length})');
}
if (text.characters.length == 4) {
focusNodeLogin.requestFocus();
}
},
keyboardType: TextInputType.numberWithOptions(),
controller: myController,
focusNode: focusNodePassword,
decoration: const InputDecoration(
hintText: 'PinCode',
hintStyle: TextStyle(color: Colors.black, fontSize: 15),
prefixIcon: Icon(Icons.lock),
prefixIconColor: Colors.black,
filled: true,
fillColor: Colors.white,
focusedBorder: OutlineInputBorder()),
),
),
Container(
margin: const EdgeInsets.only(top: 24),
child: ElevatedButton(
key: const ValueKey('Login'),
focusNode: focusNodeLogin,
onPressed: () {
if (kDebugMode) {
print('Login pressed');
}
buttonClicked();
},
child: Text(
'Login',
style: TextStyle(
color: Colors.black,
fontSize: 18,
),
),
),
),
Padding(
padding: const EdgeInsets.all(15.0),
child: Text(
// questions.elementAt(1),
questions[currentindex],
// here index and text come form the upper list
// of question and indexing start form the 0
style: TextStyle(color: Colors.white, fontSize: 18),
),
),
],
),
),
floatingActionButton: FloatingActionButton(
// When the button is pressed,
// give focus to the text field using myFocusNode.
onPressed: () => focusNodePassword.requestFocus(),
tooltip: 'Focus FocusNode Field',
child: const Icon(Icons.wb_sunny_sharp),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
```
### Logs
```
Jan 9 01:22:47 iPhone Runner(UIKitCore)[656] <Notice>: [Interface Orientation] was:Interface Unknown now:UIInterfaceOrientationPortrait reason:Using key window scene
Jan 9 01:22:47 iPhone UserEventAgent(com.apple.netsvcproxy)[29] <Notice>: File Handle Maintainer got a readable event on a file handle: Network Agent Registration socket (126) 499D9E61-7244-4625-BA7C-CFBD5A21426A 499D9E61-7244-4625-BA7C-CFBD5A21426A 0 (null) agent flags 0x1
Jan 9 01:22:47 iPhone Runner(UIKitCore)[656] <Notice>: Can't find keyplane that supports type 4 for keyboard iPhone-PortraitChoco-NumberPad; using 27303_PortraitChoco_iPhone-Simple-Pad_Default
Jan 9 01:22:47 iPhone kbd(TextInputCore)[448] <Notice>: -[TIKeyboardInputManagerLoader inputManagerForInputMode:withKeyboardState:class:] Reusing existing input manager for input mode <TIInputMode: 0x826e0f730; identifier = en_US>
Jan 9 01:22:47 iPhone kbd(DataDeliveryServices)[448] <Notice>: assetsForQuery: <query: com.apple.MobileAsset.LinguisticData, locO: 1, iO: 1, latO: 1, cO: 1, <filter: {
AssetLocale = "{(\134n en\134n)}";
}>> final result: (
) was cached: 1, cachedOnly: 1
Jan 9 01:22:47 iPhone mobileassetd(libmobileassetd.dylib)[109] <Notice>: -[DownloadInfo addNewRateDataPoint:]: Download has not progressed since last update, however, download appears to be complete. Previous Total Downloaded: 0, Total Expected: 0
Jan 9 01:22:47 iPhone kbd(DataDeliveryServices)[448] <Notice>: assetsForQuery: <query: com.apple.MobileAsset.LinguisticData, locO: 1, iO: 1, latO: 1, cO: 0, <filter: {
AssetLocale = "{(\134n en\134n)}";
AssetType = "{(\134n Delta\134n)}";
}>> final result: (
) was cached: 1, cachedOnly: 0
Jan 9 01:22:47 iPhone kbd(TextInputCore)[448] <Error>: updateSupplementalLexicons LM model is not valid yet
Jan 9 01:22:47 iPhone kbd(TextInputCore)[448] <Notice>: <private>
Jan 9 01:22:47 iPhone kbd[448] <Notice>: -[TIKeyboardInputManagerServer prepareForActivity] Preparing keyboard for activity
Jan 9 01:22:47 iPhone suggestd(PersonalizationPortraitInternals)[189] <Notice>: PPQuickTypeServer: warmUp
Jan 9 01:22:47 iPhone suggestd(PersonalizationPortraitInternals)[189] <Notice>: PPQuickTypeServer: warmUp
Jan 9 01:22:47 iPhone kbd(DataDeliveryServices)[448] <Notice>: assetsForQuery: <query: com.apple.MobileAsset.LinguisticData, locO: 1, iO: 1, latO: 1, cO: 0, <filter: {
AssetLocale = "{(\134n en\134n)}";
AssetType = "{(\134n Delta\134n)}";
}>> final result: (
) was cached: 1, cachedOnly: 0
```
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.0, on macOS 15.2 24C101 darwin-arm64, locale en-NL)
• Flutter version 3.27.0 on channel stable at
• Upstream repository
• Framework revision 8495dee1fd (4 weeks ago), 2024-12-10 14:23:39 -0800
• Engine revision 83bacfc525
• Dart version 3.6.0
• DevTools version 2.40.2
• No issues found!
```
</details>
| platform-ios,framework,f: integration_test,has reproducible steps,P1,team-ios,triaged-ios,found in release: 3.27,found in release: 3.28 | medium | Critical |
2,779,788,123 | langchain | MistralAIEmbeddings retry decorator catches wrong exception type for rate limiting | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_mistralai import MistralAIEmbeddings
embeddings = MistralAIEmbeddings(
    model="mistral-embed",
    mistral_api_key="your-api-key",
    wait_time=30  # Should be used for rate limit retries
)
```
Making multiple requests in quick succession triggers rate limits:
```python
for i in range(10):
    # When hitting rate limits, raises httpx.HTTPStatusError without retrying
    embeddings.embed_query("Test string")
```
### Error Message and Stack Trace (if applicable)
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
result = self._generate(
File "/usr/local/lib/python3.10/site-packages/langchain_mistralai/chat_models.py", line 545, in _generate
response = self.completion_with_retry(
File "/usr/local/lib/python3.10/site-packages/langchain_mistralai/chat_models.py", line 464, in completion_with_retry
rtn = _completion_with_retry(**kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain_mistralai/chat_models.py", line 461, in _completion_with_retry
_raise_on_error(response)
File "/usr/local/lib/python3.10/site-packages/langchain_mistralai/chat_models.py", line 170, in _raise_on_error
raise httpx.HTTPStatusError(
httpx.HTTPStatusError: Error response 429 while fetching https://api.mistral.ai/v1/chat/completions: {"message":"Requests rate limit exceeded"}
```
### Description
The `MistralAIEmbeddings` class currently uses a retry decorator that only catches `httpx.TimeoutException`, but according to the documentation, it should be handling rate limit (429) errors using the `wait_time` parameter.
## Current Behavior
- The retry decorator only catches `httpx.TimeoutException`
- When hitting rate limits (429 errors), the code raises an `httpx.HTTPStatusError` without retrying
- The `wait_time` parameter is documented as "The number of seconds to wait before retrying a request in case of 429 error" but isn't actually used for this purpose
## Expected Behavior
- The retry decorator should catch `httpx.HTTPStatusError` to handle 429 rate limit responses
- When receiving a 429 error, it should wait for `wait_time` seconds before retrying
- This matches the documented behavior of the `wait_time` parameter (see the sketch below)
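A minimal sketch of the retry behavior I would expect, using tenacity (already a dependency); the request helper below is illustrative only, not the actual `langchain_mistralai` internals:
```python
import httpx
from tenacity import retry, retry_if_exception, stop_after_attempt, wait_fixed

wait_time = 30  # seconds, mirroring the documented wait_time parameter

def _should_retry(exc: BaseException) -> bool:
    # Retry on timeouts (current behavior) and additionally on 429 responses.
    if isinstance(exc, httpx.TimeoutException):
        return True
    return isinstance(exc, httpx.HTTPStatusError) and exc.response.status_code == 429

@retry(retry=retry_if_exception(_should_retry),
       wait=wait_fixed(wait_time),
       stop=stop_after_attempt(5))
def embed_with_retry(client: httpx.Client, payload: dict) -> dict:
    # client is assumed to be configured with the Mistral base URL and API key.
    response = client.post("/v1/embeddings", json=payload)
    response.raise_for_status()  # raises httpx.HTTPStatusError on 429
    return response.json()
```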
### System Info
System Information
------------------
> OS: Linux
> OS Version: #49~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Nov 6 17:42:15 UTC 2
> Python Version: 3.10.16 (main, Dec 24 2024, 22:23:12) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_mistralai: 0.2.4
> langchain_ollama: 0.2.2
> langchain_qdrant: 0.2.0
> langchain_text_splitters: 0.3.5
> langchainhub: 0.1.21
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> fastembed: Installed. No version info available.
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> ollama: 0.4.5
> orjson: 3.10.13
> packaging: 24.2
> pydantic: 2.9.2
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> qdrant-client: 1.12.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tokenizers: 0.21.0
> types-requests: 2.32.0.20241016
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available. | 🤖:bug | low | Critical |
2,779,794,627 | react | [Compiler]: Extend Function outlining to object methods | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAgHgBwjALgAgDMoA7OXASwhPyjAQBUALCkgcwAoBKfYAHRq16AZQgBbBAAkIEANYd+g-Pgm4mEACYBGbrwHKD+GAlywahAIYAbegG59BgL4AaB8tXqNAJmT5dAXgA+PSUDY1MYc2s7N3wXWI9NAGZfYjJKaj8eRUNlcLMiaIR7UPjQxI0AFlTScioaCsqskNyjEwLLG2LYsuVHLhLHEEcgA
### Repro steps
```js
method() {
return false;
},
```
is not hoisted, while
```js
method: () => {
return false;
},
```
is correctly hoisted.
Seems to be related to the fact that the function is named?
This may or may not be the same as #31180?
### How often does this bug happen?
Every time
### What version of React are you using?
N/A
### What version of React Compiler are you using?
`19.0.0-beta-63e3235-20250105` | Type: Enhancement,Component: Optimizing Compiler | medium | Critical |
2,779,836,337 | next.js | PPR always loads the largest image dimension for prioritized images | ### Link to the code that reproduces this issue
https://github.com/mustafakareem040/simple-next/tree/main
### To Reproduce
Install the latest Next.js canary.
On the home page, add an LCP image and enable priority:
```jsx
import Image from "next/image";

export default function Home() {
  return (
    <Image src={"https://raw.githubusercontent.com/aframevr/sample-assets/refs/heads/master/assets/images/bricks/brick_bump.jpg"} alt={"Example image"} fill={true} sizes="(max-width: 768px) 50vw, 100vw" priority={true} />
  );
}
```
In next.config, enable PPR:
```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    ppr: true
  },
  images: {
    remotePatterns: [
      {
        hostname: "raw.githubusercontent.com",
        protocol: "https"
      }
    ]
  }
};

export default nextConfig;
```
pnpm run build
pnpm run start
Use Chrome or Safari in responsive mode. Firefox reproduces this issue regardless of whether PPR is enabled.
### Current vs. Expected behavior
With PPR, an additional image gets loaded on small devices:
<img width="1182" alt="image" src="https://github.com/user-attachments/assets/40dd06f2-4aac-4548-aaa6-c834d6cce3bd" />
Without PPR enabled, the issue doesn't exist:
<img width="1192" alt="image" src="https://github.com/user-attachments/assets/c84a22ca-4bde-483c-b5ff-e90442b23a09" />
### Provide environment information
```bash
MacOS M4 16GB RAM, 10 Cores CPU.
package.json:
{
"name": "nextapp",
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "next dev --turbopack",
"build": "next build",
"start": "next start",
"lint": "next lint"
},
"dependencies": {
"react": "^19.0.0",
"react-dom": "^19.0.0",
"next": "15.2.0-canary.3"
},
"devDependencies": {
"postcss": "^8",
"tailwindcss": "^3.4.1",
"eslint": "^9",
"eslint-config-next": "15.2.0-canary.3",
"@eslint/eslintrc": "^3"
}
}
```
### Which area(s) are affected? (Select all that apply)
Image (next/image), Partial Prerendering (PPR)
### Which stage(s) are affected? (Select all that apply)
next start (local)
### Additional context
_No response_ | Image (next/image),Partial Prerendering (PPR) | low | Minor |
2,779,838,239 | go | proposal: crypto/tls: support encrypt_then_mac extension | ### Go version
go version 1.23.3 X86_64/linux
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/tmpusr/.cache/go-build'
GOENV='/home/tmpusr/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/tmpusr/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/tmpusr/go'
GOPRIVATE=''
GOPROXY='https://goproxy.cn'
GOROOT='/media/vdc/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/media/vdc/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.3'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/tmpusr/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/media/vdc/MyProjects/GoProjects/tlstest/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build1584434915=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Golang 1.23.3, I write a simple https file downloading program:
```go
package main

import (
    "crypto/tls"
    "io"
    "log"
    "net/http"
    "os"
)

func main() {
    tlsConfig := &tls.Config{
        MinVersion:               tls.VersionTLS12,
        MaxVersion:               tls.VersionTLS12,
        NextProtos:               []string{"http/1.1"},
        CipherSuites:             []uint16{tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA},
        InsecureSkipVerify:       true,
        PreferServerCipherSuites: true,
    }
    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: tlsConfig,
        },
    }
    var url string
    if len(os.Args) < 2 {
        url = "https://127.0.0.1:14433/1k.txt"
    } else {
        url = os.Args[1]
    }
    resp, err := client.Get(url)
    if err != nil {
        log.Fatalf("get error: %v", err)
    }
    defer resp.Body.Close()
    file, err := os.Create("1m.file")
    if err != nil {
        log.Fatalf("crete error: %v", err)
    }
    defer file.Close()
    _, err = io.Copy(file, resp.Body)
    if err != nil {
        log.Fatalf("write error %v", err)
    }
    log.Println("all success")
}
```
### What did you see happen?
The packets sent by the client.Get call are as follows:

There is no encrypt_then_mac extension in the TLS Client Hello packet; the HMAC mode is MAC-then-encrypt.
The Go crypto/tls API does not support the encrypt_then_mac extension.
Here is the description of encrypt_then_mac in RFC 7366:
The use of encrypt-then-MAC is negotiated via TLS/DTLS extensions as defined in TLS [2]. On connecting, the client includes the encrypt_then_mac extension in its client_hello if it wishes to use encrypt-then-MAC rather than the default MAC-then-encrypt. If the server is capable of meeting this requirement, it responds with an encrypt_then_mac in its server_hello. The "extension_type" value for this extension SHALL be 22 (0x16), and the "extension_data" field of this extension SHALL be empty. The client and server MUST NOT use encrypt-then-MAC unless both sides have successfully exchanged encrypt_then_mac extensions.
### What did you expect to see?
crypto/tls/handshake_messages.go: clientHelloMsg.marshalMsg and unmarshal should support encrypt_then_mac.
2,779,857,095 | opencv | Problems with Linking avif in Visual Studio | ### System Information
OpenCV version: 4.12.0 (d12fa37)
Operating System / Platform: Windows 11 24H2 26100.2605
Compiler & compiler version: Visual Studio 2022 (v143)
### Detailed description
I cloned the code with `git clone`, generated the `.sln` file directly in PowerShell via `cmake ...`, and then noticed while building that the `opencv_imgcodecs` module fails to compile because the avif library could not be read.

I realized that `avif.dll` is incorrectly listed in the dependencies, where the correct entry should be `avif.lib`. This file is found by `OpenCVFindAVIF.cmake`. I don't know if this has something to do with my environment variable configuration or if this is a CMake bug.

## Related Code (where `AVIF_LIBRARY` is set):
https://github.com/opencv/opencv/blob/d12fa37eed83c755e1cea2b0f54507133375ef65/cmake/OpenCVFindAVIF.cmake#L13-L32
### Steps to reproduce
```
git clone https://github.com/opencv/opencv.git
cd opencv
mkdir build
cd build
cmake ..
```
open `opencv.sln`
build opencv_imgcodecs
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: build/install,category: imgcodecs | low | Critical |
2,779,858,443 | material-ui | Display glitch when using multiline in TextField | ### Steps to reproduce
Steps:
1. Open this link to live example: https://mui.com/material-ui/react-text-field/#multiline with iOS simulator Version 16.0 (1038)
2. Click into the first example, so that the TextField get's the focus
3. Click outside so that the TextField loses focus
### Current behavior
## With focus

The label "~Multiline~" is striken through.
## Without focus

There's a gap now.
### Expected behavior
## With focus

## Without focus

### Context
_No response_
### Your environment
MUI v.6.3.1
iOS simulator Version 16.0 (1038) with Mobile Safari 18
**Search keywords**: TextField, multiline | bug 🐛,component: text field,browser: Safari | low | Minor |
2,779,934,855 | node | Test runner matching every .ts and .js if glob is not provided | ### Version
Test on v23.6.0 and v22.10.0
### Platform
```text
All
```
### Subsystem
test_runner
### What steps will reproduce the bug?
Consider this folder structure:
```
└── test
├── fixtures
│ ├── boom.js
│ └── boom.ts
└── index.test.js
```
When running `node --test` the test runner will execute `./test/fixtures/boom.js`.
In v23 it will also execute `./test/fixtures/boom.ts`, since `--experimental-strip-types` has been unflagged.
<details>
<summary>Output</summary>

```console
marcoippolito@marcos-MacBook-Pro test % node --test
/Users/marcoippolito/Documents/projects/test/test/fixtures/boom.js:1
throw new Error('boom');
^
Error: boom
at Object.<anonymous> (/Users/marcoippolito/Documents/projects/test/test/fixtures/boom.js:1:7)
at Module._compile (node:internal/modules/cjs/loader:1739:14)
at Object..js (node:internal/modules/cjs/loader:1904:10)
at Module.load (node:internal/modules/cjs/loader:1473:32)
at Function._load (node:internal/modules/cjs/loader:1285:12)
at TracingChannel.traceSync (node:diagnostics_channel:322:14)
at wrapModuleLoad (node:internal/modules/cjs/loader:234:24)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:151:5)
at node:internal/main/run_main_module:33:47
Node.js v23.6.0
✖ test/fixtures/boom.js (36.07275ms)
'test failed'
/Users/marcoippolito/Documents/projects/test/test/fixtures/boom.ts:1
throw new Error('boom TS');
^
Error: boom TS
at Object.<anonymous> (/Users/marcoippolito/Documents/projects/test/test/fixtures/boom.ts:1:7)
at Module._compile (node:internal/modules/cjs/loader:1739:14)
at Object.loadTS [as .ts] (node:internal/modules/cjs/loader:1831:10)
at Module.load (node:internal/modules/cjs/loader:1473:32)
at Function._load (node:internal/modules/cjs/loader:1285:12)
at TracingChannel.traceSync (node:diagnostics_channel:322:14)
at wrapModuleLoad (node:internal/modules/cjs/loader:234:24)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:151:5)
at node:internal/main/run_main_module:33:47
Node.js v23.6.0
✖ test/fixtures/boom.ts (62.136209ms)
'test failed'
✔ should return true (0.3725ms)
ℹ tests 3
ℹ suites 0
ℹ pass 1
ℹ fail 2
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 72.075583
✖ failing tests:
test at test/fixtures/boom.js:1:1
✖ test/fixtures/boom.js (36.07275ms)
'test failed'
test at test/fixtures/boom.ts:1:1
✖ test/fixtures/boom.ts (62.136209ms)
'test failed'
```

</details>
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior? Why is that the expected behavior?
Maybe this is intended behavior, but I'd expect matching `.test.js`.
I think it's an undesired side effect to execute everything.
I imagine breakages due to a lot of `.ts` fixtures being executed.
### What do you see instead?
Everything is executed.
### Additional information
I know changing this would be a breaking change, but I don't think it's sane as it is.
2,779,949,723 | go | google.golang.org/protobuf: TestIntegration/Go1.22.6/LazyDecoding failures | ```
#!watchflakes
default <- pkg == "google.golang.org/protobuf" && test == "TestIntegration/Go1.22.6/LazyDecoding"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8726229609822898209)):
=== RUN TestIntegration/Go1.22.6/LazyDecoding
integration_test.go:148: executing (go1.22.6 test ./proto -test_lazy_unmarshal): exit status 1
--- FAIL: TestHasExtensionNoAlloc (0.02s)
--- FAIL: TestHasExtensionNoAlloc/Lazy (0.00s)
extension_test.go:156: proto.HasExtension should not allocate, but allocated 1.00x per run
FAIL
FAIL google.golang.org/protobuf/proto 0.752s
FAIL
--- FAIL: TestIntegration/Go1.22.6/LazyDecoding (33.34s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,779,964,008 | node | net module ->blocklist->clear method | ### What is the problem this feature will solve?
In the net module, BlockList has methods to add IPs but no way to clear them.
### What is the feature you are proposing to solve the problem?
Add `blockList.removeAddress`, `blockList.removeRange`, and a "clear all" method to the net module.
Currently it only has `blockList.addAddress`,
`blockList.addRange`, and
`blockList.addSubnet`.
### What alternatives have you considered?
_No response_ | feature request | low | Minor |
2,779,991,550 | svelte | append_styles(...) --> Cannot read properties of null (reading 'querySelector') | ### Describe the bug
I get some random `Cannot read properties of null (reading 'querySelector')` errors pointing to the function `append_styles(anchor, css)` in `css.js`.
```
css.js:20 Uncaught TypeError: Cannot read properties of null (reading 'querySelector')
at Array.<anonymous> (css.js:20:15)
at run_all (utils.js:44:8)
at process_micro_tasks (task.js:21:2)
```
### Reproduction
I am afraid I did not manage to reproduce this outside my app, and the part it occurs in is something rather special (recursive and dynamic components). But it seems to occur if the `svelte:component` changes to another component.
### Logs
```shell
css.js:20 Uncaught TypeError: Cannot read properties of null (reading 'querySelector')
at Array.<anonymous> (css.js:20:15)
at run_all (utils.js:44:8)
at process_micro_tasks (task.js:21:2)
(anonymous) @ css.js:20
run_all @ utils.js:44
process_micro_tasks @ task.js:21
```
### System Info
```shell
System:
OS: Linux 6.12 Void
CPU: (12) x64 AMD Ryzen 5 3600X 6-Core Processor
Memory: 47.80 GB / 62.73 GB
Container: Yes
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.16.0 - /usr/bin/node
Yarn: 1.22.19 - /usr/bin/yarn
npm: 10.8.1 - /usr/bin/npm
pnpm: 9.14.2 - /usr/bin/pnpm
Browsers:
Brave Browser: 121.1.62.153
Chromium: 131.0.6778.85
npmPackages:
svelte: ^5.10.1 => 5.17.1
```
### Severity
Prevents upgrade to Svelte 5 | awaiting submitter | low | Critical |
2,779,996,094 | flutter | [camera] Error: The method 'CameraPreview' isn't defined for the class | ### Steps to reproduce
1. Create a Flutter application.
2. Use the camera: ^0.11.0+2 plugin for launching the camera.
3. When trying to build the application, it reports that CameraPreview is not found.
### Actual results
CameraPreview already exists in camera: ^0.11.0+2, but now the build reports that it is not found / not part of this plugin.
### Logs
<details open>
<summary>Logs</summary>
```console
lib/screens/Verification/faceverification_mode_screen.dart:244:45: Error: The method 'CameraPreview' isn't defined for the class '_FaceDetectionScreenState'.
- '_FaceDetectionScreenState' is from 'package:core/screens/Verification/faceverification_mode_screen.dart' ('lib/screens/Verification/faceverification_mode_screen.dart').
Try correcting the name to the name of an existing method, or defining a method named 'CameraPreview'.
if (_capturedImage == null) CameraPreview(_cameraController!),
^^^^^^^^^^^^^
lib/screens/Verification/fingerverification_screen.dart:94:20: Error: The method 'CameraPreview' isn't defined for the class '_FingerVerificationScreenState'.
- '_FingerVerificationScreenState' is from 'package:core/screens/Verification/fingerverification_screen.dart' ('lib/screens/Verification/fingerverification_screen.dart').
Try correcting the name to the name of an existing method, or defining a method named 'CameraPreview'.
child: CameraPreview(_cameraController!),
^^^^^^^^^^^^^
../../../AppData/Local/Pub/Cache/hosted/pub.dev/qr_code_dart_scan-0.9.3/lib/src/qr_code_dart_scan_view.dart:197:24: Error: The method 'CameraPreview' isn't defined for the class 'QRCodeDartScanViewState'.
- 'QRCodeDartScanViewState' is from 'package:qr_code_dart_scan/src/qr_code_dart_scan_view.dart' ('../../../AppData/Local/Pub/Cache/hosted/pub.dev/qr_code_dart_scan-0.9.3/lib/src/qr_code_dart_scan_view.dart').
Try correcting the name to the name of an existing method, or defining a method named 'CameraPreview'.
child: CameraPreview(
^^^^^^^^^^^^^
../../../AppData/Local/Pub/Cache/hosted/pub.dev/camera-0.11.0+2/lib/src/camera_preview.dart:96:7: Error: No named parameter with the name 'inputImageData'.
inputImageData: InputImageData(
^^^^^^^^^^^^^^
../../../AppData/Local/Pub/Cache/hosted/pub.dev/google_mlkit_commons-0.9.0/lib/src/input_image.dart:32:11: Context: Found this candidate, but the arguments don't match.
factory InputImage.fromBytes(
^^^^^^^^^
../../../AppData/Local/Pub/Cache/hosted/pub.dev/camera-0.11.0+2/lib/src/camera_preview.dart:87:14: Error: The method 'InputImagePlaneMetadata' isn't defined for the class '_FaceDetectionScreenState'.
- '_FaceDetectionScreenState' is from 'package:camera/src/camera_preview.dart' ('../../../AppData/Local/Pub/Cache/hosted/pub.dev/camera-0.11.0+2/lib/src/camera_preview.dart').
Try correcting the name to the name of an existing method, or defining a method named 'InputImagePlaneMetadata'.
return InputImagePlaneMetadata(
^^^^^^^^^^^^^^^^^^^^^^^
../../../AppData/Local/Pub/Cache/hosted/pub.dev/camera-0.11.0+2/lib/src/camera_preview.dart:130:17: Error: The method 'CameraPreview' isn't defined for the class '_FaceDetectionScreenState'.
- '_FaceDetectionScreenState' is from 'package:camera/src/camera_preview.dart' ('../../../AppData/Local/Pub/Cache/hosted/pub.dev/camera-0.11.0+2/lib/src/camera_preview.dart').
Try correcting the name to the name of an existing method, or defining a method named 'CameraPreview'.
CameraPreview(_cameraController),
^^^^^^^^^^^^^
../../../AppData/Local/Pub/Cache/hosted/pub.dev/face_camera-0.1.4/lib/src/smart_face_camera.dart:222:14: Error: The method 'CameraPreview' isn't defined for the class '_SmartFaceCameraState'.
- '_SmartFaceCameraState' is from 'package:face_camera/src/smart_face_camera.dart' ('../../../AppData/Local/Pub/Cache/hosted/pub.dev/face_camera-0.1.4/lib/src/smart_face_camera.dart').
Try correcting the name to the name of an existing method, or defining a method named 'CameraPreview'.
return CameraPreview(cameraController, child: Builder(builder: (context) {
^^^^^^^^^^^^^
Target kernel_snapshot_program failed: Exception
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:compileFlutterBuildDebug'.
> Process 'command 'C:\Users\venka\Downloads\flutter_windows_3.22.2-stable\flutter\bin\flutter.bat'' finished with non-zero exit value 1
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
```
</details>
### Flutter Doctor output
<details open>
<summary>Doctor output</summary>
```console
PS C:\Users\venka\BNPRS clone\bpr1006.uidpay.main\core> flutter doctor -v
[√] Flutter (Channel stable, 3.24.6-0.0.pre.29, on Microsoft Windows [Version 10.0.22631.4602], locale en-IN)
• Flutter version 3.24.6-0.0.pre.29 on channel stable at C:\Users\venka\Downloads\flutter_windows_3.22.2-stable\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision f2e5486704 (4 weeks ago), 2024-12-11 18:02:08 +0530
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\venka\AppData\Local\Android\sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.11.6)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.11.35431.28
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
[√] IntelliJ IDEA Community Edition (version 2024.3)
• IntelliJ at C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2024.3
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin version 243.21565.120
[√] VS Code (version 1.96.2)
• VS Code at C:\Users\venka\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (4 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 15 (API 35) (emulator)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4602]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.112
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| waiting for customer response,in triage | low | Critical |
2,780,057,037 | vscode | Close running code tabs through a single place / button |
### Close button which closes the running tabs through a single location
- Why do we need it? Sometimes the tabs are different sizes and we may have opened, for example, 17 tabs.
- To close them, we click the close button over and over, expecting all tabs to be closed.
- But no, they only close one at a time, which is frustrating.
- A separate button that closes all the tabs one after another with a single click would be an awesome feature.
- Please add this feature 😟😞
### Here is how it currently looks
https://github.com/user-attachments/assets/e5a56b3d-4508-45a9-b3ce-51548314766e
### Here is how it should be (from Notepad++)
https://github.com/user-attachments/assets/b78906b7-66ea-43de-a361-ec0452603d13
| feature-request,workbench-tabs | low | Minor |
2,780,063,268 | vscode | Quick input - explore changes to border/shadow/backdrop/overlay | Here is a screenshot where it is difficult to "focus visually" on the quick input. Things that we could explore:
1. Tweak the border/shadow of the widget so that it stands out
2. Use a backdrop/overlay to dim the workbench behind the input
<img width="1826" alt="Image" src="https://github.com/user-attachments/assets/51728468-ee9b-42c1-bfc9-ab96de6f853a" /> | quick-pick,under-discussion | low | Minor |
2,780,091,411 | vscode | Language model not available |
Type: <b>Bug</b>
I just subscribed to Copilot but cannot use it in VS Code. It just says "language model not available"
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i5-1145G7 @ 2.60GHz (8 x 1498)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.73GB (2.79GB free)|
|Process Argv|--crash-reporter-id 604e9176-5864-4f53-a1d6-01b94bcdd7f0|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (14)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-openapi|42C|4.31.0
pascal-formatter|ale|2.9.0
ng-template|Ang|19.0.3
pascal-language-basics|Ans|0.1.12
vscode-intelephense-client|bme|1.12.6
openedge-abl|chr|1.3.0
xml|Dot|2.5.1
framework7|ETT|0.0.3
vscode-docker|ms-|1.29.3
csharp|ms-|2.55.29
vscode-dotnet-runtime|ms-|2.2.3
remote-containers|ms-|0.394.0
remote-wsl|ms-|0.88.5
vscode-yaml|red|1.15.0
</details>
<!-- generated by issue reporter --> | triage-needed,stale | low | Critical |
2,780,097,876 | flutter | Render issues when displaying bordered/outlined text | ### Steps to reproduce
Using TextStyle with higher strokeWidth values in foreground Paint. Here is the docs link with a sample https://api.flutter.dev/flutter/painting/TextStyle-class.html#painting.TextStyle.6
My code expands on it a bit to test all uppercase characters and sets the Roboto font for consistency across platforms. It also increases the strokeWidth to trigger this "spike render" bug. On some fonts it's more noticeable than on others. It mostly affects the A, M, N, and V characters.
1. Create a new flutter project
2. Add Google fonts package to get the same font on iOS and Android https://pub.dev/packages/google_fonts
3. Run the sample code on Android and/or iOS physical device or simulator/emulator. The result is always the same.
### Expected results
Should render letters without spikes. Here is an edited image of expected results.
<img src="https://github.com/user-attachments/assets/60757c09-94d7-492c-aace-30870448d8a8" width ="300">
### Actual results
Renders spikes on the A, M, N, and V letters. The screenshots are from an Android Pixel 8 and from the iPhone 16 simulator.
Pixel 8 | Simulator Iphone 16
:-------------------------:|:-------------------------:
 | 
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:google_fonts/google_fonts.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
String sampleText = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
return MaterialApp(
title: 'Outline Text',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: Scaffold(
body: Center(
child: Stack(
children: <Widget>[
// Stroked text as border.
Text(
sampleText,
style: GoogleFonts.roboto(
fontSize: 40,
foreground: Paint()
..style = PaintingStyle.stroke
..strokeWidth = 10
..color = Colors.blue[700]!,
),
),
// Solid text as fill.
Text(
sampleText,
style: GoogleFonts.roboto(
fontSize: 40,
color: Colors.grey[300],
),
),
],
)),
),
);
}
}
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
PS C:\Users\Rok\FlutterProjects\Test\outlinetext> flutter doctor -v
[√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.22631.4602], locale en-SI)
• Flutter version 3.27.1 on channel stable at C:\Users\Rok\Flutter SDK\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0-rc2)
• Android SDK at C:\Users\Rok\AppData\Local\Android\sdk
• Platform android-35, build-tools 35.0.0-rc2
• Java binary at: C:\Users\Rok\JDK\jdk-17.0.13+11\bin\java
• Java version OpenJDK Runtime Environment Temurin-17.0.13+11 (build 17.0.13+11)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[X] Visual Studio - develop Windows apps
X Visual Studio not installed; this is necessary to develop Windows apps.
Download at https://visualstudio.microsoft.com/downloads/.
Please install the "Desktop development with C++" workload, including all of its default components
[√] Android Studio (version 2024.2)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[√] VS Code (version 1.95.2)
• VS Code at C:\Users\Rok\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.100.0
[√] Connected device (4 available)
• Pixel 8 (mobile) • 37261FDJH00C99 • android-arm64 • Android 14 (API 34)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4602]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
• Edge (web) • edge • web-javascript • Microsoft Edge 125.0.2535.85
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| engine,a: typography,c: rendering,has reproducible steps,P2,team-engine,triaged-engine,found in release: 3.27,found in release: 3.28 | low | Critical |
2,780,101,130 | PowerToys | could you add slope calculation in ruler section? | ### Description of the new feature / enhancement
Could you add slope calculation to the ruler section?
### Scenario when this would be used?
na
### Supporting information
na | Needs-Triage | low | Minor |
2,780,107,204 | ui | [bug]: Disabled Slider is Not Styled or Supported by Label Properly | ### Describe the bug
The `Slider` Component carries incorrect TailwindCSS classes to support the @radix-ui/react-slider implementation, because the implementation (correctly) doesn't utilize a `disabled` attribute (those are only available to INPUTs, not SPANs, as implemented).
The Slider generator currently uses `disabled:` TailwindCSS classes which fail to match because the disabled attribute isn't utilized. Rather, it should be using `aria-disabled:` classes on the `SliderPrimitive.Root`, as the root element is flagged with `aria-disabled="true"`. If the Slider handle is intended to be targeted (`disabled:pointer-events-none`), the `SliderPrimitive.Thumb` should instead carry the `data-[disabled]:pointer-events-none` TailwindCSS class, as that element uses `data-disabled`.
Additionally, the `Label` Component fails to recognize and visually represent the disabled state of Slider, because of the lack of `disabled` attribute. I don't have a great suggestion here. Ideally, the `Label` would be extended to included a `peer-aria-disabled`, but that becomes based on DOM positioning and the Slider carrying a `peer` class which cannot be assumed. 🫤
### Affected component/components
Slider
### How to reproduce
1. Generate a new Slider component
2. Add the Slider component to a page
3. Disable the Slider component
4. Note there is no visual differentiation between disabled or enabled, the cursor doesn't change, and pointer events still trigger.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
N/A
```
### System Info
```bash
N/A
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,780,111,445 | react-native | Archive broken on Xcode 16 for React Native 0.75 | ### Description
I have a production app built with React Native whose iOS archive is broken after an upgrade. This is a bare React Native project (not an Expo project). I upgraded the following:
react-native version to 0.75.4
Xcode to v16
So I'm not really sure which of those upgrades is causing the problem.
Running the app in debug mode works OK; however, when archiving (in order to publish a new version of the app) it throws the error 'EMFILE too many files open', and looking at the full logs I see that it comes from Metro. That confuses me because, as far as I know, Metro should not be running as part of the archive process, but this is fired from the React Native scripts.
Any ideas?
I have tried to fix that Metro error by removing the line -reset-cache from react-native-xcode.sh and now it breaks in a different place. But I don't think touching the react native scripts is the way to go
### Steps to reproduce
1. Build the archive
2. Install the IPA on an iPhone
3. The app crashes on the launch screen
### React Native Version
0.75.4
### Affected Platforms
Runtime - iOS, Build - MacOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
CPU: (8) arm64 Apple M2
Memory: 104.44 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 21.7.1
path: /usr/local/bin/node
Yarn: Not Found
npm:
version: 10.8.1
path: /usr/local/bin/npm
Watchman:
version: 2024.03.25.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/raghunaik/.rubies/ruby-2.6.10/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12550806
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /Users/raghunaik/.rubies/ruby-2.6.10/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.4
wanted: 0.75.4
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
``` | Needs: Author Feedback,Needs: Repro | low | Critical |
2,780,112,775 | react-native | RTL FlatList horizontal Layout Pressable Version 0.76.6 issues | ### Description
I faced an issue with the latest React Native build on **a physical Android phone, not an emulator**. If we render the data using a FlatList in horizontal mode and the layout is LTR, the Pressable works perfectly; however, if the layout is RTL, the Pressable does not register the press and needs multiple touches to respond.
### Steps to reproduce
When the app is built on a physical Android phone, a FlatList in horizontal mode won't respond when a Pressable is applied.
### React Native Version
0.76.6
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 15.2
CPU: (16) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Memory: 2.92 GB / 16.00 GB
Shell:
version: 3.2.57
path: /bin/bash
Binaries:
Node:
version: 20.13.1
path: /usr/local/bin/node
Yarn:
version: 4.5.3
path: /usr/local/bin/yarn
npm:
version: 10.9.0
path: /usr/local/bin/npm
Watchman:
version: 2024.11.18.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK:
API Levels:
- "26"
- "28"
- "29"
- "30"
- "31"
- "33"
- "34"
- "35"
Build Tools:
- 19.1.0
- 20.0.0
- 21.1.2
- 22.0.1
- 23.0.1
- 23.0.2
- 23.0.3
- 24.0.0
- 24.0.1
- 24.0.2
- 24.0.3
- 25.0.0
- 25.0.1
- 25.0.2
- 25.0.3
- 26.0.0
- 26.0.1
- 26.0.2
- 26.0.3
- 27.0.0
- 27.0.1
- 27.0.2
- 27.0.3
- 28.0.0
- 28.0.1
- 28.0.2
- 28.0.3
- 29.0.0
- 29.0.1
- 29.0.2
- 29.0.3
- 30.0.0
- 30.0.1
- 30.0.2
- 30.0.3
- 31.0.0
- 32.0.0
- 32.1.0
- 33.0.0
- 33.0.1
- 33.0.2
- 33.0.3
- 34.0.0
- 34.0.0
- 34.0.0
- 34.0.0
- 35.0.0
- 35.0.0
- 35.0.0
- 35.0.0
- 35.0.0
System Images:
- android-27 | Google Play Intel x86 Atom
- android-28 | Intel x86 Atom_64
- android-28 | Google Play Intel x86 Atom
- android-29 | Google Play Intel x86 Atom
- android-30 | Google APIs Intel x86 Atom
- android-30 | Google Play Intel x86 Atom
- android-33 | Intel x86_64 Atom
- android-34 | Intel x86_64 Atom
- android-34 | Google Play Intel x86_64 Atom
- android-35 | Intel x86_64 Atom
- android-35 | Google Play Intel x86_64 Atom
- android-35 | Pre-Release 16 KB Page Size Google Play ARM Intel x86_64
Atom
Android NDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12700392
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 3.3.6
path: /usr/local/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.6
wanted: 0.76.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
.
```
### Reproducer
https://github.com/react-native-community/reproducer-react-native
### Screenshots and Videos
https://github.com/user-attachments/assets/6b197a78-f22f-48f4-8796-6eaa6c014564
| Resolution: Fixed,Component: FlatList,Needs: Repro | low | Minor |
2,780,126,026 | TypeScript | 'identity' modifier to indicate a function's parameter-less returns should be narrowed like a value | ### 🔍 Search Terms
indicate function returns identical value cached memoized signals
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Many apps today are built on the concept of using small "getter" functions as wrappers around values. For example, _Signals_ as implemented in [Angular](https://angular.dev/guide/signals "Angular Signals"), [Solid](https://docs.solidjs.com/concepts/signals "Solid Signals"), and the stage 1 [TC39 Signals proposal](https://github.com/tc39/proposal-signals) often look something like:
```ts
declare const value: () => string | undefined;
if (value() !== undefined) {
console.log(value().toUpperCase());
}
```
Signals users have struggled with using them in TypeScript because, at present, there isn't a way to get that code block to type check without type errors. Signals users know that the result of `value()` must be `string` inside the `if`, but TypeScript doesn't have a way to note that the result should be type narrowed. Common workarounds today include `!`, `?.`, and refactoring to store intermediate values. All of which are at best unnecessary verbosity, and at worst conflict with frameworks.
Request: can we have a keyword -or, failing that, built-in / intrinsic type- to indicate that _calls to a function produce a referentially equal, structurally unchanging value_? In other words, that the function call (`value()`) should be treated by type narrowing as if it was just a variable reference (`value`)?
Proposal: how about an **`identity`** modifier keyword for function types that goes before the `()`? It would be treated in syntax space similarly to other modifier keywords such as `abstract` and `readonly`.
### 📃 Motivating Example
When an `identity` function is called, it is given the same type narrowing as variables. Code like this would now type check without type errors, as if `value` was declared as `const value: string | undefined`:
```ts
declare const value: identity () => string | undefined;
if (value() !== undefined) {
value();
// Before: string | undefined
// Now: string
console.log(value().toUpperCase());
// Before: ~~~~~~~ Object is possibly 'undefined'.
// Now: ✅
}
```
Narrowing would be cleared the same as variables when, say, a new closure/scope can't be guaranteed to preserve narrowing:
```ts
declare const value: identity () => string | undefined;
if (value() !== undefined) {
setTimeout(() => {
value();
// Still: string | undefined
console.log(value().toUpperCase());
// Still: ~~~~~~~ Object is possibly 'undefined'.
});
}
```
### 💻 Use Cases
One difficult-to-answer design question is: how could `identity` handle functions with parameters? I propose the modifier not be allowed on function signatures with parameters to start. It should produce a type error for now. The vast majority of Signals users wouldn't need signatures with parameters, so I don't think solidifying that needs to block this proposal. IMO that can always be worked on later.
Furthermore, it's common for frameworks to set up functions with a parameter-less "getter" signature and a single-parameter "setter" signature. I propose for an initial version of the feature, calling any other methods or setting to any properties on the type should clear type narrowing:
```ts
declare const value: {
identity (): string | undefined;
(newValue: string | undefined): void;
}
if (value() !== undefined) {
value("...");
value();
// Still: string | undefined
console.log(value().toUpperCase());
// Still: ~~~~~~~ Object is possibly 'undefined'.
}
```
More details on the difficulties of signals with TypeScript:
* Angular: https://github.com/angular/angular/issues/49161
* Solid: https://github.com/solidjs/solid/discussions/1575
* Also on Solid: https://github.com/microsoft/TypeScript/issues/57725#issuecomment-2573624234
If a new modifier keyword isn't palatable, a fallback proposal could be a built-in type like `Identity<T>`. This wouldn't be a new utility type ([FAQ: no new utility types](https://github.com/microsoft/typescript/wiki/faq#new-utility-types)); it'd be closer to the [built-in template string manipulation types](https://www.typescriptlang.org/docs/handbook/2/template-literal-types.html#intrinsic-string-manipulation-types). | Needs More Info | high | Critical |
2,780,132,106 | deno | Deno allow-read doesn't work for / (root directory) if a deny-read parameter is set. | Version: Deno 2.1.5 (also tested on 1.44.4, 1.46.4, 2.1.2)
OS: MacOS 15.2 (also tested on linux)
When allow-read is used in combination with deny-read, it doesn't allow access to the root directory.
## Test file
```ts
import * as fs from "node:fs";
console.log(fs.statSync("/tmp").mtime);
console.log(fs.statSync("/").mtime);
```
## Expected output
`$ deno run --allow-read access.ts`
```
2025-01-10T03:19:51.822Z
2024-12-07T08:11:55.000Z
```
## Output with bug
`$ deno run --allow-read --deny-read=/mnt access.ts`
```
2025-01-10T03:19:51.822Z
error: Uncaught (in promise) NotCapable: Requires read access to "/", run again with the --allow-read flag
console.log(fs.statSync("/").mtime);
^
at Object.statSync (ext:deno_fs/30_fs.js:415:3)
at Module.statSync (ext:deno_node/_fs/_fs_stat.ts:163:25)
at file:///Users/matheusgr/access.ts:4:16
```
Setting allow-read to root directory also fails.
---
We caught this issue due to the latest browserslist update (4.24.4). It may stat the root directory and fail due to some deny-read configuration that we use. | bug,node compat | low | Critical |
2,780,133,136 | tensorflow | Tensorflow.math.floormod() | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The operation tf.math.floormod supports float types, but when performing the operation on two float-type tensors with GPU, an internal error occurs.
```python
import tensorflow as tf
x = tf.constant([10, -15, 7.5], dtype=tf.float32)
y = tf.constant([3, -4, 2.5], dtype=tf.float32)
name = "random_floormod_operation"
result_code = tf.math.floormod(x,y,name)
print("!!!!!!!!!!!!!!!!!!!!!!!!!!!")
print(result_code)
```
2025-01-10 09:34:03.279577: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2025-01-10 09:34:03.294385: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2025-01-10 09:34:03.312397: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2025-01-10 09:34:03.317811: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-10 09:34:03.330916: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-10 09:34:04.370985: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2025-01-10 09:34:05.819237: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 513 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:38:00.0, compute capability: 8.6
2025-01-10 09:34:05.819757: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 22463 MB memory: -> device: 1, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:41:00.0, compute capability: 8.6
2025-01-10 09:34:05.820176: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 22463 MB memory: -> device: 2, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:44:00.0, compute capability: 8.6
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1736501646.122604 70797 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.124987 70795 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.127360 70789 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.129708 70796 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.132058 70788 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.135845 70801 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.137479 70799 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.139073 70793 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.140713 70787 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.142340 70802 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.143637 70786 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.144931 70783 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
W0000 00:00:1736501646.159628 70796 gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
2025-01-10 09:34:06.630804: W tensorflow/compiler/mlir/tools/kernel_gen/tf_gpu_runtime_wrappers.cc:40] 'cuModuleLoadData(&module, data)' failed with 'CUDA_ERROR_UNSUPPORTED_PTX_VERSION'
2025-01-10 09:34:06.630843: W tensorflow/compiler/mlir/tools/kernel_gen/tf_gpu_runtime_wrappers.cc:40] 'cuModuleGetFunction(&function, module, kernel_name)' failed with 'CUDA_ERROR_INVALID_HANDLE'
2025-01-10 09:34:06.630870: W tensorflow/core/framework/op_kernel.cc:1828] INTERNAL: 'cuLaunchKernel(function, gridX, gridY, gridZ, blockX, blockY, blockZ, 0, reinterpret_cast(stream), params, nullptr)' failed with 'CUDA_ERROR_INVALID_HANDLE'
2025-01-10 09:34:06.630894: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: INTERNAL: 'cuLaunchKernel(function, gridX, gridY, gridZ, blockX, blockY, blockZ, 0, reinterpret_cast(stream), params, nullptr)' failed with 'CUDA_ERROR_INVALID_HANDLE'
Traceback (most recent call last):
File "/root/myFuzzer/outputs/test5/code/tensorflow.math.floormod/tensorflow.math.floormod38.py", line 5, in
result_code = tf.math.floormod(x,y,name)
File "/root/miniconda3/envs/fuzz4all/lib/python3.10/site-packages/tensorflow/python/ops/weak_tensor_ops.py", line 142, in wrapper
return op(*args, kwargs)
File "/root/miniconda3/envs/fuzz4all/lib/python3.10/site-packages/tensorflow/python/ops/gen_math_ops.py", line 4177, in floor_mod
_ops.raise_from_not_ok_status(e, name)
File "/root/miniconda3/envs/fuzz4all/lib/python3.10/site-packages/tensorflow/python/framework/ops.py", line 5983, in raise_from_not_ok_status
raise core._status_to_exception(e) from None # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InternalError: {{function_node _wrapped__FloorMod_device/job:localhost/replica:0/task:0/device:GPU:0}} 'cuLaunchKernel(function, gridX, gridY, gridZ, blockX, blockY, blockZ, 0, reinterpret_cast(stream), params, nullptr)' failed with 'CUDA_ERROR_INVALID_HANDLE' [Op:FloorMod] name: random_floormod_operation
The operation runs normally under integer types with GPU.
```python
import tensorflow as tf
x = tf.constant([10, -15, 7], dtype=tf.int32)
y = tf.constant([3, -4, 2], dtype=tf.int32)
name = "random_floormod_operation"
result_code = tf.math.floormod(x,y,name)
print("!!!!!!!!!!!!!!!!!!!!!!!!!!!")
print(result_code)
```
2025-01-10 09:38:07.541149: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2025-01-10 09:38:07.559247: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2025-01-10 09:38:07.564692: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-10 09:38:07.577837: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-10 09:38:08.635137: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2025-01-10 09:38:10.256193: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 513 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:38:00.0, compute capability: 8.6
2025-01-10 09:38:10.256732: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 22463 MB memory: -> device: 1, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:41:00.0, compute capability: 8.6
2025-01-10 09:38:10.257179: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 22463 MB memory: -> device: 2, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:44:00.0, compute capability: 8.6
!!!!!!!!!!!!!!!!!!!!!!!!!!!
tf.Tensor([ 1 -3 1], shape=(3,), dtype=int32)
The float type also works correctly on the CPU.
```python
import tensorflow as tf
x = tf.constant([10, -15, 7.8], dtype=tf.float32)
y = tf.constant([3, -4, 2.5], dtype=tf.float32)
name = "random_floormod_operation"
with tf.device('/CPU:0'):
    result_code = tf.math.floormod(x,y,name)
    print("!!!!!!!!!!!!!!!!!!!!!!!!!!!")
    print(result_code)
```
2025-01-10 12:57:31.435249: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-01-10 12:57:31.449893: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2025-01-10 12:57:31.467647: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2025-01-10 12:57:31.472965: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-10 12:57:31.486020: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-10 12:57:32.528966: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2025-01-10 12:57:34.007683: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 447 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:38:00.0, compute capability: 8.6
2025-01-10 12:57:34.008219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 22463 MB memory: -> device: 1, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:41:00.0, compute capability: 8.6
2025-01-10 12:57:34.008642: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 22463 MB memory: -> device: 2, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:44:00.0, compute capability: 8.6
!!!!!!!!!!!!!!!!!!!!!!!!!!!
tf.Tensor([ 1. -3. 0.3000002], shape=(3,), dtype=float32)
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
x = tf.constant([10, -15, 7.5], dtype=tf.float32)
y = tf.constant([3, -4, 2.5], dtype=tf.float32)
name = "random_floormod_operation"
result_code = tf.math.floormod(x,y,name)
```
### Relevant log output
_No response_ | stat:awaiting response,type:bug,stale,comp:apis,2.17 | medium | Critical |
2,780,163,969 | kubernetes | vishvananda Netlink breaking changes in 1.2.1 | The vishvananda netlink library had a behavior change in 1.2.1 , see https://github.com/vishvananda/netlink/pull/1018 for more details
> Before https://github.com/vishvananda/netlink/pull/925 (in v1.2.1), the flag was ignored and results were returned without an error. With that change, response handling is aborted, results are discarded, and unix.EINTR is returned.
We are impacted by that change https://github.com/kubernetes/kubernetes/blob/fc7520b32f1287b867377aaee89412482199d14a/go.mod#L63
In order to recover the previous behavior and be more resilient to partial results, we should adopt a solution similar to the one adopted by the moby developers: wrapping the library to retry a few times.
We need to implement something like https://github.com/moby/moby/pull/48407/commits/00bf437d84ac1aec5ce24ffc5f1b7dbb6309263b#diff-28da2a9c14d97c7755152d3ad995389ceda8cc7655c8e03e2874b6d44f977ec1
We should also have some way of linting to avoid adding new netlink calls that are susceptible to being interrupted without going through the wrapper.
/sig network
/priority important-soon | priority/important-soon,sig/network,triage/accepted | medium | Critical |
2,780,165,294 | flutter | [Proposal] TableView: add jump to/scroll to a specified item | ### Use case
Can the tableView be enhanced with a feature that allows it to jump to/scroll to a specified row and column when given the row index and column index?
### Proposal
jump to/scroll to | c: new feature,f: scrolling,package,c: proposal,P3,team-framework,triaged-framework,p: two_dimensional_scrollables | low | Minor |
2,780,169,030 | rust | Broken LLVM module: inlinable function call in a function with debug info must have a !dbg location | <!--
Thank you for filing a regression report! 🐛 A regression is something that changed between versions of Rust but was not supposed to.
Please provide a short summary of the regression, along with any information you feel is relevant to replicate it.
-->
### Code
I tried this code foundry-rs/foundry@0cc535504a909dcee74694fa86f7faafa4cbf4bc:
```bash
git clone https://github.com/foundry-rs/foundry.git
git checkout 0cc535504a909dcee74694fa86f7faafa4cbf4bc
cargo +stable build --bin forge --profile maxperf # takes a while (>5 min)
```
I expected to see this happen: compiles like with dev or release profile
Instead, this happened (see https://github.com/foundry-rs/foundry/actions/runs/12700786553/job/35404155023#step:7:3652):
```
inlinable function call in a function with debug info must have a !dbg location
tail call fastcc void @_ZN5alloc5alloc18handle_alloc_error17ha0547c441587f574E(i64 noundef 8, i64 noundef 128) #324
... # the last line repeated thousands of times with different arguments
rustc-LLVM ERROR: Broken module found, compilation aborted!
```
This is caused by the following config combination, as changing any of these values makes it compile again:
```toml
debug = "line-tables-only" # any value that's not "none"
lto = "fat"
```
I've also verified that the `strip` value does not affect the outcome either.
We fixed this by setting `debug = "none"`.
~~Maybe related to https://github.com/rust-lang/rust/pull/129063, which modified the function mentioned in the error?~~
---
~~I could reproduce this locally only on 1.84, not on beta or nightly, so feel free to close this if it has already been fixed.~~ Reproducible with `RUSTFLAGS="-Zverify-llvm-ir"`, see https://github.com/rust-lang/rust/issues/135332#issuecomment-2583700719.
~~Unfortunately I wasn't able to minimize this.~~
Simpler repro in https://github.com/rust-lang/rust/issues/135332#issuecomment-2584516583.
### Version it worked on
<!--
Provide the most recent version this worked on, for example:
It most recently worked on: Rust 1.47
-->
It most recently worked on: 1.83.0
### Version with regression
<!--
Provide the version you are using that has the regression.
-->
`rustc --version --verbose`:
```
rustc 1.84.0 (9fc6b4312 2025-01-07)
binary: rustc
commit-hash: 9fc6b43126469e3858e2fe86cafb4f0fd5068869
commit-date: 2025-01-07
host: x86_64-unknown-linux-gnu
release: 1.84.0
LLVM version: 19.1.5
```
```
rustc 1.86.0-nightly (824759493 2025-01-09)
binary: rustc
commit-hash: 824759493246ee383beb9cd5ceffa0e15deb9fa4
commit-date: 2025-01-09
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
```
<!--
If you know when this regression occurred, please add a line like below, replacing `{channel}` with one of stable, beta, or nightly.
@rustbot modify labels: +regression-from-stable-to-{channel} -regression-untriaged
-->
@rustbot modify labels: +regression-from-stable-to-stable -regression-untriaged
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"khuey"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-LLVM,A-debuginfo,P-high,T-compiler,regression-from-stable-to-stable,C-bug,E-needs-mcve,A-LTO | low | Critical |
2,780,180,832 | PowerToys | Advanced Paste causes issues with larger clipboard content on remote desktop | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Advanced Paste
### Steps to reproduce
- Having advanced paste active on host machine
- Open remote desktop session (e.g. AVD)
- Use the snipping tool to copy a larger area of the desktop into the clipboard (probably any other way to have some larger content in the clipboard would work as well)
- try to type using the keyboard
### ✔️ Expected Behavior
Keyboard input to work normally.
### ❌ Actual Behavior
Keyboard input is extremely erratic and almost doesn't work at all until the clipboard is cleared again.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,780,183,657 | godot | Baking LightmapGI with shadowmasks can crash Godot after transparency support was added | ### Tested versions
-Reproducible in latest master v4.4.dev.gh [24d74510e]
### System information
Windows 10 - Godot 4.4dev - Vulkan (Forward+) - Nvidia 1070
### Issue description
Trying to bake the LightmapGI with shadowmasks can get stuck and crash Godot in some circumstances.
Using the latest master (v4.4.dev.gh [24d74510e]) and trying to bake the LightmapGI using the test project from https://github.com/godotengine/godot/pull/85653#issuecomment-2447529657 crashes Godot.
With a lot of "Couldn't map PC to fn name" errors.
https://github.com/user-attachments/assets/852e65f9-1f42-476e-928f-f004ba76c92c
Here is the start of the crash:
(On Windows 10, Foward+, Nvidea 1070)

It appears this started happening since adding transparency support to the lightmapGI (https://github.com/godotengine/godot/pull/99538)
As found out by https://github.com/godotengine/godot/pull/85653#issuecomment-2581660852
> @Geometror EDIT: Tested some some PR's and is crashing since #99538 (Transparency PR). Something broke between Dec 12 and Dec 19.
>
> EDIT2: It seems like if I insist and try again it manage to bake, but taking double the time and the computer becomes unresponsive in the process.
### Steps to reproduce
I haven't figured out an MRP yet, but you can
use the test project from https://github.com/godotengine/godot/pull/85653#issuecomment-2447529657
(it used to work until transparency support was added to LightmapGI)
([direct download link](https://github.com/user-attachments/files/17580558/shadowmask_test_2.zip))
- Try to bake the LIghtmapGI
Expected: LightmapGI baked
Results: Goes until about 50% then Godot gets stuck and crashes.
### Minimal reproduction project (MRP)
Use the test project from https://github.com/godotengine/godot/pull/85653#issuecomment-2447529657
([direct download link](https://github.com/user-attachments/files/17580558/shadowmask_test_2.zip)) | bug,topic:rendering,crash,regression,topic:3d | medium | Critical |
2,780,202,137 | flutter | [google_maps_flutter][web] Support clickableIcons flag | ### Use case
As a developer I want to hide the default info window which appears when clicking on a POI
<img width="660" alt="Screenshot 2025-01-10 at 15 13 38" src="https://github.com/user-attachments/assets/fb396db0-510a-4890-b804-093a35f7443f" />
### Proposal
While going through the source files of google_maps_flutter_web, I can see that the extension type MapOptions includes a boolean clickableIcons which the JS API uses to hide these info windows.
```dart
extension type MapOptions._(JSObject _) implements JSObject {
external MapOptions({
String? backgroundColor,
bool? cameraControl,
CameraControlOptions? cameraControlOptions,
LatLngOrLatLngLiteral? center,
bool? clickableIcons,
JSAny? /*(ColorScheme|string)?*/ colorScheme,
num? controlSize,
bool? disableDefaultUI,
bool? disableDoubleClickZoom, ...) }
```
But there is no way to set this option via the GoogleMap widget. I think it would be nice to have an extra variable for this purpose. Something like:
```dart
GoogleMap(clickableIcons: false, ...)
```
| c: new feature,p: maps,platform-web,package,c: proposal,team-web | low | Minor |
2,780,203,409 | next.js | Inconsistent `context.resolvedUrl` behavior in `getServerSideProps` after middleware rewrite | ### Link to the code that reproduces this issue
https://github.com/guinnod/next-js-issue
### To Reproduce
1. Start the application `(npm run build & npm run start)`
2. Go to the page `/old?param=oldValue`
### Current vs. Expected behavior
When accessing `/old?param=oldValue`
Expected resolvedUrl: `/new?param=newValue`
Actual resolvedUrl: `/new?param=oldValue`
### Expected Behavior
After middleware rewrites both the pathname and query parameters, `context.resolvedUrl` in `getServerSideProps` should reflect the complete rewritten URL, including both the new pathname and the new query parameters.
### Actual Behavior
While the middleware successfully rewrites the URL and the application correctly routes to the new path, context.resolvedUrl in getServerSideProps shows a mixed state:
- Pathname in `resolvedUrl` is correctly updated to the rewritten value
- Query parameters in `resolvedUrl` remain as the original values before rewrite
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.14.0
npm: 10.7.0
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.2.0-canary.3 // Latest available version is detected (15.2.0-canary.3).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware, Pages Router
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed)
### Additional context
The bug exhibits inconsistent behavior across different projects and environments:
1. Original Production Project (Next.js 14.1.2):
- Deployed on Vercel: Bug present with incorrect pathname in resolvedUrl but correct query parameters
- Local development: Bug not reproducible even with identical Vercel environment variables
- Local production build (`next build && next start`): Bug not reproducible
- With locales in pathname (`e.g., / and /en`): Bug does not manifest
- Vercel deployment logs confirm the bug's existence in production
2. Reproduction Project (created with `npx create-next-app -e reproduction-template`):
- Shows opposite behavior: pathname is correct but query parameters are wrong
- Demonstrates that the bug's behavior is not consistent even when attempting to reproduce it
This inconsistency in behavior makes the bug particularly concerning:
- The same codebase behaves differently between Vercel deployment and local environment
- Different projects exhibit opposite symptoms (either pathname or query parameters being incorrect)
- The presence of locale routing appears to affect the bug's manifestation
- The bug's behavior is not sustainable across different setups and configurations
This variation in behavior suggests a deeper underlying issue with how URL rewrites are handled in different contexts and configurations within Next.js. | Middleware,Pages Router | low | Critical |
2,780,218,270 | vscode | Add keybindings to chat edit actions, help dialog | The following should have default keybindings to aid screen reader users and keyboard only users and be included in the a11y help dialog:
- Undo Last Edit (proposed `ctrlCmd+z`)
- Redo Last Edit (proposed `ctrlCmd+shift+z`)
- Clear Working Set (proposed `ctrlCmd+k`)
- Save All (proposed `alt+ctrlCmd+s`)
- View All Edits (proposed `ctrlCmd+e`)
- Add files (proposed `ctrlCmd+a`)
- Switch to Chat (already has keybinding)
- New Edit Session (already has keybinding)
cc @joyceerhl, @jrieken, @jooyoungseo, @rperez030 for feedback | feature-request,accessibility | low | Minor |
2,780,256,697 | next.js | CSS Module Hot Reloading Issue in Next.js Dev Mode | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/unruffled-rui-xqj4ys
### Issue Description:
I encountered an issue with CSS modules in my Next.js project when working in development mode. When I modify CSS properties in a CSS module and save the file, the styles do not update as expected in the browser. Instead, the following error is thrown in the console:
```
Error: No link element found for chunk static/chunks/src_components_fb4138._.css
at http://localhost:3000/_next/static/chunks/_d95469._.js:1595:28
...
```
### To Reproduce
1- Create a component that imports a CSS module:
```
import styles from './drawingcanvas.module.css';
```
2- Modify any CSS property in the CSS module while running the Next.js app in development mode (next dev).
3- Observe that the styles do not update in the browser, and the following error appears in the console:
```
Error: No link element found for chunk static/chunks/[...].css
```
4- Manually reload the page to apply the changes.
### Current vs. Expected behavior
Expected Behavior:
CSS modules should hot reload, and the updated styles should be reflected in the browser without requiring a manual page reload.
Actual Behavior:
CSS changes in the module do not take effect, and the browser throws an error until the page is manually reloaded.
Workaround:
I resolved this issue by moving the CSS module import from the child component (Canvas component) to its parent component. I then passed the styles object as a prop to the child component. Example:
Parent Component:
```
import styles from './drawingcanvas.module.css';
export default function ParentComponent() {
return <CanvasComponent styles={styles} />;
}
```
Child Component:
```
export default function CanvasComponent({ styles }) {
return <div className={styles.canvasContainer}>Canvas</div>;
}
```
This workaround fixed the issue, and the CSS changes hot-reloaded correctly in development mode. However, this introduces extra coupling between components and may not be an ideal solution.
### Provide environment information
```bash
Next.js version: 15.1.4
Node.js version: 20.17.0.
Browser: arc browser version 1.36.1 with chromium engine 131.0.6778.265
OS: windows 11 21h2 build 22000.556
```
### Which area(s) are affected? (Select all that apply)
Not sure, Developer Experience
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | CSS | low | Critical |
2,780,286,200 | PowerToys | Move AltGR functions to Left Alt key | ### Description of the new feature / enhancement
Example: To write the @ character I need to hold AltGR and press 2 on my Nordic ISO keyboard. I would rather hold <Left Alt> and press 2, but it doesn't work.
### Scenario when this would be used?
Everyday use for characters like @£$€{[]}\
It would also free up a key that I can assign to something else.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,780,286,520 | PowerToys | Snap new windows when started | ### Description of the new feature / enhancement
As soon as a new app window is launched, snap it to a zone in the active screen/monitor, following where the mouse cursor is.
### Scenario when this would be used?
Today we have the ability to "remember" the last position of an app when it is re-launched. The idea is to be able to keep windows organized for any newly launched app window, at least the main app window, to avoid having to organize them manually.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,780,289,410 | terminal | WT causes the whole Windows animation to be super laggy (after RDP?), until all the WT processes are terminated | ### Windows Terminal version
1.21.3231.0
### Windows build number
10.0.19045.5247
### Other Software
Command line
### Steps to reproduce
There are no concrete steps to reproduce, as the issue occurs randomly. However, I’ve noticed two recurring patterns:
1. Using RDP to access this computer often triggers the bug later when I use the computer locally (though *not always immediately*).
2. Having multiple Windows Terminal (WT) windows open for an extended period.
### Expected Behavior
It shouldn't cause the Windows desktop animations to lag.
### Actual Behavior
I briefly mentioned this performance issue [in another ticket](https://github.com/microsoft/terminal/issues/17414#issuecomment-2539085553), but I believe it deserves its own standalone discussion.
When the bug occurs, the entire Windows desktop UI becomes extremely laggy (e.g., dragging and switching windows), with performance dropping to what feels like less than 20 FPS. 3D applications like full-screen games are *not* affected; only the desktop UI.
To resolve the issue, I must close all open Windows Terminal windows (I typically keep 2–3 open at any given time). Doing so immediately restores normal performance.
The issue is not tied to any specific tab—I’ve tested this by opening a new window and then closing all existing windows, but the problem persists.
Here’s what the processes related to WT look like before I terminate them (and fix the issue):

| Issue-Bug,Needs-Triage,Needs-Attention | low | Critical |
2,780,294,127 | transformers | Trainer sets `state.best_model_checkpoint` even when it doesn't save there; leads to training crash | ### System Info
- `transformers` version: 4.49.0.dev0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.9.16
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
@muellerz
@SunMarc
@seanswyi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`pytest tests/test_model_card.py::test_model_card` from `setfit` (link: https://github.com/huggingface/setfit/blob/main/tests/test_model_card.py#L15)
Apologies for not having a convenient ready-to-go `transformers`-only script. I'm afraid I don't have time for that right now.
In essence, the flow is as follows:
1. I start the trainer, with lots of evaluations (e.g. `eval_steps=1`, `eval_strategy="steps"`)
2. When evaluating, the new `_determine_best_metric` is called: https://github.com/huggingface/transformers/blob/f63829c87bd89a4a0cea45d81c1cd870996b30c4/src/transformers/trainer.py#L3070-L3075
3. With `args.metric_for_best_model` set, we only set the `best_metric` in the first evaluation: https://github.com/huggingface/transformers/blob/f63829c87bd89a4a0cea45d81c1cd870996b30c4/src/transformers/trainer.py#L3182
4. On the 2nd eval, we start comparing against the first. If the model is better, we now also set `best_model_checkpoint`: https://github.com/huggingface/transformers/blob/f63829c87bd89a4a0cea45d81c1cd870996b30c4/src/transformers/trainer.py#L3184-L3192 *but* we're not sure if we're even going to be saving at this step! If `args.save_strategy != SaveStrategy.BEST:`, then it's very possible that we're not saving.
5. The eventual crash occurs when "deleting old checkpoints", because there is no file at `best_model_checkpoint`: https://github.com/huggingface/transformers/blob/f63829c87bd89a4a0cea45d81c1cd870996b30c4/src/transformers/trainer.py#L2680-L2685
### Expected behavior
We should not be setting `best_model_checkpoint` unless we're confident that 1) `state.should_save` is True or 2) `args.save_strategy == "best"`. Then we'll avoid this crash.
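For illustration, here is a minimal, self-contained sketch of the proposed guard (the function name, parameters, and checkpoint-naming scheme below are assumptions for illustration only, not the actual `Trainer` internals):

```python
# Hypothetical illustration of the proposed guard, not the actual Trainer code.
import os
from typing import Optional


def determine_best_checkpoint(
    metric_value: float,
    best_metric: Optional[float],
    greater_is_better: bool,
    save_strategy: str,
    should_save: bool,
    output_dir: str,
    global_step: int,
) -> Optional[str]:
    """Return the path to record as state.best_model_checkpoint, or None."""
    if best_metric is not None:
        is_better = metric_value > best_metric if greater_is_better else metric_value < best_metric
        if not is_better:
            return None
    # Only point at a checkpoint directory that will actually be written:
    # either save_strategy == "best" (this step saves by definition) or the
    # regular save schedule fires at this step anyway.
    if save_strategy == "best" or should_save:
        return os.path.join(output_dir, f"checkpoint-{global_step}")
    return None
```

With a guard along these lines, `best_model_checkpoint` can never point at a directory that was never written, so the checkpoint-rotation step has nothing stale to look up.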
- Tom Aarsen | bug | low | Critical |
2,780,300,086 | PowerToys | Add ability to resize snapped windows together with others | ### Description of the new feature / enhancement
When a bunch of windows are snapped, it would be nice to be able to resize one of them and have the other snapped windows resize along with it, avoiding overlaps, following the same approach as the native Windows Snap feature.
### Scenario when this would be used?
Whenever we have snapped windows covering the entire screen, frequently we need to re-accommodate those windows to better focus on a specific one, but it is not possible to do that without having to resize all the others.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,780,300,366 | langchain | Google Gemini Grounding Tool: `'Tool' object has no attribute 'name'` | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from vertexai.generative_models import Tool, grounding
from langchain.agents import AgentExecutor, create_react_agent
tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())
try:
agent = create_react_agent(llm=llm, tools=[tool], prompt=prompt_template)
except Exception as error:
logger.exception(f"Error creating agent: {error}")
raise error
```
### Error Message and Stack Trace (if applicable)
```
{
"level": "ERROR",
"location": "_initialize_chatbot_agent:160",
"message": "Error creating agent: 'Tool' object has no attribute 'name'",
"timestamp": "2025-01-10 01:56:14,301+0000",
"service": "chatbot_agent.py",
"exception": "Traceback (most recent call last):\n File \"/app/util/chatbot_agent.py\", line 158, in _initialize_chatbot_agent\n agent = create_react_agent(llm=llm, tools=self.tools, prompt=prompt_template)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/site-packages/langchain/agents/react/agent.py\", line 117, in create_react_agent\n tools=tools_renderer(list(tools)),\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/site-packages/langchain/tools/render.py\", line 38, in render_text_description\n return \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])\n ^^^^^^^^^\nAttributeError: 'Tool' object has no attribute 'name'",
"exception_name": "AttributeError",
"stack_trace": {
"type": "AttributeError",
"value": "'Tool' object has no attribute 'name'",
"module": "builtins",
"frames": [
{
"file": "/app/util/chatbot_agent.py",
"line": 158,
"function": "_initialize_chatbot_agent",
"statement": "agent = create_react_agent(llm=llm, tools=self.tools, prompt=prompt_template)"
},
{
"file": "/usr/local/lib/python3.12/site-packages/langchain/agents/react/agent.py",
"line": 117,
"function": "create_react_agent",
"statement": "tools=tools_renderer(list(tools)),"
},
{
"file": "/usr/local/lib/python3.12/site-packages/langchain/tools/render.py",
"line": 38,
"function": "render_text_description",
"statement": "return \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])"
}
]
}
}
```
### Description
Using [this](https://x.com/LangChainAI/status/1852072744378302555) as an example, I'm attempting to use `gemini`'s `grounding` tool.
I expect my call to gemini to succeed. Instead, I get `'Tool' object has no attribute 'name'`.
Looks like `LangChain`'s `BaseTool` [class](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools/base.py#L349) expects a `name` attribute that google's `Tool` [class](https://github.com/googleapis/python-aiplatform/blob/main/google/cloud/aiplatform_v1beta1/types/tool.py#L48) doesn't implement, making them incompatible.
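For illustration, a minimal snippet that probes exactly the attribute `render_text_description` reads (illustration only, reusing the imports from the example above):

```python
# Minimal illustration of the incompatibility described above.
from vertexai.generative_models import Tool, grounding

vertex_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

# LangChain's render_text_description builds f"{tool.name}: {tool.description}"
# for every tool, but the Vertex AI Tool wrapper does not expose `name`:
print(hasattr(vertex_tool, "name"))  # False -> AttributeError in create_react_agent
```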
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:06:57 PDT 2024; root:xnu-11215.41.3~3/RELEASE_ARM64_T6041
> Python Version: 3.12.8 (main, Dec 19 2024, 09:47:55) [Clang 16.0.0 (clang-1600.0.26.6)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.1.147
> langchain_anthropic: 0.3.1
> langchain_google_genai: 2.0.8
> langchain_google_vertexai: 2.0.10
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.42.0
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> filetype: 1.2.0
> google-cloud-aiplatform: 1.76.0
> google-cloud-storage: 2.19.0
> google-generativeai: 0.8.3
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langchain-mistralai: Installed. No version info available.
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.6
> orjson: 3.10.12
> packaging: 23.2
> pydantic: 2.9.2
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
``` | 🤖:bug | low | Critical |
2,780,300,864 | langchain | Streaming output does not work when using bind_tools | ### Checked other resources
- [X] This is a bug, not a usage question. For questions, please use GitHub Discussions.
- [X] I added a clear and detailed title that summarizes the issue.
- [X] I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
- [X] I included a self-contained, minimal example that demonstrates the issue INCLUDING all the relevant imports. The code run AS IS to reproduce the issue.
### Example Code
```python
import asyncio
import inspect
from typing import Annotated, TypedDict
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.prebuilt import ToolNode
OLLAMA_MODEL = "llama3.2:latest"
OLLAMA_URL = "http://127.0.0.1:11434"
class GraphState(TypedDict):
messages: Annotated[list[AnyMessage], add_messages]
@tool
def get_weather() -> str:
"""Inform the user that the weather is 15°C and it would rain and throw a joke in there also, but keep it brief 20 words max."""
return "Inform the user that the weather is 15°C and it would rain and throw a joke in there also, but keep it brief 20 words max."
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"""
You are a helpful AI assistant specializing in weather. Provide accurate and clear weather information,
forecasts, and safety tips based on user input. Offer localized details when provided with a location and
explain weather phenomena concisely. If information is unclear or unavailable, ask for clarification. Be
user-friendly and reliable. DO NOT respond with more than 20 words.
""",
),
("placeholder", "{messages}"),
]
)
def build_graph(agent, tools):
async def call_model(state, config):
response = await agent.ainvoke(state, config)
return {"messages": response}
def should_continue(state):
last_message = state["messages"][-1]
return "tools" if last_message.tool_calls else END
builder = StateGraph(GraphState)
builder.add_node("agent", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue, ["tools", END])
builder.add_edge("tools", "agent")
builder.add_edge("agent", END)
return builder.compile()
async def run_graph(input_message, agent, tools):
app = build_graph(agent, tools)
async for msg, metadata in app.astream(
{"messages": [HumanMessage(content=input_message, name="user")]},
stream_mode="messages",
):
if msg.content and not isinstance(msg, HumanMessage):
yield msg.content
async def test_chatollama_with_tools():
llm = ChatOllama(
base_url=OLLAMA_URL,
model=OLLAMA_MODEL,
temperature=0.1,
num_ctx=8000,
num_predict=-2,
)
tools = [get_weather]
agent = prompt | llm.bind_tools(tools)
print("\n\n" + "=" * 20 + f" {inspect.currentframe().f_code.co_name} " + "=" * 20)
async for msg in run_graph("What's the weather like in Tokyo?", agent, tools):
print(msg, end="|", flush=True)
async def test_chatollama_no_tools():
llm = ChatOllama(
base_url=OLLAMA_URL,
model=OLLAMA_MODEL,
temperature=0.1,
num_ctx=8000,
num_predict=-2,
)
tools = []
agent = prompt | llm.bind_tools(tools)
print("\n\n" + "=" * 20 + f" {inspect.currentframe().f_code.co_name} " + "=" * 20)
async for msg in run_graph("What's the weather like in Tokyo?", agent, tools):
print(msg, end="|", flush=True)
async def test_chatopenai_with_tools():
# We use the ChatOpenAI interface with ollama backend to make sure that the issue is not with ChatOllama interface only
llm = ChatOpenAI(
base_url=OLLAMA_URL + "/v1",
model=OLLAMA_MODEL,
temperature=0.1,
api_key="ollama",
)
tools = [get_weather]
agent = prompt | llm.bind_tools(tools)
print("\n\n" + "=" * 20 + f" {inspect.currentframe().f_code.co_name} " + "=" * 20)
async for msg in run_graph("What's the weather like in Tokyo?", agent, tools):
print(msg, end="|", flush=True)
async def test_chatopenai_no_tools():
llm = ChatOpenAI(
base_url=OLLAMA_URL + "/v1",
model=OLLAMA_MODEL,
temperature=0.1,
api_key="ollama",
)
tools = []
agent = prompt | llm.bind_tools(tools)
print("\n\n" + "=" * 20 + f" {inspect.currentframe().f_code.co_name} " + "=" * 20)
async for msg in run_graph("What's the weather like in Tokyo?", agent, tools):
print(msg, end="|", flush=True)
if __name__ == "__main__":
async def main():
await test_chatollama_no_tools()
await test_chatollama_with_tools()
await test_chatopenai_no_tools()
await test_chatopenai_with_tools()
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use an agentic graph via langgraph and stream the output to the end user.
The problem is that the output is not streamed when I use the `bind_tools` function.
I'm already familiar with this [issue](https://github.com/langchain-ai/langchain/issues/26971) but it still doesn't work in my case.
For example the code above outputs:
```
==================== test_chatollama_no_tools ====================
Currently|:| Part|ly| cloudy| with| a| high| of| |22|°C| (|72|°F|)| and| low| of| |18|°C| (|64|°F|).|
==================== test_chatollama_with_tools ====================
Inform the user that the weather is 15°C and it would rain and throw a joke in there also, but keep it brief 20 words max.|"Rainy day in Tokyo! Better grab an umbrella... or a sake to drown your sorrows, as they say!"|
==================== test_chatopenai_no_tools ====================
Currently|:| Part|ly| cloudy| with| a| high| of| |22|°C| (|72|°F|)| and| a| low| of| |18|°C| (|64|°F|).|
==================== test_chatopenai_with_tools ====================
Inform the user that the weather is 15°C and it would rain and throw a joke in there also, but keep it brief 20 words max.|"Rainy days in Tokyo can be 'drizzly' affairs! Current temp: 15°C. Check forecasts for updates."|
```
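To narrow down whether the buffering happens at the model level or in the graph layer, a direct probe of the tool-bound model (outside LangGraph) can help. This is only a diagnostic sketch reusing `ChatOllama`, `OLLAMA_URL`, `OLLAMA_MODEL` and `get_weather` from the code above: if it prints many small chunks, the model streams fine with tools bound and the issue sits in the graph/streaming plumbing; if it prints one big chunk, the buffering happens below LangGraph.
```python
# Diagnostic sketch: stream the tool-bound model directly, outside the graph.
async def probe():
    llm = ChatOllama(base_url=OLLAMA_URL, model=OLLAMA_MODEL, temperature=0.1)
    bound = llm.bind_tools([get_weather])
    async for chunk in bound.astream("What's the weather like in Tokyo?"):
        # tool-call chunks may have empty content; the chunk count is what matters
        print(repr(chunk.content), end="|", flush=True)

asyncio.run(probe())
```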
### System Info
## Package Requirements
langgraph==0.2.37
langchain==0.3.14
langchain_community==0.3.14
langchain-ollama==0.2.2
langchain-openai==0.2.14
## Ollama Version
ollama version is 0.5.4
## System
Ubuntu 24.04.1 LTS
Python 3.12.3 | investigate | low | Critical |
2,780,363,622 | PowerToys | Suggestions for the Workspace function | ### Description of the new feature / enhancement
Add an option to move windows without activating them.
Add a shortcut key to quickly move windows.
### Scenario when this would be used?
After the monitor wakes up from sleep mode, all windows are resized to the upper left corner.
### Supporting information
_No response_ | Needs-Triage,Needs-Team-Response,Product-Workspaces | low | Minor |
2,780,375,761 | flutter | Material Error: The control character U+0000 can only be used in strings and comments. | ### Steps to reproduce
1. flutter run
```
ekusi …\herit feat/purchase $ 18:54
flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.26120.1252], locale en-US)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.4)
[√] Android Studio (version 2024.1)
[√] VS Code (version 1.96.2)
[√] Connected device (4 available)
[√] Network resources
```
### Expected results
Run projects without any warnings.
### Actual results
Error: The control character U+0000 can only be used in strings and comments.
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video

### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| waiting for customer response,in triage | low | Critical |
2,780,388,553 | godot | Using "DisplayServer.window_set_mode" to unminimise borderless windows causes them to be lost in window limbo | ### Tested versions
Reproducible in:
v4.3.stable.official [77dcf97d8]
v4.4.dev7.official [46c8f8c5c]
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1050 (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz (4 Threads)
### Issue description
After minimising a project with multiple borderless non-embedded subwindows, attempting to unminimise all windows through the use of `DisplayServer.window_set_mode(DisplayServer.WINDOW_MODE_WINDOWED)` causes the majority of subwindows to become uninteractable, and to disappear from the Windows taskbar. Even though they are still visible, it is as though they do not exist.
- I have been using `Win + D` to minimise all windows due to the lack of title bar and the MRP code not really allowing subwindows to take focus.
- I have tested various window flags and this issue only appears when the windows are borderless.
- It is not shown in the MRP, but this happens even if the windows are forced to native.
- The project is set to compatibility rendering, as this is what my main project uses, however this issue can be reproduced using any rendering method.
I found the open issue https://github.com/godotengine/godot/issues/98508 which may be related but does not seem to be exactly the same, therefore I decided to open a separate issue. If these need to be merged at a later date I apologise in advance.
In my main project I did once receive the following error, but after editing the project further it has not appeared since. It also does not appear in the MRP. I have minimised and included it in case it proves useful, as it does seem to link to relevant code in the Godot source.
_error: E 0:00:19:0677 (...) Condition "!windows.has(p_window)" is true.
<C++ Source> platform/windows/display_server_windows.cpp:1982 @ window_set_mode()_
https://github.com/godotengine/godot/blob/77dcf97d82cbfe4e4615475fa52ca03da645dbd8/platform/windows/display_server_windows.cpp#L1982
### Steps to reproduce
- Run project.
- Press 'Win + D' to minimise all windows.
- Click on the program in the taskbar, you will see all windows present.
- Click any of the window previews in the taskbar popup to unminimise the project.
- Click the program on the taskbar again, you will see that the majority of subwindows have disappeared.
- These windows are visible on the desktop, but cannot be interacted with.
### Minimal reproduction project (MRP)
[minimisebug.zip](https://github.com/user-attachments/files/18378795/minimisebug.zip) | bug,topic:gui | low | Critical |
2,780,408,198 | vscode | How to fix Microsoft Visual Studio Code language pack issue from English to French at first launch. | We are facing an issue with the Microsoft Visual Studio Code language pack: we need to change the display language of the application to French (FR). Following the information provided on the Visual Studio Code page, we downloaded the language pack and edited the argv.json file, but even though these files and folders are put in place right after installation, the UI still comes up in English on the first launch. Please refer to attached image 1.

Since the French language only appears from the second launch onwards (refer to image 2),

we also tried starting the code.exe process from the script and then killing it, so that the application is effectively launched once beforehand. However, even after keeping the user data files and the language pack in place and adding this start-and-kill step to the script, English is still displayed on the first manual launch.
Summary: on the first launch we get English even though all the needed data is already in the user context; only after closing the software and relaunching it manually does it display French, not on the first launch. Please help us understand whether any additional parameters or files need to be added to make it work on the first launch.
| bug,info-needed,l10n-platform,confirmation-pending | low | Minor |
2,780,438,896 | transformers | Trainer: TensorBoardCallback not working for "on_save" and "on_save_end" events | ### System Info
transformers 4.47.1
torch 2.5.0+cu121
Ubuntu 22.04 LTS
### Who can help?
Hi, @muellerz and @SunMarc
I'm trying to capture storage I/O metrics related to the Trainer's checkpoint-saving operations. For that I implemented the following class:
```
import time
import psutil
from transformers.integrations import TensorBoardCallback
class DiskIOMonitoringCallback(TensorBoardCallback):
def __init__(self, tb_writer=None):
super().__init__(tb_writer=tb_writer)
self.start_time = None
self.initial_disk_io = None
def _compute_disk_io_metrics(self):
"""Compute disk I/O metrics."""
final_disk_io = psutil.disk_io_counters()
if self.initial_disk_io:
read_bytes = final_disk_io.read_bytes - self.initial_disk_io.read_bytes
write_bytes = final_disk_io.write_bytes - self.initial_disk_io.write_bytes
else:
read_bytes, write_bytes = 0, 0
return read_bytes, write_bytes
def _init_summary_writer(self, args, log_dir=None):
"""Ensure TensorBoard writer is initialized."""
log_dir = log_dir or args.logging_dir # Use provided log_dir or fallback to args.logging_dir
if self.tb_writer is None:
from torch.utils.tensorboard import SummaryWriter
self.tb_writer = SummaryWriter(log_dir=log_dir)
def on_save(self, args, state, control, **kwargs):
"""Hook triggered before saving a checkpoint."""
if self.tb_writer is None:
self._init_summary_writer(args)
# Record start time and initial disk I/O stats
self.start_time = time.time()
self.initial_disk_io = psutil.disk_io_counters()
def on_save_end(self, args, state, control, **kwargs):
"""Hook triggered after saving a checkpoint."""
# Calculate save duration
save_duration = time.time() - self.start_time
# Compute disk I/O metrics
read_bytes, write_bytes = self._compute_disk_io_metrics()
# Log metrics to TensorBoard
if self.tb_writer:
self.tb_writer.add_scalar("DiskIO/Save Duration (s)", save_duration, state.global_step)
self.tb_writer.add_scalar("DiskIO/Read Bytes", read_bytes, state.global_step)
self.tb_writer.add_scalar("DiskIO/Write Bytes", write_bytes, state.global_step)
# Print metrics for debugging purposes
print(f"Checkpoint saved in {save_duration:.2f}s. Read: {read_bytes} bytes, Write: {write_bytes} bytes.")
```
My trainer session is invoked line this:
```
training_args = TrainingArguments(
output_dir="./results",
optim="adamw_torch",
num_train_epochs=6,
per_device_train_batch_size=64,
gradient_accumulation_steps=8,
learning_rate=3e-5,
weight_decay=0,
warmup_steps=100,
lr_scheduler_type="cosine",
gradient_checkpointing=True,
dataloader_num_workers=8,
bf16=True,
logging_steps=10,
report_to="tensorboard",
save_strategy="epoch",
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=None,
data_collator=data_collator,
callbacks=[
GpuCpuMonitoringCallback(),
SystemMonitoringCallback(),
DiskIOMonitoringCallback(),
],
)
trainer.train()
```
The `GpuCpuMonitoringCallback` and `SystemMonitoringCallback` work properly, but I'm not getting any data from `DiskIOMonitoringCallback` despite multiple implementation changes. Either I'm missing something, or something might not be working at the callbacks layer.
I really appreciate any help you can provide.
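Two things that may help narrow this down: it might be worth double-checking whether `on_save_end` is actually part of the `TrainerCallback` event set in the installed transformers version (if it is not, that handler would simply never run, which would explain the missing metrics), and a tiny probe callback can confirm whether `on_save` itself fires. This is only a diagnostic sketch reusing the objects from the snippet above; the class name is illustrative:
```python
from transformers import TrainerCallback

class SaveHookProbe(TrainerCallback):
    """Print a line whenever the Trainer dispatches on_save."""
    def on_save(self, args, state, control, **kwargs):
        print(f"[probe] on_save fired at global step {state.global_step}")

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
    callbacks=[SaveHookProbe(), DiskIOMonitoringCallback()],
)
```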
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I provided the code for reproduction in the description.
### Expected behavior
I expect TensorBoard to show a card with the disk I/O data for checkpoint-saving operations. | bug | low | Critical |
2,780,442,504 | ollama | Ollama version doesn't properly truncate tokens to 512 max for official snowflake-arctic-embed-l model | ### What is the issue?
When using the official Ollama model of snowflake-arctic-embed-l (latest/335m - 21ab8b9b0545), if input is greater than 512 tokens, instead of truncating, the model encounters an error.
On a previous version (0.3.9) when you pass it more than 512 tokens, it returns only [0,0,0...] embeddings.
In 0.5.4, Ollama returns a 500 error and the logs show that "Process xxxxxx (ollama_llama_se) of user xxx dumped core"
Logs:
```
llama_model_load: vocab only - skipping tensors
ggml-cpu.c:8400: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
ggml-cpu.c:8400: GGML_ASSERT(i01 >= 0 && i01 < ne01) failed
SIGSEGV: segmentation violation
PC=0x7fcc733ecc57 m=5 sigcode=1 addr=0x207203fe0
signal arrived during cgo execution
goroutine 8 gp=0xc0000f21c0 m=5 mp=0xc000100008 [syscall]:
runtime.cgocall(0x562b649d47d0, 0xc000073b90)
runtime/cgocall.go:167
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7fcbf115bfa0, {0x2, 0x7fcbf0b80590, 0x0, 0x0, 0x7fcbf0b80da0, 0x7fcbf0b815b, 0x7fcbf0b81dc0, 0x7fcbf1144dc0})
...
```
I've checked my Ollama parameters and this occurs when "truncate": true. Other embedding models properly truncates the input and I see the INFO log in Ollama say "input truncated". I don't see this message with snowflake-arctic-embed-l.
When "truncate" is set to false, I get the expected "input length exceeds maximum context length".
https://ollama.com/library/snowflake-arctic-embed
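For reference, a minimal way to reproduce the difference between the two truncate settings is to hit the embeddings endpoint directly. This sketch assumes a local server on the default port and the `/api/embed` endpoint with its `truncate` flag; the long input is just filler text:
```python
# Minimal reproduction sketch against a local Ollama server.
import requests

long_text = "word " * 2000  # comfortably past the model's 512-token limit

for truncate in (True, False):
    r = requests.post(
        "http://localhost:11434/api/embed",
        json={
            "model": "snowflake-arctic-embed-l",
            "input": long_text,
            "truncate": truncate,
        },
    )
    # expected: truncate=True -> 200 with embeddings,
    #           truncate=False -> "input length exceeds maximum context length"
    # observed on 0.5.4: truncate=True -> 500 and the runner dumps core
    print(truncate, r.status_code)
```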
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4 | bug | low | Critical |
2,780,486,208 | langchain | Presigned URLs can become invalid in `LakeFSLoader.load` when Unstructured is slow | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Unfortunately this will require you to have a LakeFS configuration so it isn't that straightforward to reproduce (and may also depend on your specific LakeFS configuration)... but basically just call `ls_objects` with `presign=True` then wait a while... and then try to access one of the URLs (which is what the `LakeFSLoader` does internally).
```python
import dotenv
import os
import requests
import time
from langchain_community.document_loaders.lakefs import LakeFSClient
client = LakeFSClient(lakefs_access_key=lakefs_access_key,
lakefs_secret_key=lakefs_secret_key,
lakefs_endpoint=lakefs_endpoint)
objs = client.ls_objects(repo, ref, path, presign=True)
path, url = objs[0]
response = requests.get(url)
response.raise_for_status()
time.sleep(1200)
response = requests.get(url)
response.raise_for_status()
```
### Error Message and Stack Trace (if applicable)
```console
Traceback (most recent call last):
File "lakefs_bug.py", line 24, in <module>
response.raise_for_status()
File ".venv/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: http://172.17.0.1:9200/mybucket/data/gb80naqqr9gs72ue9qu0/ctvfgfaqr9gs72ue9qv0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=minioadmin%2F20250110%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250110T151448Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=89f8f515de3ee05589a4d7880b5a4a21957d2e3498ed702c436a9564fe8db285
```
### Description
When loading lots of or large documents with the `LakeFSLoader` it is frequently the case that quite a bit of time passes between the [call to `ls_objects` at line 109](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/lakefs.py#L109) and the [call to `requests.get` on line 172](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/lakefs.py#L172).
This is because Unstructured can be very slow (insisting on "repairing" and OCRing perfectly good PDFs, for instance). The result is that the presigned URLs that LakeFS gives us (in the call to `ls_objects`) are no longer valid once we get around to actually accessing them.
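Until the loader handles this, one way to sidestep the expiry is to download the object bytes immediately after listing, while the presigned URLs are still fresh, and only hand local copies to the slow parsing step. A rough sketch reusing the `client`, `repo`, `ref` and `path` from the example above:
```python
# Rough workaround sketch: fetch bytes right after ls_objects, parse later.
import os
import tempfile
import requests

objs = client.ls_objects(repo, ref, path, presign=True)

local_paths = []
for obj_path, url in objs:
    resp = requests.get(url)
    resp.raise_for_status()
    suffix = os.path.splitext(obj_path)[1]
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
        tmp.write(resp.content)
        local_paths.append((obj_path, tmp.name))

# ...then run the (slow) Unstructured-based parsing over local_paths instead of
# letting LakeFSLoader re-fetch each presigned URL once it has already expired.
```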
### System Info
System Information
------------------
> OS: Linux
> OS Version: #52~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Dec 9 15:00:52 UTC 2
> Python Version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.25
> langchain: 0.3.12
> langchain_community: 0.3.12
> langsmith: 0.2.3
> langchain_nomic: 0.1.4
> langchain_ollama: 0.2.0
> langchain_text_splitters: 0.3.3
> langchain_unstructured: 0.1.6
> langgraph_sdk: 0.1.47
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> nomic: 3.3.4
> numpy: 1.26.4
> ollama: 0.4.4
> onnxruntime: 1.20.1
> orjson: 3.10.12
> packaging: 24.2
> pillow: 10.4.0
> pydantic: 2.9.2
> pydantic-settings: 2.7.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> typing-extensions: 4.12.2
> unstructured-client: 0.27.0
> unstructured[all-docs]: Installed. No version info available.
| 🤖:bug | low | Critical |
2,780,516,324 | tauri | [bug] Crash on MacOS when navigating with a pending invoke | ### Describe the bug
On MacOS there appears to be a race condition when an asynchronous invoke request is in progress while the webview is navigating to a new page. This leads to a panic in `wry::wkwebview::class::url_scheme_handler::start_task::{{closure}}::response::{{closure}}` which causes an immediate abort because it is in an `AssertUnwindSafe` context.
### Reproduction
Reproduction is available here: https://github.com/tactile-eng/tauri-repro
Simply run the app and it should crash after a few seconds. The only code that has changed from the template is in `src/main.js`.
### Expected behavior
Tauri should not crash.
### Full `tauri info` output
```text
> tauri "info"
[✔] Environment
- OS: Mac OS 15.2.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 18.20.5
- pnpm: 9.15.2
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.2.1
- tauri-build 🦀: 2.0.5
- wry 🦀: 0.48.0
- tao 🦀: 0.31.1
- tauri-cli 🦀: 1.5.3
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.2.3
[-] Plugins
- tauri-plugin-opener 🦀: 2.2.3
- @tauri-apps/plugin-opener : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../src
```
### Stack trace
```text
2025-01-10 08:50:56.119 tauri-repro[4081:3055403] +[IMKClient subclass]: chose IMKClient_Modern
2025-01-10 08:50:56.213 tauri-repro[4081:3055403] +[IMKInputSession subclass]: chose IMKInputSession_Modern
thread 'tokio-runtime-worker' panicked at core/src/panicking.rs:221:5:
panic in a function that cannot unwind
stack backtrace:
0: rust_begin_unwind
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:665:5
1: core::panicking::panic_nounwind_fmt::runtime
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:112:18
2: core::panicking::panic_nounwind_fmt
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:122:5
3: core::panicking::panic_nounwind
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:221:5
4: core::panicking::panic_cannot_unwind
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:310:5
5: objc2::exception::try_no_ret::try_objc_execute_closure
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/exception.rs:196:9
6: rust_objc_sys_0_3_try_catch_exception
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc-sys-0.3.5/extern/exception.m:14:9
7: objc2::exception::try_no_ret
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/exception.rs:232:28
8: objc2::exception::catch
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/exception.rs:291:27
9: objc2::runtime::message_receiver::MessageReceiver::send_message
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/runtime/message_receiver.rs:25:15
10: objc2::__macro_helpers::msg_send::MsgSend::send_message
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/__macro_helpers/msg_send.rs:27:31
11: objc2_web_kit::generated::__WKURLSchemeTask::WKURLSchemeTask::didFinish
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/macros/extern_protocol.rs:239:14
12: wry::wkwebview::class::url_scheme_handler::start_task::{{closure}}::response::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wry-0.48.0/src/wkwebview/class/url_scheme_handler.rs:260:19
13: core::ops::function::FnOnce::call_once
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
14: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/panic/unwind_safe.rs:272:9
15: objc2::exception::catch::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/exception.rs:289:27
16: objc2::exception::try_no_ret::try_objc_execute_closure
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/exception.rs:202:13
17: rust_objc_sys_0_3_try_catch_exception
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc-sys-0.3.5/extern/exception.m:14:9
18: objc2::exception::try_no_ret
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/exception.rs:232:28
19: objc2::exception::catch
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/objc2-0.5.2/src/exception.rs:291:27
20: wry::wkwebview::class::url_scheme_handler::start_task::{{closure}}::response
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wry-0.48.0/src/wkwebview/class/url_scheme_handler.rs:259:17
21: wry::wkwebview::class::url_scheme_handler::start_task::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wry-0.48.0/src/wkwebview/class/url_scheme_handler.rs:275:23
22: core::ops::function::FnOnce::call_once{{vtable.shim}}
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
23: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/alloc/src/boxed.rs:2454:9
24: wry::RequestAsyncResponder::respond
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wry-0.48.0/src/lib.rs:345:5
25: tauri_runtime_wry::create_webview::{{closure}}::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-runtime-wry-2.3.0/src/lib.rs:4402:36
26: core::ops::function::FnOnce::call_once{{vtable.shim}}
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
27: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/alloc/src/boxed.rs:2454:9
28: tauri::app::UriSchemeResponder::respond
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-2.2.1/src/app.rs:2116:5
29: tauri::ipc::protocol::get::{{closure}}::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-2.2.1/src/ipc/protocol.rs:58:7
30: tauri::ipc::protocol::get::{{closure}}::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-2.2.1/src/ipc/protocol.rs:141:19
31: core::ops::function::FnOnce::call_once{{vtable.shim}}
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
32: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/alloc/src/boxed.rs:2454:9
33: tauri::webview::Webview<R>::on_message::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-2.2.1/src/webview/mod.rs:1342:11
34: core::ops::function::FnOnce::call_once{{vtable.shim}}
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
35: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/alloc/src/boxed.rs:2454:9
36: tauri::ipc::InvokeResolver<R>::return_result
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-2.2.1/src/ipc/mod.rs:452:5
37: tauri::ipc::InvokeResolver<R>::respond_async_serialized::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-2.2.1/src/ipc/mod.rs:347:7
38: <core::pin::Pin<P> as core::future::future::Future>::poll
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/future/future.rs:123:9
39: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/core.rs:331:17
40: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/loom/std/unsafe_cell.rs:16:9
41: tokio::runtime::task::core::Core<T,S>::poll
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/core.rs:320:13
42: tokio::runtime::task::harness::poll_future::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/harness.rs:532:19
43: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/panic/unwind_safe.rs:272:9
44: std::panicking::try::do_call
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/panicking.rs:557:40
45: ___rust_try
46: std::panicking::try
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/panicking.rs:520:19
47: std::panic::catch_unwind
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/panic.rs:358:14
48: tokio::runtime::task::harness::poll_future
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/harness.rs:520:18
49: tokio::runtime::task::harness::Harness<T,S>::poll_inner
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/harness.rs:209:27
50: tokio::runtime::task::harness::Harness<T,S>::poll
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/harness.rs:154:15
51: tokio::runtime::task::raw::poll
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/raw.rs:271:5
52: tokio::runtime::task::raw::RawTask::poll
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/raw.rs:201:18
53: tokio::runtime::task::LocalNotified<S>::run
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/mod.rs:449:9
54: tokio::runtime::scheduler::multi_thread::worker::Context::run_task::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/scheduler/multi_thread/worker.rs:596:13
55: tokio::runtime::coop::with_budget
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/coop.rs:107:5
56: tokio::runtime::coop::budget
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/coop.rs:73:5
57: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/scheduler/multi_thread/worker.rs:595:9
58: tokio::runtime::scheduler::multi_thread::worker::Context::run
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/scheduler/multi_thread/worker.rs:546:24
59: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/scheduler/multi_thread/worker.rs:511:21
60: tokio::runtime::context::scoped::Scoped<T>::set
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/context/scoped.rs:40:9
61: tokio::runtime::context::set_scheduler::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/context.rs:180:26
62: std::thread::local::LocalKey<T>::try_with
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/thread/local.rs:283:12
63: std::thread::local::LocalKey<T>::with
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/thread/local.rs:260:9
64: tokio::runtime::context::set_scheduler
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/context.rs:180:9
65: tokio::runtime::scheduler::multi_thread::worker::run::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/scheduler/multi_thread/worker.rs:506:9
66: tokio::runtime::context::runtime::enter_runtime
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/context/runtime.rs:65:16
67: tokio::runtime::scheduler::multi_thread::worker::run
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/scheduler/multi_thread/worker.rs:498:5
68: tokio::runtime::scheduler::multi_thread::worker::Launch::launch::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/scheduler/multi_thread/worker.rs:464:45
69: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/blocking/task.rs:42:21
70: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/core.rs:331:17
71: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/loom/std/unsafe_cell.rs:16:9
72: tokio::runtime::task::core::Core<T,S>::poll
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/core.rs:320:13
73: tokio::runtime::task::harness::poll_future::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/harness.rs:532:19
74: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/panic/unwind_safe.rs:272:9
75: std::panicking::try::do_call
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/panicking.rs:557:40
76: ___rust_try
77: std::panicking::try
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/panicking.rs:520:19
78: std::panic::catch_unwind
at /Users/alexmoon/.rustup/toolchains/stable-aarch64-apple-darwin/lib/rustlib/src/rust/library/std/src/panic.rs:358:14
79: tokio::runtime::task::harness::poll_future
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/harness.rs:520:18
80: tokio::runtime::task::harness::Harness<T,S>::poll_inner
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/harness.rs:209:27
81: tokio::runtime::task::harness::Harness<T,S>::poll
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/harness.rs:154:15
82: tokio::runtime::task::raw::poll
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/raw.rs:271:5
83: tokio::runtime::task::raw::RawTask::poll
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/raw.rs:201:18
84: tokio::runtime::task::UnownedTask<S>::run
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/task/mod.rs:486:9
85: tokio::runtime::blocking::pool::Task::run
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/blocking/pool.rs:161:9
86: tokio::runtime::blocking::pool::Inner::run
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/blocking/pool.rs:511:17
87: tokio::runtime::blocking::pool::Spawner::spawn_thread::{{closure}}
at /Users/alexmoon/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.43.0/src/runtime/blocking/pool.rs:469:13
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread caused non-unwinding panic. aborting.
```
### Additional context
_No response_ | type: bug,priority: 1 high,platform: macOS,status: needs triage | low | Critical |
2,780,559,409 | flutter | [a11y][iOS]: ModalBottomSheet's DragHandle missing semantic information | When using a screenreader you can tap on the DragHandle of ModalBottomSheet to dismiss it. iOS VoiceOver will only read 'dismiss' and provide no indication that this is a clickable element. (On Android, TalkBack will say 'Dismiss, Double-Tap to activate'.) Since the DragHandle is tappable when navigating with a screenreader, it should be semantically indicated as a button.
Steps to reproduce:
1. run this:
```
import 'package:flutter/material.dart';
/// Flutter code sample for [showModalBottomSheet].
/// showDragHandle: true was added by author of the issue
void main() => runApp(const BottomSheetApp());
class BottomSheetApp extends StatelessWidget {
const BottomSheetApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
theme: ThemeData(
colorSchemeSeed: const Color(0xff6750a4),
useMaterial3: true,
),
home: Scaffold(
appBar: AppBar(title: const Text('Bottom Sheet Sample')),
body: const BottomSheetExample(),
),
);
}
}
class BottomSheetExample extends StatelessWidget {
const BottomSheetExample({super.key});
@override
Widget build(BuildContext context) {
return Center(
child: ElevatedButton(
child: const Text('showModalBottomSheet'),
onPressed: () {
showModalBottomSheet<void>(
showDragHandle: true,
context: context,
builder: (BuildContext context) {
return SizedBox(
height: 200,
child: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
mainAxisSize: MainAxisSize.min,
children: <Widget>[
const Text('Modal BottomSheet'),
ElevatedButton(
child: const Text('Close BottomSheet'),
onPressed: () => Navigator.pop(context),
),
],
),
),
);
},
);
},
),
);
}
}
```
3. Use iOS VoiceOver
4. Open ModalBottomSheet
5. Move semantic focus to the DragHandle -> No indication that this is a clickable element

| platform-ios,a: accessibility,platform-mac,good first issue,has reproducible steps,P2,team-accessibility,triaged-accessibility,found in release: 3.27,found in release: 3.28 | low | Minor |
2,780,559,802 | deno | Docker container uses massive amount of memory to boot (150MB) | `docker run -it --rm denoland/deno:2.1.5 repl`
150MB from `docker container stats`:
<img width="166" alt="image" src="https://github.com/user-attachments/assets/349ba074-082d-4595-a3cf-83b2be22a701" />
For reference, I observed the following for other container images:
- `docker run -it --rm python:3.12 python`: 11MB
- `docker run -it --rm node:22 node`: 11MB | needs investigation | low | Minor |
2,780,604,811 | pytorch | Multiple tests not run / run as no-ops by `run_test.py` | ### 🐛 Describe the bug
I noticed this while working on https://github.com/pytorch/pytorch/issues/126523
Basically the test suite runner `run_test.py` runs each test file separately or in parallel. It boils down to e.g. executing: `python -bb distributed/optim/test_apply_optimizer_in_backward.py --shard-id=1 --num-shards=1 -v -vv -rfEX -p no:xdist --use-pytest -x --reruns=2`
However for some tests this does effectively nothing. For example https://github.com/pytorch/pytorch/blob/main/test/distributed/optim/test_apply_optimizer_in_backward.py does not contain any code to be executed. The only way the tests would be executed is by running the file with `pytest` instead of `python` or by calling `common_utils.run_tests` as is done in most tests.
I can't imagine this is intentional, is it?
It also applies to e.g. https://github.com/pytorch/pytorch/blob/main/test/distributed/optim/test_named_optimizer, https://github.com/pytorch/pytorch/blob/main/tools/test/test_executorch_signatures.py and a few others
Are the tests intended to be run with pytest instead of `run_test.py` now? It looks like some tests are not compatible with pytest (judging from some code in `run_test.py`).
I also couldn't find how the tests are executed on CI, in order to replicate that on our side.
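For reference, the pattern that makes a test file runnable both under pytest and under the plain `python path/to/test_file.py ...` invocation used by `run_test.py` is the explicit `run_tests()` guard from `common_utils`, which the files listed above do not have. A minimal sketch (class and test names are illustrative):
```python
from torch.testing._internal.common_utils import TestCase, run_tests

class TestApplyOptimizerInBackward(TestCase):
    def test_smoke(self):
        self.assertTrue(True)

if __name__ == "__main__":
    # Without this guard, `python test_file.py` imports the module and exits
    # without running anything, which matches the observed no-op behaviour.
    run_tests()
```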
### Versions
PyTorch 2.3.0 - main
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @mruberry @ZainRizvi @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | high priority,module: tests,triaged | low | Critical |
2,780,606,871 | storybook | [Bug]: composeStories() create errors on Next.js | ### Describe the bug
I'm trying to get stories from /packages/canon, but /website is isolated from the rest of the repo, so I'm using relative paths to reach the stories. If you run the website with `yarn dev` and open the two pages I created, you can see that on the client version (using `use client;`) the components appear correctly, but I get an error in the terminal and a 500 error in the console.
This error doesn't seem to come from fetching the component as if you just console.log(composeStories) you'll see the same error.
Is there something wrong in my setup? Or is this a bug in the function itself?
Thanks 🙏
### Reproduction link
https://codesandbox.io/p/github/cdedreuille/nextjs-compose-stories/main
### Reproduction steps
1. Go to https://codesandbox.io/p/github/cdedreuille/nextjs-compose-stories/main
2. It should install all dependencies
3. You should then see a website with two links on `:3000`
4. On the first link (client side) you should see 3 buttons but in the console you should see a 500 error.
5. You should also see in the terminal an error looking like:
```tsx
TypeError: Cannot read properties of null (reading '0')
at __webpack_require__ (/project/workspace/website/.next/server/webpack-runtime.js:33:42)
at __webpack_require__ (/project/workspace/website/.next/server/webpack-runtime.js:33:42)
at __webpack_require__ (/project/workspace/website/.next/server/webpack-runtime.js:33:42)
at __webpack_require__ (/project/workspace/website/.next/server/webpack-runtime.js:33:42)
at eval (./app/test-client/page.tsx:7:74)
at (ssr)/./app/test-client/page.tsx (/project/workspace/website/.next/server/app/test-client/page.js:184:1)
at Object.__webpack_require__ [as require] (/project/workspace/website/.next/server/webpack-runtime.js:33:42)
```
### System
```bash
System:
OS: macOS 15.2
CPU: (16) arm64 Apple M3 Max
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.18.0 - ~/.nvm/versions/node/v20.18.0/bin/node
Yarn: 3.8.1 - ~/.nvm/versions/node/v20.18.0/bin/yarn <----- active
npm: 10.8.2 - ~/.nvm/versions/node/v20.18.0/bin/npm
Browsers:
Chrome: 131.0.6778.265
Safari: 18.2
```
### Additional context
_No response_ | bug,sev:S3,portable stories | low | Critical |
2,780,608,150 | vscode | Debt: add callback to verify state is changed inside of dispatchKeybinding call in smoketests | To reduce smoketest flakiness, we should change the method `dispatchKeybinding` in the driver to verify the state has changed successfully on key dispatch before proceeding to further code execution. This issue is used to track this debt item. | debt | low | Minor |
2,780,608,713 | langchain | AzureOpenAIWhisperParser fails with DefaultAzureCredential due to incorrect token parameter | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders.parsers.audio import AzureOpenAIWhisperParser
from langchain_core.documents.base import Blob
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
scope = "https://cognitiveservices.azure.com/.default"
token_provider = get_bearer_token_provider(DefaultAzureCredential(), scope)
parser = AzureOpenAIWhisperParser(
azure_endpoint="https://example.openai.azure.com/",
api_version="2024-06-01",
deployment_name="whisper",
azure_ad_token_provider=token_provider,
)
audio_path = "audios/sample.mp3"
audio_blob = Blob(path=audio_path)
documents = parser.lazy_parse(blob=audio_blob)
for doc in documents:
print(doc.page_content)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AuthenticationError Traceback (most recent call last)
Cell In[1], line 21
17 audio_blob = Blob(path=audio_path)
19 documents = parser.lazy_parse(blob=audio_blob)
---> 21 for doc in documents:
22 print(doc.page_content)
File d:\Repos\Python\OpenAI\.venv\Lib\site-packages\langchain_community\document_loaders\parsers\audio.py:205, in AzureOpenAIWhisperParser.lazy_parse(self, blob)
203 try:
204 if is_openai_v1():
--> 205 transcript = self._client.audio.transcriptions.create(
206 model=self.deployment_name,
207 file=file_obj,
208 **self._create_params,
209 )
210 else:
211 transcript = self._client.Audio.transcribe(
212 model=self.deployment_name,
213 deployment_id=self.deployment_name,
214 file=file_obj,
215 **self._create_params,
216 )
File d:\Repos\Python\OpenAI\.venv\Lib\site-packages\openai\resources\audio\transcriptions.py:188, in Transcriptions.create(self, file, model, language, prompt, response_format, temperature, timestamp_granularities, extra_headers, extra_query, extra_body, timeout)
184 # It should be noted that the actual Content-Type header that will be
185 # sent to the server will contain a `boundary` parameter, e.g.
186 # multipart/form-data; boundary=---abc--
187 extra_headers = {"Content-Type": "multipart/form-data", **(extra_headers or {})}
--> 188 return self._post( # type: ignore[return-value]
189 "/audio/transcriptions",
190 body=maybe_transform(body, transcription_create_params.TranscriptionCreateParams),
191 files=files,
192 options=make_request_options(
193 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
194 ),
195 cast_to=_get_response_format_type(response_format),
196 )
File d:\Repos\Python\OpenAI\.venv\Lib\site-packages\openai\_base_client.py:1280, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1266 def post(
1267 self,
1268 path: str,
(...)
1275 stream_cls: type[_StreamT] | None = None,
1276 ) -> ResponseT | _StreamT:
1277 opts = FinalRequestOptions.construct(
1278 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1279 )
-> 1280 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File d:\Repos\Python\OpenAI\.venv\Lib\site-packages\openai\_base_client.py:957, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
954 else:
955 retries_taken = 0
--> 957 return self._request(
958 cast_to=cast_to,
959 options=options,
960 stream=stream,
961 stream_cls=stream_cls,
962 retries_taken=retries_taken,
963 )
File d:\Repos\Python\OpenAI\.venv\Lib\site-packages\openai\_base_client.py:1017, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
1014 log.debug("Encountered Exception", exc_info=True)
1016 if remaining_retries > 0:
-> 1017 return self._retry_request(
1018 input_options,
1019 cast_to,
1020 retries_taken=retries_taken,
1021 stream=stream,
1022 stream_cls=stream_cls,
1023 response_headers=None,
1024 )
1026 log.debug("Raising connection error")
1027 raise APIConnectionError(request=request) from err
File d:\Repos\Python\OpenAI\.venv\Lib\site-packages\openai\_base_client.py:1095, in SyncAPIClient._retry_request(self, options, cast_to, retries_taken, response_headers, stream, stream_cls)
1091 # In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a
1092 # different thread if necessary.
1093 time.sleep(timeout)
-> 1095 return self._request(
1096 options=options,
1097 cast_to=cast_to,
1098 retries_taken=retries_taken + 1,
1099 stream=stream,
1100 stream_cls=stream_cls,
1101 )
File d:\Repos\Python\OpenAI\.venv\Lib\site-packages\openai\_base_client.py:1061, in SyncAPIClient._request(self, cast_to, options, retries_taken, stream, stream_cls)
1058 err.response.read()
1060 log.debug("Re-raising status error")
-> 1061 raise self._make_status_error_from_response(err.response) from None
1063 return self._process_response(
1064 cast_to=cast_to,
1065 options=options,
(...)
1069 retries_taken=retries_taken,
1070 )
AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'}
### Description
When using `AzureOpenAIWhisperParser` with Azure AD authentication through `DefaultAzureCredential`, the parser fails with a 401 unauthorized error. This occurs because the underlying OpenAI client initialization uses an incorrect parameter name for the Azure AD token.
**Current Behavior**
When instantiating `AzureOpenAIWhisperParser` with the `azure_ad_token_provider` parameter, the authentication fails with the following error:
```AuthenticationError: Error code: 401 - {'statusCode': 401, 'message': 'Unauthorized. Access token is missing, invalid, audience is incorrect (https://cognitiveservices.azure.com), or have expired.'}```
**Expected Behavior**
The authentication should succeed when providing a valid Azure AD token provider through the `azure_ad_token_provider` parameter.
**Root Cause**
In the source code of langchain-community, when initializing the OpenAI client, the `azure_ad_token_provider` is incorrectly passed as `azure_ad_token`:
```python
if is_openai_v1():
self._client = openai.AzureOpenAI(
api_key=self.api_key,
azure_endpoint=self.azure_endpoint,
api_version=self.api_version,
max_retries=self.max_retries,
azure_ad_token=self.azure_ad_token_provider, # This is incorrect
)
```
**Proposed Solution**
Update the client initialization to use the correct parameter name:
```python
if is_openai_v1():
self._client = openai.AzureOpenAI(
api_key=self.api_key,
azure_endpoint=self.azure_endpoint,
api_version=self.api_version,
max_retries=self.max_retries,
azure_ad_token_provider=self.azure_ad_token_provider, # Corrected parameter name
)
```
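Until the parameter name is fixed upstream, a possible interim workaround is to overwrite the parser's internal client after construction. This sketch relies on the private `_client` attribute shown in the excerpt above, so it may break in future releases and should be treated as a stopgap only:
```python
# Interim workaround sketch: rebuild the Azure client with the correct
# azure_ad_token_provider after the parser is constructed (uses the private
# _client attribute, so this is a temporary measure).
import openai

parser = AzureOpenAIWhisperParser(
    azure_endpoint="https://example.openai.azure.com/",
    api_version="2024-06-01",
    deployment_name="whisper",
    azure_ad_token_provider=token_provider,
)
parser._client = openai.AzureOpenAI(
    azure_endpoint="https://example.openai.azure.com/",
    api_version="2024-06-01",
    azure_ad_token_provider=token_provider,
)
```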
### System Info
❯ python -m langchain_core.sys_info
System Information
------------------
> OS: Windows
> OS Version: 10.0.26100
> Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.4
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available. | 🤖:bug | low | Critical |
2,780,653,081 | godot | Selection Cube in Gridmap not selecting correctly in some workflows | ### Tested versions
4.3dev7
### System information
Windows 11 Godot 4.4dev7
### Issue description
When working with modular assets (e.g. floors, walls, corners), and hence with cells half the size of the assets, using `centerX=centerY=centerZ=false` for correct alignment, we get the following issue where the walls aren't selected:
https://github.com/user-attachments/assets/285de3ca-8aed-4e3f-9935-0ef21fac65f6
### Steps to reproduce
Above.
### Minimal reproduction project (MRP)
I will provide one later, as I start working on it. | bug,topic:editor,topic:3d | low | Minor |
2,780,666,444 | pytorch | Tabulate not official dependency of PyTorch but needed by features like FlopCounterMode | ```
Traceback (most recent call last):
File "/home/rzou/dev/ocu11/tutorials/recipes_source/torch_compile_user_defined_triton_kernel_tutorial.py", line 338, in <module>
with FlopCounterMode() as flop_counter:
File "/home/rzou/dev/ocu11/pt-ocu11/torch/utils/flop_counter.py", line 726, in __exit__
print(self.get_table(self.depth))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/ocu11/pt-ocu11/torch/utils/flop_counter.py", line 658, in get_table
import tabulate
ModuleNotFoundError: No module named 'tabulate'
``` | triaged,dependency issue,module: flop counter | low | Critical |
2,780,682,813 | next.js | revalidateTag/revalidatePath do not work when directly followed by a redirect in a route handler | ### Link to the code that reproduces this issue
https://github.com/WebDevSimplified/next-cache-redirect-bug
### To Reproduce
1. Run `npm run dev`
2. Navigate to http://localhost:3000
3. Note the random numbers being generated
4. Navigate to http://localhost:3000/revalidate
5. Notice the numbers are still the same cached versions even though `revalidateTag` was called with the proper tag.
### Current vs. Expected behavior
### Expected
When calling `revalidateTag` or `revalidatePath` in a route handler that redirects the user it should invalidate all caches for that tag or path.
### Current
When `revalidateTag`/`revalidatePath` is used within a route handler, it does not actually invalidate the cache for that tag/path when followed by the `redirect` function. The same issue occurs with other similar functions such as `notFound`, `forbidden`, etc. Using `NextResponse.redirect` does work as expected, though, and the cached data is invalidated and refetched.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Education
Available memory (MB): 65300
Available CPU cores: 24
Binaries:
Node: 22.7.0
npm: 10.9.0
Yarn: 1.22.17
pnpm: N/A
Relevant Packages:
next: 15.2.0-canary.3 // Latest available version is detected (15.2.0-canary.3).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
dynamicIO, Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
This seems very related to PR https://github.com/vercel/next.js/pull/70715. This PR was specifically focused on server actions and my best guess is the same problem exists in route handlers as well, but it was not addressed in this PR since this PR was focused solely on server actions. | Navigation,dynamicIO | low | Critical |
2,780,695,333 | vscode | Issues with CSS in development mode | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.2 / main
- OS Version: macOS 15.2
There are several issues that impact the developer experience when developing CSS for vscode itself
1. Load/reload window is very slow
<img width="532" alt="Image" src="https://github.com/user-attachments/assets/b636cb42-d44a-4ee4-b7e5-8f1062400a2b" />
This seems to be related to the use of `@import url()`, which appears to download stylesheets serially. Switching to `<link>` tags locally removed this hotspot.
2. CSS changes are not picked up on reload window
Known issue; see https://github.com/microsoft/vscode/pull/237603#issuecomment-2582407926 and https://github.com/microsoft/vscode/pull/234357.
Also, `CSSDevelopmentService` caches its list of discovered CSS files, so if you add or delete a CSS file, things do not work as expected.
3. Full reload is very cumbersome for simple CSS changes
A `Reload CSS` option that simply refreshes all the CSS on the page would be a much better experience.
Automatically reloading CSS based on file-watch events would be ideal | ESM | low | Critical |
2,780,718,709 | flutter | Need to use rootOverlay for positioning MenuAnchor | After merging #155539
Since `OverlayPortal.targetsRootOverlay` is now used, the root `Overlay` must be used when calculating the position for `_Submenu`
```dart
final RenderBox overlay = Overlay.of(
anchorContext,
rootOverlay: true // NEED ADD THIS
).context.findRenderObject()! as RenderBox;
```
**Line that needs to be fixed**
https://github.com/flutter/flutter/blob/864d4f59dde0afab749d5530cf9902e59ccbaacf/packages/flutter/lib/src/material/menu_anchor.dart#L3644
**How to fix**
add `rootOverlay: true`
**How to reproduce the problem**
Create a nested navigator with a height less than the screen height and put a DropdownMenu there

| framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28 | low | Major |
2,780,725,172 | godot | AudioStreamSynchronized stops playback if 'set_sync_stream' is called while playing | ### Tested versions
- Reproducible in: 4.3.stable and later
- Not reproducible in: 4.2.stable and earlier
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - Intel(R) HD Graphics 5500 (Intel Corporation; 20.19.15.5107) - Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz (4 Threads)
### Issue description
If AudioStreamSynchronized::set_sync_stream is called while playback is in progress, audio will be stopped.
I've described some of my experience with AudioStreamSynchronized on [this bluesky thread](https://bsky.app/profile/did:plc:zximkmsmk54ssy6ekrxpoizl/post/3lcegipaalk2b), for those interested on discussing about it.
There's an opened [PR](https://github.com/godotengine/godot/pull/100534) to fix it.
### Steps to reproduce
- Add a AudioStreamSynchronized to an AudioPlayer with autoplay set to true
- Load another stream in a script, to be set later
- Call set_sync_stream(another_stream) from AudioStreamSynchronized's instance when a chosen input is pressed
### Minimal reproduction project (MRP)
You can reproduce this behavior [here](https://github.com/adriano-sudario/godot_streams_issues_mrp), with detailed information, running the res://scene unexpected_behaviors/stream_sync_stops_when_set.tscn | bug,topic:audio | low | Minor |
2,780,758,400 | pytorch | torch.accelerator.is_available() raise RuntimeError if no available CUDA/XPU devices | ### 🐛 Describe the bug
```python
>>> import torch
>>> torch.accelerator.is_available()
/home/guangyey/repos/stock-pytorch/torch/xpu/__init__.py:120: UserWarning: XPU device count is zero! (Triggered internally at /home/guangyey/repos/stock-pytorch/c10/xpu/XPUFunctions.cpp:117.)
torch._C._xpu_init()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guangyey/repos/stock-pytorch/torch/accelerator/__init__.py", line 46, in is_available
return device_count() > 0
File "/home/guangyey/repos/stock-pytorch/torch/accelerator/__init__.py", line 33, in device_count
return torch._C._accelerator_deviceCount()
File "/home/guangyey/repos/stock-pytorch/torch/xpu/__init__.py", line 120, in _lazy_init
torch._C._xpu_init()
RuntimeError: No XPU devices are available.
```
The root cause is that https://github.com/pytorch/pytorch/pull/144368 changed the current accelerator detection from runtime to compile time. The call stack now follows this flow `torch.accelerator.device_count` -> [device_lazy_init](https://github.com/pytorch/pytorch/blob/7a93a58b3c9bd528b86d76aaa924d7ad43be0864/torch/csrc/DeviceAccelerator.cpp#L16) -> [lazyInitDevice](https://github.com/pytorch/pytorch/blob/7a93a58b3c9bd528b86d76aaa924d7ad43be0864/torch/csrc/xpu/Module.cpp#L412) -> [device_count_ensure_non_zero](https://github.com/pytorch/pytorch/blob/7a93a58b3c9bd528b86d76aaa924d7ad43be0864/aten/src/ATen/xpu/detail/XPUHooks.cpp#L14)
As a result, a RuntimeError is raised if a user runs a PyTorch wheel built with XPU on a machine without any available XPU devices. The same issue applies to CUDA as well.
### Versions
Collecting environment information...
PyTorch version: 2.7.0a0+gitcfd08f8
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-32-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 42 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 4838.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.13.0
[pip3] torch==2.7.0a0+gitcfd08f8
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] torch 2.7.0a0+gitcfd08f8 dev_0 <develop>
[conda] torchfix 0.4.0 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD @EikanWang | high priority,triaged,module: regression,bug,module: accelerator | low | Critical |
2,780,786,665 | godot | print() of empty array using format string does not work | ### Tested versions
Godot Engine v4.4.dev7.official.46c8f8c5c
### System information
Godot v4.4.dev7 - Windows 10 (build 19045)
### Issue description
The print of an empty array with a format string does not work as expected.
The print of an empty dictionary with a format string works, but has 2 spaces between the brackets.
```
output test1: %s
Expected output: []
Debugger Message: ... _ready(): String formatting error: not enough arguments for format string. ...
output test2: %s
Expected output: []
Debugger Message: ... _ready(): String formatting error: not enough arguments for format string. ...
output test3: %s
Expected output: []
Debugger Message: ... _ready(): String formatting error: not enough arguments for format string. ...
output test4: { }
Expected output: {}
output test5: 0
Expected output: 0
output test6: []
Expected output: []
```
### Steps to reproduce
```
extends Control
func _ready() -> void:
var test1 : Array
print('output test1: %s' % test1)
print('Expected output: []')
print('Debugger Message: ... _ready(): String formatting error: not enough arguments for format string. ...')
print()
var test2 : Array = []
print('output test2: %s' % test2)
print('Expected output: []')
print('Debugger Message: ... _ready(): String formatting error: not enough arguments for format string. ...')
print()
var test3 : Array [int] = []
print('output test3: %s' % test3)
print('Expected output: []')
print('Debugger Message: ... _ready(): String formatting error: not enough arguments for format string. ...')
print()
var test4 : Dictionary
print('output test4: %s' % test4)
print('Expected output: {}')
print()
var test5 : int
print('output test5: %s' % test5)
print('Expected output: 0')
print()
var test6 : Array
prints('output test6:', test6)
print('Expected output: []')
```
### Minimal reproduction project (MRP)
see steps to reproduce | discussion,topic:gdscript,documentation | low | Critical |
2,780,835,931 | react-native | Issue when displaying a modal after dismissing another modal | ### Description
Before RN 0.76.5, on iOS it was possible to show a modal and, after dismissing it, show another one.
Now, when doing the exact same thing on the **old architecture**, the second modal does not show up.
https://github.com/facebook/react-native/issues/47694
https://github.com/facebook/react-native/issues/48245
https://github.com/facebook/react-native/issues/48559
The error **is not** happening on **new architecture**
### Steps to reproduce
1. Create two different modals
2. Show the first one by clicking on a button
3. Then, create a button that will dismiss the first modal and display the second one (see the sketch after these steps)
4. The second one won't show up
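A minimal sketch of these steps (component shape, state names, and button titles are assumptions; this is not taken from the linked reproducer):

```tsx
// App.tsx — hypothetical two-modal repro
import React, { useState } from "react";
import { Button, Modal, View } from "react-native";

export default function App() {
  const [firstVisible, setFirstVisible] = useState(false);
  const [secondVisible, setSecondVisible] = useState(false);

  return (
    <View style={{ flex: 1, justifyContent: "center" }}>
      <Button title="Open first modal" onPress={() => setFirstVisible(true)} />
      <Modal visible={firstVisible} animationType="slide">
        <Button
          title="Close modal"
          onPress={() => {
            // Dismiss the first modal and immediately show the second one.
            setFirstVisible(false);
            setSecondVisible(true); // per the report, this modal never appears on the old architecture
          }}
        />
      </Modal>
      <Modal visible={secondVisible} animationType="slide">
        <Button title="Close" onPress={() => setSecondVisible(false)} />
      </Modal>
    </View>
  );
}
```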
### React Native Version
0.76.5
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.2
CPU: (10) arm64 Apple M1 Max
Memory: 482.95 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.12.0
path: ~/.nvm/versions/node/v22.12.0/bin/node
Yarn: Not Found
npm:
version: 8.19.4
path: ~/code/yonitou/hygo/node_modules/.bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.11.2
path: /Users/yonitouboul/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2411.12071903
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.1
path: /usr/bin/javac
Ruby:
version: 2.7.4
path: /Users/yonitouboul/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react: Not Found
react-native: Not Found
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
No errors logs
```
### Reproducer
https://github.com/yonitou/modal-bug
### Screenshots and Videos
When I click on the button "Close modal", it should dismiss the red one and show the blue one but the blue one never shows.
You will notice that if the **new architecture** is enabled, the problem disappears.
https://github.com/user-attachments/assets/72aca692-d90d-40f7-b500-ef574ae26658
| Component: Modal,Needs: Triage :mag:,Newer Patch Available | medium | Critical |
2,780,880,913 | vscode | VSCode claims no JSON formatter installed, but 5 seconds later formats the document just fine |
Type: <b>Bug</b>
Everything has to be done pretty fast after VS Code starts.
0. Start with no VSCode running.
1. Copy some JSON document into clipboard.
2. Start VSCode.
3. As soon as it starts, Ctrl+N to create an editor
4. Paste the clipboard content as soon as editor opens
5. Press F1 and type "Format" and have "Format document" selected, don't press enter yet.
6. As soon as pasted text is recognized as JSON and syntax highlight kicks in, press enter to format document.
7. There is a good chance you'd see "There is no formatter installed for JSON files" message here. Pretty sure VSCode ships with one.
8. Wait 5-10 seconds, try formatting again, it works this time.
It sounds like you have to hurry up to get it, but it happens pretty much every time I do it and is very annoying.
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz (8 x 3408)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.92GB (13.99GB free)|
|Process Argv|--crash-reporter-id 22bb98af-524b-4214-a58b-fbfebdd440c9|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (13)</summary>
Extension|Author (truncated)|Version
---|---|---
xml|Dot|2.5.1
copilot|Git|1.256.0
copilot-chat|Git|0.23.2
vscode-azureresourcegroups|ms-|0.10.2
vscode-bicep|ms-|0.32.4
vscode-devskim|MS-|1.0.51
vscode-dotnet-runtime|ms-|2.2.3
scope-vscode-ext|ms-|1.0.1
azure-account|ms-|0.12.0
powershell|ms-|2024.4.0
vscode-xml|red|0.27.2
rust-analyzer|rus|0.3.2257
even-better-toml|tam|0.21.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter -->
 | formatting | low | Critical |
2,780,881,532 | flutter | Update multiway AGP/Java/Gradle compatibility logic to also include a check for kotlin | https://kotlinlang.org/docs/whatsnew20.html#current-k2-compiler-limitations
Kotlin 2.0 requires AGP 8.3 or higher, but that error condition is not evaluated in our multi-way compatibility logic.
packages/flutter_tools/gradle/src/main/kotlin/dependency_version_checker.gradle.kts
Context https://github.com/flutter/flutter/pull/160974 | platform-android,t: gradle,P1,team-android,triaged-android | medium | Critical |
2,780,891,309 | transformers | Better handling of hardcoded component in PretrainedModel.from_pretrained. | ### System Info
- `transformers` version: 4.42.0
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.2
- Accelerate version: 0.28.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes
- Using GPU in script?: yes
- GPU type: Tesla V100-SXM2-32GB
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The hardcoded component that replaces key names in the loaded model needs better handling (see snippets below). I had named a few variables `beta` and `gamma` in my layers; the `from_pretrained` function was replacing these names with `bias` and `weight`, so the loaded model was performing differently.
A warning/error should probably be raised if these names are part of layer names in the architecture, to avoid trouble at a later stage of development.
https://github.com/huggingface/transformers/blob/15bd3e61f8d3680ca472c9314ad07584d20f7b81/src/transformers/modeling_utils.py#L4338C1-L4358C19
```python
@staticmethod
def _fix_state_dict_key_on_load(key):
"""Replace legacy parameter names with their modern equivalents. E.g. beta -> bias, gamma -> weight."""
if "beta" in key:
return key.replace("beta", "bias")
if "gamma" in key:
return key.replace("gamma", "weight")
# to avoid logging parametrized weight norm renaming
if hasattr(nn.utils.parametrizations, "weight_norm"):
if "weight_g" in key:
return key.replace("weight_g", "parametrizations.weight.original0")
if "weight_v" in key:
return key.replace("weight_v", "parametrizations.weight.original1")
else:
if "parametrizations.weight.original0" in key:
return key.replace("parametrizations.weight.original0", "weight_g")
if "parametrizations.weight.original1" in key:
return key.replace("parametrizations.weight.original1", "weight_v")
return key
```
### Expected behavior
Loading of the pre-trained model should not raise missing/unexpected layer warnings. | bug | low | Critical |
2,780,908,985 | flutter | bot_update failing and/or timing out due to upstream issue with Git hosts | Edit: there is an upstream issue in Google's internal git hosts that is causing this issue (I've also closed and forwarded https://github.com/flutter/flutter/issues/161448 here). I am tracking progress on that, and will update this issue when we have recovered.
---
For example: https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8726111610429262673/+/u/Checkout_source_code__2_/bot_update/stdout
```
2025-01-10 09:22:56.990732 ===Running git config --global --unset core.trustctime ===
In directory: /b/s/w/ir/cache/builder
2025-01-10 09:22:57.001061 ===Succeeded in 0.0 mins of git config --global --unset core.trustctime ===
Traceback (most recent call last):
File "/b/s/w/ir/kitchen-checkout/depot_tools/recipes/recipe_modules/bot_update/resources/bot_update.py", line 1211, in <module>
sys.exit(main())
^^^^^^
File "/b/s/w/ir/kitchen-checkout/depot_tools/recipes/recipe_modules/bot_update/resources/bot_update.py", line 1195, in main
checkout(options, git_slns, specs, revisions, step_text)
File "/b/s/w/ir/kitchen-checkout/depot_tools/recipes/recipe_modules/bot_update/resources/bot_update.py", line 1094, in checkout
ensure_checkout(**checkout_parameters)
File "/b/s/w/ir/kitchen-checkout/depot_tools/recipes/recipe_modules/bot_update/resources/bot_update.py", line 659, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/b/s/w/ir/kitchen-checkout/depot_tools/recipes/recipe_modules/bot_update/resources/bot_update.py", line 824, in ensure_checkout
git_checkouts(solutions, revisions, refs, no_fetch_tags, git_cache_dir,
File "/b/s/w/ir/kitchen-checkout/depot_tools/recipes/recipe_modules/bot_update/resources/bot_update.py", line 668, in git_checkouts
_git_checkout(sln, sln_dir, revisions, refs, no_fetch_tags, git_cache_dir,
File "/b/s/w/ir/kitchen-checkout/depot_tools/recipes/recipe_modules/bot_update/resources/bot_update.py", line 697, in _git_checkout
git(*populate_cmd)
File "/b/s/w/ir/kitchen-checkout/depot_tools/recipes/recipe_modules/bot_update/resources/bot_update.py", line 257, in git
return call(*cmd, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/b/s/w/ir/kitchen-checkout/depot_tools/recipes/recipe_modules/bot_update/resources/bot_update.py", line 235, in call
raise SubprocessFailed('%s failed with code %d in %s.' %
SubprocessFailed: ('/home/chrome-bot/.cache/vpython-root.1000/store/python_venv-8ngg3io865uup2tka5tn5p88p4/contents/bin/python3 -u /b/s/w/ir/kitchen-checkout/depot_tools/git_cache.py populate -v --cache-dir /b/s/w/ir/cache/git https://flutter.googlesource.com/mirrors/flutter --reset-fetch-config --commit ee4d46d43aaf996675fd09b1c4185f818ef166df --ref refs/heads/gh-readonly-queue/master/pr-161431-864d4f59dde0afab749d5530cf9902e59ccbaacf failed with code -9 in /b/s/w/ir/cache/builder.', -9, '/b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter has 15 .pack files, re-bootstrapping if >50 or ==0\n864d4f59dde0afab749d5530cf9902e59ccbaacf refs/heads/master\nrunning "git --git-dir /b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter config --unset-all remote.origin.fetch" in "/b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter"\nrunning "git --git-dir /b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter config gc.autodetach 0" in "/b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter"\nrunning "git --git-dir /b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter config core.deltaBaseCacheLimit 2g" in "/b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter"\nrunning "git --git-dir /b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter config remote.origin.url https://flutter.googlesource.com/mirrors/flutter" in "/b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter"\nrunning "git --git-dir /b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter config --replace-all remote.origin.fetch +refs/heads/*:refs/heads/* \\+refs/heads/\\*:.*" in "/b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter"\nrunning "git --git-dir /b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter config --replace-all remote.origin.fetch +refs/heads/gh-readonly-queue/master/pr-161431-864d4f59dde0afab749d5530cf9902e59ccbaacf:refs/heads/gh-readonly-queue/master/pr-161431-864d4f59dde0afab749d5530cf9902e59ccbaacf \\+refs/heads/gh-readonly-queue/master/pr-161431-864d4f59dde0afab749d5530cf9902e59ccbaacf:.*" in "/b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter"\nFetching +refs/heads/*:refs/heads/*\nrunning "git --git-dir /b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter fetch -v --progress --prune origin +refs/heads/*:refs/heads/*" in "/b/s/w/ir/cache/git/flutter.googlesource.com-mirrors-flutter"\n')
``` | team-infra,P1 | medium | Critical |
2,780,912,643 | go | cmd/cgo: unused parameter when exporting Go function | ### Go version
go version go1.22.10 darwin/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='auto'
GOARCH='amd64'
GOBIN=''
GOCACHE='/tmp/.gocache'
GOENV='/Users/rittneje/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/rittneje/test/pkg/mod'
GONOPROXY='[redacted]'
GONOSUMDB='[redacted]'
GOOS='darwin'
GOPATH='/Users/rittneje/test'
GOPRIVATE='[redacted]'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/rittneje/go1.22.10'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/Users/rittneje/go1.22.10/pkg/tool/darwin_amd64'
GOVCS='[redacted]'
GOVERSION='go1.22.10'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/rittneje/test/src/cgotest/go.mod'
GOWORK='/Users/rittneje/test/go.work'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/kf/kr7_s3xx0l12zbj3jrn082hmzy5gvy/T/go-build3787749113=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I am writing an application that uses cgo. For this reason, I want to compile with all warnings as errors.
Here is a minimal reproducer of the issue.
```go
package main
/*
#cgo CFLAGS: -Werror -Wunused-parameter
*/
import "C"
func main() {
}
//export foo
func foo() {
}
```
### What did you see happen?
```
$ go build
# command-line-arguments
cgo-generated-wrappers:1:37: error: unused parameter 'p' [-Werror,-Wunused-parameter]
```
I believe it is referring to _cgo_main.c, which looks like this:
```c
#include <stddef.h>
int main() { return 0; }
void crosscall2(void(*fn)(void*) __attribute__((unused)), void *a __attribute__((unused)), int c __attribute__((unused)), size_t ctxt __attribute__((unused))) { }
size_t _cgo_wait_runtime_init_done(void) { return 0; }
void _cgo_release_context(size_t ctxt __attribute__((unused))) { }
char* _cgo_topofstack(void) { return (char*)0; }
void _cgo_allocate(void *a __attribute__((unused)), int c __attribute__((unused))) { }
void _cgo_panic(void *a __attribute__((unused)), int c __attribute__((unused))) { }
void _cgo_reginit(void) { }
#line 1 "cgo-generated-wrappers"
void _cgoexp_d15c4901095e_foo(void* p){}
```
### What did you expect to see?
No errors. | NeedsInvestigation,compiler/runtime,BugReport | low | Critical |
2,780,967,506 | PowerToys | Fancy Zones corrupting screen snipper Snagit | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
Simply turning FancyZones on and off both corrupts and corrects the conflict.
### ✔️ Expected Behavior
To be able to clip areas of my screens when the capture button on my Snagit is pressed.
### ❌ Actual Behavior
With FancyZones enabled, whenever I click the capture button for a screen grab, my (3) screens all shift their entire content about 60% down the screens in unison, and the snipping area that is enabled is now only the lower 40% of my screens, which actually only shows the top 40% of my screen content (since it's been lowered that amount).
When I disable FancyZones, the problem does not exist.
[PowerToysReport_2025-01-10-14-21-16.zip](https://github.com/user-attachments/files/18381519/PowerToysReport_2025-01-10-14-21-16.zip)
### Other Software
Snagit by Techsmith
Version 2024.3.0 (build 4481) (Direct2D 1.1) - 11/12/24 | Issue-Bug,Needs-Triage | low | Minor |
2,780,977,460 | go | cmd/cgo: _GoStringLen and _GoStringPtr implicitly declared when exporting Go function | ### Go version
go version go1.22.10 darwin/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='auto'
GOARCH='amd64'
GOBIN=''
GOCACHE='/tmp/.gocache'
GOENV='/Users/rittneje/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/rittneje/test/pkg/mod'
GONOPROXY='[redacted]'
GONOSUMDB='[redacted]'
GOOS='darwin'
GOPATH='/Users/rittneje/test'
GOPRIVATE='[redacted]'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/rittneje/go1.22.10'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/Users/rittneje/go1.22.10/pkg/tool/darwin_amd64'
GOVCS='[redacted]'
GOVERSION='go1.22.10'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/rittneje/test/src/cgotest/go.mod'
GOWORK='/Users/rittneje/test/go.work'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/kf/kr7_s3xx0l12zbj3jrn082hmzy5gvy/T/go-build3787749113=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I am writing an application that uses cgo. For this reason, I want to compile with all warnings as errors.
Here is a minimal reproducer:
```go
package main
/*
#cgo CFLAGS: -Werror -Wimplicit-function-declaration
void bar(_GoString_ str) {
_GoStringLen(str);
_GoStringPtr(str);
}
*/
import "C"
func main() {
}
//export foo
func foo() {
}
```
### What did you see happen?
```
$ go build
# cgotest
In file included from _cgo_export.c:4:
main.go:7:2: error: call to undeclared function '_GoStringLen'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
main.go:8:2: error: call to undeclared function '_GoStringPtr'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
```
Without the `//export foo` directive, it compiles without issue.
### What did you expect to see?
No errors. | NeedsInvestigation,compiler/runtime,BugReport | low | Critical |
2,780,983,203 | rust | Trait aliases make it possible to sidestep various syntactic trait bound checks | Similar to how type aliases allow you to circumvent some syntactic/syntax-driven[^1] checks made by AST lowering (#132212), you can utilize trait aliases to bypass certain restrictions applying to trait bounds.
```rs
#![feature(trait_alias)]
trait SuperMaybeNeg: ?Sized {} // 🔴 REJECTED: `?Trait` is not permitted in supertraits
trait SuperMaybePos: MaybeSized {} // 🟢 Workaround: ACCEPTED
type DynMaybeNeg = dyn std::io::Write + ?Sized; // 🔴 REJECTED: `?Trait` is not permitted in trait object types
type DynMaybePos = dyn std::io::Write + MaybeSized; // 🟢 Workaround: ACCEPTED
fn noop_maybe_neg() where i32: ?Sized {} // 🔴 REJECTED: `?Trait` bounds are only permitted at the point where a type parameter is declared
fn noop_maybe_pos() where i32: MaybeSized {} // 🟢 Workaround: ACCEPTED
trait MaybeSized = ?Sized;
```
```rs
#![feature(trait_alias, const_trait_impl)]
const fn incompat_modifs_neg<T: const ?Trait>() {} // 🔴 REJECTED: const` trait not allowed with `?` trait polarity modifier
const fn incompat_modifs_pos<T: ?ConstTrait>() {} // 🟢 Workaround: ACCEPTED
trait ConstTrait = const Trait; // alternatively, `async Fn()` under feat `async_trait_bounds`
#[const_trait] trait Trait {}
```
```rs
#![feature(trait_alias)]
type DupeAssocNeg = dyn std::ops::Deref<Target = String, Target = i32>; // 🔴 REJECTED: the value of the associated type `Target` in trait `Deref` is already specified
type DupeAssocPos = dyn AssocFixed<Target = i32>; // 🟢 Workaround: ACCEPTED
//^ This bypasses a HIR ty lowering check, not an AST validation one.
type DynAtbNeg = dyn std::ops::Deref<Target: Copy>; // 🔴 REJECTED: associated type bounds are not allowed in `dyn` types
type DynAtbPosIsh<T> = dyn AssocBounded<Target = T>; // 🟢 Workaround-ish: ACCEPTED
trait AssocFixed = std::ops::Deref<Target = String>;
trait AssocBounded = std::ops::Deref<Target: Copy>;
```
```rs
#![feature(trait_alias, return_type_notation)]
struct DynRtnNeg(dyn Trait<method(..): Copy>); // 🔴 REJECTED: associated type bounds are not allowed in `dyn` types
// return type notation is not allowed to use type equality
struct DynRtnPos(dyn Rtn); // 🟢 Workaround: ACCEPTED
trait Trait { fn method(&self) -> impl Sized where Self: Sized; }
trait Rtn = Trait<method(..): Copy>;
```
This either calls into question the very existence of these restrictions or it demonstrates that all of these checks ought to run "later" (i.e., after trait alias expansion).
[^1]: I.e., checks on the AST or HIR without any sort of prior substitution/expansion/normalization/... | T-compiler,C-bug,F-trait_alias,requires-nightly,T-types | low | Minor |
2,781,044,098 | PowerToys | Move mouse pointer to default button on dialog/message boxes | ### Description of the new feature / enhancement
It would be very handy to have the option to move the mouse pointer to the default button on dialog and message boxes, either automatically when they appear or with a hotkey. I used to use a product called Actual Windows Manager that offered this feature (automatic but not "on demand") and it saves a lot of mouse movement.
### Scenario when this would be used?
Whenever you have to move the pointer to click a default button it slows you down.
### Supporting information
_No response_ | Needs-Triage | low | Major |
2,781,044,098 | pytorch | lintrunner has stale errors | ### 🐛 Describe the bug
Sometimes lintrunner will have stale errors even though the errors no longer exist (eg. if you switch back to a clean main commit). Here's a small repro:
```
ghstack checkout https://github.com/pytorch/pytorch/pull/144263
lintrunner -a
git checkout --detach origin/main
lintrunner -a
```
Notice the errors are still there even though we are on a clean main
```
(/home/bobren/local/b/pytorch-env) [15:02] devgpu009:/home/bobren/local/b/pytorch lintrunner -a
Warning: Could not find a lintrunner config at: '.lintrunner.private.toml'. Continuing without using configuration file.
FLAKE8 success!
CLANGFORMAT success!
MYPY failure
MYPYSTRICT success!
CLANGTIDY success!
TYPEIGNORE success!
NOQA success!
TYPENOSKIP success!
NATIVEFUNCTIONS success!
GHA success!
NEWLINE success!
SPACES success!
TABS success!
C10_UNUSED success!
INCLUDE success!
C10_NODISCARD success!
ERROR_PRONE_ISINSTANCE success!
PYBIND11_INCLUDE success!
PYBIND11_SPECIALIZATION success!
EXEC success!
PYPIDEP success!
CUBINCLUDE success!
ROOT_LOGGING success!
RAWCUDA success!
RAWCUDADEVICE success!
DEPLOY_DETECTION success!
CMAKE success!
ACTIONLINT success!
SHELLCHECK success!
TESTOWNERS success!
CALL_ONCE success!
TEST_HAS_MAIN success!
WORKFLOWSYNC success!
ONCE_FLAG success!
CONTEXT_DECORATOR success!
NO_WORKFLOWS_ON_FORK success!
PYFMT success!
BAZEL_LINTER success!
COPYRIGHT success!
LINTRUNNER_VERSION success!
RUFF success!
MERGE_CONFLICTLESS_CSV success!
META_NO_CREATE_UNBACKED success!
ATEN_CPU_GPU_AGNOSTIC success!
IMPORT_LINTER success!
SET_LINTER success!
DOCSTRING_LINTER success!
>>> Lint for torch/_functorch/_activation_checkpointing/graph_info_provider.py:
Error (MYPY) [attr-defined]
Module has no attribute "viridis"
276 | vmin=min(self.get_knapsack_memory_input()),
277 | vmax=max(self.get_knapsack_memory_input()),
278 | )
>>> 279 | cmap = cm.viridis
280 |
281 | # Assign colors based on memory
282 | node_colors = [
>>> Lint for torch/fx/experimental/proxy_tensor.py:
Error (MYPY) [attr-defined]
"Thunk[Proxy]" has no attribute "proxy"
1085 |
1086 | def unwrap_proxy(self, e: T) -> object:
1087 | if isinstance(e, Tensor):
>>> 1088 | return get_proxy_slot(e, self, e, lambda x: x.proxy)
1089 | elif isinstance(e, py_sym_types):
1090 | return get_proxy_slot(e, self, e, lambda e: e.force())
1091 | elif isinstance(e, _AnyScriptObject):
>>> Lint for torch/testing/_internal/common_utils.py:
Error (MYPY) [import-not-found]
Cannot find implementation or library stub for module named "pytest"
101 |import torch.utils._pytree as pytree
102 |from torch.utils import cpp_extension
103 |try:
>>> 104 | import pytest
105 | has_pytest = True
106 |except ImportError:
107 | has_pytest = False
Successfully applied all patches.
(/home/bobren/local/b/pytorch-env) [15:02] devgpu009:/home/bobren/local/b/pytorch git stash
Saved working directory and index state WIP on (no branch): 5c94ea34c52 Migrate from Tuple -> tuple in torch/_functorch
(/home/bobren/local/b/pytorch-env) [15:02] devgpu009:/home/bobren/local/b/pytorch git checkout --detach origin/main
Previous HEAD position was 5c94ea34c52 Migrate from Tuple -> tuple in torch/_functorch
HEAD is now at c7f12a4a7b8 [MPSInductor] Speedup maximum/minumum ops (#144581)
(/home/bobren/local/b/pytorch-env) [15:02] devgpu009:/home/bobren/local/b/pytorch lintrunner -a
Warning: Could not find a lintrunner config at: '.lintrunner.private.toml'. Continuing without using configuration file.
FLAKE8 success!
CLANGFORMAT success!
MYPY failure
MYPYSTRICT success!
CLANGTIDY success!
TYPENOSKIP success!
TYPEIGNORE success!
NOQA success!
NATIVEFUNCTIONS success!
NEWLINE success!
GHA success!
TABS success!
SPACES success!
C10_UNUSED success!
C10_NODISCARD success!
PYBIND11_INCLUDE success!
INCLUDE success!
PYBIND11_SPECIALIZATION success!
ERROR_PRONE_ISINSTANCE success!
EXEC success!
RAWCUDA success!
DEPLOY_DETECTION success!
RAWCUDADEVICE success!
CUBINCLUDE success!
PYPIDEP success!
ROOT_LOGGING success!
CMAKE success!
SHELLCHECK success!
ACTIONLINT success!
TESTOWNERS success!
CONTEXT_DECORATOR success!
TEST_HAS_MAIN success!
CALL_ONCE success!
ONCE_FLAG success!
WORKFLOWSYNC success!
NO_WORKFLOWS_ON_FORK success!
PYFMT success!
COPYRIGHT success!
BAZEL_LINTER success!
RUFF success!
LINTRUNNER_VERSION success!
MERGE_CONFLICTLESS_CSV success!
META_NO_CREATE_UNBACKED success!
ATEN_CPU_GPU_AGNOSTIC success!
DOCSTRING_LINTER success!
IMPORT_LINTER success!
SET_LINTER success!
>>> Lint for torch/_functorch/_activation_checkpointing/graph_info_provider.py:
Error (MYPY) [attr-defined]
Module has no attribute "viridis"
276 | vmin=min(self.get_knapsack_memory_input()),
277 | vmax=max(self.get_knapsack_memory_input()),
278 | )
>>> 279 | cmap = cm.viridis
280 |
281 | # Assign colors based on memory
282 | node_colors = [
>>> Lint for torch/fx/experimental/proxy_tensor.py:
Error (MYPY) [attr-defined]
"Thunk[Proxy]" has no attribute "proxy"
1085 |
1086 | def unwrap_proxy(self, e: T) -> object:
1087 | if isinstance(e, Tensor):
>>> 1088 | return get_proxy_slot(e, self, e, lambda x: x.proxy)
1089 | elif isinstance(e, py_sym_types):
1090 | return get_proxy_slot(e, self, e, lambda e: e.force())
1091 | elif isinstance(e, _AnyScriptObject):
>>> Lint for torch/testing/_internal/common_utils.py:
Error (MYPY) [import-not-found]
Cannot find implementation or library stub for module named "pytest"
100 |import torch.utils._pytree as pytree
101 |from torch.utils import cpp_extension
102 |try:
>>> 103 | import pytest
104 | has_pytest = True
105 |except ImportError:
106 | has_pytest = False
Successfully applied all patches.
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.34
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk15_zion_2630_gf27365f948db-x86_64-with-glibc2.34
Is CUDA available: N/A
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 535.154.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 76%
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4792.43
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.2.0+git35c6c7c6
[pip3] torch==2.6.0a0+git2966fb3
[pip3] torchaudio==2.5.0a0+332760d
[pip3] torchdata==0.10.0a0+77bf3d1
[pip3] torchtext==0.17.0a0+1d4ce73
[pip3] torchvision==0.20.0a0+b33aef4
[conda] blas 1.0 mkl
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.11 py310h5eee18b_0
[conda] mkl_random 1.2.8 py310h1128e8f_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git35c6c7c6 pypi_0 pypi
[conda] torch 2.6.0a0+git2966fb3 dev_0 <develop>
[conda] torchaudio 2.5.0a0+332760d dev_0 <develop>
[conda] torchdata 0.10.0a0+77bf3d1 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchtext 0.17.0a0+1d4ce73 dev_0 <develop>
[conda] torchvision 0.20.0a0+b33aef4 dev_0 <develop>
cc @ZainRizvi @kit1980 @huydhn @clee2000 | module: lint,triaged,module: devx | low | Critical |
2,781,096,545 | vscode | The terminal is glitching when it's split in groups |
Type: <b>Bug</b>
This bug happens when you open two or more terminals and split them together.
Text gets "doubled" and incorrectly sized in one of the terminals, and you can't fix it unless you close all the terminals.
VS Code version: Code - Insiders 1.97.0-insider (f80816ab8e21c65ed7f1f7e08ccdbffae63610c6, 2025-01-10T09:53:30.658Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 7800X3D 8-Core Processor (16 x 4192)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.62GB (17.49GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (13)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-eslint|dba|3.0.10
javascript-ejs-support|Dig|1.3.3
prettier-vscode|esb|11.0.0
vscode-language-pack-it|MS-|1.97.2025010809
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
remote-explorer|ms-|0.4.3
material-icon-theme|PKi|5.17.0
prisma|Pri|6.2.1
open-in-browser|tec|2.0.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
</details>
<!-- generated by issue reporter --> | info-needed,terminal-rendering | low | Critical |
2,781,114,028 | PowerToys | Quick Accent - En dash disappeared from hyphen options | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Quick Accent
### Steps to reproduce
Hold the hyphen key, then the activation key (space, for me), to pull up the hyphen's Quick Accent options.
### ✔️ Expected Behavior
En dash (–) should appear as one of the options presented in the hyphen key's Quick Accent menu.
### ❌ Actual Behavior
The en dash (–) is no longer available.
### Other Software
_No response_ | Issue-Bug,Needs-Repro,Needs-Triage,Needs-Team-Response | low | Minor |
2,781,141,955 | tauri | [feat] Add way to check if Tauri's initialization scripts have run | ### Describe the problem
Tauri's initialization scripts may not have executed when the webview starts executing user scripts. This can cause Tauri API calls to fail unexpectedly. For example calling `await listen(...)` at a top level will fail if the `__TAURI_INTERNALS__` object has not yet been populated.
### Describe the solution you'd like
I would like to be able to check if Tauri has finished initializing and wait if it has not. I would suggest a new variable like `window.__TAURI_READY__` or `window.__TAURI_INTERNALS__.ready` which would be set to true after all initialization scripts have finished executing. That could be wrapped in a function in the core library like:
```ts
function isReady(): boolean {
  return window.__TAURI_INTERNALS__?.ready ?? false;
}
```
Additionally, it would be nice to dispatch an event after the ready flag is set, e.g. `window.dispatchEvent(new Event("tauriReady"))`, so that a script can wait for the ready condition without having to poll.
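A sketch of how a frontend could consume the proposed flag and event; note that `__TAURI_INTERNALS__.ready` and the `tauriReady` event are part of this proposal and do not exist in Tauri today:

```ts
// Hypothetical helper built on the proposed API surface
async function whenTauriReady(): Promise<void> {
  // Fast path: initialization scripts already finished.
  if ((window as any).__TAURI_INTERNALS__?.ready) {
    return;
  }
  // Otherwise wait for the proposed ready event instead of polling.
  await new Promise<void>((resolve) => {
    window.addEventListener("tauriReady", () => resolve(), { once: true });
  });
}

// Usage: `await whenTauriReady();` before top-level calls such as `listen(...)`.
```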
### Alternatives considered
_No response_
### Additional context
See https://discord.com/channels/616186924390023171/1327342177608794113 for discussion. | type: feature request | low | Minor |
2,781,142,682 | tauri | [bug] titleBarStyle: "Transparent" Uses Opposite Hex Color for Background on macOS | ### Describe the bug
When setting "titleBarStyle": "Transparent" and specifying a backgroundColor in a Tauri app on macOS, the expected behavior is for the title bar to match the provided background color. However, instead of using the exact hex color, Tauri applies what seems to be the opposite color, or black.
### Reproduction
Set the titleBarStyle to "Transparent" and specify a backgroundColor in tauri.conf.json:
```
{
"tauri": {
"windows": [
{
"titleBarStyle": "Transparent",
"backgroundColor": "#00FF00"
}
]
}
}
```
### Expected behavior
Observe that the title bar does not match the given background color.
Example: Setting a green background (#00FF00) results in a black title bar.
A temporary fix is to manually use the opposite hex value to get the intended background color.
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.1.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.18.0
- npm: 10.9.2
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1 (outdated, latest: 2.2.0)
- @tauri-apps/cli : 2.1.0 (outdated, latest: 2.2.3)
[-] Plugins
- tauri-plugin-shell 🦀: 2.2.0
- @tauri-apps/plugin-shell : not installed!
- tauri-plugin-sql 🦀: 2.2.0
- @tauri-apps/plugin-sql : 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,781,150,261 | pytorch | `torch._foreach_mul` does not support autograd | ### 📚 The doc issue
This is just a note for the eventual foreach docs. If someone has the same error, they can arrive here through search.
```
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: inconsistent range for TensorList output
```
I don't expect foreach ops to support autograd.
(Or maybe I'm wrong and my code has an issue, and foreach is intended to support autograd?)
### Suggest a potential alternative/fix
Nothing to fix for now.
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @crcrpar @mcarilli @janeyx99 | module: autograd,triaged,actionable,module: mta | low | Critical |
2,781,178,491 | puppeteer | [Bug]: Install fails to 24.0.0: Error: ERROR: Failed to set up chrome v131.0.6778.264! Set "PUPPETEER_SKIP_DOWNLOAD" env variable to skip download. | ### Minimal, reproducible example
```TypeScript
upgrading puppeteer in node.js project from 23.11.1 to 24.0.0 and now get the following error:
npm ERR! code 1
npm ERR! path /XXX/node_modules/puppeteer
npm ERR! command failed
npm ERR! command sh -c -- node install.mjs
npm ERR! **INFO** Skipping Firefox download as instructed.
npm ERR! chrome-headless-shell (131.0.6778.264) downloaded to /XXX/.cache/puppeteer/chrome-headless-shell/mac-131.0.6778.264
npm ERR! Error: ERROR: Failed to set up chrome v131.0.6778.264! Set "PUPPETEER_SKIP_DOWNLOAD" env variable to skip download.
npm ERR! at downloadBrowser (file:///XXX/node_modules/@squadle/squadle-lib-nodejs/node_modules/puppeteer/lib/esm/puppeteer/node/install.js:26:15)
npm ERR! at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
npm ERR! at async Promise.all (index 0)
npm ERR! at async downloadBrowsers (file:///XXX/node_modules/@squadle/squadle-lib-nodejs/node_modules/puppeteer/lib/esm/puppeteer/node/install.js:84:9) {
npm ERR! [cause]: [Error: EEXIST: file already exists, symlink 'Versions/Current/Resources' -> '/XXX/backend-management/.cache/puppeteer/chrome/mac-131.0.6778.264/chrome-mac-x64/Google Chrome for Testing.app/Contents/Frameworks/Google Chrome for Testing Framework.framework/Resources'] {
npm ERR! errno: -17,
npm ERR! code: 'EEXIST',
npm ERR! syscall: 'symlink',
npm ERR! path: 'Versions/Current/Resources',
npm ERR! dest: '/XXX/.cache/puppeteer/chrome/mac-131.0.6778.264/chrome-mac-x64/Google Chrome for Testing.app/Contents/Frameworks/Google Chrome for Testing Framework.framework/Resources'
npm ERR! }
npm ERR! }
```
### Background
_No response_
### Expectation
To just install 24.0.0 without issue
### Reality
Fails to install
### Puppeteer configuration file (if used)
```TypeScript
const {join} = require('path');
/**
* @type {import("puppeteer").Configuration}
*/
module.exports = {
// Changes the cache location for Puppeteer.
cacheDirectory: join(__dirname, '.cache', 'puppeteer'),
};
```
### Puppeteer version
24.0.0
### Node version
v18.17.0
### Package manager
npm
### Package manager version
8.19.2
### Operating system
macOS | bug,unconfirmed,not-reproducible | low | Critical |
2,781,227,093 | neovim | `vim.hl.range` inconsistent behavior of `start` and `finish` parameters | ### Problem
As mentioned in the docs `start` and `finish` parameters accepts a tuple of (line, column) or a string accepted by `getpos()` but it behaves accordingly if you build the tuple using `getpos()`.
### Steps to reproduce
- nvim --clean
- insert two lines of text
- go back to normal mode and keep the cursor on the first line
- run: `lua vim.hl.range(0, 1, 'Search', { vim.fn.getpos('.')[2], vim.fn.getpos('.')[3] }, { vim.fn.getpos('.')[2], vim.fn.getpos('.')[3] }, { regtype = 'V' })`
- The second line is highlighted, but if you run: `lua vim.hl.range(0, 1, 'Search', '.', '.', { regtype = 'V' })` the first line is highlighted
### Expected behavior
Using the (line, column) tuple should behave exactly like passing the `getpos()` string expression.
### Nvim version (nvim -v)
NVIM v0.11.0-dev-1417+g487c48ec86
### Vim (not Nvim) behaves the same?
no, not applicable
### Operating system/version
Windows 11
### Terminal name/version
Windows Terminal
### $TERM environment variable
nil
### Installation
chocolatey (--pre) | documentation,lua | low | Minor |
2,781,228,529 | kubernetes | Virtual Service Description Rendering of Maps | ### What happened?
When executing the following command: **kubectl describe vs my-vs-virtualservice** ,
the istio http header set is not rendered correctly. The virtual service functions as expected and get the virtual service in yaml format also works as expected.
The issue is only with the describe command. Here's how the out looks like:
...
API Version: networking.istio.io/v1beta1
Kind: VirtualService
...
Metadata:
Creation Timestamp: 2025-01-02T23:01:50Z
Generation: 1
Spec:
Http:
Headers:
Request:
Remove:
x-my-private-header
Set:
**X - My - Public - Header: aValue**
The Remove header which is a list is rendered properly but the Set which is a map is not rendered correctly.
And if I execute the following command: **kubectl get vs my-vs-virtualservice -o yaml**, the output looks like this:
http:
- headers:
request:
remove:
- x-my-private-header
set:
x-my-public-header: "aValue"
### What did you expect to happen?
Expected output:
...
API Version: networking.istio.io/v1beta1
Kind: VirtualService
...
Metadata:
Creation Timestamp: 2025-01-02T23:01:50Z
Generation: 1
Spec:
Http:
Headers:
Request:
Remove:
x-my-private-header
Set:
**x-my-public-header: aValue**
### How can we reproduce it (as minimally and precisely as possible)?
Create a basic Istio virtual service that sets some headers, deploy it, and run describe on the virtual service.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.3
Kustomize Version: v5.4.2
Server Version: v1.29.1
```
</details>
### Cloud provider
<details>
Tested on a private cloud environment and on Azure.
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,needs-sig,needs-triage | low | Minor |
2,781,271,159 | stable-diffusion-webui | [Bug]: Dreambooth installed but tab is not viewed on UI | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
No Dreambooth Tab
### Steps to reproduce the problem
Just install the extension and fully restart the UI.
### What should have happened?
it should show the tab
### What browsers do you use to access the UI ?
Microsoft Edge
### Sysinfo
I don't know how to do it
### Console logs
```Shell
https://pastebin.com/acPkiMTZ
```
### Additional information
nothing | bug-report | low | Critical |
2,781,300,035 | deno | Deno installs unused npm dependencies despite --entrypoint flag usage | Given this simple project:
main.ts:
```ts
import "@std/assert";
import "express";
console.log("oioioi");
```
deno.json:
```json
{
"tasks": {
"dev": "deno run --watch main.ts"
},
"imports": {
"@std/assert": "jsr:@std/assert@1",
"dbmate": "npm:[email protected]",
"express": "npm:[email protected]"
}
}
```
Running:
```cmd
deno clean && deno install --entrypoint main.ts && du -h --max-depth=0 ~/.cache/deno/
```
I get the following output:
```
Removed /home/main/.cache/deno (1004 files, 4.63MB)
38M /home/main/.cache/deno/
```
If I now edit `main.ts` to comment out the `import "express"` line and rerun the above command:
```cmd
deno clean && deno install --entrypoint main.ts && du -h --max-depth=0 ~/.cache/deno/
```
I get:
```
Removed ~/.cache/deno (241 files, 2MB)
2,7M ~/.cache/deno/
```
Looking into the cached files, it appears that both `dbmate` and `express` are being installed even though I do not use `dbmate` directly; since I am running the install command with `--entrypoint`, I would expect Deno to only install `express`.
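Not part of the original report — a quick, hedged way to cross-check which dependencies the entrypoint's module graph actually pulls in (independent of what ends up in the cache) is to print the module graph:
```cmd
deno info main.ts
```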
Is my assumption correct? I would expect Deno to be smart about which dependencies are actually used and only install those. | install | low | Major |
2,781,311,024 | godot | Calling add_child() during NOTIFICATION_WM_ABOUT always gives "Parent busy setting up children" error | ### Tested versions
Reproduced in 4.3
### System information
Godot v4.3-stable - macOS 15.2.0 - Vulkan (Forward+) - integrated Apple M3 - Apple M3 (8 Threads)
### Issue description
Adding an AboutMenu kind of node in response to NOTIFICATION_WM_ABOUT seems like a pretty trivial use case. But no matter what node I add it to, I get the same error, "Parent busy setting up children, `add_child()` failed. Consider using `add_child.call_deferred(child)` instead". This is easy to work around, but it also seems like it shouldn't happen.
### Steps to reproduce
See the MRP.
```gdscript
func _notification(what: int) -> void:
if what == NOTIFICATION_WM_ABOUT:
var texture_rect := TextureRect.new()
texture_rect.texture = load("res://icon.svg")
add_child(texture_rect)
#add_child.call_deferred(texture_rect)
```
### Minimal reproduction project (MRP)
[MacOSbug.zip](https://github.com/user-attachments/files/18382968/MacOSbug.zip) | bug,platform:macos,topic:gui | low | Critical |
2,781,321,699 | go | x/tools/gopls: gopls does not reset workspace/configuration if missing from the client | ### gopls version
master head, following experiment is based on commit `6efe0f4b404b25e02999c3e34db08771f855fc28`
### go env
```shell
NA
```
### What did you do?
Inspect the semantic token information that gopls reports for a string or a number, then toggle the `noSemanticString` or `noSemanticNumber` setting.
### What did you see happen?
By default, the number's token info is returned by gopls; you can see `semantic token type = number`.
<img width="1312" alt="image" src="https://github.com/user-attachments/assets/64c3f1b6-e468-4320-91fd-07cd07093cbf" />
Then if I set `"ui.noSemanticNumber": true,` the semantic token info goes away. WAI
<img width="1283" alt="image" src="https://github.com/user-attachments/assets/c25bfa6d-186d-4557-a0b1-011974ce91e1" />
If I set the `"ui.noSemanticNumber": false,`, the semantic token info comes back. WAI
<img width="1262" alt="image" src="https://github.com/user-attachments/assets/df841c55-3ffc-4872-b75d-7ea9a48a7246" />
Setting it to `"ui.noSemanticNumber": true,` again, the token info goes away. WAI. Then I comment this setting out. What I expect is that the semantic token information will come back, because no setting means the default value `"ui.noSemanticNumber": false`.
### What did you expect to see?
<img width="1318" alt="image" src="https://github.com/user-attachments/assets/e037a0be-513b-49fa-a31f-bf66b8ce6bc8" />
As a result it does not come back.
### Editor and settings
_No response_
### Logs
I will focus on the last two `workspace/configuration` logs
Where I set it to true
```
[Trace - 5:26:49 PM] Sending response 'workspace/configuration - (14)'. Processing request took 0ms
Result: [
{
"ui.semanticTokens": true,
"ui.noSemanticNumber": true,
"ui.inlayhint.hints": {
"assignVariableTypes": false,
"compositeLiteralFields": false,
"compositeLiteralTypes": false,
"constantValues": false,
"functionTypeParameters": false,
"parameterNames": false,
"rangeVariableTypes": false
},
"ui.vulncheck": "Off",
"linkifyShowMessage": true
}
]
```
Where I comment it out
```
[Trace - 5:27:44 PM] Sending response 'workspace/configuration - (16)'. Processing request took 0ms
Result: [
{
"ui.semanticTokens": true,
"ui.inlayhint.hints": {
"assignVariableTypes": false,
"compositeLiteralFields": false,
"compositeLiteralTypes": false,
"constantValues": false,
"functionTypeParameters": false,
"parameterNames": false,
"rangeVariableTypes": false
},
"ui.vulncheck": "Off",
"linkifyShowMessage": true
}
]
```
I think what happens is: when gopls sees `"ui.noSemanticNumber": X` in the JSON, it applies this `X` in memory. WAI.
But when `"ui.noSemanticNumber"` is missing from the JSON, gopls does nothing. In this case, gopls should reset `ui.noSemanticNumber` to `false`, because `false` is the default value.
@findleyr | gopls,Tools,BugReport | low | Minor |
2,781,340,713 | bitcoin | cmake -P $buildDir/Coverage.cmake: Test execution for coverage fails. Wrapper resource "cov_tool_wrapper.sh.in" missing from build cache folder. | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behaviour
When following the "Compiling for Test Coverage" instructions:
[doc/developer-notes-CompilingForTestCoverage](https://github.com/bitcoin/bitcoin/blob/master/doc/developer-notes.md#compiling-for-test-coverage)
I encounter error:
```
CMake Error: File /home/alicebob/wkspc1/presets/bitcoin/cmake/cov_tool_wrapper.sh.in does not exist.
CMake Error at /home/alicebob/wkspc1/build/bitcoin/CoverageInclude.cmake:14 (configure_file):
configure_file Problem configuring file
Call Stack (most recent call first):
/home/alicebob/wkspc1/build/bitcoin/Coverage.cmake:5 (include)
```
### Expected behaviour
Driver for functional tests should run, generating instrumentation report HTML.
### Steps to reproduce
1) Compile for coverage. The following are sample CMake cache variables for the Clang toolchain:
```
"CMAKE_C_COMPILER": "clang",
"CMAKE_CXX_COMPILER": "clang++",
"CMAKE_CXX_FLAGS": "-fprofile-arcs -ftest-coverage",
"CMAKE_BUILD_TYPE": "Coverage",
```
2) Follow instructions in developer notes to compile for coverage:
[doc/developer-notes-CompilingForTestCoverage](https://github.com/bitcoin/bitcoin/blob/master/doc/developer-notes.md#compiling-for-test-coverage)
### Relevant log output
CMake Error: File /home/alicebob/wkspc1/presets/bitcoin/cmake/cov_tool_wrapper.sh.in does not exist.
CMake Error at /home/alicebob/wkspc1/build/bitcoin/CoverageInclude.cmake:14 (configure_file):
configure_file Problem configuring file
Call Stack (most recent call first):
/home/alicebob/wkspc1/build/bitcoin/Coverage.cmake:5 (include)
### How did you obtain Bitcoin Core
Compiled from source
### What version of Bitcoin Core are you using?
v28.99.0-37e49c2c7ca5
### Operating system and version
Ubuntu 24.04.1 LTS
### Machine specifications
Unix/Intel | Build system | low | Critical |
2,781,360,280 | transformers | Help Understanding Beam Search Scores in Hugging Face (LLaMA + LoRA) | ### System Info
Hello Hugging Face community,
I’m working with a LLaMA-based model that has a LoRA (Low-Rank Adapter) applied, and I’m using beam search in Transformers. I’m trying to debug how the final beam scores are computed, because the step-by-step log probabilities I print out look far more negative than the final “sequence score” reported by Hugging Face.
Below is a sample of my debug output for 4 beams, each showing:
Generated Sequence (token IDs, excluding the prompt/input).
Generated Text (decoded).
Step-by-Step Analysis: Each newly generated token’s log probability.
HF Cumulative Sequence Score (final beam score from generation_output.sequences_scores).
Debug Info (lengths, how many log-prob steps were used vs. available).
=== HuggingFace Beam Analysis (Generated Tokens Only) ===
Input sequence length: 148
--- Beam 1 ---
Generated Sequence (IDs): [32, 3202, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001]
Generated Text: AUP
Step-by-Step Analysis:
Step 1: Token='A' (ID=32), LogProb=-0.741240
Step 2: Token='UP' (ID=3202), LogProb=-28.383789
Step 3: Token='' (ID=128001), LogProb=-32.667973
Final Scores:
HF Cumulative Sequence Score: -0.247081
--- Beam 2 ---
Generated Sequence (IDs): [51154, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001, 128001]
Generated Text: Others
Step-by-Step Analysis:
Step 1: Token='Others' (ID=51154), LogProb=-0.647490
Step 2: Token='' (ID=128001), LogProb=-29.399292
Final Scores:
HF Cumulative Sequence Score: -0.323745
--- Beam 3 ---
Generated Sequence (IDs): [32, 3202, 320, 6546, 1428, 11, 10984, 49541, 13, 15388, 3298, 8, 128001]
Generated Text: AUP (CSAM, Encourg. Illegal Act)
Step-by-Step Analysis:
Step 1: Token='A' (ID=32), LogProb=-0.741240
Step 2: Token='UP' (ID=3202), LogProb=-20.869020
Step 3: Token=' (' (ID=320), LogProb=-9.416358
Step 4: Token='CS' (ID=6546), LogProb=-19.269587
Step 5: Token='AM' (ID=1428), LogProb=-23.486216
Step 6: Token=',' (ID=11), LogProb=-10.883574
Step 7: Token=' Enc' (ID=10984), LogProb=-0.144973
Step 8: Token='ourg' (ID=49541), LogProb=-0.001301
Step 9: Token='.' (ID=13), LogProb=-0.001659
Step 10: Token=' Illegal' (ID=15388), LogProb=-20.425816
Step 11: Token=' Act' (ID=3298), LogProb=-14.907486
Step 12: Token=')' (ID=8), LogProb=-0.150186
Step 13: Token='' (ID=128001), LogProb=-17.213655
Final Scores:
HF Cumulative Sequence Score: -1.447294
--- Beam 4 ---
Generated Sequence (IDs): [32, 3202, 320, 6546, 1428, 11, 10984, 49541, 13, 15388, 3298, 6266, 128001]
Generated Text: AUP (CSAM, Encourg. Illegal Act.)
Step-by-Step Analysis:
Step 1: Token='A' (ID=32), LogProb=-0.741240
Step 2: Token='UP' (ID=3202), LogProb=-28.162111
Step 3: Token=' (' (ID=320), LogProb=-10.757921
Step 4: Token='CS' (ID=6546), LogProb=-6.859391
Step 5: Token='AM' (ID=1428), LogProb=-20.384962
Step 6: Token=',' (ID=11), LogProb=-15.148496
Step 7: Token=' Enc' (ID=10984), LogProb=-0.298849
Step 8: Token='ourg' (ID=49541), LogProb=-18.535187
Step 9: Token='.' (ID=13), LogProb=-0.006747
Step 10: Token=' Illegal' (ID=15388), LogProb=-14.434349
Step 11: Token=' Act' (ID=3298), LogProb=-12.582914
Step 12: Token='.)' (ID=6266), LogProb=-12.790556
Step 13: Token='' (ID=128001), LogProb=-20.104782
Final Scores:
HF Cumulative Sequence Score: -1.464120
The Question
--------------
How does Hugging Face’s beam search compute the final scores (e.g., −0.247081, −0.323745, −1.447294, −1.464120) given the very negative individual log probabilities?
For example, for the first beam, I expected a score of (-0.741240 - 28.383789 - 32.667973) / 3 = -20.597667, given that the default `length_penalty` of 1.0 simply divides the summed log-probs by the length. However, the final `sequences_scores` from HF differ significantly from any straightforward summation of the listed token log-probs, even when accounting for a `length_penalty`.
Can someone help clarify how these scores are calculated?
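For reference, a hedged cross-check (a sketch only, assuming a `transformers` version that ships `compute_transition_scores`): the helper follows the beam reordering via `beam_indices` instead of indexing `scores` by the final beam position, which may be useful for comparing against the manual analysis above.
```python
# Sketch: recover per-step transition scores along each returned beam.
transition_scores = model.compute_transition_scores(
    generation_output.sequences,
    generation_output.scores,
    beam_indices=generation_output.beam_indices,
    normalize_logits=False,
)
# Summing over steps and dividing by length**length_penalty should roughly
# reproduce generation_output.sequences_scores.
print(transition_scores)
```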
### Who can help?
@gante @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
GENERATION CODE :
------------------------------------------------------------------------------------------------------------------------
```python
model_name = "./Llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = LlamaForCausalLM.from_pretrained(
model_name,
load_in_8bit=False,
torch_dtype=torch.float16,
device_map='auto',
)
adaptor_path = './model_spec/checkpoints/checkpoint-200'
model = PeftModel.from_pretrained(
model,
adaptor_path,
torch_dtype=torch.float16,
)
model.eval()
message = "Lady Sold Children's Clothes That She Don't Send!"
input_raw = "Message: {message}"
input = input_raw.format(message=message)
instruction = "Does this customer-reported message indicate an AUP violation from the following categories? \n[A, B, C]\nIf yes, respond 'AUP'; if not, respond 'Others'."
prompt_template = f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{instruction}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
prompt = prompt_template.format(instruction=instruction, input=input)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to('cuda')
generation_config = GenerationConfig(
temperature=0,
top_p=1,
top_k=-1,
num_beams=4, # Number of beams for beam search
num_return_sequences=4, # Return all beams
)
generate_params = {
"input_ids": input_ids,
"generation_config": generation_config,
"return_dict_in_generate": True,
"output_scores": True,
"max_new_tokens": 128,
}
with torch.no_grad():
generation_output = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
max_new_tokens=128
)
s = generation_output.sequences[0]
output = tokenizer.decode(s,skip_special_tokens=True)
result = output.split('assistant')[1].strip()
```
DECODE CODE :
------------------------------------------------------------------------------------------------------------------------
```python
import torch
import torch.nn.functional as F
def analyze_beams(
generation_output,
tokenizer,
input_ids,
end_of_text_id=128001,
length_penalty=1.0,
ignore_after_first_eos=False
):
"""
Analyzes final beams from a Hugging Face generation output.
1) Excludes the original input tokens, only focusing on "newly generated" tokens.
2) Prints step-by-step tokens (ID & text) + log-probs.
3) Applies optional length penalty for the final "calculated score."
4) Optionally stops counting tokens after first <eos> if 'ignore_after_first_eos=True'.
:param generation_output: Object with attributes:
- sequences: final beam sequences (tensor shape [num_beams, total_seq_len])
- sequences_scores: final HF beam scores
- scores: list of per-step logits ([num_steps], each shape [num_beams, vocab_size])
:param tokenizer: A Hugging Face tokenizer to decode tokens into text.
:param input_ids: The original input_ids (so we can know how many tokens to skip).
:param end_of_text_id: The <eos> or <end_of_text> token ID (default=128001).
:param length_penalty: Exponent for length normalization.
:param ignore_after_first_eos: If True, we ignore any tokens after the first <eos>.
"""
# 1) Determine how many input tokens to skip
input_length = len(input_ids[0]) # e.g. shape [batch_size, seq_len]
print("\n=== HuggingFace Beam Analysis (Generated Tokens Only) ===")
print(f"Input sequence length: {input_length}")
# 2) Convert generation_output.scores into shape [num_beams, steps, vocab_size]
logits = torch.stack(generation_output.scores, dim=1) # shape [num_beams, steps, vocab_size]
log_probs = F.log_softmax(logits, dim=-1) # shape [num_beams, steps, vocab_size]
beam_sequences = generation_output.sequences
beam_scores = generation_output.sequences_scores
num_beams = beam_sequences.shape[0]
steps_available = log_probs.shape[1]
vocab_size = log_probs.shape[2]
# 3) Analyze each beam
for beam_idx in range(num_beams):
print(f"\n--- Beam {beam_idx + 1} ---")
# Slice out only the newly generated portion (excluding input)
full_sequence = beam_sequences[beam_idx]
generated_sequence = full_sequence[input_length:] # This is your "generated" part
# Decode text
generated_text = tokenizer.decode(generated_sequence, skip_special_tokens=True)
print(f"Generated Sequence (IDs): {generated_sequence.tolist()}")
print(f"Generated Text: {generated_text}")
print("\nStep-by-Step Analysis:")
beam_score_sum = 0.0
used_step_count = 0
# We'll iterate over each newly generated token
for step_idx, token_id in enumerate(generated_sequence):
if step_idx >= steps_available:
# We've run out of log_probs steps
break
# Retrieve distribution for this beam at this step
# shape [vocab_size]
token_log_probs = log_probs[beam_idx, step_idx]
# The log-prob for the chosen token_id
token_logp = token_log_probs[token_id].item()
# Accumulate beam score
beam_score_sum += token_logp
used_step_count += 1
# Print step info
token_text = tokenizer.decode([token_id], skip_special_tokens=True)
print(
f"Step {step_idx + 1}: "
f"Token='{token_text}' (ID={token_id}), LogProb={token_logp:.6f}"
)
# If ignoring repeated <eos>, we break after the first <eos> token
if ignore_after_first_eos and token_id == end_of_text_id:
break
# 4) Apply length penalty
# If all tokens are used, used_step_count is the length; otherwise we truncated early
final_len = used_step_count if used_step_count > 0 else 1
calculated_score = beam_score_sum / (final_len ** length_penalty)
# 5) Print results
print("\nFinal Scores:")
# Show Hugging Face's final beam score
hf_score = beam_scores[beam_idx].item()
print(f" HF Cumulative Sequence Score: {hf_score:.6f}")
print(f" Calculated Score: {calculated_score:.6f}")
print("\nDebug Info:")
print(f" Full sequence length: {len(full_sequence)} (including input)")
print(f" Generated sequence length: {len(generated_sequence)}")
print(f" Steps of log_probs used: {used_step_count}")
print(f" Steps of log_probs avail: {steps_available}")
print(f" Vocab size: {vocab_size}")
```
### Expected behavior
Expected a score of (-0.741240 - 28.383789 - 32.667973) / 3 = -20.597667, given that the default `length_penalty` of 1.0 divides the summed log-probs by the length. | bug,Generation | low | Critical |
2,781,369,347 | next.js | `next lint` uses yarn even though project uses npm because no package-lock.json because using turborepo | ### Link to the code that reproduces this issue
https://github.com/astriaorg/flame-apps
### To Reproduce
1. create turborepo next app
2. cd to apps/web
3. `npm run lint`
### Current vs. Expected behavior
Next will try to use yarn to install eslint because no package-lock.json is found.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 65536
Available CPU cores: 10
Binaries:
Node: 18.19.0
npm: 10.2.3
Yarn: 4.1.1
pnpm: 8.10.2
Relevant Packages:
next: 14.2.23 // An outdated version detected (latest is 15.1.4), upgrade is highly recommended!
eslint-config-next: 14.2.23
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
⚠ An outdated version detected (latest is 15.1.4), upgrade is highly recommended!
Please try the latest canary version (`npm install next@canary`) to confirm the issue still exists before creating a new issue.
Read more - https://nextjs.org/docs/messages/opening-an-issue
```
### Which area(s) are affected? (Select all that apply)
Linting
### Which stage(s) are affected? (Select all that apply)
Other (Deployed)
### Additional context
I was able to get around the issue by touching a package-lock.json and rerunning the command.
<img width="1397" alt="Screenshot 2025-01-10 at 3 59 49 PM" src="https://github.com/user-attachments/assets/25f1aa71-f79f-4d45-b7ce-4908e02efd11" />
The fix here doesn't work because this is a monorepo with turborepo and there is no package-lock.json in the directory.
https://github.com/vercel/next.js/issues/31755
| Linting | low | Minor |
2,781,374,612 | PowerToys | Feature Request: Power File Converter Tool | ### Description of the new feature / enhancement
I often find myself spending significant time and effort searching for simple, reliable tools to convert files between formats. It would be incredibly useful if PowerToys included a "Power File Converter" tool that integrates directly into the Windows right-click context menu.
**Proposed Features:**
- A streamlined interface accessible via the right-click menu for quick file conversions.
- Support for popular format conversions like:
- Video to GIF: .MP4 → .GIF
- Image conversions: .HEIC → .JPG, .PNG, etc.
- Document conversions: .PDF → .Markdown, .HTML → .Word
- Advanced options, such as resolution/quality settings for images or format-specific options.
- Batch file conversion for increased efficiency.
**Benefits:**
- Saves users time by eliminating the need to search for and install various third-party tools.
- Integrates seamlessly into existing workflows.
- Simplifies file management for users with diverse needs.
The world would be a simpler place with this PowerToy. I hope this idea resonates and could find a spot in a future update!
Thank you for considering this feature.
### Scenario when this would be used?
1. You’re preparing a presentation and need to quickly convert various types of files to formats compatible with your workflow:
- You have an .MP4 file of a short animation but need a looping .GIF for embedding into your presentation slides.
- Right-click the .MP4 file → Select Power File Converter → Choose .GIF → Conversion completes instantly.
2. A colleague sends you photos in .HEIC format from their iPhone, but you need .JPG files to upload to a shared drive that doesn't support .HEIC.
- Select multiple .HEIC files → Right-click → Choose Convert to JPG under Power File Converter → All files are converted in one step.
3. You receive a .PDF with meeting notes but need them in Markdown format for importing into your team’s documentation system.
- Right-click the .PDF → Choose Convert to Markdown → Paste the generated .MD file directly into your system.
4. While working on a blog post, you download an .HTML page for reference but need the content in Word format to make edits.
- Right-click the .HTML file → Select Convert to Word → Open the converted file in Microsoft Word.
In all these cases, the Power File Converter saves time and eliminates the need for external tools or complicated workflows, letting you focus on your work instead of file format frustrations.
### Supporting information
**User Need:**
Many users frequently deal with file conversion challenges, often resorting to external websites or tools that are cumbersome, slow, or come with privacy risks. A built-in, reliable solution within PowerToys would address this common pain point efficiently.
**Existing Demand:**
File conversion tools are one of the most commonly searched utilities online. Integrating this functionality into the Windows right-click menu provides a seamless user experience while meeting a clear, widespread demand.
**Privacy and Security:**
By offering local file conversion capabilities, users won’t need to rely on third-party services that require uploading sensitive files, ensuring better data privacy and security.
**Precedent:**
Similar features exist in standalone tools or extensions, but consolidating them into PowerToys would offer greater convenience and build on PowerToys' reputation for powerful, productivity-enhancing utilities.
**Feasibility:**
Libraries for many common conversions (e.g., FFmpeg for video/audio, ImageMagick for images, and Pandoc for documents) are open-source and can be integrated efficiently into PowerToys.
By addressing a universal workflow need, this feature would significantly enhance PowerToys’ value for professionals, students, and everyday users. | Needs-Triage | low | Major |
2,781,379,588 | flutter | DL/Geometry engine migration tracking issue | The engine is slowly being migrated to only use DisplayList/Impeller geometry classes throughout its code and the associated Skia objects will only be used when talking directly to Skia, say, in the Skia DisplayList adapter interfaces.
Converted/done items:
- [x] Unit tests in the impeller/ directory (see https://github.com/flutter/flutter/pull/161855) (reland https://github.com/flutter/flutter/pull/162146)
- [x] Unit tests in the display_list/ directory (see https://github.com/flutter/flutter/pull/161453)
- [x] AccumulationRect and MatrixClipTracker (see https://github.com/flutter/flutter/pull/161553)
- [x] SkRSXform which has no Impeller equivalent yet (see https://github.com/flutter/flutter/pull/161652)
Things which will be converted to DisplayList/Impeller mechanisms proactively:
- [ ] ImageFilter unit tests which use Skia filters to compare things like bounds behaviors.
- [ ] there are still some bugs filed against this mechanism that should be fixed before we remove these tests
- [ ] they should be replaced with "here is the expected answer" tests in lieu of using Skia to generate the correct answers
- [ ] dl_geometry unit tests which only use Skia classes to test the current Skia/Impeller conversion macros which will eventually go away when all non-Skia code is solely using the dl_geometry stubs
- [ ] dl_path unit tests which will switch to Impeller's PathBuilder when we beef it up to completely replace SkPath
- [ ] dl_region unit tests which only use Skia geometry to test results
- [ ] these should be replaced with "expected answer" tests rather than comparing to Skia results
- [ ] impeller/
- [ ] flow/
- [ ] shell/
- [ ] embedders
Things being worked on:
Things that might require significant effort to convert:
- [ ] dl_path itself which defers to SkPath for most of its construction work. It supports bidirectional conversion to/from Impeller Paths, but requires the ui.Path Flutter API to use SkPath as the originating path because the Impeller Path does not support the entire Flutter path construction API
- [ ] SkPathOps which are a major geometry sub-system living in the Skia source base and support only SkPath objects. It may be possible to use them with Impeller Paths by just converting the path to an SkPath, but eventually the engine should have a self-sufficient set of path ops.
- [ ] dl_rendering_unittests
- [ ] we should eliminate the "does rendering via DisplayList match rendering via SkPicture?" tests as we no longer support SkPicture at all
- [ ] we should also get these up and running on Impeller as an output backend (sort of works, but needs bug fixing)
- [ ] Any use of SkSurface, SkColorSpace and their associated classes to manage surfaces
Things that will never be converted as they will eventually be deleted when we stop using Skia:
- [ ] skia/ directory - will never be converted as its purpose is to interpret DisplayLists for Skia
- [ ] raster_cache classes which won't be used under Impeller
- [ ] complexity benchmarks which are only used for raster caching decisions which won't be used under Impeller | engine,P2,team-engine,triaged-engine | low | Critical |
2,781,380,893 | flutter | Bad instructions for installing android command line tools | 
The instructions say:
```
cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"
```
If you do not have command line tools installed, then you don't have the sdkmanager tool available to use; sdkmanager is part of command line tools: https://developer.android.com/tools | tool,t: flutter doctor,P2,team-tool,triaged-tool | low | Minor |
2,781,402,070 | rust | E0195 diagnostic should take into account GATs, etc | ### Code
```Rust
pub trait Trait {
type Gat<'a>;
fn example(self, _: Self::Gat<'_>) -> Self::Gat<'_>;
}
impl Trait for () {
type Gat<'a> = ();
fn example(self, _: Self::Gat<'_>) {}
}
```
### Current output
```Shell
error[E0195]: lifetime parameters or bounds on method `example` do not match the trait declaration
--> <source>:8:15
|
3 | fn example(self, _: Self::Gat<'_>) -> Self::Gat<'_>;
| - lifetimes in impl do not match this method in trait
...
8 | fn example(self, _: Self::Gat<'_>) {}
| ^ lifetimes do not match method in trait
```
### Desired output
```Shell
error[E0195]: lifetime parameters or bounds on method `example` do not match the trait declaration
--> <source>:8:15
|
3 | fn example(self, _: Self::Gat<'_>) -> Self::Gat<'_>;
| -- --
| lifetimes in impl do not match this method in trait
...
8 | fn example(self, _: Gat<'_>) {}
| --
| lifetimes do not match method in trait
|
note: The lifetime in the trait does not constrain the lifetime parameter,
but the lifetime in the implementation signature is constrained
hint: Make the lifetime in the implementation unconstrained by mentioning
the lifetime in an explicit bound:
fn example<'a:'a>(self, _: Self::Gat<'a>) {}
+++++++ ~~
```
### Rationale and extra context
Context: #109476 and #87803. Those are filed as bugs, so I'm filing this diagnostic issue separately. It's plausible this will become moot if those issues are resolved, but it's not certain that will happen (or that it will happen soon).
When I encountered this, it took me awhile to figure out that E0195 was really about late-vs-early lifetimes parameters. Only when I figured that out did the error make sense. Late-vs-early lifetime parameters are niche knowledge, so I'm not entirely sure how to phrase the error message; it needs to probably spell out a workaround, as it's not obvious.
The current phrasing of E0195 makes sense when lifetime parameters are early bound due to appearing in explicit bounds. However, lifetime parameters can also become early bound implicitly/invisibly, such as in the example. There are similar cases (see #87803) when the return type is an RPIT/RPITIT (`-> impl Trait`) -- in which case the lifetime is early bound due to appearing in the implicit `use<..>` bounds.
Also in those cases, exactly matching the signature from the trait is not always an option or desirable. For example, when using refinement and/or precise capturing -- i.e. intentionally removing a lifetime from the return type in order to provide more functionality than the trait requires.
So for example while `-> Self::Gat<'_>` would be a usable suggestion for the code at the top of this issue, it is not a usable suggestion here:
```rust
pub trait Trait {
type Gat<'a>;
fn example(&self, _: Self::Gat<'_>) -> impl Sized;
}
impl Trait for () {
type Gat<'a> = &'a str;
#[allow(refining_impl_trait)]
fn example(&self, _: Self::Gat<'_>) -> i32 { 0 }
}
```
### Other cases
Being explicit about lifetimes doesn't improve things; the span highlighted has an "Expected `X` found `X`" flavor.
```Rust
3 | fn example<'a, 'b>(&'a self, _: Self::Gat<'b>) -> Self::Gat<'b>;
| -------- lifetimes in impl do not match this method in trait
...
8 | fn example<'a, 'b>(&'a self, _: Self::Gat<'b>) {}
| ^^^^^^^^ lifetimes do not match method in trait
```
### Rust Version
```Shell
Rust Playground
Build using the Stable version: 1.84.0
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |