id | repo | title | body | labels | priority | severity
---|---|---|---|---|---|---|
2,715,397,022 | langchain | AzureMLEndpointClient 400 Bad Request with Azure ML Inference Endpoint | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import json
from typing import Optional, List
from langchain_community.llms.azureml_endpoint import AzureMLEndpointClient
class AzureMLLanguageModel:
"""Using AzureML Endpoint to serve as a node in a LangGraph workflow. via AzureMLEndpointClient"""
def __init__(self):
endpoint_url = "https://models.inference.ai.azure.com"
api_key = os.getenv("AZURE_ML_API_KEY")
deployment_name = "Meta-Llama-3.1-70B-Instruct"
if not all([endpoint_url, api_key, deployment_name]):
raise ValueError("Azure ML configuration is incomplete.")
self.client = AzureMLEndpointClient(
endpoint_url=endpoint_url,
endpoint_api_key=api_key,
deployment_name=deployment_name,
timeout=500
)
def generate(self, prompt: str, temperature: float = 0.7, max_tokens: Optional[int] = None, stop_sequences: Optional[List[str]] = None) -> str:
"""Generate text from prompt."""
# Adjust the request payload structure
request_payload = json.dumps({
"inputs": {
"input_string": [prompt]
},
"parameters": {
"temperature": temperature,
"max_tokens": max_tokens or 1000,
"stop": stop_sequences
}
}).encode('utf-8')
print("Request Payload:", request_payload)
try:
# Call the Azure ML endpoint
response_bytes = self.client.call(body=request_payload)
response_json = json.loads(response_bytes)
print(response_json)
# Handle the response based on your model's output format
# if "choices" in response_json:
# return response_json["choices"][0]["text"]
# elif "output" in response_json:
# return response_json["output"]
# else:
# return str(response_json)
except Exception as e:
print(f"Error generating text: {e}")
return ""
# Example usage
model = AzureMLLanguageModel()
text = model.generate("Write a poem about AI")
print(text)
```
### Error Message and Stack Trace (if applicable)
```
response_bytes = self.client.call(body=request_payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\Learning\researcher agent\.venv\Lib\site-packages\langchain_community\llms\azureml_endpoint.py", line 57, in call
response = urllib.request.urlopen(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kolad\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 216, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kolad\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 525, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\kolad\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 634, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "C:\Users\kolad\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 563, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\kolad\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "C:\Users\kolad\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
```
### Description
When attempting to use the `AzureMLEndpointClient` with Azure ML's inference endpoint (models.inference.ai.azure.com), I'm consistently receiving a 400 Bad Request error. The client seems to be missing some required configuration or payload formatting specific to Azure ML's inference endpoints.
Current Behavior
- Sending requests to Azure ML inference endpoint results in 400 Bad Request
I have tried and successfully gotten a response back from that endpoint using the Azure Inference library; I am just trying to convert that into a LangChain-suitable node for a project I am working on.
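For reference, a chat-completions-style body is what serverless inference endpoints such as `models.inference.ai.azure.com` typically expect, rather than the AzureML `inputs`/`input_string` schema used above. This is an illustrative assumption, not something confirmed in the report, and `build_chat_payload` is a hypothetical helper, not a LangChain API:

```python
import json

# Hypothetical helper (not part of LangChain): build an OpenAI/chat-completions-style
# request body, which serverless inference endpoints typically accept, instead of the
# AzureML "inputs"/"input_string" schema used in the report above.
def build_chat_payload(prompt, model="Meta-Llama-3.1-70B-Instruct",
                       temperature=0.7, max_tokens=1000):
    # Assemble the JSON body as UTF-8 bytes, mirroring what the client sends.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }).encode("utf-8")

payload = build_chat_payload("Write a poem about AI")
```

Comparing the endpoint's 400 response body (readable via the `HTTPError`'s `.read()`) against this shape would help confirm whether the payload schema is the culprit.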
### System Info
> OS Version: 10.0.27754
> Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.9
> langchain_community: 0.3.8
> langsmith: 0.1.147
> langchain_text_splitters: 0.3.2
> langgraph_sdk: 0.1.40
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.9
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.0
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> typing-extensions: 4.12.2 | 🤖:bug | low | Critical |
2,715,403,198 | pytorch | [ONNX] DispatchError: No ONNX function found for <OpOverload(op='prims.var', overload='default')> exporting ToTaToNet | ### 🐛 Describe the bug
The goal is to export ToTaToNet (speaker diarization and overlap segmentation) to onnx.
The model by default uses the asteroid-library heavily which internally does some dynamic checks which are incompatible with exporting. Therefore, the relevant files have been modified to remove such checks for static code. This leads to the minimal example being quite heavy, which is why I provide a colab link for you to access.
https://colab.research.google.com/drive/1tvB9ZI_oy8vgQXtuO3HIy3JtvJp5flZx?usp=sharing
Exporting this model requires the dynamo export (as opset 18 or higher is required due to use of col2im).
During the export I get the error:
```
[torch.onnx] Obtain model graph for `ToTaToNet([...]` with `torch.export.export`...
[torch.onnx] Obtain model graph for `ToTaToNet([...]` with `torch.export.export`... ✅
[torch.onnx] Translate the graph into ONNX...
[torch.onnx] Translate the graph into ONNX... ✅
---------------------------------------------------------------------------
DispatchError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/exporter/_core.py](https://localhost:8080/#) in _add_nodes(exported_program, model, lower, registry)
552 if lower == "at_conversion":
--> 553 _handle_call_function_node_with_lowering(
554 model,
7 frames
DispatchError: No ONNX function found for <OpOverload(op='prims.var', overload='default')>. Failure message: No decompositions registered for the real-valued input
```
I feel lost in how to approach this, and web-searches did not really turn up any usable results either.
Hence, I would greatly appreciate any help with this problem as I am super excited to be using this model :)
I am also not sure if this qualifies as a bug, so please excuse me if that is not the proper categorization.
Thank you!
### Versions
Version Information taken from the colab-reproduction-example as linked above.
Stepping: 0
BogoMIPS: 4400.34
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.3.3
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-nccl-cu12==2.23.4
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvtx==0.2.10
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241203
[pip3] optree==0.13.1
[pip3] pynvjitlink-cu12==0.4.0
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-metric-learning==2.7.0
[pip3] pytorch-ranger==0.1.1
[pip3] torch==2.5.1+cu121
[pip3] torch-audiomentations==0.11.1
[pip3] torch-optimizer==0.1.0
[pip3] torch_pitch_shift==1.2.5
[pip3] torch-stoi==0.2.3
[pip3] torchaudio==2.5.1+cu121
[pip3] torchmetrics==0.11.4
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.20.1+cu121
[conda] Could not collect | module: onnx,triaged,onnx-triaged | low | Critical |
2,715,407,452 | PowerToys | Mouse Without Borders does not go to other PC | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Hi,
It did work, and it was great, but it stopped and does not want to run properly again.
PowerToys is running on two PCs.
I generate a new key on PC 1, go to PC 2, enter the key and PC 1's name, and connect.
And it is not working ;(
Please help.
### ✔️ Expected Behavior
Come back to smooth operating.
### ❌ Actual Behavior
_No response_
### Other Software
PC 1 and 2 Windows 11 pro | Issue-Bug,Needs-Triage,Product-Mouse Without Borders | low | Minor |
2,715,420,424 | flutter | [google_maps_flutter] Flaky crashes in Android integration tests: `Trying to call onSurfaceCreated with no current context` | I'm seeing a fair amount of flake in the roller ([example failure](https://ci.chromium.org/ui/p/flutter/builders/try/Linux_android%20android_device_tests_shard_2%20master/12649/overview)) when running `google_maps_flutter_android` integration tests, with the following error:
```
Fatal AndroidRuntime Exception detected.
FATAL EXCEPTION: GL-Map
Process: io.flutter.plugins.googlemapsexample, PID: 22437
java.lang.IllegalStateException: Trying to call onSurfaceCreated with no current context
at m.foj.l(:com.google.android.gms.policy_maps_core_dynamite@[email protected]:10)
at m.ewb.run(:com.google.android.gms.policy_maps_core_dynamite@[email protected]:754)
```
Based on https://issuetracker.google.com/issues/339141097 it appears that this is an SDK issue rather than a plugin issue, so this is probably not currently actionable. However, I wanted to file this for tracking so that gardeners know that this is a known and tracked issue, and so that we have a place to check back for updates so that, e.g., we can update our library version if necessary once a fix is in the SDK. | platform-android,p: maps,package,P2,c: flake,team-android,triaged-android | low | Critical |
2,715,427,346 | go | cmd/go: go list misattributed error for missing dependency | ### Go version
go version go1.23.3 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/nabice/.cache/go-build'
GOENV='/home/nabice/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/nabice/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/nabice/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/ssd/source/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/ssd/source/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.3'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/nabice/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/nabice/temp/go_list_bug/go.mod'
GOWORK='/home/nabice/temp/go_list_bug/go.work'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build2242914037=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Repository: [github.com/nabice/golistissue](https://github.com/nabice/golistissue)
The project contains seven files, and I've reduced the code as much as possible to reproduce the issue.
git clone https://github.com/nabice/golistissue
go list -e -json=Name,ImportPath,Error,DepOnly,Module -compiled=true -test=false -export=true -deps=true -find=false -pgo=off -- ./...
### What did you see happen?
```
{
"ImportPath": "github.com/pkg/errors",
"DepOnly": true,
"Error": {
"ImportStack": [
"github.com/nabice/golistissue/analytics",
"github.com/pkg/errors"
],
"Pos": "util/fileutil.go:4:2",
"Err": "github.com/nabice/[email protected]: reading github.com/nabice/errorpkg/go.mod at revision v0.9.1: git ls-remote -q origin in /home/nabice/go/pkg/mod/cache/vcs/878721895ab660b96a101659ca17aa7068c22b888997bc7726d86e2caba4f39f: exit status 128:\n\tfatal: could not read Username for 'https://github.com': terminal prompts disabled\nConfirm the import path was entered correctly.\nIf this is a private repository, see https://golang.org/doc/faq#git_https for additional information."
}
}
```
### What did you expect to see?
The github.com/pkg/errors package should not contain error information, and the package can be found. I tested it with Go 1.17, and the output was correct.
| NeedsInvestigation,GoCommand | low | Critical |
2,715,432,097 | godot | Windows are always rendered over other controls | ### Tested versions
Tested in v4.3.stable.mono.arch_linux
### System information
Godot v4.3.stable.mono unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Fri, 22 Nov 2024 16:04:27 +0000 - Wayland - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Ti (nvidia; 565.57.01) - AMD Ryzen 9 7950X3D 16-Core Processor (32 Threads)
### Issue description
I'm trying to render a Control node above a `Window` (background: I'm trying to implement custom tooltips), but whatever I try the Window is always rendered on top, hiding the control underneath it.
My test setup:

This is what I get:

This is what I expect:

Things I've tried unsuccessfully:
- Changing the order of `Label` and `Window` in the node tree
- Increasing the Z Index of `Label` to 4096
- Enabling CanvasItem -> Visibility -> Top Level
- Wrapping the `Label` in a `CanvasLayer` and increasing its layer to 128
The documentation says:
> Note: Embedded [Window](https://docs.godotengine.org/en/stable/classes/class_window.html#class-window)s are placed on layer 1024. [CanvasItem](https://docs.godotengine.org/en/stable/classes/class_canvasitem.html#class-canvasitem)s on layers 1025 and higher appear in front of embedded windows.
but I'm unable to increase the layer property of `CanvasLayer` above 128. Is this on purpose?
### Steps to reproduce
The attached MRP contains the setup mentioned above, with the Label wrapped in a CanvasLayer. Otherwise, reproducing this is very simple: Create a new scene, add a `Window`, add a `Label`. Add some text to the label. Observe that `Label` is always rendered underneath `Window`.
### Minimal reproduction project (MRP)
[window_zorder.zip](https://github.com/user-attachments/files/17995757/window_zorder.zip)
| discussion,topic:editor,topic:gui | low | Minor |
2,715,439,724 | langchain | PDFPlumber extract_images has problems on 1bit images | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import PDFPlumberLoader
filename = "<pdf-file-scanned-to-1bit-bw.pdf>"
loader = PDFPlumberLoader(filename, extract_images=True)
docs = loader.load()
```
### Error Message and Stack Trace (if applicable)
```
langchain_community/document_loaders/parsers/pdf.py:456, in PDFPlumberParser._extract_images_from_page(self, page)
    453 for img in page.images:
    454     if img["stream"]["Filter"].name in _PDF_FILTER_WITHOUT_LOSS:
    455         images.append(
--> 456             np.frombuffer(img["stream"].get_data(), dtype=np.uint8).reshape(
    457                 img["stream"]["Height"], img["stream"]["Width"], -1
    458             )
    459         )
    460     elif img["stream"]["Filter"].name in _PDF_FILTER_WITH_LOSS:
    461         images.append(img["stream"].get_data())

ValueError: cannot reshape array of size 1052700 into shape (3300,2552,newaxis)
```
### Description
The problem is that the conversion to numpy array assumes 8bit (np.uint8), which is not correct on 1 bit images in the PDF.
You can get the information from the "stream" object:
```
print(img['stream'])
```
--> <PDFStream(321): raw=66079, {**'BitsPerComponent': 1**, 'ColorSpace': /'DeviceGray', 'DecodeParms': {'Columns': 2552, 'K': -1, 'Rows': 3300}, 'Filter': /'CCITTFaxDecode', 'Height': 3300, 'Subtype': /'Image', 'Type': /'XObject', 'Width': 2552, 'Length': 66079}>
A solution would be e.g. using PIL to get the image converted correctly:
```python
from PIL import Image

if img["stream"]["BitsPerComponent"] == 1:
    images.append(
        np.array(
            Image.frombytes(
                "1",
                (img["stream"]["Width"], img["stream"]["Height"]),
                img["stream"].get_data(),
            ).convert("L")
        )
    )
```
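The numbers in the traceback are consistent with this diagnosis: 3300 × 2552 pixels at 1 bit per pixel pack into exactly 1,052,700 bytes, which is precisely the array size the `reshape` complains about. A minimal pure-Python sketch of the unpacking (a hypothetical illustration, not the LangChain code, and it ignores per-row byte padding, which is moot here since 2552 is divisible by 8):

```python
def unpack_1bit(data, width, height):
    # Hypothetical illustration: each byte packs 8 pixels, most significant bit
    # first (as in PDF 1-bit image streams); expand to one 0/255 value per pixel.
    pixels = []
    for byte in data:
        for shift in range(7, -1, -1):
            pixels.append(255 if (byte >> shift) & 1 else 0)
    # Keep exactly width*height pixels.
    return pixels[: width * height]

# The sizes from the error message line up with 1-bit packing:
assert 3300 * 2552 // 8 == 1052700
```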
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
> Python Version: 3.11.9 (main, Apr 2 2024, 08:25:04) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.3.21
> langchain: 0.3.9
> langchain_community: 0.3.9
> langsmith: 0.1.147
> langchain_ollama: 0.2.0
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.3
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.4
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> ollama: 0.4.1
> orjson: 3.10.12
> packaging: 23.2
> pydantic: 2.10.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.1
> requests: 2.31.0
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.25
> tenacity: 8.2.3
> typing-extensions: 4.12.2 | ๐ค:bug | low | Critical |
2,715,448,906 | vscode | issue with nb occurrence highlighting | Testing #235047
Looks like it picks up some matches but not others when doing `cmd + shift + L`, even though they are highlighted.
In the video, only 2 are selected, but 4 were highlighted.
https://github.com/user-attachments/assets/81910cf9-4bc6-4138-b253-e86624f64e01
| bug,notebook-cell-editor | low | Minor |
2,715,459,898 | vscode | Debugger hover doesn't work with parenthesized expressions | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3
- OS Version: Windows 11 Pro
Here are two examples in python and in typescript for the same problem. If a variable has parentheses around it, the debug hover for getters doesn't work.
Steps to Reproduce:
1. Python code:
```py
a = ''
a.isupper()
(a).isupper()
print(a) # Breakpoint here!
```


2. TypeScript code:
```ts
const a = 'test';
a.length;
(a).length;
console.log(a);
```


| bug,debug | low | Critical |
2,715,473,669 | deno | "Failed to create JsStackFrame from callsite object" | I very well could be doing something abominable. However, the logs suggested this was a Deno bug, so I figured I'd create an issue just incase. The source code can be found [here](https://github.com/harrysolovay/structured-outputs/tree/659a2690c3f5f0aafd5413607497a74b62084f8f). The erroring script is `examples/what_to_ask.eg.ts`.
Version: Deno 2.1.2
```ts
warning: Failed to create JsStackFrame from callsite object: ; Result so far: RangeError: Maximum call stack size exceeded. This is a bug in deno
warning: Failed to create JsStackFrame from callsite object: RangeError: Maximum call stack size exceeded; Result so far: RangeError: Maximum call stack size exceeded. This is a bug in deno
[Function (anonymous)] {
fill: [Function: fill],
assert: [Function: assert],
annotate: [Function: annotate],
widen: [Function: widen],
[Symbol()]: [Internal Formatting Error] RangeError: Maximum call stack size exceeded
} [Function (anonymous)]
warning: Failed to create JsStackFrame from callsite object: ; Result so far: RangeError: Maximum call stack size exceeded. This is a bug in deno
[Internal Formatting Error] RangeError: Maximum call stack size exceeded [Function (anonymous)]
error: Uncaught (in promise) RangeError: Maximum call stack size exceeded
console.log(_0, getType)
^
at ext:runtime_main/js/99_main.js:272:3
at console.log (ext:deno_console/01_console.js:3141:37)
at file:///Users/harrysolovay/Dev/structured-outputs/json/toSchema.ts:108:13
at file:///Users/harrysolovay/Dev/structured-outputs/json/toSchema.ts:23:12
at file:///Users/harrysolovay/Dev/structured-outputs/core/TypeVisitor.ts:62:50
at TypeVisitor.sequence (file:///Users/harrysolovay/Dev/structured-outputs/core/TypeVisitor.ts:63:11)
at TypeVisitor.visit (file:///Users/harrysolovay/Dev/structured-outputs/core/TypeVisitor.ts:48:25)
at file:///Users/harrysolovay/Dev/structured-outputs/json/toSchema.ts:99:55
at Array.map (<anonymous>)
at file:///Users/harrysolovay/Dev/structured-outputs/json/toSchema.ts:92:38
``` | bug,deno_core | low | Critical |
2,715,481,460 | rust | `compare_method_predicate_entailment` is missing implied bounds from higher-ranked GAT | I tried this code (from `tests/ui/generic-associated-types/extended/lending_iterator.rs`):
```rust
pub trait FromLendingIterator<A>: Sized {
fn from_iter<T: for<'x> LendingIterator<Item<'x> = A>>(iter: T) -> Self;
}
impl<A> FromLendingIterator<A> for Vec<A> {
fn from_iter<I: for<'x> LendingIterator<Item<'x> = A>>(mut iter: I) -> Self {
let mut v = vec![];
while let Some(item) = iter.next() {
v.push(item);
}
v
}
}
pub trait LendingIterator {
type Item<'z>
where
Self: 'z;
fn next(&mut self) -> Option<Self::Item<'_>>;
fn collect<A, B: FromLendingIterator<A>>(self) -> B
where
Self: Sized,
Self: for<'q> LendingIterator<Item<'q> = A>,
{
<B as FromLendingIterator<A>>::from_iter(self)
}
}
```
I expected to see this happen: Pass (probably?)
Instead, this happened:
```
error[E0276]: impl has stricter requirements than trait
--> $DIR/lending_iterator.rs:6:45
|
LL | fn from_iter<T: for<'x> LendingIterator<Item<'x> = A>>(iter: T) -> Self;
| ------------------------------------------------------------------------ definition of `from_iter` from trait
...
LL | fn from_iter<I: for<'x> LendingIterator<Item<'x> = A>>(mut iter: I) -> Self {
| ^^^^^^^^^^^^ impl has extra requirement `I: 'x`
error: `Self` does not live long enough
--> $DIR/lending_iterator.rs:27:9
|
LL | <B as FromLendingIterator<A>>::from_iter(self)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0276`.
``` | A-trait-system,C-bug,T-types,A-implied-bounds,A-GATs,A-higher-ranked | low | Critical |
2,715,483,400 | go | x/build/cmd/relui: add workflows for some remaining manual recurring Go major release cycle tasks | In addition to workflows for all of Go release types, relui holds a few workflows for miscellaneous recurring tasks that come up in the release cycle (go.dev/s/release) such as [pinging early-in-cycle issues](https://cs.opensource.google/go/x/build/+/master:internal/relui/workflows.go;l=297-313;drc=4fcdaed4d4dccf0d4c7304e63f227de17e978872) and [removing wait-release hashtags](https://cs.opensource.google/go/x/build/+/master:internal/relui/workflows.go;l=316-320;drc=4fcdaed4d4dccf0d4c7304e63f227de17e978872) after tree reopening.
This is a tracking issue for relui workflows for a few more recurring tasks that are a good fit for automation, such as:
- applying wait-release hashtags to CLs where it can be determined to apply
- promoting api/next to api/go1.n.txt and filing an API audit issue
- merging release note fragments with `relnote generate` (newly applicable after #64169), atomically moving the result to x/website
CC @golang/release. | Builders,NeedsFix | low | Major |
2,715,493,684 | godot | Shadow looks a little weird on bright texture | ### Tested versions
v4.4.dev5.official [9e6098432]
### System information
Godot v4.4.dev5 - Android - Single-window, 1 monitor - OpenGL ES 3 (Compatibility) - Adreno (TM) 610 - (8 threads)
### Issue description
I am not sure how best to describe it. The shadow looks like it is glowing on a bright texture. Here is the preview:


### Steps to reproduce
n/a
### Minimal reproduction project (MRP)
n/a | bug,topic:rendering | low | Minor |
2,715,501,462 | rust | s390x regression: failing io::tests::try_oom_error | As of this merge commit:
```
commit d53f0b1d8e261f2f3535f1cd165c714fc0b0b298
Merge: a2545fd6fc6 4a216a25d14
Author: bors <[email protected]>
Date: Thu Nov 28 21:44:34 2024 +0000
Auto merge of #123244 - Mark-Simulacrum:share-inline-never-generics, r=saethlin
```
I'm seeing the following test case failure. Note that the test passes in *both* parents (a2545fd6fc6 and 4a216a25d14) of the merge commit.
```
thread 'io::tests::try_oom_error' panicked at std/src/io/tests.rs:822:62:
called `Result::unwrap_err()` on an `Ok` value: ()
stack backtrace:
0: 0x3fff7dd6702 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h0eec3d9053c23c0f
1: 0x3fff7e37506 - core::fmt::write::h66866b531685abe5
2: 0x3fff7dc575e - std::io::Write::write_fmt::h89ced3ac9904279e
3: 0x3fff7dd6570 - std::sys::backtrace::BacktraceLock::print::h363d5b9cad1f5c19
4: 0x3fff7df62e4 - std::panicking::default_hook::{{closure}}::ha4b8eaf1f6a37f57
5: 0x3fff7df60da - std::panicking::default_hook::hda41cc1e1c3b4efa
6: 0x2aa00430d78 - test::test_main::{{closure}}::h4d9e2859f981c511
7: 0x3fff7df6aa0 - std::panicking::rust_panic_with_hook::heff88192ef2a89fb
8: 0x3fff7dd6d52 - std::panicking::begin_panic_handler::{{closure}}::hff5589d5c45993a6
9: 0x3fff7dd69b4 - std::sys::backtrace::__rust_end_short_backtrace::h165daf71d9abcca8
10: 0x3fff7df63ca - rust_begin_unwind
11: 0x3fff7d4aa6a - core::panicking::panic_fmt::hec8c29ccd1751d1e
12: 0x3fff7d4b948 - core::result::unwrap_failed::h47cf11019e236d96
13: 0x2aa001c5a9a - core::ops::function::FnOnce::call_once::h5453841f675c42ec
14: 0x2aa00436d74 - test::__rust_begin_short_backtrace::h31f93d45aa944e21
15: 0x2aa00436f62 - test::run_test_in_process::h617ed5302028c350
16: 0x2aa0042a67e - std::sys::backtrace::__rust_begin_short_backtrace::hbc434a15ea7a090f
17: 0x2aa00425e14 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h2f86d2c09a8a35d2
18: 0x3fff7df33a8 - std::sys::pal::unix::thread::Thread::new::thread_start::hce74d4c3b42eec78
19: 0x3fff7bac3fa - start_thread
at /usr/src/debug/glibc-2.39-17.1.ibm.fc40.s390x/nptl/pthread_create.c:447:8
20: 0x3fff7c2bde0 - thread_start
at /usr/src/debug/glibc-2.39-17.1.ibm.fc40.s390x/misc/../sysdeps/unix/sysv/linux/s390/s390-64/clone3.S:71
21: 0x0 - <unknown>
```
I've tried debugging the test, but if I'm reading this correctly, the test function was already completely optimized out and replaced by a failed assertion at compile time:
```
Dump of assembler code for function _ZN4core3ops8function6FnOnce9call_once17h5453841f675c42ecE:
0x000002aa001c5a60 <+0>: stmg %r6,%r15,48(%r15)
0x000002aa001c5a66 <+6>: aghi %r15,-168
0x000002aa001c5a6a <+10>: lgr %r11,%r15
0x000002aa001c5a6e <+14>: lgrl %r1,0x2aa00568f28
0x000002aa001c5a74 <+20>: lb %r0,0(%r1)
0x000002aa001c5a7a <+26>: la %r4,167(%r11)
0x000002aa001c5a7e <+30>: larl %r2,0x2aa00481e7c <anon.6846cc147164699b42462cc8b979de03.18.llvm.3644326088524771271>
0x000002aa001c5a84 <+36>: lghi %r3,46
0x000002aa001c5a88 <+40>: larl %r5,0x2aa00545d08 <anon.6846cc147164699b42462cc8b979de03.17.llvm.3644326088524771271>
0x000002aa001c5a8e <+46>: larl %r6,0x2aa00546f78 <anon.6846cc147164699b42462cc8b979de03.473.llvm.3644326088524771271>
0x000002aa001c5a94 <+52>: brasl %r14,0x2aa0005c0e0 <_ZN4core6result13unwrap_failed17h47cf11019e236d96E@plt>
```
Note the unconditional call to `unwrap_failed`.
| T-compiler,C-bug,O-SystemZ,E-needs-mcve,A-ABI,I-miscompile | low | Critical |
2,715,517,643 | go | proposal: bufio: Scanner.IterText/Scanner.IterBytes | ### Proposal Details
Since iterators were introduced in 1.23, there is a growing number of libraries based on this feature. Reading data by lines or words is quite a common task and could be accomplished with a `for ... range` loop. It requires just two simple methods on `bufio.Scanner`:
```go
func (s *Scanner) TextSeq() iter.Seq[string] {
return func(yield func(string) bool) {
for s.Scan() {
if !yield(s.Text()) {
break
}
}
}
}
func (s *Scanner) BytesSeq() iter.Seq[[]byte] {
return func(yield func([]byte) bool) {
for s.Scan() {
if !yield(s.Bytes()) {
break
}
}
}
}
```
Reading whole file as collection of lines could be like:
```go
f, err := os.Open("file.txt")
if err != nil {
panic(err)
}
defer f.Close()
scanner := bufio.NewScanner(f)
// read all lines as slice of strings
lines := slices.Collect(scanner.TextSeq())
// instead of:
// lines := make([]string, 0)
// for scanner.Scan() {
// lines = append(lines, scanner.Text())
// }
```
| Proposal | low | Major |
2,715,523,349 | tensorflow | CPU performance is questionable | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
v2.18.0-rc2-4-g6550e4bd802 2.18.0
### Custom code
No
### OS platform and distribution
Ubuntu 22.04
### Mobile device
_No response_
### Python version
Python 3.10.12
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
None
### Current behavior?
So, I have 2 machines: an i5-5200U and a 13700K. Both of them have only an iGPU, which is ignored; understandable.
I am using DeepFace, which internally uses TensorFlow. I am running the same Python program on both machines with the same datasets. The combination is ArcFace+CenterFace. I am running a single photo against a library of photos.
Dataset 1:
Intel I-5 5200U:
CPU usage ~75%, and it takes 75 seconds.
13700K:
CPU usage ~60% with all cores around 60%, and it takes 23 seconds.
Dataset 2:
Intel I-5 5200U:
CPU usage ~60%, and it takes 99 seconds.
13700K:
CPU usage ~20% with all cores around 20%, and it takes 26 seconds.
We get a speedup of about 4x; however, the CPU is not fully utilized in either case. I was wondering about all sorts of optimizations I could apply to TensorFlow to squeeze the most out of this system.
I have also run a benchmark on the 13700K, and it is close to everyone else's results:
https://browser.geekbench.com/v6/cpu/9219843
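For concreteness, the kind of tuning I was wondering about can be sketched like this; the variable names are TensorFlow's documented threading knobs, while the values are illustrative guesses for a 24-thread machine, not recommendations:

```python
import os

# Pin TensorFlow's thread pools via environment variables. These must be
# set before the first `import tensorflow`; afterwards they are ignored.
cpus = os.cpu_count() or 1
os.environ["TF_NUM_INTRAOP_THREADS"] = str(cpus)  # parallelism within a single op
os.environ["TF_NUM_INTEROP_THREADS"] = "2"        # ops that may run concurrently
os.environ["OMP_NUM_THREADS"] = str(cpus)         # oneDNN/OpenMP CPU kernels

# import tensorflow as tf  # import only after the variables are set
print(os.environ["TF_NUM_INTRAOP_THREADS"], os.environ["TF_NUM_INTEROP_THREADS"])
```

The same settings can be made after import via `tf.config.threading.set_intra_op_parallelism_threads` / `set_inter_op_parallelism_threads`, but only before TensorFlow initializes its thread pools.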
### Standalone code to reproduce the issue
```shell
https://sharepad.io/p/a327C3f
Output on I-5 5200U:
Total CPUs available: 4
Available memory: 3978.17 MB
2024-12-03 20:40:07.838943: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
Intra-op threads: 0
Inter-op threads: 0
CPUs detected:
PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')
Number of logical CPUs detected by TensorFlow: 1
Output on 13700k:
Total CPUs available: 24
Available memory: 57406.59 MB
2024-12-03 20:40:07.838943: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
Intra-op threads: 0
Inter-op threads: 0
CPUs detected:
PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')
Number of logical CPUs detected by TensorFlow: 1
```
### Relevant log output
_No response_ | type:support,TF 2.18 | medium | Critical |
2,715,543,522 | PowerToys | [request] PowerToys Run, execute hotkeys, switch to desktop | ### Description of the new feature / enhancement
Hello,
it would be nice if a new feature could be implemented to send keys or hotkeys, for example: you type "win+d" => switch to the desktop, "win" => open the start menu, "ctrl+shift+esc" => open Task Manager, and so on.
Also, for the window switcher, could you please add a "switch to desktop" / minimize-all entry to the list.
### Scenario when this would be used?
This would be really handy, especially if you control the PC remotely or through game streams: you could then easily maneuver through Windows in situations where some hotkey bindings are not natively available.
### Supporting information
_No response_ | Idea-Enhancement,Product-PowerToys Run | low | Major |
2,715,555,879 | vscode | Very small buttons for expandable hover | Testing #234989
First of all, I think this is a great feature! - really useful :)
I wish the UX were better, though: it's really hard, or rather it takes very conscious effort, to carefully click on + or -, since they are really close to each other.
We may want to consider putting more of a gap between the two buttons, or separating them to the left and right? (This might look ugly, but it would give a better experience for people actually clicking the buttons.)
I do notice the effort to null one of them out if it is deemed invalid (if the hover is in an only-expandable state, the - gets blacked out), so that is great.
Honestly, the buttons are too small :/
Perhaps there are keyboard shortcuts for these that I don't know about, though!


| ux,under-discussion,editor-hover | low | Minor |
2,715,570,083 | rust | ICE: `DefId::expect_local DefId isn't local` | <!--
[31mICE[0m: Rustc ./a.rs '' 'thread 'rustc' panicked at compiler/rustc_hir_analysis/src/collect/predicates_of.rs:401:60: 'DefId::expect_local: `DefId(2:2144 ~ core[958b]::mem::transmutability::TransmuteFrom::{constant#0})` isn't local'', 'thread 'rustc' panicked at compiler/rustc_hir_analysis/src/collect/predicates_of.rs:401:60: 'DefId::expect_local: `DefId(2:2144 ~ core[958b]::mem::transmutability::TransmuteFrom::{constant#0})` isn't local''
File: /tmp/im/a.rs
-->
code:
````rust
#![feature(generic_const_exprs, transmutability)]
mod assert {
use std::mem::TransmuteFrom;
pub fn is_transmutable<Src, Dst>()
where
Dst: TransmuteFrom<Src>,
{
}
}
pub fn main() {}
````
Version information
````
rustc 1.85.0-nightly (8575f8f91 2024-12-03)
binary: rustc
commit-hash: 8575f8f91bbd7dca529d362afc8117db74661c3b
commit-date: 2024-12-03
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.4
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/8575f8f91bbd7dca529d362afc8117db74661c3b/compiler/rustc_hir_analysis/src/collect/predicates_of.rs#L395-L407
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
warning: the feature `generic_const_exprs` is incomplete and may not be safe to use and/or cause compiler crashes
--> /tmp/crash.rs:1:12
|
1 | #![feature(generic_const_exprs, transmutability)]
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #76560 <https://github.com/rust-lang/rust/issues/76560> for more information
= note: `#[warn(incomplete_features)]` on by default
thread 'rustc' panicked at compiler/rustc_hir_analysis/src/collect/predicates_of.rs:401:60:
DefId::expect_local: `DefId(2:2144 ~ core[958b]::mem::transmutability::TransmuteFrom::{constant#0})` isn't local
stack backtrace:
0: 0x744f82b429ba - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h297eb035a685a8b3
1: 0x744f83213ca6 - core::fmt::write::h8e5bb32e06fa555f
2: 0x744f841d9f11 - std::io::Write::write_fmt::h5d3c55e1a008be63
3: 0x744f82b42812 - std::sys::backtrace::BacktraceLock::print::h2e284e09a2de7ba2
4: 0x744f82b44d1a - std::panicking::default_hook::{{closure}}::h69c9a1055a2222c1
5: 0x744f82b44b63 - std::panicking::default_hook::h1b78022d7d188ab6
6: 0x744f81cc19e8 - std[a8291144892739ee]::panicking::update_hook::<alloc[a1db28cf05e82ab]::boxed::Box<rustc_driver_impl[eeb16750bb3b6251]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x744f82b45518 - std::panicking::rust_panic_with_hook::h032cdc7be3cb56c8
8: 0x744f82b4520a - std::panicking::begin_panic_handler::{{closure}}::ha7afe2e3fcd9c50e
9: 0x744f82b42e79 - std::sys::backtrace::__rust_end_short_backtrace::hda5e7bea82be6b3c
10: 0x744f82b44ecd - rust_begin_unwind
11: 0x744f7f7ae750 - core::panicking::panic_fmt::hf510dcbfda4b6faf
12: 0x744f81ea6240 - <rustc_hir_analysis[4f3591f81681eb24]::collect::predicates_of::const_evaluatable_predicates_of::{closure#0}::ConstCollector as rustc_type_ir[36b6ce6a92c61b97]::visit::TypeVisitor<rustc_middle[e2ded4ee35845ccb]::ty::context::TyCtxt>>::visit_const
13: 0x744f83fcc8ef - rustc_hir_analysis[4f3591f81681eb24]::collect::predicates_of::gather_explicit_predicates_of
14: 0x744f83fc41f4 - rustc_query_impl[1566d5036aaf6ee4]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1566d5036aaf6ee4]::query_impl::explicit_predicates_of::dynamic_query::{closure#2}::{closure#0}, rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 24usize]>>
15: 0x744f8338f670 - rustc_query_system[3e62bfa58e5579fb]::query::plumbing::try_execute_query::<rustc_query_impl[1566d5036aaf6ee4]::DynamicConfig<rustc_query_system[3e62bfa58e5579fb]::query::caches::DefIdCache<rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[1566d5036aaf6ee4]::plumbing::QueryCtxt, false>
16: 0x744f8338eeb2 - rustc_query_impl[1566d5036aaf6ee4]::query_impl::explicit_predicates_of::get_query_non_incr::__rust_end_short_backtrace
17: 0x744f8338d5c6 - rustc_hir_analysis[4f3591f81681eb24]::collect::predicates_of::predicates_of
18: 0x744f8338d4e1 - rustc_query_impl[1566d5036aaf6ee4]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1566d5036aaf6ee4]::query_impl::predicates_of::dynamic_query::{closure#2}::{closure#0}, rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 24usize]>>
19: 0x744f8338f659 - rustc_query_system[3e62bfa58e5579fb]::query::plumbing::try_execute_query::<rustc_query_impl[1566d5036aaf6ee4]::DynamicConfig<rustc_query_system[3e62bfa58e5579fb]::query::caches::DefIdCache<rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 24usize]>>, false, false, false>, rustc_query_impl[1566d5036aaf6ee4]::plumbing::QueryCtxt, false>
20: 0x744f8338edc6 - rustc_query_impl[1566d5036aaf6ee4]::query_impl::predicates_of::get_query_non_incr::__rust_end_short_backtrace
21: 0x744f8337d99b - <rustc_hir_analysis[4f3591f81681eb24]::collect::CollectItemTypesVisitor as rustc_hir[8c6e6009ce07755a]::intravisit::Visitor>::visit_item
22: 0x744f80e1da34 - rustc_hir_analysis[4f3591f81681eb24]::check::wfcheck::check_well_formed
23: 0x744f83c86687 - rustc_query_impl[1566d5036aaf6ee4]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1566d5036aaf6ee4]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 1usize]>>
24: 0x744f83c86948 - rustc_query_system[3e62bfa58e5579fb]::query::plumbing::try_execute_query::<rustc_query_impl[1566d5036aaf6ee4]::DynamicConfig<rustc_data_structures[864da293b8c7000e]::vec_cache::VecCache<rustc_span[634f3266c1e253b7]::def_id::LocalDefId, rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 1usize]>, rustc_query_system[3e62bfa58e5579fb]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[1566d5036aaf6ee4]::plumbing::QueryCtxt, false>
25: 0x744f83c86662 - rustc_query_impl[1566d5036aaf6ee4]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
26: 0x744f83c873ec - rustc_hir_analysis[4f3591f81681eb24]::check::wfcheck::check_mod_type_wf
27: 0x744f83c8720b - rustc_query_impl[1566d5036aaf6ee4]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1566d5036aaf6ee4]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 1usize]>>
28: 0x744f84012308 - rustc_query_system[3e62bfa58e5579fb]::query::plumbing::try_execute_query::<rustc_query_impl[1566d5036aaf6ee4]::DynamicConfig<rustc_query_system[3e62bfa58e5579fb]::query::caches::DefaultCache<rustc_span[634f3266c1e253b7]::def_id::LocalModDefId, rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[1566d5036aaf6ee4]::plumbing::QueryCtxt, false>
29: 0x744f840120b0 - rustc_query_impl[1566d5036aaf6ee4]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
30: 0x744f834c53dc - rustc_hir_analysis[4f3591f81681eb24]::check_crate
31: 0x744f83b0eebc - rustc_interface[14f05bc75be5cd61]::passes::run_required_analyses
32: 0x744f83b09f1e - rustc_interface[14f05bc75be5cd61]::passes::analysis
33: 0x744f83b09eef - rustc_query_impl[1566d5036aaf6ee4]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1566d5036aaf6ee4]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 1usize]>>
34: 0x744f841467fa - rustc_query_system[3e62bfa58e5579fb]::query::plumbing::try_execute_query::<rustc_query_impl[1566d5036aaf6ee4]::DynamicConfig<rustc_query_system[3e62bfa58e5579fb]::query::caches::SingleCache<rustc_middle[e2ded4ee35845ccb]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[1566d5036aaf6ee4]::plumbing::QueryCtxt, false>
35: 0x744f841464ce - rustc_query_impl[1566d5036aaf6ee4]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
36: 0x744f8426ed79 - rustc_interface[14f05bc75be5cd61]::interface::run_compiler::<core[958b0aa3988e400a]::result::Result<(), rustc_span[634f3266c1e253b7]::ErrorGuaranteed>, rustc_driver_impl[eeb16750bb3b6251]::run_compiler::{closure#0}>::{closure#1}
37: 0x744f840e1bc7 - std[a8291144892739ee]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[14f05bc75be5cd61]::util::run_in_thread_with_globals<rustc_interface[14f05bc75be5cd61]::util::run_in_thread_pool_with_globals<rustc_interface[14f05bc75be5cd61]::interface::run_compiler<core[958b0aa3988e400a]::result::Result<(), rustc_span[634f3266c1e253b7]::ErrorGuaranteed>, rustc_driver_impl[eeb16750bb3b6251]::run_compiler::{closure#0}>::{closure#1}, core[958b0aa3988e400a]::result::Result<(), rustc_span[634f3266c1e253b7]::ErrorGuaranteed>>::{closure#0}, core[958b0aa3988e400a]::result::Result<(), rustc_span[634f3266c1e253b7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[958b0aa3988e400a]::result::Result<(), rustc_span[634f3266c1e253b7]::ErrorGuaranteed>>
38: 0x744f840e1862 - <<std[a8291144892739ee]::thread::Builder>::spawn_unchecked_<rustc_interface[14f05bc75be5cd61]::util::run_in_thread_with_globals<rustc_interface[14f05bc75be5cd61]::util::run_in_thread_pool_with_globals<rustc_interface[14f05bc75be5cd61]::interface::run_compiler<core[958b0aa3988e400a]::result::Result<(), rustc_span[634f3266c1e253b7]::ErrorGuaranteed>, rustc_driver_impl[eeb16750bb3b6251]::run_compiler::{closure#0}>::{closure#1}, core[958b0aa3988e400a]::result::Result<(), rustc_span[634f3266c1e253b7]::ErrorGuaranteed>>::{closure#0}, core[958b0aa3988e400a]::result::Result<(), rustc_span[634f3266c1e253b7]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[958b0aa3988e400a]::result::Result<(), rustc_span[634f3266c1e253b7]::ErrorGuaranteed>>::{closure#1} as core[958b0aa3988e400a]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
39: 0x744f840e0fab - std::sys::pal::unix::thread::Thread::new::thread_start::hcf422f7542348f85
40: 0x744f7e4a339d - <unknown>
41: 0x744f7e52849c - <unknown>
42: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/tmp/im/rustc-ice-2024-12-03T17_09_07-4164483.txt` to your bug report
query stack during panic:
#0 [explicit_predicates_of] computing explicit predicates of `assert::is_transmutable`
#1 [predicates_of] computing predicates of `assert::is_transmutable`
end of query stack
warning: 1 warning emitted
```
</p>
</details>
<!--
query stack:
#0 [explicit_predicates_of] computing explicit predicates of `assert::is_transmutable`
#1 [predicates_of] computing predicates of `assert::is_transmutable`
-->
@rustbot label +F-generic_const_exprs +F-transmutability | I-ICE,T-compiler,C-bug,F-generic_const_exprs,S-has-mcve,S-bug-has-test,F-transmutability,S-has-bisection | low | Critical |
2,715,582,302 | flutter | [packages] `withOpacity` is deprecated | Example apps for a few packages use `withOpacity`, which is deprecated in favor of `withValues`. | package,team-ecosystem,P2,c: tech-debt,p: flutter_markdown,p: flutter_adaptive_scaffold,triaged-ecosystem,p: two_dimensional_scrollables,p: deprecated api | low | Major |
2,715,606,378 | tauri | [bug] Tauri CLI fails to correctly install/remove dependencies in a pnpm workspace | ### Describe the bug
I followed the documentation to install a plugin using the `tauri add` command, but it fails when installing the NPM dependency.
```bash
pnpm tauri add cli
```
```bash
> @readest/[email protected] tauri /Users/chrox/dev/readest/apps/readest-app
> tauri "add" "cli"
Info Installing Cargo dependency "tauri-plugin-cli"...
Info Installing NPM dependency "@tauri-apps/plugin-cli@>=2"...
Failed to install NPM dependency
Error Failed to install NPM dependency
ELIFECYCLE  Command failed with exit code 1.
```
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
➜ readest-app git:(main) ✗ pnpm tauri info
> @readest/[email protected] tauri /Users/chrox/dev/readest/apps/readest-app
> tauri "info"
WARNING: no lock files found, defaulting to npm
[✓] Environment
- OS: Mac OS 14.3.1 arm64 (X64)
✓ Xcode Command Line Tools: installed
✓ rustc: 1.82.0 (f6e511eec 2024-10-15)
✓ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✓ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✓ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.9.0
- pnpm: 9.14.4
- yarn: 1.22.22
- npm: 10.9.1
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.2
- tao 🦀: 0.30.8
- @tauri-apps/api: 2.1.1
- @tauri-apps/cli: 2.1.0
[-] Plugins
- tauri-plugin-http 🦀: 2.0.4
- @tauri-apps/plugin-http: 2.0.1
- tauri-plugin-fs 🦀: 2.1.0
- @tauri-apps/plugin-fs: 2.0.3
- tauri-plugin-os 🦀: 2.0.1
- @tauri-apps/plugin-os: 2.0.0
- tauri-plugin-updater 🦀: 2.1.0
- @tauri-apps/plugin-updater: 2.0.0
- tauri-plugin-process 🦀: 2.0.1
- @tauri-apps/plugin-process: 2.0.0
- tauri-plugin-dialog 🦀: 2.0.4
- @tauri-apps/plugin-dialog: 2.0.1
- tauri-plugin-log 🦀: 2.0.3
- @tauri-apps/plugin-log: 2.0.1
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell: 2.0.1
- tauri-plugin-cli 🦀: 2.0.1
- @tauri-apps/plugin-cli: not installed!
[-] App
- build-type: bundle
- CSP: img-src 'self' blob: data: asset: http://asset.localhost; script-src 'self' 'unsafe-inline' 'unsafe-eval' blob: asset: http://asset.localhost https://*.sentry.io https://*.posthog.com; connect-src 'self' blob: asset: http://asset.localhost ipc: http://ipc.localhost https://*.sentry.io https://*.posthog.com https://*.deepl.com https://*.wikipedia.org https://*.wiktionary.org; style-src 'self' 'unsafe-inline' blob: asset: http://asset.localhost; default-src 'self' 'unsafe-inline' blob: customprotocol: asset: http://asset.localhost ipc: http://ipc.localhost https://fonts.gstatic.com; frame-src 'self' blob: asset: http://asset.localhost
- frontendDist: ../out
- devUrl: http://localhost:3000/
- framework: React (Next.js)
- bundler: Webpack
```
### Stack trace
_No response_
### Additional context
The `WARNING: no lock files found, defaulting to npm` warning appears because the `pnpm-lock.yaml` file is in the root dir of the monorepo. When running the `tauri add` command in the root dir, I get a similar error:
```
➜ readest git:(main) ✗ pnpm tauri add cli
> @readest/monorepo@ tauri /Users/chrox/dev/readest
> pnpm --filter @readest/readest-app tauri "add" "cli"
> @readest/[email protected] tauri /Users/chrox/dev/readest/apps/readest-app
> tauri "add" "cli"
Info Installing Cargo dependency "tauri-plugin-cli"...
Info Installing NPM dependency "@tauri-apps/plugin-cli@>=2"...
Failed to install NPM dependency
Error Failed to install NPM dependency
/Users/chrox/dev/readest/apps/readest-app:
ERR_PNPM_RECURSIVE_RUN_FIRST_FAIL  @readest/[email protected] tauri: `tauri "add" "cli"`
Exit status 1
ELIFECYCLE  Command failed with exit code 1.
``` | type: bug,status: needs triage | low | Critical |
2,715,641,293 | vscode | Git - Don't show inline blame while dragging selection | Testing #235028
1. Click and drag to make a multi-line selection
**Bug**
The inline blame decoration moves along with the mouse cursor, which is distracting
I'd find it better if blame only showed after the selection has been made instead of while dragging | bug,git | low | Critical |
2,715,651,218 | pytorch | frozen modules | ### 🐛 Describe the bug
Checking frozen modules:

```python
import sys

frozen_modules = [name for name, module in sys.modules.items() if getattr(module, '__frozen__', False)]
print("Frozen Modules:", frozen_modules)
```

Output:

```
Frozen Modules: ['torch.ops', 'torch.classes']
```
### Versions
I'm using Python 3.12.7 and torch 2.5.1. Is it true that the latest PyTorch version doesn't support Python versions above 3.11.x?
cc @seemethere @malfet @osalpekar @atalman | module: binaries,triaged | low | Critical |
2,715,656,194 | flutter | [packages] `Color.value` is deprecated | ```
info โข [...] โข 'value' is deprecated and shouldn't be used. Use
component accessors like .r or .g, or toARGB32 for an explicit conversion. Try replacing the use of the
deprecated member with the replacement. โข deprecated_member_use
```
It's mostly used in plugins where we are serializing colors for method channels. In those cases, it would probably be best to make a `PlatformColor` class and get each component, rather than relying on matching bit operations on both sides as we currently do. | p: maps,p: webview,package,team-ecosystem,P2,c: tech-debt,triaged-ecosystem,p: deprecated api,p: vector_graphics,p: flutter_svg | low | Major |
2,715,662,377 | flutter | [palette_generator] `red`/`blue`/`green`/`alpha` are deprecated | They have been replaced with one-character versions:
```
info โข example/lib/main.dart:315:16 โข 'red' is deprecated and shouldn't be used. Use .r. Try replacing the
use of the deprecated member with the replacement. โข deprecated_member_use
info โข example/lib/main.dart:315:30 โข 'green' is deprecated and shouldn't be used. Use .g. Try replacing the
use of the deprecated member with the replacement. โข deprecated_member_use
info โข example/lib/main.dart:315:46 โข 'blue' is deprecated and shouldn't be used. Use .b. Try replacing the
use of the deprecated member with the replacement. โข deprecated_member_use
info โข lib/palette_generator.dart:761:23 โข 'alpha' is deprecated and shouldn't be used. Use .a. Try replacing
the use of the deprecated member with the replacement. โข deprecated_member_use
[...]
```
This will be a trivial update once it's available on `stable`. | package,p: palette_generator,team-ecosystem,P2,c: tech-debt,triaged-ecosystem,p: deprecated api | low | Minor |
2,715,673,156 | flutter | [A11y] Semantics roles system โ๏ธ | ### Use case
Our current semantics system is missing a bunch of roles on the web and desktop platforms, and some on the mobile platforms.
See
http://flutter.dev/go/semantics-roles
### Proposal
http://flutter.dev/go/semantics-role-system
### General
- [ ] https://github.com/flutter/flutter/issues/161346
### Dropdown, Menu, ComboBox
- [ ] https://github.com/flutter/flutter/issues/157177
### Tabs
- [ ] https://github.com/flutter/flutter/issues/107861
### Table
- [ ] https://github.com/flutter/flutter/issues/45205
### List
- [ ] https://github.com/flutter/flutter/issues/144988
### menu
- [ ] https://github.com/flutter/flutter/issues/161561
### document
##### Link
- [x] https://github.com/flutter/flutter/issues/157210
- [ ] https://github.com/flutter/flutter/issues/102535
##### Dialog
- [ ] https://github.com/flutter/flutter/issues/157204
- [ ] https://github.com/flutter/flutter/issues/157207
##### Heading
- [ ] https://github.com/flutter/flutter/issues/155928
##### form
- [ ] https://github.com/flutter/flutter/issues/161628
##### tooltip
- [ ] https://github.com/flutter/flutter/issues/161630
### misc
- [ ] https://github.com/flutter/flutter/issues/161539
- [ ] https://github.com/flutter/flutter/issues/161540
- [ ] https://github.com/flutter/flutter/issues/161542
- [ ] https://github.com/flutter/flutter/issues/161557
- [ ] https://github.com/flutter/flutter/issues/161558
- [ ] https://github.com/flutter/flutter/issues/161559
- [ ] https://github.com/flutter/flutter/issues/161631
- [ ] https://github.com/flutter/flutter/issues/161689
- [ ] https://github.com/flutter/flutter/issues/161690
| c: new feature,a: accessibility,P2,team-accessibility,triaged-accessibility | low | Minor |
2,715,674,189 | opencv | YOLO11 models do not operate with the CUDA and CUDA FP16 targets (OpenCV 4.10.0) | ### System Information
OpenCV 4.10.0
Windows 11 Pro
JDK 23.0.1
CUDA 12.4
CuDNN cudnn-windows-x86_64-9.0.0.312_cuda12
### Detailed description
YOLO11 models do not operate with the CUDA and CUDA FP16 targets. Meanwhile, they operate well with the CPU target.
### Steps to reproduce
For instance, the yolo11n.onnx model (https://dropmefiles.com/w332f) throws the following error:
Caused by: CvException [org.opencv.core.CvException: cv::Exception: OpenCV(4.10.0-dev) G:\opencv-4.x\modules\dnn\src\net_impl_fuse.cpp:608: error: (-215:Assertion failed) biasLayerData->outputBlobsWrappers.size() == 1 in function 'cv::dnn::dnn4_v20240521::Net::Impl::fuseLayers'
]
at org.opencv.dnn.Net.forward_4(Native Method)
at org.opencv.dnn.Net.forward(Net.java:352)
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn,category: dnn (onnx) | low | Critical |
2,715,680,252 | godot | Checkable properties behave differently based on where they are edited | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated GeForce GTX 1660 Ti - Intel(R) Core(TM) i5-9300HF CPU @ 2.40GHz (8 Threads)
### Issue description
I use "checkable properties" to refer to properties exposed to the editor through `_get_property_list` with the usage flag `PROPERTY_USAGE_CHECKABLE`.
Assume we have the following:
- a custom resource class `MyResource` with a checkable property of type `int`
- a file `my_resource.tres` storing an instance of the resource
- a scene `MyNode` with an exported member of type `MyResource` set to `my_resource.tres`
When editing `my_resource.tres` directly, checking the property on and off works as I expect. Specifically, toggling it off sets the value to `null` internally and the editor displays `0`. However, when editing `my_resource.tres` through `MyNode`'s inspector, toggling it on or off does nothing. This doesn't just affect the editor: even if `my_property` is manually set to 0 and unchecked, it will be saved as 0 and not `null`, even though the two may have different meanings.
In my project, I use checkable properties in a system similar to `Control`'s themes. `null` and only `null` means "don't override the parent's value". As a workaround, I store all such theme-like resources externally.
### Steps to reproduce
The minimal reproduction project below contains all the necessary files to reproduce the issue.
1) Open `my_resource.tres`. Toggling its property on should print `my_property <- 0` to the console (replace 0 with whatever value you've entered if using the spinbox directly). Toggling it off should always display `my_property <- <null>`.
2) Open `my_node.tscn`, and click on its `first` field to edit it. Since its value is set to `my_resource.tres`, I expect it to work just like in the previous case. Instead, toggling the property on or off does nothing. It is possible to input a non-zero value and toggle it off without the value being reset.
### Minimal reproduction project (MRP)
[mrp_checkable_properties.zip](https://github.com/user-attachments/files/17997122/mrp_checkable_properties.zip)
| bug,topic:editor | low | Minor |
2,715,684,536 | ui | [feat]: Stepper | ### Feature description
Hello,
I am actively using the Shadcn library and find it highly efficient. However, there is currently no Stepper component available in the library. The Stepper component, commonly used to visualize step-by-step processes, is a widely preferred feature in modern user interface designs.
I believe adding a Stepper component similar to the one in Material-UI would greatly benefit Shadcn/ui users. Would it be possible to include this feature? Thank you in advance, and I wish you good luck with your work.
### Affected component/components
Stepper
### Additional Context
like this : https://mui.com/material-ui/react-stepper/
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,715,722,368 | go | x/net/http2: stuck extended CONNECT requests | @aojea points out in [https://go-review.googlesource.com/c/net/+/610977/comment/2f78d788_69fe1833/](https://go-review.googlesource.com/c/net/+/610977/comment/2f78d788_69fe1833/) that HTTP/2 extended CONNECT requests can get stuck waiting for a SETTINGS frame, due to a failure to close ClientConn.seenSettingsChan on some error paths.
| NeedsFix | low | Critical |
2,715,731,092 | vscode | Git - What should blame show for deleted files | Testing #235028
1. Delete a tracked file and stage the change
Currently this keeps showing the original blame info. Should it show `Not Committed Yet (staged)` instead? | bug,git | low | Minor |
2,715,735,161 | PowerToys | [Request] Automated window manager with rules for programs and system events | ### Description of the new feature / enhancement
It would be great if PowerToys provided an automated window manager which allowed setting different rules for when a certain app is opened, closed, or minimized/maximized. It could also support various sorts of system events (a certain time of day, a light/dark mode change, etc.).
In general, it should allow commands to be performed automatically when some system event occurs, and provide an easy-to-use rules editor with some predefined events and actions. It could be called, for example, "PowerToys Scenarios" or "PowerToys Rules"
### Scenario when this would be used?
Such flexible tool will allow users to easily simplify their everyday routines and make system more adapted to their lifestyles.
Few examples I can think of:
- When a window of a certain program is opened (allow to query by window class or title), make it always on top and semitransparent
- In the evening, change desktop wallpaper and switch system theme to dark
- When any new window is opened, snap it to the selected FancyZones layout (essentially implementing a tiling window manager)
### Supporting information
Although it is possible to implement some of these automation rules using AutoHotkey scripts, that has a high learning curve and is out of reach for the majority of Windows users.
2,715,747,212 | godot | [MINOR] Dieresis not working from my keyboard | ### Tested versions
4.2.2.stable
### System information
Windows 7 - Godot 4.2.2 - Microsoft Wired Keyboard 600 (Mouse and Keyboard Center) (Azerty layout)
### Issue description
Never thought I would one day run into a minor bug of this sort, but figuring out that it was an engine bug (or maybe not; after all, it could be a keyboard/computer issue, since I have no way to check) was frustrating, especially for something so oddly specific.
So, disclaimer: this is a minor bug. A very minor thing, and honestly I'm not even sure it's a real thing.
That said, what's the bug: every letter with a dieresis cannot be typed using the standard method on my keyboard, meaning SHIFT+"^"; but I can copy-paste it, and I can type it with a virtual keyboard. "Where?" Well, both in-editor and in-game.
It's not a font problem in-game, because I can copy-paste the character just fine; and it's not a keyboard problem, because I can type it elsewhere, unless there is some very specific problem with Godot???
Something else to note: normally, typing an accent before a character that cannot take it just types the accent alone followed by that character, right? So typing that accent twice should type it twice? Well, if I do that in Godot with the dieresis (which, again, for me is SHIFT+"^"), it produces: `` ^¨ ``
Whatever that nonsense is.
### Steps to reproduce
Either you can reproduce it by typing it (but that's probably not the case, or else I guess it would have been reported before), so I guess try it with an AZERTY keyboard... Otherwise... I honestly have no fucking idea.
### Minimal reproduction project (MRP)
. | bug,topic:input | low | Critical |
2,715,748,488 | deno | `deno task`: built-in additional env vars | It'd be nice to have a normalized set of env var for "common paths" across the different OS:
```json
{
"tasks": {
"foo": "deno run --allow-write=$X_TMP foo.ts"
}
}
```
I prefixed them with `X_` in the example, but it could be something else.
The difference from manually specifying all possible env vars from all OSes (like `--allow-write=$LOCALAPPDATA,$HOME/.cache`) is that:
- You don't get an error if you use an env var from another OS: `error: Empty values are not allowed`
- You don't grant an "invalid path" (like `/tmp` would grant `C:/tmp` on Windows)
- You don't have to rewrite and remember these each time
## `X_CACHE`
Would resolve to something similar to https://deno.land/x/[email protected]. It would be nice to offer a way to allow WASM and other cached resources to be read/write while rejecting the rest of the disk.
| Platform | Value | Example |
| -------- | ----------------------------------- | -------------------------------- |
| Linux | `$XDG_CACHE_HOME` or `$HOME`/.cache | /home/user/.cache |
| macOS | `$HOME`/Library/Caches | /Users/user/Library/Caches |
| Windows | `$LOCALAPPDATA` | C:\Users\user\AppData\Local |
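A minimal sketch of how such a default could be resolved (illustrative only; the function name and env handling are assumptions, and a real implementation would use `Deno.build.os` / `Deno.env` rather than a plain record):

```typescript
// Sketch: resolving an X_CACHE-style default per platform.
// Platform names and fallbacks mirror the table above.
function resolveCacheDir(
  os: "linux" | "darwin" | "windows",
  env: Record<string, string | undefined>,
): string | undefined {
  switch (os) {
    case "linux":
      return env.XDG_CACHE_HOME ?? (env.HOME ? `${env.HOME}/.cache` : undefined);
    case "darwin":
      return env.HOME ? `${env.HOME}/Library/Caches` : undefined;
    case "windows":
      return env.LOCALAPPDATA;
  }
}
```

Returning `undefined` instead of an empty string would sidestep the `error: Empty values are not allowed` issue mentioned above.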
## `X_TMP`
Would resolve to the temp dir. Would be super nice to offer a read-only experience while still being able to use the `Deno.makeTemp*()` APIs cross-platform.
| Platform | Value | Example |
| -------- | ---------------------- | ---------------------- |
| Linux | `/tmp` | /tmp |
| macOS | `/tmp` | /tmp |
| Windows | `%TEMP%` or `%TMP%` | C:\Users\user\AppData\Local\Temp | | suggestion,task runner | low | Critical |
2,715,774,877 | pytorch | CI feature mirroring or caching for Docker Hub images | ### 🚀 Describe the bug
We should investigate some way to cache Docker Hub images to avoid hitting the Docker Hub API rate limit. There are two ideas that might be worth exploring:
1) Setting up a pull through cache using Docker registry (https://docs.docker.com/docker-hub/mirror/)
2) Using AWS Public ECR Gallery, which supports Docker Hub images (https://gallery.ecr.aws/)
There may be other ideas folks have; feel free to post them here.
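For idea 1, Docker's documented pull-through cache only needs a small addition to the registry container's `config.yml` (sketch; auth and storage settings omitted):

```yaml
# registry config.yml fragment: turn a registry:2 container into a Docker Hub mirror
proxy:
  remoteurl: https://registry-1.docker.io
```

Clients would then point at the mirror via the `registry-mirrors` key in `daemon.json`.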
### Versions
n/a
cc @seemethere @malfet @pytorch/pytorch-dev-infra | module: ci,triaged | low | Critical |
2,715,787,449 | PowerToys | Mouse Jump > Activation Shortcut description language | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Settings, Mouse Utilities
### Steps to reproduce
Just go to Mouse Utilities settings
### ✔️ Expected Behavior
This little tweak to the description would be clearer. Change to:
Customize the shortcut to turn **_this mode on or off._**
### ❌ Actual Behavior
Customize the shortcut to turn **_on or off this mode_**.
### Other Software
_No response_ | Issue-Bug,Product-Settings,Needs-Triage,Product-Mouse Utilities | low | Minor |
2,715,792,332 | flutter | Flutter engine logs do not appear in Console.app on macOS | ### Steps to reproduce
1. Log anything using `FML_LOG(ERROR)` (using the ERROR severity level to ensure the log will be printed regardless of log level.)
### Expected results
The log message appears alongside `NSLog` messages in Console.app.
### Actual results
The log message does not appear in Console.app (messages logged with `NSLog` appear as expected). Note: if the app is run directly (`myapp.app/Contents/MacOS/myapp`), the output is shown.
### Code sample
In `shell/platform/darwin/macos/framework/Source/FlutterEngine.mm`, add:
```cpp
FML_LOG(ERROR) << "hello";
```
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
N/A
Related iOS issue: https://github.com/flutter/flutter/issues/44030 | platform-mac,a: desktop,P3,fyi-tool,team-macos,triaged-macos | low | Critical |
2,715,809,320 | ant-design | Support for defaultProps in ConfigProvider | ### What problem does this feature solve?
For a while now, React has been throwing a warning when defaultProps are used. This feature was useful for overriding defaults from external libraries like AntD, but now if we want to customize the default behavior, we have to create a wrapper for basically every AntD component and use it everywhere.
### What does the proposed API look like?
```
<ConfigProvider defaultProps={{
Card: { bordered: false },
Modal: { width: 720 } }}
>
...
</ConfigProvider>
```
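Internally, the provider could boil down to a simple merge where explicit per-instance props win over configured defaults; a sketch (the helper name is hypothetical, not part of AntD):

```typescript
// Hypothetical helper: merge ConfigProvider-style defaults with instance props.
// Explicitly passed props take precedence over configured defaults.
function withDefaults<P extends object>(defaults: Partial<P>, props: P): P {
  return { ...defaults, ...props };
}
```

Each AntD component would look up its own entry in `defaultProps` and apply this merge before rendering.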
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | ๐ฃ Discussion,๐ก Feature Request,Inactive,๐งถ Low Priority | low | Minor |
2,715,823,247 | tauri | [bug] Building of the tonic crate (added as a dependency) fails the running cargo tauri dev due to failed to build archive: Operation not permitted | ### Describe the bug
Using `cargo tauri dev` or `deno task tauri dev` fails to build [`tonic`](https://crates.io/crates/tonic) with:
```
Compiling tonic v0.12.3
error: failed to build archive: Operation not permitted
error: could not compile `tonic` (lib) due to 1 previous error
```
Building works with `cargo build` or `cargo tauri dev -- --release`.
### Reproduction
1. Create a new tauri 2.0 project
2. Add `tonic` as a dependency
3. run `cargo tauri dev`
### Expected behavior
The application builds in debug mode and runs.
### Full `tauri info` output
```text
[โ] Environment
- OS: Mac OS 14.5.0 arm64 (X64)
โ Xcode Command Line Tools: installed
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.11.0
- pnpm: 8.15.6
- yarn: 1.22.19
- npm: 10.2.4
- deno: deno 2.1.2
[-] Packages
- tauri ๐ฆ: 2.1.1
- tauri-build ๐ฆ: 2.0.3
- wry ๐ฆ: 0.47.2
- tao ๐ฆ: 0.30.8
- tauri-cli ๐ฆ: 2.1.0
- @tauri-apps/api ๎: 2.1.1
- @tauri-apps/cli ๎: 2.1.0
[-] Plugins
- tauri-plugin-shell ๐ฆ: 2.0.2
- @tauri-apps/plugin-shell ๎: 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,help wanted,platform: macOS,status: needs triage | low | Critical |
2,715,833,051 | vscode | "This browser supports WebGPU but it appears to be disabled" on linux | Testing #234762
I thought I'd test on my Ubuntu 24.04 VM (virtualized on an ARM Mac). After enabling the setting and reloading I get this, presumably meaning my environment is incompatible?


When I click that button to revert back, I get this error (and nothing appears to happen behind the scenes):

| feature-request,editor-gpu | low | Critical |
2,715,844,608 | deno | `js_unit_tests::serve_test` is flaky | Over and over:
```
[serve_test 003.21] error: AddrInUse: Address already in use (os error 48): Address already in use (os error 48)
[serve_test 003.21] const server = Deno.serve(***
[serve_test 003.21] ^
[serve_test 003.21] at listen (ext:deno_net/01_net.js:504:35)
[serve_test 003.21] at Object.serve (ext:deno_http/00_serve.ts:555:16)
[serve_test 003.21] at file:///Users/runner/work/deno/deno/tests/unit/serve_test.ts:1224:25
```
https://github.com/denoland/deno/actions/runs/12143020514/job/33858916873 | flaky | low | Critical |
2,715,876,276 | ui | [bug]: Drawer component doesn't work properly when direction="top" | ### Describe the bug
When you set `<Drawer direction="top">` the component doesn't work properly.

### Affected component/components
Drawer
### How to reproduce
On a new nextjs project install Drawer component and set `<Drawer direction="top">`
### Codesandbox/StackBlitz link
https://stackblitz.com/edit/nextjs-vq7fth?file=app%2Fpage.tsx
### Logs
_No response_
### System Info
```bash
Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,715,889,221 | go | x/mobile: target maccatalyst cannot find OpenGLES header | ### Go version
go version go1.23.3 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='auto'
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/brien/Library/Caches/go-build'
GOENV='/Users/brien/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/brien/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/brien/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.3'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/brien/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/brien/bringyour/sdk/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/r3/v_3z60rx2cxg0s1r9tl9fbmw0000gn/T/go-build1634776604=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Create a xcframework using the following `gomobile bind` command:
```
gomobile bind \
-ldflags "-X client.Version=$WARP_VERSION" \
-target ios,iossimulator,macos,maccatalyst -iosversion 16.0 \
-bundleid com.bringyour \
-trimpath \
-gcflags "-dwarf=true" \
-ldflags "-X client.Version=$WARP_VERSION -compressdwarf=false -B gobuildid" \
-o "build/$BUILD_DIR/URnetworkSdk.xcframework" \
github.com/urnetwork/sdk;
```
The command ends with an error:
```
gomobile: maccatalyst/amd64: go build -gcflags -dwarf=true -ldflags -X client.Version=2024.12.3-outerwerld -compressdwarf=false -B gobuildid -trimpath -buildmode=c-archive -o /var/folders/r3/v_3z60rx2cxg0s1r9tl9fbmw0000gn/T/gomobile-work-3462168386/URnetworkSdk-maccatalyst-amd64.a ./gobind failed: exit status 1
# golang.org/x/mobile/gl
In file included from /Users/brien/go/pkg/mod/golang.org/x/[email protected]/gl/work.go:25:
./work.h:18:10: fatal error: 'OpenGLES/ES2/glext.h' file not found
18 | #include <OpenGLES/ES2/glext.h>
| ^~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
### What did you see happen?
`go build` cannot find the header file:
```
In file included from /Users/brien/go/pkg/mod/golang.org/x/[email protected]/gl/work.go:25:
./work.h:18:10: fatal error: 'OpenGLES/ES2/glext.h' file not found
18 | #include <OpenGLES/ES2/glext.h>
| ^~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
### What did you expect to see?
The maccatalyst libraries should be included in the built xcframework without error. | help wanted,NeedsInvestigation,mobile | low | Critical |
2,715,890,426 | rust | Tracking issue for release notes of #133811: [AIX] change AIX default codemodel=large |
This issue tracks the release notes text for #133811.
### Steps
- [x] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Compatibility Notes
- [Change `powerpc64-ibm-aix` default `codemodel` to large](https://github.com/rust-lang/rust/pull/133811)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @mustartt, @jieyouxu -- origin issue/PR authors and assignees for starting to draft text
| T-compiler,relnotes,O-aix,relnotes-tracking-issue | low | Minor |
2,715,898,270 | rust | compiletest: improve diagnostics for test suite default outcome expectation | > Remark (for myself): improve this diagnostics when I get to untangling the runtest logic to make it so that test suites explicitly declare their intended default behavior so that the diagnostics here can say e.g. "expected to check-fail, but this didn't fail" or whatever.
_Originally posted by @jieyouxu in https://github.com/rust-lang/rust/pull/133813#discussion_r1868302243_
E.g. `ui` tests by default are expected to "check-fail", compare that to the actual test outcome, etc. | C-enhancement,A-diagnostics,T-compiler,T-bootstrap,A-compiletest | low | Minor |
2,715,915,193 | pytorch | Add torch.set_default_int_dtype() / extend set_default_dtype() to allow setting default signed integer dtype | ### 🚀 The feature, motivation and pitch
The proposal is to add torch.set_default_int_dtype() to PyTorch, similar to the existing torch.set_default_dtype() for floating-point types [1]. This function would allow users to set the default integer dtype used by PyTorch.
When working with certain datasets or models, users may not require the full range of 64-bit integers; using 32-bit integers can lead to significant memory savings and potential performance improvements. Most importantly, certain compilers do not support 64-bit data types to begin with, as is the case with Neuron. PyTorch currently defaults to 64-bit integers (torch.int64) for many operations, which makes enforcing such a limitation complicated, as is the case with TorchXLA, particularly because some tensor operations (e.g. Cast) require validating that the raw underlying type can be converted between a source and target XLA type. Hence, any type downcasting on XLA is inherently limited by PyTorch.
The proposed torch.set_default_int_dtype() would allow users to easily switch to 32-bit integers (or other integer dtypes) as the default, without having to explicitly specify the dtype in every tensor creation or operation.
The function could work similarly to torch.set_default_dtype():
```
import torch
# Check current default integer dtype
print(torch.tensor([1, 2, 3]).dtype) # Output: torch.int64
# Set new default integer dtype
torch.set_default_int_dtype(torch.int32)
# Verify the change
print(torch.tensor([1, 2, 3]).dtype) # Output: torch.int32
```
This would enhance PyTorch's flexibility and allow users/components to more easily optimize their code for specific use cases.
In this case, the scope is only signed integers, not complex types (which use floats).
[1] https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html#torch.set_default_dtype
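Until such a setting exists, users have to thread the dtype through every creation site manually. A minimal sketch of that workaround (the helper below is hypothetical user code, not a PyTorch API):

```python
import torch

# Hypothetical user-level default: downcast integer tensors after creation,
# since PyTorch itself offers no way to change the implicit int64 default.
DEFAULT_INT_DTYPE = torch.int32

def int_tensor(data):
    """Create a tensor, replacing the implicit int64 default with our own."""
    t = torch.as_tensor(data)
    if t.dtype == torch.int64:
        t = t.to(DEFAULT_INT_DTYPE)
    return t
```

The proposed `torch.set_default_int_dtype()` would make such wrappers unnecessary.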
### Alternatives
_No response_
### Additional context
Draft documentation for `torch.set_default_int_dtype`:
```
torch.set_default_int_dtype(d, /)
Sets the default integer dtype to d. Only integer dtypes are supported as inputs; other dtypes will cause torch to raise an exception.
When PyTorch is initialized its default integer dtype is torch.int64 (long). The intent of set_default_int_dtype(torch.int32) is to facilitate using 32-bit integers as the default. The default integer dtype is used to:
1. Determine the dtype for tensors constructed using Python integers. See examples below.
2. Determine the result of type promotion between bool tensors and Python integers.
3. Infer the dtype for integer tensors created without an explicit dtype specified.
Parameters:
d (torch.dtype) - the integer dtype to make the default. Must be one of torch.int8, torch.int16, torch.int32, or torch.int64.
Example::
>>> torch.tensor([1, 2, 3]).dtype
torch.int64
>>> torch.set_default_int_dtype(torch.int32)
>>> torch.tensor([1, 2, 3]).dtype
torch.int32
Warning:
This function will affect the behavior of all modules and tensors created after it's called. It should be used with caution, preferably at the beginning of a script or program.
Note:
This does not affect the default dtype of floating point tensors, which remains controlled by torch.set_default_dtype().
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | feature,triaged,needs design,module: python frontend | low | Major |
2,715,930,976 | go | x/tools/gopls: stubmethods: "could not find the enclosing function of the return statement" bug in GetIfaceStubInfo | ```
#!stacks
("runtime.sigpanic" || "bug.Errorf") &&
("stubmethods.fromReturnStmt:+24" || "fromReturnStmt:+27") &&
"GetIfaceStubInfo"
```
Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks).
This stack `P5n9hA` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-12-01.json):
- `crash/crash`
- [`runtime.gopanic:+69`](https://cs.opensource.google/go/go/+/go1.23.3:src/runtime/panic.go;l=804)
- `runtime.panicmem:=262`
- [`runtime.sigpanic:+19`](https://cs.opensource.google/go/go/+/go1.23.3:src/runtime/signal_unix.go;l=917)
- [`golang.org/x/tools/gopls/internal/golang/stubmethods.fromReturnStmt:+24`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:gopls/internal/golang/stubmethods/stubmethods.go;l=284)
- [`golang.org/x/tools/gopls/internal/golang/stubmethods.GetIfaceStubInfo:+10`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:gopls/internal/golang/stubmethods/stubmethods.go;l=60)
- [`golang.org/x/tools/gopls/internal/golang.quickFix:+44`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:gopls/internal/golang/codeaction.go;l=316)
- [`golang.org/x/tools/gopls/internal/golang.CodeActions:+65`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:gopls/internal/golang/codeaction.go;l=108)
- [`golang.org/x/tools/gopls/internal/server.(*server).CodeAction:+154`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:gopls/internal/server/code_action.go;l=178)
- [`golang.org/x/tools/gopls/internal/protocol.serverDispatch:+160`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:gopls/internal/protocol/tsserver.go;l=330)
- [`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.ServerHandler.func3:+5`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:gopls/internal/protocol/protocol.go;l=160)
- [`golang.org/x/tools/gopls/internal/lsprpc.(*streamServer).ServeStream.handshaker.func4:+52`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:gopls/internal/lsprpc/lsprpc.go;l=509)
- [`golang.org/x/tools/gopls/internal/protocol.Handlers.MustReplyHandler.func1:+2`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:internal/jsonrpc2/handler.go;l=35)
- [`golang.org/x/tools/gopls/internal/protocol.Handlers.AsyncHandler.func2.2:+3`](https://cs.opensource.google/go/x/tools/+/gopls/v0.17.0-pre.3:internal/jsonrpc2/handler.go;l=104)
- `runtime.goexit:+0`
```
golang.org/x/tools/[email protected] go1.23.3 darwin/amd64 vscode (4)
```
Dups: fm8UdQ | NeedsInvestigation,gopls,Tools,gopls/telemetry-wins | low | Critical |
2,715,967,260 | vscode | Clear Search Results also clears the filter unexpectedly | Clicking the button will clear the filter when there are no results:

Note that the button is enabled when there is text in the box, but not without:

| bug,search | low | Minor |
2,715,997,406 | rust | enums with disjoint ranges should emit more precise `llvm.range` metadata | I tried this code:
https://godbolt.org/z/4KKGvWs8d
```rust
#[derive(Copy, Clone)]
pub enum Foo {
Foo1 = 1,
Foo2 = 2,
Foo4 = 4,
Foo8 = 8,
}
#[no_mangle]
pub fn load(foo: &Foo) -> u32 {
*foo as u32
}
```
I expected to see this happen: The metadata attatched to the `load` should specify that the loaded value is either 1, 2, 4 or 8:
```llvm
define noundef range(i32 1, 8) i32 @load(ptr noalias nocapture noundef readonly align 1 dereferenceable(1) %foo) unnamed_addr #0 {
%0 = load i8, ptr %foo, align 1, !range !2, !noundef !3
%_0 = zext nneg i8 %0 to i32
ret i32 %_0
}
!2 = !{
i8 1, i8 3, ; 1 or 2
i8 4, i8 5, ; 4
i8 8, i8 9, ; 8
}
```
Instead, this happened: the range metadata gives a still correct, but less accurate range of `[1,9)`:
```llvm
define noundef range(i32 1, 8) i32 @load(ptr noalias nocapture noundef readonly align 1 dereferenceable(1) %foo) unnamed_addr #0 {
%0 = load i8, ptr %foo, align 1, !range !2, !noundef !3
%_0 = zext nneg i8 %0 to i32
ret i32 %_0
}
!2 = !{i8 1, i8 9}
```
Similarly, the `range` attribute for the return value of `load` (and the `range` attribute for a value of type `Foo` if it is passed by value) is also less precise than it could be. But it appears from reading the LLVM LangRef that `range` **attributes** can only specify a single range, rather than a union of disjoint ranges like `llvm.range` **metadata** can.
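As a concrete illustration of what the more precise metadata would buy (this example is illustrative, not from the report): with the union-of-ranges metadata, LLVM could fold the check below to `true`, whereas with the coarse `[1,9)` range it cannot, since 3, 5, 6, and 7 fall in that range but are not powers of two:

```rust
#[derive(Copy, Clone)]
pub enum Foo {
    Foo1 = 1,
    Foo2 = 2,
    Foo4 = 4,
    Foo8 = 8,
}

// Every valid discriminant (1, 2, 4, 8) is a power of two, so precise
// range metadata would let the optimizer prove this always returns true.
pub fn discriminant_is_power_of_two(foo: Foo) -> bool {
    (foo as u32).is_power_of_two()
}

fn main() {
    assert!(discriminant_is_power_of_two(Foo::Foo1));
    assert!(discriminant_is_power_of_two(Foo::Foo8));
}
```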
### Meta
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (5e1440ae5 2024-12-01)
binary: rustc
commit-hash: 5e1440ae514d98ddfcbf1607acb64d41e07ef616
commit-date: 2024-12-01
host: x86_64-unknown-linux-gnu
release: 1.85.0-nightly
LLVM version: 19.1.4
```
| A-LLVM,I-slow,A-codegen,T-compiler,C-optimization | low | Critical |
2,715,997,852 | pytorch | [export] Move pytree serialization into export serializer | ### 🚀 The feature, motivation and pitch
Instead of giving pytree its own serialization mechanism ([code](https://github.com/pytorch/pytorch/blob/b4ea9139781dce66419961305d220baeda565969/torch/utils/_pytree.py#L1425-L1444)), a better design seems to be for export to have its own pytree serialization mechanism. Pytree could provide the utility of converting to a Python schema, and export could handle the actual serialization to JSON. Then we could avoid changes like [this](https://github.com/pytorch/pytorch/pull/141525?fbclid=IwZXh0bgNhZW0CMTEAAR0PKY5KkViM1-2rBOJQFgoEPmmRZFQGqZV6DtuB5kzWstoJrKX3EJ0wIcA_aem_p-fuTzA43Yw42_6ZJOlHmA#discussion_r1866506916).
Note: We need to be careful when making this change as it is BC breaking.
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4 | oncall: pt2,oncall: export | low | Minor |
2,715,999,013 | go | proposal: crypto/x509: support extracting X25519 public keys from certificates | ### Proposal Details
Even though X25519 key pairs (not to be confused with Ed25519 key pairs) can't be used to establish TLS connections, there are still cases in which you want to use them in combination with X.509 certificates. For example, if X25519 is used to perform public key authenticated encryption (e.g., [NaCl's crypto_box](https://nacl.cr.yp.to/box.html)), X.509 certificates may be used to authenticate the identity of the peer before encrypting/decrypting. RFC 8410 added the ability to embed X25519 public keys into X.509 certificates for such use cases.
Right now `x509.ParseCertificate()` completes successfully when presented with a certificate containing an X25519 public key. Unfortunately, it does end up discarding the public key, as `Certificate.PublicKeyAlgorithm` and `Certificate.PublicKey` will be set to `UnknownPublicKeyAlgorithm` and `nil`, respectively.
The proposal here is to actually let `x509.ParseCertificate()` extract the public key from the certificate and return it in the form of an `*ecdh.PublicKey`. The `PublicKeyAlgorithm` enumeration will be extended to include an additional element for this public key algorithm, which will simply be called `X25519`.
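Until then, a caller can wrap the raw 32-byte key, once extracted from the certificate's `RawSubjectPublicKeyInfo`, into the type this proposal would return; a sketch (the ASN.1 extraction of `raw` itself is elided):

```go
package main

import (
	"crypto/ecdh"
	"fmt"
)

// wrapX25519 turns a raw 32-byte X25519 public key into an *ecdh.PublicKey,
// the type the proposal would have ParseCertificate populate directly.
func wrapX25519(raw []byte) (*ecdh.PublicKey, error) {
	return ecdh.X25519().NewPublicKey(raw)
}

func main() {
	key, err := wrapX25519(make([]byte, 32))
	fmt.Println(key != nil, err == nil)
}
```

`NewPublicKey` only validates the length for X25519, so any 32-byte slice is accepted here.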
Code changes can be found here: https://go-review.googlesource.com/c/go/+/632875 | Proposal,Proposal-Crypto | low | Major |
2,716,010,133 | go | proposal: x/mobile: better support for unrecovered panics | ### Proposal Details
Currently with gomobile, the following example would crash the entire app with a single error message and no stack trace.
```
func HelloWorld() {
var x *SomeType
x.Foo()
// oh no, the application crashed
}
```
In mobile applications we typically don't want to crash the app, but instead capture and report unrecovered errors and handle them as part of error handling in the caller. I'd like to propose four improvements to gomobile:
1. Standard stack trace logging for all unrecovered errors.
2. A global unrecovered error callback that is called before propagating unrecovered errors.
3. Instead of unrecovered errors crashing the application, the error should be converted to an unchecked exception/fatal error type in the caller code so that it can integrate with the native unhandled error flow.
4. Support for propagating unrecovered errors to the caller as checked exceptions. In many cases a typical return will be a pair of value and error, which is correctly translated to the exception framework by gomobile. Unrecovered errors would more naturally be "thrown" to the caller with the correct stack trace than crashing the application.
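For comparison, the manual workaround today is to wrap every exported function with a recover; a sketch (`SomeType` is stubbed here to make the example self-contained):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

type SomeType struct{ n int }

func (x *SomeType) Foo() int { return x.n } // dereferences x, panics when x is nil

// HelloWorld converts an unrecovered panic into an ordinary error, which
// gomobile already translates into a checked exception for the caller.
func HelloWorld() (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("panic: %v\n%s", r, debug.Stack())
		}
	}()
	var x *SomeType
	x.Foo()
	return nil
}

func main() {
	fmt.Println(HelloWorld() != nil)
}
```

Points 1-4 above would make this boilerplate unnecessary by handling it in the generated binding layer.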
Is there a process to vet improvements like this before working on a PR? | Proposal | low | Critical |
2,716,014,904 | angular | FormControl events should include when ValidatorFn is added/removed | ### Which @angular/* package(s) are relevant/related to the feature request?
forms
### Description
I'd like to be able to show a required asterisk (`*`) when the `Validators.required` is added to the reactive forms control.
There doesn't seem to be a way to get this information in any rxjs event or signal way.
My only workaround is to subscribe to `formControl.events`, but this ends up with extra noise and is not immediately triggered if changing from one invalid reason to another, etc.
### Proposed solution
`AbstractControl<any, any>.events: Observable<ControlEvent<any>>` should be extended to include a `ValidatorsChangeEvent`
```ts
export declare class ValidatorsChangeEvent extends ControlEvent {
readonly validators: ValidatorFn[];
readonly source: AbstractControl;
constructor(validators: ValidatorFn[], source: AbstractControl);
}
```
### Alternatives considered
The following doesn't work very well:
```ts
const sub = formControl.events
.pipe(
takeUntilDestroyed(this.destroyRef),
map((x) => formControl.hasValidator(Validators.required)),
// only trigger when the value changes
distinctUntilChanged(
(a, b) => a === b,
(x) => x,
),
)
  .subscribe((isFormControlRequired) => {
    // update UI state; the flag name here is illustrative
    this.showRequiredAsterisk = isFormControlRequired;
  });
``` | area: forms,forms: validators | low | Minor |
2,716,022,752 | TypeScript | [bug] Auto `accessor` fields are declared as regular properties in mixin abstract classes | ### 🔎 Search Terms
abstract class mixin accessor field property declaration .d.ts ts(2611)
### 🕗 Version & Regression Information
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241203#code/PTAEAMEsDsBMFMAeA6WyAuBncpKdAObzTwBOAhuvLKAEYCeEWAxqALRvwC2k6AIvGYAbchXSQA9tADy0IfRwwIXemyGRaFUgoBQSAA4TS6UAmGj4ocrUzoKzE+cz4AQuUyWA3jtC+rzZnhnIysALlBoAFcuWjIAbh0AXz1EQ2NTQRFSSwAzSOgHSWhQHlhYIXgACgBlSP0yAH5w9Hp6iRzQNw8ASnDrW3sTEgB3UEru0ABeAD5Qbz8wiOjY0gTEhJBcLjSTTxLIMorQRNAc0gkuUAByFTUNLXorhIMjRxFnUAAxGHIhUCQqHB8KVylUJvM-OQAkFMCFyFNQAAmOK+TZYSqIgBsAEZsRMyOdSEkgA
### 💻 Code
```ts
// `index.d.ts` is generated by `tsc --emitDeclarationOnly` in `my-library`
declare abstract class Base {
accessor a: number;
}
declare function middle(Super?: typeof Base): abstract new () => {
a: number;
};
// import { middle } from 'my-library';
class Final extends middle() {
accessor a = 2; // ts(2611) error
}
```
### 🙁 Actual behavior
TS throws:
```text
'a' is defined as a property in class '{ a: number; }', but is overridden here in 'Final' as an accessor. ts(2611)
```
but the library source code is:
```ts
export abstract class Base {
accessor a = 1;
}
export function middle(Super = Base) {
abstract class Middle extends Super {
}
return Middle;
}
```
### 🙂 Expected behavior
It has no type error while overrides auto `accessor` fields in sub classes of abstract mixins from 3rd-party libraries.
### Additional information about the issue
This bug can be resolved if `tsc` generates `.d.ts` as shown below:
```diff
// `index.d.ts` is generated by `tsc --emitDeclarationOnly` in `my-library`
declare abstract class Base {
accessor a: number;
}
declare function middle(Super?: typeof Base): abstract new () => {
- a: number;
+ get a(): number;
+ set a(value: number);
};
// import { middle } from 'my-library';
class Final extends middle() {
accessor a = 2; // no error
}
```
https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241203#code/PTAEAMEsDsBMFMAeA6WyAuBncpKdAObzTwBOAhuvLKAEYCeEWAxqALRvwC2k6AIvGYAbchXSQA9tADy0IfRwwIXemyGRaFUgoBQSAA4TS6UAmGj4ocrUzoKzE+cz4AQuUyWA3jtC+rzZnhnIysALlBoAFcuWjIAbh0AXz1EQ2NTQRFSSwAzSOgHSWhQHlhYIXgACgBlSP0yAH5w9Hp6iRzQNw8ASnDrW3sTEgB3UEru0ABeAD5Qbz9CeBNycfComPifPw9lyoA3ciFI+DXo2NJuhMSEkFwuNJNPEsgyitBE0BzSCS5QAHIVGoNFp6H8EgYjI4RM5QAAxGCHUBIKhwfClcpVCbzPzkAJBTAhchTUAAJjivluWEqJIAbABGOkTMjfUhJIA | Needs Investigation,Fix Available | low | Critical |
2,716,022,995 | next.js | useRef value not reset on component suspension caused by "use" after client-side navigation | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/fl4vdz
### To Reproduce
1. open codesandbox
2. run build and then prod-mode server
3. visit the root `/` in preview and hit "NAVIGATE"
4. observe that the loaded page has a blank result value and the ref is set to "done"
<img width="438" alt="image" src="https://github.com/user-attachments/assets/e1170fed-3135-49cb-bbec-7c54aee1cac8">
### Current vs. Expected behavior
From everything I can find online, a component should not retain the values in `useRef` when the ref is set before suspension.
https://github.com/facebook/react/issues/17271
It seems that if a page is reached via client-side navigation and a suspension is triggered by calling `use` on a server-passed promise, the ref value that was set on the initial render is not reset.
This is demonstrated in my example by having the code attempt to store the value returned by `use` in state, but this does not work because the ref that is checked before calling this code is never set back to null when the component unsuspends.
```tsx
export const SuspendedRefTest = ({ promise }: { promise: Promise<string> }) => {
const ref = useRef<string | null>(null);
const [someData, setData] = useState("");
console.log("current ref", ref.current);
if (!ref.current) {
ref.current = "done";
console.log("getting data");
const data = use(promise);
// EXPECTED: after the data is resolved from the `use` call, this should run
// this is because when the component re-renders after it unsuspends, the value of `ref.current` should have been reset to null
// ACTUAL: this never runs because ref.current retains its value after suspension
console.log("setting data");
setData(data);
}
return (
<>
RESULT: {someData}, CURRENT REF: {ref.current}
<br />
<button onClick={() => window.location.replace("/")}>RESET</button>
</>
);
};
```
I would expect that the ref should be reset and the suspended component re-rendered from scratch.
As far as I can tell this is not an issue if you directly load the page in question from scratch, it only happens if you client-side navigate to it.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.12.0
npm: 10.5.0
Yarn: 1.22.19
pnpm: 8.15.6
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: 15.0.3
react: 19.0.0-rc.1
react-dom: 19.0.0-rc.1
typescript: 5.4.5
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Lazy Loading
### Which stage(s) are affected? (Select all that apply)
next start (local)
### Additional context
_No response_ | bug,Lazy Loading | low | Minor |
2,716,061,749 | rust | Compiling `no_std` for `i686-pc-windows-gnu` ignores `panic=abort` | When cross-compiling a `#![no_std]` crate for `i686-pc-windows-gnu` with `panic=abort` it includes unwinding symbols and its dependencies:
```text
Archive member included to satisfy reference by file (symbol)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o)
/home/irate-walrus/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/i686-pc-windows-gnu/lib/rsbegin.o (__register_frame_info)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmsvcrt.a(libmsvcrt_defs01135.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (abort)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmsvcrt.a(libmsvcrt_defs01177.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (free)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmsvcrt.a(libmsvcrt_defs01228.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (malloc)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmsvcrt.a(libmsvcrt_defs01234.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (memcpy)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmsvcrt.a(libmsvcrt_defs01279.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (strlen)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmsvcrt.a(libmsvcrt_defh.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmsvcrt.a(libmsvcrt_defs01135.o) (_head_lib32_libmsvcrt_def_a)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmsvcrt.a(libmsvcrt_deft.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmsvcrt.a(libmsvcrt_defh.o) (_lib32_libmsvcrt_def_a_iname)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libmingw32.a(lib32_libmingw32_a-tlsmcrt.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (_CRT_MT)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libkernel32.a(libkernel32s01416.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (_imp__Sleep@4)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libkernel32.a(libkernel32s00996.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (_imp__LeaveCriticalSection@4)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libkernel32.a(libkernel32s00898.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (_imp__InitializeCriticalSection@4)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libkernel32.a(libkernel32s00321.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/libgcc_eh.a(unwind-dw2-fde.o) (_imp__EnterCriticalSection@4)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libkernel32.a(libkernel32h.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libkernel32.a(libkernel32s01416.o) (_head_lib32_libkernel32_a)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libkernel32.a(libkernel32t.o)
/usr/lib/gcc/i686-w64-mingw32/13-win32/../../../../i686-w64-mingw32/lib/../lib/libkernel32.a(libkernel32h.o) (_lib32_libkernel32_a_iname)
```
As `panic=abort` is specified in the crate's [Cargo.toml](https://github.com/Irate-Walrus/stardust-rs/blob/f85933c11ec40cb9bb78d2a0360fc31831b7a44d/Cargo.toml#L12C1-L12C16) I would expect that these dependencies would not be linked. This would also be consistent with the behavior when compiling for `x86_64-pc-windows-gnu` which does not link these symbols.
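For triage, the map section above can be parsed mechanically to list which archive members were only pulled in to satisfy symbols referenced by `libgcc_eh` — a rough stdlib-only sketch, assuming GNU ld's `--print-map` layout where a member line is followed by a "referencer (symbol)" line:

```python
import re

def members_pulled_by_libgcc_eh(map_text: str):
    """Parse an ld map's 'Archive member included ...' section and return
    (member, symbol) pairs for members included only because libgcc_eh
    referenced one of their symbols."""
    pulls = []
    lines = [ln.rstrip() for ln in map_text.splitlines()]
    for i, line in enumerate(lines):
        # Referencer lines end in ' (symbol)'; member lines have no
        # whitespace before their trailing parenthesis.
        m = re.search(r"\s\(([^()]+)\)$", line)
        if m and "libgcc_eh" in line:
            # The previous non-empty line names the member that was pulled in.
            j = i - 1
            while j >= 0 and not lines[j]:
                j -= 1
            if j >= 0:
                pulls.append((lines[j].strip(), m.group(1)))
    return pulls
```

Running this over the full map quickly shows every member whose only reason for inclusion is the unwinder.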
I have also attempted to compile the crate using the `-Z build-std=core,alloc,panic_abort -Z build-std-features=panic_immediate_abort` flags, which has not resolved this issue. I also define my own panic handler and other required symbols in [main.rs](https://github.com/Irate-Walrus/stardust-rs/blob/f85933c11ec40cb9bb78d2a0360fc31831b7a44d/stardust/src/main.rs#L22-L47). Previous discussion on the rust-lang forum can be found here: https://users.rust-lang.org/t/inclusion-of-lkernel32-and-others-when-compiling-no-std-for-i686-pc-windows-gnu/121551/14.
The project can be found here: https://github.com/Irate-Walrus/stardust-rs .
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (b8c8287a2 2024-11-03)
binary: rustc
commit-hash: b8c8287a229cd79604aa84c25e1235fc78cd5f2e
commit-date: 2024-11-03
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
N/A, the project compiles but due to linkage of additional symbols crashes during runtime with an unrelated error.
</p>
</details>
| A-cross,T-compiler,O-windows-gnu,C-bug,-Zbuild-std,A-panic,O-x86_32 | low | Critical |
2,716,067,195 | yt-dlp | Some Reddit videos fail to download | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Ireland
### Provide a description that is worded well enough to be understood
Most Reddit videos work fine, but some don't. At a quick glance, the common factor is that the post is a GIF image which Reddit itself has converted to video. This particular video was actually very easy to download in the browser, because it's a normal `<video>` tag with a normal `src` attribute that seems to point directly to an MP4 file. I didn't try that initially, because yt-dlp usually works better for Reddit videos.
Note: The example I've got is NSFW. Sorry about that.
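As a stopgap, the direct media URL can be recovered from the `/media?url=...` wrapper page that the generic extractor trips over, simply by decoding the `url` query parameter — a stdlib-only sketch:

```python
from urllib.parse import urlparse, parse_qs

def unwrap_reddit_media(url: str) -> str:
    """Extract the direct media URL from a reddit.com/media?url=... wrapper.

    Returns the input unchanged when it is not a /media wrapper URL.
    """
    parsed = urlparse(url)
    if parsed.netloc.endswith("reddit.com") and parsed.path == "/media":
        inner = parse_qs(parsed.query).get("url")
        if inner:
            return inner[0]  # parse_qs already percent-decodes the value
    return url
```

The unwrapped `i.redd.it` URL can then be fetched directly.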
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-Uv', 'https://www.reddit.com/r/gayporndaily/comments/137xou6/anyone_has_seen_this_recently_the_transition_is/']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [a9f85670d] (zip)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-48-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: Cryptodome-3.20.0, certifi-2023.11.17, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.45.1, urllib3-2.0.7
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.12.02.233010/SHA2-256SUMS
Current version: [email protected] from yt-dlp/yt-dlp-nightly-builds
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
Current Build Hash: 6540c6e222318a85a7f254e6871a0d473b00411aa4f2bb734565dae9be405c7e
Updating to [email protected] from yt-dlp/yt-dlp-nightly-builds ...
[debug] Downloading yt-dlp from https://github.com/yt-dlp/yt-dlp-nightly-builds/releases/download/2024.12.02.233010/yt-dlp
Updated yt-dlp to [email protected] from yt-dlp/yt-dlp-nightly-builds
[debug] Restarting: python3 /usr/local/bin/yt-dlp -Uv https://www.reddit.com/r/gayporndaily/comments/137xou6/anyone_has_seen_this_recently_the_transition_is/
[debug] Command-line config: ['-Uv', 'https://www.reddit.com/r/gayporndaily/comments/137xou6/anyone_has_seen_this_recently_the_transition_is/']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [d8fb34908] (zip)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-48-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: Cryptodome-3.20.0, certifi-2023.11.17, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.45.1, urllib3-2.0.7
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp-nightly-builds)
[Reddit] Extracting URL: https://www.reddit.com/r/gayporndaily/comments/137xou6/anyone_has_seen_this_recently_the_transition_is/
[Reddit] 137xou6: Downloading JSON metadata
[generic] Extracting URL: https://i.redd.it/q7x461z6twxa1.gif
[generic] q7x461z6twxa1: Downloading webpage
[redirect] Following redirect to https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fq7x461z6twxa1.gif&rdt=33670
[generic] Extracting URL: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fq7x461z6twxa1.gif&rdt=33670
[generic] media?url=https://i.redd.it/q7x461z6twxa1: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] media?url=https://i.redd.it/q7x461z6twxa1: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fq7x461z6twxa1.gif&rdt=33670
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1759, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fq7x461z6twxa1.gif&rdt=33670
```
| NSFW,site-bug | low | Critical |
2,716,074,846 | react-native | iOS - borderStyle: 'dashed' and 'dotted' no longer work in combination with overflow: hidden in 0.76 | ### Description
After upgrading to expo 52 and react-native 0.76.3 I noticed some of my dashed borders had disappeared in some cases on iOS. On further investigation, I discovered that dashed borders no longer work when borderStyle: "dashed" | "dotted" are used in combination with overflow: "hidden" or overflow: "scroll". Instead of a dashed or dotted border, it shows a solid border.
Dotted and dashed borders work correctly with overflow: "visible"
Dotted and dashed borders also work correctly in android.
I took a cursory look through the source code, but couldn't see the issue - that being said, if someone can point me in somewhat the right direction, I'd be happy to take a swing at a fix.
### Steps to reproduce
1) Install the application and launch ios emulator
2) look at the code in the stylesheet for the two items
3) look at the missing borderStyle
<img width="1373" alt="Screenshot 2024-12-03 at 12 27 00 PM" src="https://github.com/user-attachments/assets/c114521d-792e-436a-af3e-d42d4d4af1f9">
### React Native Version
0.76.3
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: macOS 14.5
CPU: (16) x64 Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
Memory: 181.07 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.19.0
path: ~/.nvm/versions/node/v18.19.0/bin/node
Yarn:
version: 1.22.22
path: /usr/local/bin/yarn
npm:
version: 10.2.3
path: ~/.nvm/versions/node/v18.19.0/bin/npm
Watchman:
version: 4.9.0
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.10.0
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 4.1 AI-201.8743.12.41.6953283
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 18.0.1.1
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.3
wanted: 0.76.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
no logs
```
### Reproducer
https://github.com/jorjordandan/borderIssue
### Screenshots and Videos
_No response_ | Platform: iOS,Issue: Author Provided Repro,Resolution: PR Submitted | low | Minor |
2,716,088,082 | pytorch | Reduce the number of included headers in cpp_wrapper mode | See https://github.com/pytorch/pytorch/pull/141580#discussion_r1868239904, where handling of certain Python argument types relies on including new header files. This should be minimized as a compile time improvement.
Tagging @desertfire.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | triaged,oncall: pt2,module: inductor | low | Minor |
2,716,105,790 | godot | AudioEffectCapture not working correctly with 2 channel microphone | ### Tested versions
- Found in Godot 4.3-stable
- Reproducible in Godot 4.2
### System information
Windows 11
### Issue description
When getting buffer data with get_buffer(1024) from an AudioEffectCapture and playing it back on an AudioStreamGeneratorPlayback, the voice heard is heavily distorted. This seems to occur when the microphone is in a "2-channel 24-bit" configuration.
With a "1-channel 16-bit" microphone, I don't have this problem.
here is an example :
https://github.com/user-attachments/assets/5f9e9031-54d2-48b5-8a3a-72a7dce8b117
In my personal case, the problem occurred with the microphone "FIFINE K688" (2-channel mic) and the microphone of my Headset (Steelseries Arctis Nova Pro).
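One plausible (unverified) explanation is a channel-count mismatch, where interleaved stereo samples end up consumed as twice as many mono samples. Downmixing interleaved pairs to mono, illustrated here outside Godot, is the usual fix on the capture side:

```python
def downmix_interleaved_stereo(samples):
    """Average interleaved [L, R, L, R, ...] samples into mono samples.

    If stereo data is mistakenly consumed one sample at a time, playback
    runs at the wrong effective rate and sounds distorted; pairing the
    samples restores the intended frame count.
    """
    if len(samples) % 2:
        raise ValueError("interleaved stereo needs an even sample count")
    return [(samples[i] + samples[i + 1]) / 2
            for i in range(0, len(samples), 2)]
```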
### Steps to reproduce
Set up a basic microphone playback system with an AudioEffectCapture and an AudioStreamGeneratorPlayback
use two different types of microphone (a "1-channel" and a "2-channel") and listen to the final result
### Minimal reproduction project (MRP)
[audiobugtest.zip](https://github.com/user-attachments/files/18000034/audiobugtest.zip)
| bug,topic:audio | low | Critical |
2,716,124,651 | vscode | Expand section symbol persistent | Testing #234762
Does not repro with acceleration disabled. When collapsing a section in the editor, the three dots appear at the end of the line, allowing the user to click and expand the section.
After clicking the symbol, rather than disappearing, it remains present and interactable. However, clicking is just a noop.
Also note what looks like a possibly double-rendered line: the text all grew bolder and appeared almost fuzzy.
With acceleration OFF:
https://github.com/user-attachments/assets/4d6e8667-6932-4d11-b71b-53a713e48d1a
With acceleration ON:
https://github.com/user-attachments/assets/95e0a65d-6ec6-4083-8b9c-d66c1e86245a
fallback hover:

| bug,editor-gpu | low | Minor |
2,716,130,669 | go | cmd/link: unused runtime-internal exported methods aren't getting deadcoded | For example, `internal/abi.(*SwissMapType).NeedKeyUpdate`. It has a 3-instruction body and gets inlined everywhere. There is no reference to it from any code. But it still exists in the binary.
Probably this is because it is exported (starts with a capital letter), and the type `*SwissMapType` is reachable somehow.
Kind of anecdotal at the moment, but this may be the cause of part of the binary size increase since 1.23.
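The observed behavior matches a reachability model in which a live type keeps all of its exported methods even with no call sites — a toy illustration only (the names and graph are made up; this is not the linker's real algorithm):

```python
def live_symbols(roots, refs, exported_methods):
    """Toy deadcode pass: walk references from the roots; whenever a
    *type* becomes live, also keep every exported (capitalized) method
    attached to it, regardless of call sites."""
    live, stack = set(), list(roots)
    while stack:
        sym = stack.pop()
        if sym in live:
            continue
        live.add(sym)
        stack.extend(refs.get(sym, ()))
        # Exported methods survive once their type is live.
        stack.extend(m for m in exported_methods.get(sym, ())
                     if m.split(".")[-1][0].isupper())
    return live
```

Under this model an unexported method with no callers dies, while its exported sibling on the same reachable type survives — exactly the pattern reported above.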
@cherrymui
A few other examples:
```
internal/chacha8rand.(*State).Next
sync.(*noCopy).Lock
``` | NeedsInvestigation,compiler/runtime | low | Major |
2,716,152,237 | terminal | New Tab Menu Customization SUI Follow-ups | Follow-ups from #18015:
- [ ] add support for adding and editing action entries
- [x] when we discard changes or save, it would be cool if we could stay on the same page
- [ ] allow customizing the folder entry _before_ adding it (current workaround is to add it, then edit it)
- [ ] improve UI for setting icon (reuse UI from #17965)
- [x] Windows 10 doesn't like grids (See note below)
- PR #18424
- Upstream: https://github.com/microsoft/microsoft-ui-xaml/issues/10300
#### Windows 10

### Bug bash 1/21 notes
- [ ] narrow window can cut off buttons

| Product-Terminal,Issue-Task,Needs-Tag-Fix,Area-SettingsUI | low | Critical |
2,716,175,929 | deno | TypeError: expected typed ArrayBufferView at readableStreamWriteChunkFn | deno 2.1.2 (stable, release, aarch64-apple-darwin)
v8 13.0.245.12-rusty
typescript 5.6.2
Using `Deno.serve` and returning a `Response` wrapping a readable stream generates an error at the line `controller.enqueue("<!DOCTYPE html>");`. However, if I use `import https from 'node:https';` instead of `Deno.serve`, it works fine with Deno.
To reproduce it:
```ts
const stream = new ReadableStream({
start(controller) {
controller.enqueue("<!DOCTYPE html>");
controller.close();
}
});
Deno.serve({ handler: (req) => {
return new Response(stream, { headers: { "Content-Type": "text/html" } });
}, port: 3000 })
```
Then:
```sh
Terminating Deno.serve loop due to unexpected error TypeError: expected typed ArrayBufferView
at readableStreamWriteChunkFn (ext:deno_web/06_streams.js:785:15)
at readableStreamReadFn (ext:deno_web/06_streams.js:848:5)
at resourceForReadableStream (ext:deno_web/06_streams.js:883:3)
at fastSyncResponseOrStream (ext:deno_http/00_serve.ts:357:11)
at mapped (ext:deno_http/00_serve.ts:437:5)
at eventLoopTick (ext:core/01_core.js:175:7)
```
| needs investigation | low | Critical |
2,716,185,694 | ollama | Improve handling of pushes without namespace prefix | Currently, when users try to push a model without specifying their namespace (e.g. push model-name instead of push username/model-name), they receive a generic error about not being able to push to that namespace. This happens because the registry implicitly tries to use the "library/" namespace, which is restricted.
Most users naturally create models without a namespace prefix locally
```
โฏ ollama push llama3.2
retrieving manifest
pushing dde5aa3fc5ff... 100% ▕████████████████▏ 2.0 GB
pushing 966de95ca8a6... 100% ▕████████████████▏ 1.4 KB
pushing fcc5a6bec9da... 100% ▕████████████████▏ 7.7 KB
pushing a70ff7e570d9... 100% ▕████████████████▏ 6.0 KB
pushing 56bb8bd477a5... 100% ▕████████████████▏ 96 B
pushing 34bb5ab01051... 100% ▕████████████████▏ 561 B
pushing manifest
Error: you are not authorized to push to this namespace, create the model under a namespace you own
```
# Proposed Behavior
We should either:
1. Automatically prefix the push with the username.
or
2. Provide a more helpful error message explaining how to name the model correctly. | feature request | low | Critical |
2,716,244,159 | ollama | add code to enable ollama cli cmd logging , or disable the new ' if not tty exit ' code PLZZ | ### What is the issue?
I have a .bashrc setup that logs my ollama commands. This recently stopped working completely: neither `tee -a $somelog < <( ollama ... )`, nor `ollama ... |& tee -a $log`, nor `ollama ... > >( cat )` stays alive. They exit after the AI answers, or exit right after loading the model if no prompt is given as a CLI argument. It wasn't perfect before, but now it doesn't work at all.
My proposed solution is to make the tty check optional somehow, or remove it.
Greets. For context, this is on an Android phone, compiled under Termux, but I suppose the tty check is a general one, i.e. the same on Debian and elsewhere.
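Until the tty check is made optional, one workaround is to run the command under a pseudo-terminal so its `isatty()` probe succeeds while everything is still logged — a Unix-only sketch using Python's stdlib `pty` module (`ollama` below is just the wrapped command, not part of this sketch):

```python
import os
import pty
import subprocess

def run_logged(cmd, log_path):
    """Run cmd attached to a pseudo-terminal, appending all output to log_path.

    The child sees a real tty on stdin/stdout/stderr, so isatty() checks
    pass, yet we capture every byte it writes.
    """
    master, slave = pty.openpty()
    proc = subprocess.Popen(cmd, stdin=slave, stdout=slave,
                            stderr=slave, close_fds=True)
    os.close(slave)
    chunks = []
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:  # Linux raises EIO once the slave side closes
            break
        if not data:
            break
        chunks.append(data)
    proc.wait()
    os.close(master)
    out = b"".join(chunks)
    with open(log_path, "ab") as f:
        f.write(out)
    return out
```

Usage would then look like `run_logged(["ollama", "run", "llama3.2"], "/path/to/ollama.log")`.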
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
0.0.0 ( 0.4.5 or so via git ) | bug | low | Minor |
2,716,275,139 | tauri | [feat] binary diff updater for tiny updates | ### Describe the problem
Currently, the updater downloads the entire binary for every update. Consider implementing a system that downloads only binary patches instead.
### Describe the solution you'd like
For example, walk through the files in the installation directory, compare each binary, and generate a patch for each.
The updater would then apply these patches, reducing update sizes from 20โ50 MB to just a few KB in most cases.
### Alternatives considered
_No response_
### Additional context
https://man.freebsd.org/cgi/man.cgi?query=bsdiff
https://electrondelta.com/
| type: feature request | low | Minor |
2,716,300,758 | kubernetes | FIPS 140-3 Compliance K8s Release | ### What would you like to be added?
This issue is for building consensus on creating a [FIPS 140-3](https://csrc.nist.gov/pubs/fips/140-2/upd2/final) compliant flavor/variant of k8s within the kubernetes project or CNCF organization.
While not important to all users, there are a significant number of users in the US Government/public sector that require FIPS 140-3 compliance for k8s. Given that golang is adding [FIPS 140-3 compliant native libs](https://github.com/golang/go/issues/70123), this would be a good time to evaluate what a FIPS compliant k8s release would look like.
### Why is this needed?
No FIPS compliant k8s releases are done today through the kubernetes project. While not a universal requirement, there are a significant number of users who do have 140-3 requirements who would benefit from a FIPS compliant k8s release thats not tied to a specific vendor. | kind/feature,sig/auth,sig/release,needs-triage | medium | Major |
2,716,328,460 | stable-diffusion-webui | [Bug]: When using Python to call the web UI API for mask image generation, a black bar appears at the bottom of the image. | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I encountered this issue when using Python to call the API. I am using /sdapi/v1/img2img for image generation. Below is my calling code.
```
import base64
import io
import json
import os
import uuid

import requests
from PIL import Image

TEMP_DIR = "temp"  # placeholder: the original value of TEMP_DIR is not shown


class SDWebUIRequest:
    def __init__(self, server_address="http://127.0.0.1:7860"):
        self.server_address = server_address
        self.api_endpoint = f"{self.server_address}/sdapi/v1/img2img"

    def img2img_with_canny_controlnet(self, prompt, original_image_path, mask_image_path,
                                      negative_prompt="", steps=20, seed=-1, cfg_scale=7.0,
                                      controlnet_model="control_canny-sd15", controlnet_weight=1.0):
        try:
            # Read and base64-encode the original image
            with open(original_image_path, "rb") as img_file:
                original_image = base64.b64encode(img_file.read()).decode('utf-8')
            # Read and base64-encode the mask image
            with open(mask_image_path, "rb") as mask_file:
                mask_image = base64.b64encode(mask_file.read()).decode('utf-8')
            controlnet_config = {
                "enabled": True,
                "image": original_image,
                "weight": 1,
                "module": "canny",
                "model": "control_v11p_sd15_canny_fp16 [b18e0966]",
            }
            # Build the request payload
            payload = {
                "prompt": prompt,
                "negative_prompt": negative_prompt,
                "sampler_name": "Euler a",
                "steps": steps,
                "seed": seed,
                "cfg_scale": cfg_scale,
                "width": 1800,   # adjust as needed
                "height": 1200,  # adjust as needed
                "init_images": [f"data:image/png;base64,{original_image}"],
                "mask": f"data:image/png;base64,{mask_image}",
                "resize_mode": 1,  # adjust the global resize_mode as needed
                "denoising_strength": 0.75,
                "alwayson_scripts": {
                    "controlnet": {
                        "args": [controlnet_config]
                    }
                }
            }
            headers = {
                'Content-Type': 'application/json'
            }
            print("Sending request payload to Stable Diffusion WebUI")
            # Send the POST request
            response = requests.post(self.api_endpoint, headers=headers, data=json.dumps(payload))
            if response.status_code == 200:
                response_data = response.json()
                for i, image_data in enumerate(response_data.get("images", [])):
                    if ',' in image_data:
                        image_data = image_data.split(",", 1)[1]
                    image = Image.open(io.BytesIO(base64.b64decode(image_data)))
                    save_path = os.path.join(TEMP_DIR, f"sd_output_{uuid.uuid4().hex}.png")
                    image.save(save_path)
                    print(f"Generated image saved to {save_path}")
                    return save_path
            else:
                print(f"Request failed: {response.status_code} - {response.text}")
                return None
        except Exception as e:
            print(f"Error while sending request: {e}")
            return None
```
The images generated through the web UI have no problems,
but when I use the same parameters to call the API, a black bar appears.

My image size is 1800x1200, and the mask is the same size,

but there ends up being a curved edge at the bottom of the image. I'm not sure why this is happening. Additionally, when I set the mask for image generation, the width and height specified in the payload become ineffective. The generated image always ends up being 1800x1200, even if I set those two parameters to 512x512. I don't know what the issue is. When the size exceeds 512x512, the white border becomes very noticeable, but the image quality decreases significantly.

If I set it to a larger size, such as the original size of 1800x1200, the image quality improves, but that curved edge becomes even more pronounced.
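To check what size the API actually returned, independent of any viewer, the width and height can be read straight out of the PNG's IHDR chunk after base64-decoding — a stdlib-only sketch:

```python
import base64
import struct

def png_dimensions_from_b64(b64_data: str):
    """Decode a base64 PNG and return (width, height) from its IHDR chunk."""
    raw = base64.b64decode(b64_data.split(",", 1)[-1])  # tolerate data: URL prefixes
    # 8-byte PNG signature, 4-byte chunk length, then the IHDR tag
    if raw[:8] != b"\x89PNG\r\n\x1a\n" or raw[12:16] != b"IHDR":
        raise ValueError("not a PNG")
    # IHDR starts with big-endian 32-bit width and height
    return struct.unpack(">II", raw[16:24])
```

Calling this on each entry of the API's `images` array confirms whether the requested `width`/`height` were honored.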
### Steps to reproduce the problem
Using the code above to make the call results in this issue.
### What should have happened?
Under normal circumstances, the generated image should be almost identical to the one produced by the web UI, without that white border!
### What browsers do you use to access the UI ?
Microsoft Edge
### Sysinfo
[sysinfo-2024-12-04-01-10.json](https://github.com/user-attachments/files/18001247/sysinfo-2024-12-04-01-10.json)
### Console logs
```Shell
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --medvram-sdxl --theme dark --xformers --api --autolaunch --server-name 0.0.0.0 --skip-python-version-check
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.9.0, num models: 10
ControlNet preprocessor location: D:\software\server_tools\sd-webui-aki-v4.9.1\sd-webui-aki-v4.9.1\extensions\sd-webui-controlnet\annotator\downloads
2024-12-04 08:37:27,144 - ControlNet - INFO - ControlNet v1.1.455
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [7f96a1a9ca] from D:\software\server_tools\sd-webui-aki-v4.9.1\sd-webui-aki-v4.9.1\models\Stable-diffusion\sd1.5\anything-v5.safetensors
2024-12-04 08:37:32,317 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: D:\software\server_tools\sd-webui-aki-v4.9.1\sd-webui-aki-v4.9.1\configs\v1-inference.yaml
Running on local URL: http://0.0.0.0:7860
To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 60.3s (prepare environment: 9.5s, import torch: 10.9s, import gradio: 6.6s, setup paths: 2.9s, initialize shared: 0.7s, other imports: 3.2s, setup gfpgan: 0.1s, list SD models: 0.3s, load scripts: 11.1s, initialize extra networks: 0.4s, scripts before_ui_callback: 0.1s, create ui: 3.4s, gradio launch: 8.6s, add APIs: 1.1s, app_started_callback: 3.0s).
Loading VAE weights specified in settings: D:\software\server_tools\sd-webui-aki-v4.9.1\sd-webui-aki-v4.9.1\models\VAE\sd1.5\animevae.pt
Applying attention optimization: xformers... done.
Model loaded in 36.3s (load weights from disk: 3.6s, load config: 0.3s, create model: 1.0s, apply weights to model: 21.9s, load VAE: 7.6s, load textual inversion embeddings: 0.2s, calculate empty prompt: 1.4s).
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
2024-12-04 08:53:53,160 - ControlNet - INFO - unit_separate = False, style_align = False
2024-12-04 08:53:53,398 - ControlNet - INFO - Loading model: control_v11p_sd15_canny_fp16 [b18e0966]
2024-12-04 08:53:53,969 - ControlNet - INFO - Loaded state_dict from [D:\software\server_tools\sd-webui-aki-v4.9.1\sd-webui-aki-v4.9.1\models\ControlNet\control_v11p_sd15_canny_fp16.safetensors]
2024-12-04 08:53:53,970 - ControlNet - INFO - controlnet_default_config
2024-12-04 08:54:04,569 - ControlNet - INFO - ControlNet model control_v11p_sd15_canny_fp16 [b18e0966](ControlModelType.ControlNet) loaded.
2024-12-04 08:54:04,989 - ControlNet - INFO - Using preprocessor: canny
2024-12-04 08:54:04,989 - ControlNet - INFO - preprocessor resolution = 512
2024-12-04 08:54:05,228 - ControlNet - INFO - ControlNet Hooked - Time = 12.071203470230103
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
2024-12-04 08:56:10,995 - ControlNet - INFO - unit_separate = False, style_align = False
2024-12-04 08:56:10,995 - ControlNet - INFO - Loading model from cache: control_v11p_sd15_canny_fp16 [b18e0966]
2024-12-04 08:56:11,383 - ControlNet - INFO - Using preprocessor: canny
2024-12-04 08:56:11,383 - ControlNet - INFO - preprocessor resolution = 512
2024-12-04 08:56:11,585 - ControlNet - INFO - ControlNet Hooked - Time = 0.5922081470489502
```
### Additional information
_No response_ | bug-report | low | Critical |
2,716,338,178 | godot | You're breathtaking! | It's been over 10 years now that Godot has been developed in the open, with code contributions from 2,800 users.
The total count of people who helped build Godot is actually far greater, as it should include documentation writers, testers, bug reporters, translators, moderators, content creators, users doing community support or talking at events, folks making games that credit Godot visibly, everyone supporting the project financially, and many other types of contributions which I can't keep enumerating.
All these people brought Juan and Ariel's little-engine-that-could from this:

*Screenshot of the isometric 2D demo in Godot 1.0.*
To this:

*Screenshot of [PVKK: Planetenverteidigungskanonenkommandant](https://store.steampowered.com/app/2956040/PVKK_Planetenverteidigungskanonenkommandant/) in Godot 4.3.*
That's no small achievement, so I encourage all contributors to take a minute to contemplate the progress that we've made together over this journey so far!
Amidst the daily churn of fixing issues, reviewing PRs, making releases, etc., it's important to remind ourselves of where we are, and how we got there.

*GIF of [Cozy Space Survivors](https://store.steampowered.com/app/2657850/Cozy_Space_Survivors/) where asteroids form the word "GO♥DOT".*
## Some stats about Godot usage on GitHub
Between our usual traditions of either [making silly jokes](https://github.com/godotengine/godot/issues/10000) or [sharing some inspiring stats](https://github.com/godotengine/godot/issues/30000) for round issue numbers, I picked the latter for this 100,000th issue[^1], and wanted to look a bit closer at our issue and PR numbers over time.
A lot of people coming to Godot's repository and seeing over 10,000 open issues and 3,000 open PRs might rightfully wonder whether this is normal, or a sign of a maintenance issue.
I will make the case that it is a bit of both :)
### Issues

*Visualization from OSS Insight showing issue count over time, [see the interactive version](https://ossinsight.io/analyze/godotengine/godot#issue-history).*
The two accumulated curves show the total number of issues created over the lifetime of the repository (topmost curve), and the subset of those which have been closed (as fixed or invalid). The difference between the two curves represents the number of issues still open at a given point in time - currently exactly 11,000, out of 53,648 issues total, so roughly 20%.
This is a fairly normal percentage of yet-unresolved issues in software projects of this scale, but it can definitely be better.
I annotated the graph with some key events of Godot's development which match peaks in either reported bugs (usually at the start of a beta phase, or shortly after a stable release), or closed issues (when we do a "spring cleaning" going through old issues to check if they are still reproducible in a newly released Godot version). You can see notably two big peaks of closed issues around the 3.2 release, when we had a coordinated effort from maintainers to go through the whole backlog and ask reporters to confirm whether their issues were still valid, or had been fixed. This reduced the percentage of open issues from 29% to 19%, and it's been mostly stable since, with minor fluctuations.
We are preparing a new "spring cleaning" to properly reassess a lot of the old issues which have been opened before the 4.0 release, or in the early days of 4.0 or 4.1, and may no longer be relevant nowadays with 4.3 and soon 4.4 beta.
As the volume of issues keeps increasing steadily, but the number of active bug triagers doesn't really grow as fast, we are working on improving and documenting our workflows so that we can:
- Onboard new volunteers to help triage new and old issues;
- Spread the workload and be more consistent with our issue triage;
- Be confident that while doing so, we do increase the _quality_ of our open issues, which isn't something that can be easily tracked in numbers.
We will share more details here and on the Godot blog when this process is ready to welcome new volunteers.
In the meantime, you can already do some simple things which greatly help the existing bug triage team:
- [**Regularly re-assess your own issues**](https://github.com/godotengine/godot/issues?q=is%3Aissue%20state%3Aopen%20author%3A%40me) (you can bookmark that URL). Make sure that they are reproducible in the latest stable release and dev snapshots, and that there is a minimal reproduction project that contributors can use to reproduce and fix the issue. If your issue is no longer relevant, please close it with a comment explaining why.
- Feel free to apply the same process to other people's issues that you stumble upon, especially if their last update was a long time ago. In this case you can't close issues yourself, but you can suggest it by commenting that the issue is no longer reproducible in the version you tested. Bug triagers will get a notification and can double check.
### Pull Requests

*Visualization from OSS Insight showing pull request count over time, [see the interactive version](https://ossinsight.io/analyze/godotengine/godot#pr-history).*
I didn't annotate this one, but a few takeaways:
- Godot gets almost the same number of new pull requests as new issues, roughly 600 of each per month (that's 20 PRs and 20 issues per day).
- The PR volume increased a lot over the lifetime of the project, culminating at over 700 PRs per month at the end of the 4.0 development. Since then, as the Godot userbase grew a lot and we are more careful with compatibility and API design, reviews can take a bit longer and the total monthly volume seems to have plateaued around 500-600. That's still plenty for a fairly small group of reviewers.
- Someone should plot this data with relative percentages, but it seems like the size/complexity of pull requests has been slightly increasing recently. Whether that trend gets confirmed or not, I can attest that reviewing 600 PRs per month is hard, and that we need more help from interested contributors and users to both test and review the PRs that get opened on a daily basis.
---
That's all for now; this already got way too long and verbose, drafted at the last minute to (try to) snatch the 100,000th issue number ;)
Aside from showing some cool numbers, I mostly want to convey that we are well aware that we have a significant backlog, though it's not as dire as it might look from the outside.
To deal with it, we need better triage and review processes (which we are designing now), more volunteers involved in these processes, but also importantly [**more funding**](https://fund.godotengine.org/). Volunteer contributors do a ton of work, but many critical parts of the workflow depend on a few paid contractors, and we need to grow that group to better manage the increasing scale of the project.
[^1]: I'll make a quick note that 100,000 is the combined number of issues and PRs, which share the same index system on GitHub.
At the time of writing, we've actually had 53,648 issues and 45,213 PRs created. Astute minds will notice that the sum is not 100,000, the difference comes from spam issues or PRs which have been deleted by GitHub.
| discussion,good first issue | high | Critical |
2,716,339,212 | flutter | How can I tell if code is running in a unit test? | ### Use case
Code that doesn't work in tests should explicitly report that as an error, not silently hang (#159597).
### Proposal
Method somewhere that returns a Boolean. This needs to be supported by the framework and not rely on the user to set a global at the start of a test. | a: tests,c: new feature,framework,d: api docs,c: proposal,P3,team-framework,triaged-framework | low | Critical |
2,716,368,483 | pytorch | [export] NotImplementedError: No registered serialization name for <class 'diffusers.models.modeling_outputs.Transformer2DModelOutput'> | ### ๐ Describe the bug
The Flux model has been compiled using Torch-TensorRT, and when I try to export it using `torch.export.export` I see the following error:
```py
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/_export/serde/serialize.py", line 1139, in <listcomp>
self.serialize_module_call_signature(entry.signature)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/_export/serde/serialize.py", line 1128, in serialize_module_call_signature
out_spec=treespec_dumps(module_call_signature.out_spec, TREESPEC_VERSION),
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/utils/_pytree.py", line 1436, in treespec_dumps
json_spec = _SUPPORTED_PROTOCOLS[protocol].treespec_to_json(treespec)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/utils/_pytree.py", line 1365, in _treespec_to_json
raise NotImplementedError(
NotImplementedError: No registered serialization name for <class 'diffusers.models.modeling_outputs.Transformer2DModelOutput'> found. Please update your _register_pytree_node call with a `serialized_type_name` kwarg.
> /work/TensorRT/examples/dynamo/torch_export_flux.py(230)<module>()
```
Here's the full script to reproduce the error
```py
# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
import torch_tensorrt
from transformers import AutoModelForCausalLM, AutoTokenizer
from diffusers import FluxPipeline, FluxTransformer2DModel
from utils import export_llm, generate
from torch.export import Dim
from typing import Optional, Dict, Any
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
import time
from contextlib import contextmanager
@contextmanager
def timer(logger, name:str):
logger.info(f"{name} section Start...")
start = time.time()
yield
end = time.time()
logger.info(f"{name} section End...")
logger.info(f"{name} section elapsed time: {end - start} seconds")
class MyModule(torch.nn.Module):
def __init__(self, module):
super().__init__()
self.module = module
def forward(self,
hidden_states: torch.Tensor,
encoder_hidden_states: torch.Tensor = None,
pooled_projections: torch.Tensor = None,
timestep: torch.LongTensor = None,
img_ids: torch.Tensor = None,
txt_ids: torch.Tensor = None,
guidance: torch.Tensor = None,
joint_attention_kwargs: Optional[Dict[str, Any]] = None,
return_dict: bool = False, **kwargs):
return self.module.forward(
hidden_states,
encoder_hidden_states,
pooled_projections,
timestep,
img_ids,
txt_ids,
# guidance,
# joint_attention_kwargs,
# return_dict
)
def wrap_pipeline_transformer_call(instance, prompt, max_sequence_length):
from unittest.mock import patch
# Assume `instance` is your class instance containing the `__call__` method
# Use patch.object to mock the __call__ method of self.transformer
with patch.object(instance.transformer, 'forward', wraps=instance.transformer.forward) as mock_transformer_call:
# one step is enough for intercept the inputs
image =instance(
prompt,
guidance_scale=0.0,
num_inference_steps=1,
max_sequence_length=max_sequence_length,
generator=torch.Generator("cpu").manual_seed(0)
).images[0]
# Access the call arguments of the first (or specific) call
if mock_transformer_call.call_args_list:
args, kwargs = mock_transformer_call.call_args_list[0]
# Store the inputs in a tuple
intercepted_inputs = (args, kwargs)
# print("Intercepted args:", args)
# print("Intercepted kwargs:", kwargs)
return (args, kwargs)
else:
print("No calls were made to self.transformer.__call__")
return (None, None)
if __name__ == "__main__":
# config
dryrun = False
# parameter setting
batch_size = 2
max_seq_len = 256
prompt = ["A cat holding a sign that says hello world" for _ in range(batch_size)]
device = "cuda:0"
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell",
torch_dtype=torch.float16)
pipe.to(device)
example_args , example_kwargs = wrap_pipeline_transformer_call(pipe, prompt, max_seq_len)
tensor_inputs = ['hidden_states', 'timestep', 'pooled_projections', 'encoder_hidden_states', 'txt_ids', 'img_ids' ]
example_kwargs_shapes = {key: example_kwargs[key].shape for key in tensor_inputs}
BATCH = Dim("batch", min=1, max=batch_size)
SEQ_LEN = Dim("seq_len", min=1, max=max_seq_len)
dynamic_shapes = ({0 : BATCH},
{0 : BATCH,
1 : SEQ_LEN,
},
{0 : BATCH},
{0 : BATCH},
{0 : BATCH},
{0 : BATCH,
1 : SEQ_LEN,
},
)
example_args = (
example_kwargs['hidden_states'],
example_kwargs['encoder_hidden_states'],
example_kwargs['pooled_projections'],
example_kwargs['timestep'],
example_kwargs['img_ids'],
example_kwargs['txt_ids'],
)
with timer(logger=logger, name="ep_gen"):
with torch.no_grad():
# model = FluxTransformer2DModel.from_pretrained("black-forest-labs/FLUX.1-schnell",torch_dtype=torch.float16)
model = MyModule(pipe.transformer).eval().half().to(device)
logger.info("Directly use _export because torch.export.export doesn't work")
# This API is used to express the constraint violation guards as asserts in the graph.
from torch.export._trace import _export
ep = _export(
model,
args=example_args,
# kwargs=example_kwargs,
dynamic_shapes=dynamic_shapes,
strict=False,
allow_complex_guards_as_runtime_asserts=True,
)
logger.info(f"Generating TRT engine now, dryrun={dryrun}...")
# print("Generating TRT engine now...")
#TODO: if some non-tensor input, do we still need to provide them.
with timer(logger, "trt_gen"):
with torch_tensorrt.logging.debug():
trt_start = time.time()
trt_model = torch_tensorrt.dynamo.compile(
ep,
inputs=list(example_args),
enabled_precisions={torch.float32},
truncate_double=True,
device=torch.device(device),
disable_tf32=True,
use_explicit_typing=True,
dryrun=dryrun,
debug=True,
use_fp32_acc=True,
)
trt_end = time.time()
del pipe
del ep
del model
import gc
gc.collect()
torch.cuda.empty_cache()
with timer(logger, "trt_save"):
try:
trt_ep = torch.export.export(trt_model, args=example_args, dynamic_shapes=dynamic_shapes, strict=False) #
torch.export.save(trt_ep, "trt.ep")
except Exception as e:
import traceback
# Capture the full traceback
tb = traceback.format_exc()
logger.warning("An error occurred. Here's the traceback:")
# print(tb)
logger.warning(tb)
breakpoint()
torch_tensorrt.save(trt_model, "trt.ep")
```
cc: @angelayi
### Versions
torch.version
'2.6.0.dev20241202+cu124'
trt.version
'10.6.0.post1'
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,716,426,448 | deno | bad error message in `deno lint` when workspace member version is missing patch component | Repro:
```
// deno.jsonc
{
"workspace": [
"./foo"
]
}
// foo/deno.jsonc
{
"name": "@test/foo",
"version": "1.0"
"exports": "./mod.ts",
}
// foo/mod.ts
export const hi: string = "hi";
```
```
❯ deno lint
error: Invalid version: Unexpected character.
1.0
~
```
The actual issue is that the version `1.0` is missing the patch component.
At a minimum, this should show what file the error is in or give some more context. Ideally, we could give a better diagnostic than this though (improving the error message itself would likely have to happen in deno_semver) | dx | low | Critical |
2,716,454,854 | godot | get_vertex_uv seems to return improper values | ### Tested versions
4.3 stable
### System information
Godot v4.3.stable - Linux Mint 22 (Wilma) - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 Laptop GPU - AMD Ryzen 9 5900HX with Radeon Graphics (16 Threads)
### Issue description
This is largely copy/pasted from the Godot forums, but as there was no reply there, and the more I dig into this, the more I'm thinking it's potentially a bug:
I've been trying to utilize "live" painting effects in the game I'm working on, and am running into some trouble. I found this useful project someone worked on a while back, but it's unfortunately in Godot 3, while I'm working in 4.3 (Project: [Godot Engine In-game Splat Map Texture Painting (Dirt Removal Effect) - Alfred Reinold Baudisch](https://alfredbaudisch.com/blog/gamedev/godot-engine/godot-engine-in-game-splat-map-texture-painting-dirt-removal-effect/))
I managed to convert the shader and scripts over to a Godot 4 compatible format, and the game works and the shader looks correct in all ways, even confirmed with a custom mask on it. However, I'm running into one issue: clicking doesn't seem to paint on the object. I was able to narrow it down to get_vertex_uv, and found something super confusing that I'm seeing no other people mention, so I'm wondering if this is something I'm doing wrong in my conversion of this code, or if it's a unique enough scenario that no one's noticed until now(?)
Basically, I've got the get_uv_coords method, written as the following:
```
func get_uv_coords(point, normal, transform = true) -> Vector2:
# Gets the uv coordinates on the mesh given a point on the mesh and normal
# these values can be obtained from a raycast
transform_vertex_to_global = transform
var face = get_face(point, normal)
if face.is_empty():
return -Vector2.ONE
var bc = face[2]
var uv1 = meshtool.get_vertex_uv(_local_face_vertices[face[0]][0])
var uv2 = meshtool.get_vertex_uv(_local_face_vertices[face[0]][1])
var uv3 = meshtool.get_vertex_uv(_local_face_vertices[face[0]][2])
return (uv1 * bc.x) + (uv2 * bc.y) + (uv3 * bc.z)
```
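As a side note for anyone reproducing this: the barycentric blend in the last line of get_uv_coords can be sanity-checked in plain Python (toy values, not the Godot API), since valid weights and in-range vertex UVs can never produce values far outside [0, 1]:

```python
# Plain-Python check of the barycentric UV blend used in get_uv_coords.
# With weights bc that are non-negative and sum to 1, and vertex UVs in
# [0, 1], the interpolated UV must stay in [0, 1].
def interpolate_uv(uv1, uv2, uv3, bc):
    return (uv1[0] * bc[0] + uv2[0] * bc[1] + uv3[0] * bc[2],
            uv1[1] * bc[0] + uv2[1] * bc[1] + uv3[1] * bc[2])

uv = interpolate_uv((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.25, 0.25, 0.5))
# uv == (0.25, 0.5); huge values would point at bad vertex indices or
# corrupt UV data returned by get_vertex_uv rather than bad weights.
```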
I added some debug statements right before get_vertex_uv, and compared the conversion to the original project in a separate installation of Godot 3... and I verified that the face values and all that are exactly the same. However, get_vertex_uv in Godot 4 keeps returning values like the following to me:

What could cause this method to return such crazy values, and is there a different way I should be doing this check to get the uv coordinate in Godot 4 vs. 3? Or is this just a bug no one's reported (and if so - any suggestions on a way around it?)
### Steps to reproduce
See post above, try to call get_vertex_uv on valid coordinates to see it potentially fail
### Minimal reproduction project (MRP)
[GodotRuntimeTextureSplatMapPainting.zip](https://github.com/user-attachments/files/18002123/GodotRuntimeTextureSplatMapPainting.zip)
| bug,needs testing,topic:3d | low | Critical |
2,716,459,255 | godot | Viewport Texture doesn't work with output override (OpenXR) | ### Tested versions
- Unreproducible in 3.5
- Reproducible since 4.1.3 (maybe earlier)
- Reproducible in 4.4 master (47bc374edf6d2e775a5e6b937dc3fd73cdc6f59b)
### System information
Windows 11 - NVidia 4070ti - Quest 3 with SteamLink
### Issue description
This issue seems to have been introduced early in Godot 4's development but as this is a bit of a niche use case it hasn't gotten much attention especially as it seems to specifically be broken for OpenXR.
I decided to look into this again after someone reported the issue on Discord since I wanted to create a demo project for the setup anyway. Also initially I thought this was a Vulkan issue but it happens on OpenGL as well.
Basically, when grabbing a `ViewportTexture` from the `SubViewport` that is rendering content to the HMD, Godot renders a black screen.
Using the `MobileVRInterface` to create a similar scenario, rendering HMD output to a `SubViewport` and using a `ViewportTexture` to show the left and right eye output works fine
### Steps to reproduce
- Create a `SubViewport` for XR output, add a standard XR rig and enable `use_xr` on this viewport
- Create a ColorRect for the output to screen (so outside of the `SubViewport`) and add a `ShaderMaterial` like so:
```
shader_type canvas_item;
uniform sampler2DArray xr_texture : source_color;
uniform float layer;
void fragment() {
COLOR.rgb = texture(xr_texture, vec3(UV, layer)).rgb;
COLOR.a = 1.0;
}
```
- In a script set the `xr_texture` shader parameter to `SubViewport.get_texture()`
### Minimal reproduction project (MRP)
For reproducing this issue:
[openxr_spectator_view_demo.zip](https://github.com/user-attachments/files/18002150/openxr_spectator_view_demo.zip)
For a working example using the `MobileVRInterface`:
[teststereo.zip](https://github.com/user-attachments/files/18002159/teststereo.zip)
| bug,topic:rendering,topic:xr | low | Critical |
2,716,477,794 | pytorch | [RFC] Add Intel GPU Support to Torch Test Cases | ### ๐ The feature, motivation and pitch
### Motivation
For [[RFC] Add Intel GPU Support in PyTorch CI/CD](https://github.com/pytorch/pytorch/issues/114850), we already enabled Intel GPU support in the test cases of Inductor ([#122866](https://github.com/pytorch/pytorch/pull/122866), [#124147](https://github.com/pytorch/pytorch/pull/124147)) and Profiler ([#134316](https://github.com/pytorch/pytorch/pull/134316/files)). As a step forward, we'd like to enable Intel GPU in other Torch test cases to achieve better test coverage.
### Approach
Because Intel GPU functionality is on par with other GPU devices, in the Inductor and Profiler tests we generalized device-specific cases so they can be reused. We'd like to extend this generalization approach to other test cases.
We assume only one kind of GPU device will be active in a test at one time; for example, CUDA and Intel GPU cases will not run in the same test. "XPU" is the device type of Intel GPU in PyTorch. To generalize the Inductor tests we have introduced a GPU_TYPES list of all the GPU devices that are functionally on par, and GPU_TYPE to indicate the current GPU device. For test parameterization with instantiate_device_type_tests(), we will adopt the following approaches for device generalization:

- We have introduced XPUTestBase class for the Intel GPU test and supported it in instantiate_device_type_tests(), PR #[120891](https://github.com/pytorch/pytorch/pull/120891).
- For decorators, we do generalization, for example, change onlyCUDA to onlyGPU so that all the devices in GPU_TYPES can run the tests.
- We also make op_db general for all GPU devices, introducing dtypesIfGPU to replace dtypesIfCUDA; for "skips" with device_type specified, we unify them with device_type=GPU_TYPE. For example, change

into

- For APIs on the device module, we can generalize them with torch.get_device_module(), for example torch.get_device_module().FloatTensor. If needed, we would also use the [torch.accelerator](https://github.com/pytorch/pytorch/pull/132204) interface.
- If needed, we could add new code or wrapper functions to remove the device specific code. For example, add get_gpu_autocast() for torch.cuda.amp.autocast:
```
def get_gpu_autocast():
return torch.cuda.amp.autocast if HAS_CUDA else torch.amp.autocast
```
For code
```
with torch.backends.cudnn.flags(enabled=True):
```
we could update it with
```
context = torch.backends.cudnn.flags(enabled=True) if HAS_CUDA else contextlib.nullcontext()
with context:
```
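The snippet above runs as-is once a stand-in replaces the CUDA-only flags object; a minimal torch-free version (HAS_CUDA and fake_cudnn_flags are assumptions for illustration only):

```python
import contextlib

HAS_CUDA = False  # assumption for this illustration

@contextlib.contextmanager
def fake_cudnn_flags(enabled=True):
    # Stand-in for torch.backends.cudnn.flags(enabled=True)
    yield

# Pick a real context on CUDA, a no-op context everywhere else.
context = fake_cudnn_flags(enabled=True) if HAS_CUDA else contextlib.nullcontext()
with context:
    result = "body ran"
```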
### Plan
We are going to make all the modules that Intel GPU supports device agnostic. We have enabled the "XPU" backend in the test infrastructure ([#120891](https://github.com/pytorch/pytorch/pull/120891)) and enabled Inductor ([#122866](https://github.com/pytorch/pytorch/pull/122866), [#124147](https://github.com/pytorch/pytorch/pull/124147)) and Profiler tests ([#134316](https://github.com/pytorch/pytorch/pull/134316/files)). As a staging plan for the next step, we will
- Work on the generalization of Aten:ops tests starting from test_ops.py and test_nn.py.
- Work on tests for runtime, distributed, quantization and sparsity, etc.
- More modules will be added according to new feature planning.
### Alternatives
_No response_
### Additional context
_No response_
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: testing,module: xpu | low | Minor |
2,716,487,411 | pytorch | DISABLED TCPStoreTest.testMultiTenantStores (__main__.TCPStoreTest) | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=TCPStoreTest.testMultiTenantStores&suite=TCPStoreTest&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/33883747655).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `TCPStoreTest.testMultiTenantStores`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
unknown file
C++ exception with description "The server socket has failed to listen on any local network address. The server socket has failed to bind to [::]:29500 (errno: 98 - Address already in use). The server could not be initialized on any address for port=29500, family=10 The server socket has failed to bind to 0.0.0.0:29500 (errno: 98 - Address already in use). The server could not be initialized on any address for port=29500, family=2
Exception raised from run at /var/lib/jenkins/workspace/torch/csrc/distributed/c10d/socket.cpp:558 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0xb0 (0x7fc973795400 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libc10.so)
frame #1: <unknown function> + 0x14ce6df (0x7fc974cd26df in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #2: <unknown function> + 0x6803ba5 (0x7fc97a007ba5 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #3: <unknown function> + 0x67c96e2 (0x7fc979fcd6e2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #4: <unknown function> + 0x67c0e70 (0x7fc979fc4e70 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #5: c10d::TCPStore::TCPStore(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, c10d::TCPStoreOptions const&) + 0x46f (0x7fc979fc7d3f in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/libtorch_cpu.so)
frame #6: testMultiTenantStores(bool) + 0xd3 (0x5618300c69c3 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #7: void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) + 0x51 (0x56183010a021 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #8: <unknown function> + 0x5aa90 (0x5618300f9a90 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #9: testing::TestInfo::Run() + 0x40a (0x5618300f9faa in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #10: <unknown function> + 0x5f089 (0x5618300fe089 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #11: testing::internal::UnitTestImpl::RunAllTests() + 0xf28 (0x5618300ff4d8 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #12: testing::UnitTest::Run() + 0x93 (0x5618300ffca3 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #13: main + 0x44 (0x5618300c4874 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
frame #14: __libc_start_main + 0xf3 (0x7fc96c604083 in /lib/x86_64-linux-gnu/libc.so.6)
frame #15: _start + 0x2e (0x5618300c4cde in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin/TCPStoreTest)
" thrown in the test body.
unknown file:0: C++ failure
```
</details>
Test file path: `` or `test/run_test`
Error: Error retrieving : 400, test/run_test: 404
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr | module: rocm,triaged,module: flaky-tests,skipped | low | Critical |
2,716,511,668 | rust | #![debugger_visualizer] arbitrary file access & storing of file in debug binary |
I tried this code:
```rust
#![debugger_visualizer(gdb_script_file = "/etc/passwd")]
#![debugger_visualizer(gdb_script_file = "/proc/self/environ")]
```
I expected to see this happen: Error: Invalid GDB Python Script
Instead, this happened:
Both files `/etc/passwd` and `/proc/self/environ` were added to the rust debug binary file.
```
$ objdump target/debug/myapp -j .debug_gdb_scripts -s
target/debug/myapp: file format elf64-x86-64
Contents of section .debug_gdb_scripts:
4914b 01676462 5f6c6f61 645f7275 73745f70 .gdb_load_rust_p
4915b 72657474 795f7072 696e7465 72732e70 retty_printers.p
4916b 79000470 72657474 792d7072 696e7465 y..pretty-printe
4917b 722d6d79 6170702d 300a726f 6f743a78 r-myapp-0.root:x
4918b 3a303a30 3a3a2f72 6f6f743a 2f62696e :0:0::/root:/bin
4919b 2f626173 680a6269 6e3a783a 313a313a /bash.bin:x:1:1:
491ab 3a2f3a2f 7573722f 62696e2f 6e6f6c6f :/:/usr/bin/nolo
491bb 67696e0a 6461656d 6f6e3a78 3a323a32 gin.daemon:x:2:2
491cb 3a3a2f3a 2f757372 2f62696e 2f6e6f6c ::/:/usr/bin/nol
491db 6f67696e 0a6d6169 6c3a783a 383a3132 ogin.mail:x:8:12
491eb 3a3a2f76 61722f73 706f6f6c 2f6d6169 ::/var/spool/mai
491fb 6c3a2f75 73722f62 696e2f6e 6f6c6f67 l:/usr/bin/nolog
4920b 696e0a66 74703a78 3a31343a 31313a3a in.ftp:x:14:11::
4921b 2f737276 2f667470 3a2f7573 722f6269 /srv/ftp:/usr/bi
4922b 6e2f6e6f 6c6f6769 6e0a6874 74703a78 n/nologin.http:x
4923b 3a33333a 33333a3a 2f737276 2f687474 :33:33::/srv/htt
4924b 703a2f75 73722f62 696e2f6e 6f6c6f67 p:/usr/bin/nolog
4925b 696e0a6e 6f626f64 793a783a 36353533 in.nobody:x:6553
[..]
```
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26) (Arch Linux rust 1:1.83.0-1)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-unknown-linux-gnu
release: 1.83.0
LLVM version: 18.1.8
```
originally reported in https://github.com/bf/rust-security-problems
| T-compiler,C-discussion,F-debugger_visualizer | low | Critical |
2,716,541,342 | rust | `./x check std --target aarch64-apple-ios` fails when cross-checking due to missing `xcrun` | On a non-mac host, checking std for an ios target fails with the following error:
```
> ./x check std --target aarch64-apple-ios
Building bootstrap
Finished `dev` profile [unoptimized] target(s) in 0.8s
error occurred: Failed to find tool. Is `xcrun` installed? (see https://docs.rs/cc/latest/cc/#compile-time-requirements for help)
Build completed unsuccessfully in 0:00:01
```
Ideally I should be able to check any target from any host. | A-cross,T-bootstrap,A-linkers | low | Critical |
2,716,551,275 | vscode | Explore app group containers for user data isolation | App group containers overview https://developer.apple.com/videos/play/wwdc2024/10123/?time=743
We cannot apply the isolation to all files in the user data, since some files are shared by the runtime and we also need specific APIs to read/write data to/from the container.
Upstream issue - https://issues.chromium.org/issues/366381674
See if we can move a subset of vscode-owned data into the container as part of this exploration.
/cc @isidorn | feature-request,macos | low | Minor |
2,716,554,913 | TypeScript | Incorrect error attribution on indexed access, when using object spread operator | ### ๐ Search Terms
`spread operator destructuring error`, `spread error message`, `object indexed access error message`
### ๐ Version & Regression Information
- This changed between versions [4.2.3](https://www.typescriptlang.org/play/?ts=4.2.3#code/C4TwDgpgBAogbhAdsACgQxAGwPZoCYDOUAvFAN4BQU1UAxtgLZiYTAQDKYATgJbIBc5KjRFoArsEZpgPbIgCSeQQWC9EAcwA0wkdQJg0tCIuWq+WnVAC+2kQSR4AshAIE06iIMq6aDF249TNQsRGworCgpQSCgAYUZmVgg8eCRgAB4AFSgIAA82REIoAGsIEGwAM1gEZHQsXEIAPhIhEQBtGCg+KEyAXS9LEWjPWFsfKAN6-EFU2owcfAIO3rGfNDAwADk0PyDzVetwtr7IirFEWhk5KGwELgB3XjYAQQ3tvyyc-IciUvKq2aoeYNAiNAAUlgItAAFskxCwlHEEiw2CkahlMo1tABKQTxJgo5KArLNbw0eiIFTkKDrLY7CCaKAAOhZXBcwGsLShsLw8OSlgA9AKoAB1CA8Lh4HJcLjYLhQPyudzQWFsxkqNBcGQaLqIKAAFiZAGZLGzgGIuHqyT4WUy2SoDqI3vTBAAiNgqV0HCJWIA) and [4.3.5](https://www.typescriptlang.org/play/?ts=4.3.5#code/C4TwDgpgBAogbhAdsACgQxAGwPZoCYDOUAvFAN4BQU1UAxtgLZiYTAQDKYATgJbIBc5KjRFoArsEZpgPbIgCSeQQWC9EAcwA0wkdQJg0tCIuWq+WnVAC+2kQSR4AshAIE06iIMq6aDF249TNQsRGworCgpQSCgAYUZmVgg8eCRgAB4AFSgIAA82REIoAGsIEGwAM1gEZHQsXEIAPhIhEQBtGCg+KEyAXS9LEWjPWFsfKAN6-EFU2owcfAIO3rGfNDAwADk0PyDzVetwtr7IirFEWhk5KGwELgB3XjYAQQ3tvyyc-IciUvKq2aoeYNAiNAAUlgItAAFskxCwlHEEiw2CkahlMo1tABKQTxJgo5KArLNbw0eiIFTkKDrLY7CCaKAAOhZXBcwGsLShsLw8OSlgA9AKoAB1CA8Lh4HJcLjYLhQPyudzQWFsxkqNBcGQaLqIKAAFiZAGZLGzgGIuHqyT4WUy2SoDqI3vTBAAiNgqV0HCJWIA)
### โฏ Playground Link
[Playground link, 5.7.2](https://www.typescriptlang.org/play/?ts=5.7.2#code/C4TwDgpgBAogbhAdsACgQxAGwPZoCYDOUAvFAN4BQU1UAxtgLZiYTAQDKYATgJbIBc5KjRFoArsEZpgPbIgCSeQQWC9EAcwA0wkdQJg0tCIuWq+WnVAC+2kQSR4AshAIE06iIMq6aDF249TNQsRGworCgpQSCgAYUZmVgg8eCRgAB4AFSgIAA82REIoAGsIEGwAM1gEZHQsXEIAPhIhEQBtGCg+KEyAXS9LEWjPWFsfKAN6-EFU2owcfAIO3rGfNDAwADk0PyDzVetwtr7IirFEWhk5KGwELgB3XjYAQQ3tvyyc-IciUvKq2aoeYNAiNAAUlgItAAFskxCwlHEEiw2CkahlMo1tABKQTxJgo5KArLNbw0eiIFTkKDrLY7CCaKAAOhZXBcwGsLShsLw8OSlgA9AKoAB1CA8Lh4HJcLjYLhQPyudzQWFsxkqNBcGQaLqIKAAFiZAGZLGzgGIuHqyT4WUy2SoDqI3vTBAAiNgqV0HCJWIA)
### ๐ป Code
```ts
type EventPayloads = {
completeSprint: {
automationId: string,
spaceId: string,
},
sendMessage: {
message: string,
},
}
type CompletedEvent<T extends keyof EventPayloads> = {
[E in T]: {
type: E,
payload: EventPayloads[E],
appName: string,
}
}[T]
function overwriteAppName<T extends keyof EventPayloads>(
scheduled: CompletedEvent<T>,
): CompletedEvent<T> {
const { appName, ...rest } = scheduled
// Weird error message here, starting in 4.3
return {
...rest,
appName: "test",
}
}
```
### ๐ Actual behavior
The `return` statement has the following error:
```
Type 'Omit<CompletedEvent<T>, "appName"> & { appName: string; }' is not assignable to type 'CompletedEvent<T>'.
Types of property 'payload' are incompatible.
Type 'CompletedEvent<T>["payload"]' is not assignable to type 'EventPayloads[T]'.
Type '{ type: T; payload: EventPayloads[T]; appName: string; }' is missing the following properties from type 'EventPayloads': completeSprint, sendMessage(2322)
```
The first three lines make sense, but the last line is bizarre enough that I suspect it's a bug in either the reporting or the checking logic.
Given that `EventPayloads` and `CompletedEvent` are completely different objects, this error made me think I had missed an indexing step somewhere, but after rereading the code multiple times I'm pretty sure I didn't (it seems like a type error in the type error).
Also, my best possible explanation for the reduction in this step
```
Type 'CompletedEvent<T>["payload"]' is not assignable to type 'EventPayloads[T]'.
Type '{ type: T; payload: EventPayloads[T]; appName: string; }' is missing the following properties from type 'EventPayloads': completeSprint, sendMessage(2322)
```
is that the algorithm eliminated the indexed access on both sides, so that
| | `CompletedEvent<T>["payload"]` | `EventPayloads[T]` |
|-|-|-|
|becomes | `CompletedEvent<T>` | `EventPayloads` |
but this is clearly incorrect considering `T != "payload"`.
### ๐ Expected behavior
Given the penultimate line of the error, I would expect the final line to compare payload types, i.e. `{ automationId: string, spaceId: string }` vs. `{ message: string }`
<details>
<summary>
For what it's worth, this is the error from <a href="https://www.typescriptlang.org/play/?ts=4.2.3#code/C4TwDgpgBAogbhAdsACgQxAGwPZoCYDOUAvFAN4BQU1UAxtgLZiYTAQDKYATgJbIBc5KjRFoArsEZpgPbIgCSeQQWC9EAcwA0wkdQJg0tCIuWq+WnVAC+2kQSR4AshAIE06iIMq6aDF249TNQsRGworCgpQSCgAYUZmVgg8eCRgAB4AFSgIAA82REIoAGsIEGwAM1gEZHQsXEIAPhIhEQBtGCg+KEyAXS9LEWjPWFsfKAN6-EFU2owcfAIO3rGfNDAwADk0PyDzVetwtr7IirFEWhk5KGwELgB3XjYAQQ3tvyyc-IciUvKq2aoeYNAiNAAUlgItAAFskxCwlHEEiw2CkahlMo1tABKQTxJgo5KArLNbw0eiIFTkKDrLY7CCaKAAOhZXBcwGsLShsLw8OSlgA9AKoAB1CA8Lh4HJcLjYLhQPyudzQWFsxkqNBcGQaLqIKAAFiZAGZLGzgGIuHqyT4WUy2SoDqI3vTBAAiNgqV0HCJWIA">4.2.3</a> and earlier:
</summary>
```
Type 'Omit<CompletedEvent<T>, "appName"> & { appName: string; }' is not assignable to type 'CompletedEvent<T>'.
Types of property 'payload' are incompatible.
Type 'CompletedEvent<T>["payload"]' is not assignable to type 'EventPayloads[T]'.
Type '{ type: T; payload: EventPayloads[T]; appName: string; }' is missing the following properties from type 'EventPayloads': completeSprint, sendMessage
Type 'CompletedEvent<T>["payload"]' is not assignable to type '{ automationId: string; spaceId: string; } & { message: string; }'.
Type 'EventPayloads[T]' is not assignable to type '{ automationId: string; spaceId: string; } & { message: string; }'.
Type '{ automationId: string; spaceId: string; } | { message: string; }' is not assignable to type '{ automationId: string; spaceId: string; } & { message: string; }'.
Type '{ automationId: string; spaceId: string; }' is not assignable to type '{ automationId: string; spaceId: string; } & { message: string; }'.
Property 'message' is missing in type '{ automationId: string; spaceId: string; }' but required in type '{ message: string; }'.
Type 'EventPayloads[T]' is not assignable to type '{ automationId: string; spaceId: string; }'.
Type 'CompletedEvent<T>["payload"]' is not assignable to type '{ automationId: string; spaceId: string; }'.
Type 'EventPayloads[T]' is not assignable to type '{ automationId: string; spaceId: string; }'.
Type '{ automationId: string; spaceId: string; } | { message: string; }' is not assignable to type '{ automationId: string; spaceId: string; }'.
Type '{ message: string; }' is missing the following properties from type '{ automationId: string; spaceId: string; }': automationId, spaceId(2322)
```
</details>
### Additional information about the issue
I've tagged the issue with `object spread operator` because [removing the spreads resolves the error](https://www.typescriptlang.org/play/?ts=5.7.2#code/C4TwDgpgBAogbhAdsACgQxAGwPZoCYDOUAvFAN4BQU1UAxtgLZiYTAQDKYATgJbIBc5KjRFoArsEZpgPbIgCSeQQWC9EAcwA0wkdQJg0tCIuWq+WnVAC+2kQSR4AshAIE06iIMq6aDF249TNQsRGworCgpQSCgAYUZmVgg8eCRgAB4AFSgIAA82REIoAGsIEGwAM1gEZHQsXEIAPhIhEQBtGCg+KEyAXS9LEWjPWFsfKAN6-EFU2owcfAIO3rGfNDAwADk0PyDzVetwtr7IirFEWhk5KGwELgB3XjYAQQ3tvyyc-IciUvKq2aoeYNAiNAAUlgItAAFskxCwlHEEiw2CkahlMo1tABKQTxJgo5KArLNbw0eiIFTkKDDTQTYH4awtKGwvDw5KWAD0nKgAHUeMBodgJFB9FwIPg6YLoBAuFxsFwoOpsC4oGh7hgAHSWcXAMRcRCtca0wY0SYLPAHURvHYjABEbBUdoOESsQA). But I don't actually know whether the bug is intrinsic to spreads. | Help Wanted,Possible Improvement | low | Critical |
2,716,566,919 | pytorch | Python 3.12 "from functorch.einops import rearrange" fails with "RuntimeError: First class dim doesn't work with python 3.12" | ### ๐ Describe the bug
Python 3.12 Nightly binary (e.g. wget https://download.pytorch.org/whl/nightly/cu126/torch-2.6.0.dev20241119%2Bcu126-cp312-cp312-linux_x86_64.whl ) fails with
```python
import functorch
from functorch.einops import rearrange
```

```
python3 -c "import functorch; from functorch.einops import rearrange"
/usr/local/lib/python3.12/dist-packages/torch/_subclasses/functional_tensor.py:295: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
  cpu = _conversion_method_template(device=torch.device("cpu"))
RuntimeError: First class dim doesn't work with python 3.12

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.12/dist-packages/functorch/einops/__init__.py", line 1, in <module>
    from .rearrange import rearrange
  File "/usr/local/lib/python3.12/dist-packages/functorch/einops/rearrange.py", line 7, in <module>
    from functorch._C import dim as _C
ImportError: initialization failed
```

```
root@~# pip list |grep torch
pytorch-triton  3.1.0+cf34004b8a
torch           2.6.0.dev20241119+cu126
```

It also reproduces with:

```
pytorch-triton  3.2.0+git35c6c7c6
torch           2.6.0.dev20241203+cu126 (i.e. nightly as of today)
```
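Until the underlying first-class-dims limitation is lifted, a version gate at least avoids the hard crash at import time. A minimal sketch (pure stdlib; the 3.12 threshold comes straight from the RuntimeError above, and the helper names are mine):

```python
import sys

def supports_first_class_dims() -> bool:
    """functorch's first-class dims (which functorch.einops.rearrange needs)
    refuse to initialize on CPython >= 3.12, per the RuntimeError above."""
    return sys.version_info < (3, 12)

def load_rearrange():
    # Defer the import so merely loading this module never crashes on 3.12.
    if not supports_first_class_dims():
        return None
    from functorch.einops import rearrange
    return rearrange
```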
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @Chillee @samdow @kshitij12345 @atalman @malfet @ptrblck @eqy @tinglvv @xwang233
### Versions
e.g. 2.6.0.dev20241119+cu126 and torch 2.6.0.dev20241203+cu126 | triaged,module: regression,module: functorch | low | Critical |
2,716,651,889 | next.js | Next lint doesn't give warning of unused dependencies in other places than pages/components | ### Link to the code that reproduces this issue
https://github.com/Harsh-Sharma25/lint-issue-next
### To Reproduce
1. pnpm i
2. pnpm lint
Then, if you run `pnpm eslint .`, you will see the difference.
### Current vs. Expected behavior
When you run `pnpm lint`, which in turn runs `next lint`, it doesn't give warnings for missing dependencies in hooks defined in the `hooks` folder, but if you define a hook inside `pages` then it does. `pnpm eslint .` gives the proper warnings.
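For context on why the location matters: by default `next lint` only runs ESLint over the `pages`, `app`, `components`, `lib`, and `src` directories (configurable via `eslint.dirs` in `next.config.js`), whereas `eslint .` walks everything. A tiny sketch of that default scoping (the helper is illustrative, not Next.js internals):

```javascript
// Sketch: why a hook in hooks/ gets no warning from `next lint` while
// `eslint .` flags it - only these top-level directories are linted by default.
const DEFAULT_LINT_DIRS = ["pages", "app", "components", "lib", "src"];

function isLintedByDefault(filePath) {
  const topDir = filePath.split("/")[0];
  return DEFAULT_LINT_DIRS.includes(topDir);
}

console.log(isLintedByDefault("pages/useThing.ts")); // true
console.log(isLintedByDefault("hooks/useThing.ts")); // false
```

Adding `hooks` to `eslint.dirs` in `next.config.js` should make `next lint` report the same warnings.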
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Home Single Language
Available memory (MB): 16077
Available CPU cores: 16
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: N/A
pnpm: 9.14.2
Relevant Packages:
next: 15.0.4-canary.37 // Latest available version is detected (15.0.4-canary.37).
eslint-config-next: 15.0.3
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Linting
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
My original project is not on canary release, it is on 15.0.3 and it also doesn't use react 19 rc, which is when I encountered this issue. I've also tested with the eslint version: 9.15.0.
| bug,Linting | low | Major |
2,716,671,833 | ui | [bug]: Syntax Issue in Generated tailwind.config.ts from `pnpm dlx shadcn@latest init` | ### Describe the bug
The `tailwind.config.ts` generated by `pnpx shadcn@latest init` contains a syntax issue: `\n` is written into the file as a literal string instead of a newline.

### Affected component/components
tailwind.config.ts
### How to reproduce
1. Create a new project with `create-remix`.
2. Run `npx shadcn@latest init`.
3. Select the New York style and the Neutral base color.
4. Check `tailwind.config.ts`.
### Codesandbox/StackBlitz link
https://stackblitz.com/edit/remix-run-remix-fop74u?file=tailwind.config.ts
### Logs
_No response_
### System Info
```bash
linux,zed editor.
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,716,748,315 | ui | [bug]: Website broken: https://ui.shadcn.com/blocks | ### Describe the bug
Blocks part of the documentation website does not work (https://ui.shadcn.com/blocks).
All blocks are blanks, console log gets filled with "Minified React error ..." messages.
### Affected component/components
Website documentation
### How to reproduce
1. Go to https://ui.shadcn.com/blocks
2. Try to view any block
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Tested in Chrome (including incognito) and Safari
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,716,759,471 | ollama | Multiple ollama_llama_server process are created and then not released | ### What is the issue?
After the system has been running for a while, `nvidia-smi` shows multiple processes using the GPU, but only one of them is actually doing work. You can confirm this with the `ollama ps` command.

Referring to the picture above, only process 2115700 is valid; clearly two other processes, 1990922 and 2036868, still occupy a fixed amount of GPU memory, while yet another process, 2117261, is still running.
### OS
Linux
### GPU
Nvidia
### CPU
Other
### Ollama version
0.3.5 | bug | low | Major |
2,716,838,146 | rust | `macro!();` statements sometimes don't seem to include `;` in `stmt.span` (HIR) | cc #133833 and #133834 in suggestions such as
```rs
#![crate_type = "lib"]
fn foo() -> String {
let mut bar = {
unknown_macro!();
};
return bar;
}
```
Opened while looking at #133843 to not lose the FIXME. | A-diagnostics,T-compiler,A-HIR,C-bug,E-needs-investigation | low | Minor |
2,716,848,689 | transformers | Multiple training runs not working with deepspeed | ### System Info
- `transformers` version: 4.46.1
- Platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0,1,2,3
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi,
I'm working on a setup where I have a frozen language model persisted in GPU memory and can fine-tune arbitrarily many adapters at run time. So my "script" is a server that receives fine-tune requests through a REST API. Each time a fine-tune request is received, I create a new PEFT model for my base model, create a new train dataset, create a new HF trainer object, train, and then save the adapter to disk.
This is basically working fine but as soon as I'm providing a deepspeed config, I'm getting the following exception on the **second** training run:
```
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/transformers/trainer.py", line 431, in __init__
[rank0]: self.create_accelerator_and_postprocess()
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/transformers/trainer.py", line 4953, in create_accelerator_and_postprocess
[rank0]: self.accelerator = Accelerator(**args)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/accelerate/accelerator.py", line 305, in __init__
[rank0]: raise NotImplementedError(
[rank0]: NotImplementedError: You cannot pass in a `deepspeed_plugin` when creating a second `Accelerator`. Please make sure the first `Accelerator` is initialized with all the plugins you want to use.
```
Obviously this is because the accelerate state is stored in some singletons. So I've tried to reset that state after each training run (in my own script) with these calls before creating the new trainer object:
```
AcceleratorState._reset_state(True)
GradientState._reset_state()
```
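As an aside on that singleton: accelerate keeps `AcceleratorState` as borg-style shared state, which a small stand-in makes concrete (the class name and logic below are a simplified illustration, not the real implementation):

```python
# Illustrative stand-in for accelerate's borg-style state: every instance
# shares one __dict__, so a plugin passed to a *second* instance is rejected
# until the shared state is explicitly reset.
class SharedAcceleratorState:
    _shared_state = {}

    def __init__(self, deepspeed_plugin=None):
        self.__dict__ = self._shared_state  # borg pattern: all instances alias this dict
        if deepspeed_plugin is not None:
            if self._shared_state.get("deepspeed_plugin") is not None:
                raise NotImplementedError(
                    "You cannot pass in a `deepspeed_plugin` when creating a second `Accelerator`."
                )
            self.deepspeed_plugin = deepspeed_plugin

    @classmethod
    def _reset_state(cls):
        cls._shared_state.clear()


SharedAcceleratorState(deepspeed_plugin="zero3")      # first Trainer: fine
try:
    SharedAcceleratorState(deepspeed_plugin="zero3")  # second Trainer: raises
except NotImplementedError as err:
    print(err)
SharedAcceleratorState._reset_state()                 # what the reset calls do
SharedAcceleratorState(deepspeed_plugin="zero3")      # now accepted again
```

This also shows why a full reset makes a second construction succeed, even though other DeepSpeed-side state is evidently not cleaned up the same way.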
This however will lead to the following exception during the second training run:
```
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/transformers/trainer.py", line 2122, in train
[rank0]: return inner_training_loop(
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/transformers/trainer.py", line 2474, in _inner_training_loop
[rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/transformers/trainer.py", line 3572, in training_step
[rank0]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank0]: File "/somewhere/3rdparty-llama-factory/src/llamafactory/train/sft/trainer.py", line 88, in compute_loss
[rank0]: loss = super().compute_loss(model, inputs, return_outputs, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/transformers/trainer.py", line 3625, in compute_loss
[rank0]: outputs = model(**inputs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]: return forward_call(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
[rank0]: ret_val = func(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1846, in forward
[rank0]: loss = self.module(*inputs, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
[rank0]: return inner()
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1790, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/peft/peft_model.py", line 1577, in forward
[rank0]: return self.base_model(
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
[rank0]: return inner()
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1790, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 188, in forward
[rank0]: return self.model.forward(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 1164, in forward
[rank0]: outputs = self.model(
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
[rank0]: return inner()
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1790, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 854, in forward
[rank0]: inputs_embeds = self.embed_tokens(input_ids)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1844, in _call_impl
[rank0]: return inner()
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1779, in inner
[rank0]: args_result = hook(self, args)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
[rank0]: ret_val = func(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 278, in _pre_forward_module_hook
[rank0]: self.pre_sub_module_forward_function(module)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 452, in pre_sub_module_forward_function
[rank0]: param_coordinator.fetch_sub_module(sub_module, forward=True)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 632, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
[rank0]: ret_val = func(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/somewhere/venv/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 317, in fetch_sub_module
[rank0]: assert param.ds_status == ZeroParamStatus.AVAILABLE, param.ds_summary()
[rank0]: AssertionError: {'id': 0, 'status': 'INFLIGHT', 'numel': 136134656, 'ds_numel': 136134656, 'shape': (151936, 896), 'ds_shape': (151936, 896), 'requires_grad': False, 'grad_shape': None, 'persist': False, 'active_sub_modules': {1308}, 'ds_tensor.shape': torch.Size([34033664])}
```
### Expected behavior
See above | bug | low | Critical |
2,716,857,348 | transformers | `trainer.evaluate` always creates a new MLFlow run, separate from the one used during `train()` | ### System Info
`transformers` version: 4.46.1
`mlflow==2.18.0`
I have confirmed this on my local machine (mac) and our training cluster (8x H100 using standard nvcr image), using a local and external MLFlow tracking server, so this is not likely to be very environment dependent outside the above.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from time import sleep
from transformers import Trainer, TrainingArguments, AutoModelForSequenceClassification, AutoTokenizer
from datasets import load_dataset
import mlflow
import os
with mlflow.start_run() as run:
run_id = run.info.run_id
os.environ["MLFLOW_RUN_ID"] = run_id
print("Original run id:", run_id)
# Load a pre-trained model and tokenizer
model_name = "distilbert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load a small dataset
dataset = load_dataset("imdb", split="train[:100]")
# Tokenize the dataset
def tokenize(batch):
return tokenizer(batch['text'], padding=True, truncation=True, max_length=128)
tokenized_dataset = dataset.map(tokenize, batched=True)
# Define training arguments
training_args = TrainingArguments(
output_dir="./results",
evaluation_strategy="epoch",
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=1,
logging_steps=1,
report_to="mlflow"
)
# Initialize Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset,
eval_dataset=tokenized_dataset,
)
# Train the model
trainer.train()
# Evaluate the model
eval_results = trainer.evaluate()
print(eval_results)
sleep(10)
runs_dir = "./mlruns/0"
runs = [x for x in os.listdir(runs_dir) if x != "meta.yaml"]
for run_id in runs:
print(f"Metric files for {run_id}")
metrics_path = os.path.join(runs_dir, run_id, "metrics")
print(*os.listdir(metrics_path), sep="\n")
print("="*16)
```
Running the above script should confirm that two separate runs are created, and that metrics files are placed in each.
### Expected behavior
I expected that all logging would be pushed to the existing run ID (unless there is a reason that this is not the case).
Apologies if this is documented behaviour, but I could not find a reason. | bug | low | Major |
2,716,897,373 | vscode | $CURRENT_TIME_UTC in snippet | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Hello,
I appreciate the functionality in vscode that enables the insertion of variables within snippets, particularly the $CURRENT_* variables that allow for the effective inclusion of date and time. This feature, including the timezone offset for indicating the current timezone, is quite useful. However, I am keen on having an additional variable, $CURRENT_*_UTC, which would provide the current date and time based on Coordinated Universal Time (UTC). | feature-request,snippets | low | Minor |
2,716,986,210 | kubernetes | When kube-apiserver list is used, the maximum message size is 4 GB. A single message may exceed the limit. | ### What happened?
When I create more than 10000 secrets, and per secret size is more than 400KB,than kube-apiserver will report warning,and my cluster can't work well

The page size of kube-apiserver list is 10000. Can I change the page size adaptively each time?

### What did you expect to happen?
cluster work well
### How can we reproduce it (as minimally and precisely as possible)?
Create more than 10,000 secrets, each larger than 400 KB.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/support,needs-sig,needs-triage | low | Major |
2,717,017,382 | rust | Incorrect rustdoc rendering of const generic default | I tried this code:
https://doc.rust-lang.org/nightly/std/mem/trait.TransmuteFrom.html
```rust
pub unsafe trait TransmuteFrom<Src, const ASSUME: Assume = { Assume::NOTHING }> where Src: ?Sized {}
```
I expected to see this happen: Renders identically to the above in the rustdoc output.
Instead, this happened: Renders as
```rust
pub unsafe trait TransmuteFrom<Src, const ASSUME: Assume = core::::mem::transmutability::TransmuteFrom::{constant#0}> where Src: ?Sized {}
```
### Meta
`rustc --version`:
```
1.85.0-nightly (c44b3d50f 2024-12-03)
``` | T-rustdoc,C-bug,A-const-generics,A-cross-crate-reexports | low | Critical |
2,717,019,871 | ui | [bug]: Issue Initializing ```shadcn``` in ```Next.js``` When axios Is Installed | ### Describe the bug
When initializing `shadcn` in a Next.js project, the process fails if `axios` is installed beforehand. The error indicates that the socket connection was unexpectedly closed during the registry check. Removing `axios` resolves the issue, and `shadcn` initializes successfully.
---
### Bug Description
- **Error Message:**
```
The socket connection was closed unexpectedly. For more information, pass `verbose: true` in the second argument to fetch().
```
- The problem seems to stem from a conflict between `axios` and the shadcn initialization process.
---
### Affected component/components
Core shadcn initialization script
### How to reproduce
1. Set up a new Next.js project.
2. Install `axios` using Bun:
```bash
bun add axios
```
3. Attempt to initialize `shadcn` using:
```bash
bunx --bun shadcn@latest init -d
```
4. Observe the error.
#### Resolution Steps:
1. Remove `axios` using:
```bash
bun remove axios
```
2. Retry the shadcn initialization:
```bash
bunx --bun shadcn@latest init -d
```
### Codesandbox/StackBlitz link
N/A
### Logs
```bash
**Before removing `axios`:**
โ Preflight checks.
โ Verifying framework. Found Next.js.
โ Validating Tailwind CSS.
โ Validating import alias.
โ Writing components.json.
โ น Checking registry.
The socket connection was closed unexpectedly. For more information, pass `verbose: true` in the second argument to fetch().
```
**After removing `axios`:**
```console
โ Preflight checks.
โ Verifying framework. Found Next.js.
โ Validating Tailwind CSS.
โ Validating import alias.
โ Writing components.json.
โ Checking registry.
โ Updating tailwind.config.ts
โ Updating app\globals.css
โ Installing dependencies.
โ Created 1 file:
- lib\utils.ts
Success! Project initialization completed.
You may now add components.
```
### System Info
```bash
- **OS:** Windows 10
- **Bun Version:** 1.1.36
- **Node Version:** 20.12.2
- **shadcn Version:** Latest (at the time of the issue)
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,717,022,661 | material-ui | Accordion unresponsive when imported as`import Accordion from @mui/material/Accordion` via esm.sh | ### Steps to reproduce
Steps:
1. Open this link to a live example: https://idx.google.com/mui-accordion-3766562 (first time using IDX, not sure whether this link works); here is a StackBlitz: https://stackblitz.com/edit/stackblitz-starters-rihbql?file=index.html
2. See that the Accordion does not work
3. Change the Accordion import lines inside the script tag to import from the global `@mui/material` instead, reload, and see that it works.
<details>
<summary> Here a copy of the standalone html code </summary>
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta name="viewport" content="width=device-width">
<meta charset="utf-8">
<meta name="theme-color" media="(prefers-color-scheme: light)" content="white">
<meta name="theme-color" media="(prefers-color-scheme: dark)" content="#2a2928">
<meta name="color-scheme" content="light dark">
<script type="importmap">
{
"imports": {
"@mui/icons-material": "https://esm.sh/@mui/[email protected]?dev&external=react,react-dom&target=es2020&keep-names",
"@mui/icons-material/": "https://esm.sh/@mui/[email protected]&dev&external=react,react-dom&target=es2020&keep-names/",
"@mui/material": "https://esm.sh/@mui/[email protected]?dev&external=react,react-dom&target=es2020",
"@mui/material/": "https://esm.sh/@mui/[email protected]&dev&external=react,react-dom&target=es2020/",
"htm": "https://esm.sh/[email protected]",
"htm/preact": "https://esm.sh/[email protected]/preact?external=preact",
"preact": "https://esm.sh/[email protected]?target=es2020",
"preact/hooks": "https://esm.sh/[email protected]/hooks?target=es2020",
"react": "https://esm.sh/[email protected]/compat?target=es2020",
"react-dom": "https://esm.sh/[email protected]/compat?target=es2020",
"react/jsx-runtime": "https://esm.sh/[email protected]/jsx-runtime?target=es2020"
}
}
</script>
</head>
<body>
<div id="app"></div>
<script type="module">
      // preact at its best - we can create simple html pages which are completely self-contained
import { render } from 'preact'
import { html } from 'htm/preact'
import Accordion from '@mui/material/Accordion'
import AccordionSummary from '@mui/material/AccordionSummary'
import AccordionDetails from '@mui/material/AccordionDetails'
// alternatively, importing it from @mui/material directly works
// import {Accordion, AccordionSummary, AccordionDetails} from "@mui/material"
import ExpandMoreIcon from '@mui/icons-material/ExpandMore'
export function App() {
return html`
<div>
<${Accordion}
>
<${AccordionSummary}
expandIcon=${html`<${ExpandMoreIcon} />`}
>
hello world
<//>
<${AccordionDetails} >
nice stuff
<//>
<//>
<${Accordion}
>
<${AccordionSummary}
expandIcon=${html`<${ExpandMoreIcon} />`}
>
and more
<//>
<${AccordionDetails} >
more stuff
<//>
<//>
</div>
`;
}
render(html`<${App} />`, document.getElementById('app'));
</script>
</body>
</html>
```
</details>
### Current behavior
Accordion is frozen with the one import style
### Expected behavior
Accordion behaves identical on both imports
### Context
This uses preact and importmaps via esm.sh, making it possible to create standalone html apps with material ui
### Your environment
see context above, especially the importmap in the html file
**Search keywords**: Accordion, esm.sh, importmaps, preact | bug ๐,component: accordion,package: material-ui | low | Minor |
2,717,034,785 | next.js | Client components wrapped in React.memo rendered by a server component remount unexpectedly | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/wispy-leftpad-9scxwl?workspaceId=ws_TBJMVdGvaGCmBYC6VerWKV
### To Reproduce
In the provided example, click the "router.refresh()" button to observe the unexpected remount behavior of memo-ed components.
Details are given directly in the provided example. For completeness, this is a copy of the explanation in the example:
<p>
All components here are client components that use the same base
component logging into the console when it is mounted and unmounted and
updating an internal counting state every 500ms. All components are
children of the same server component.
</p>
<div>
The differences are:
<ul>
<li>Normal: Nothing special, only renders the base component</li>
<li>Memo: Same as normal, but wrapped in React.memo</li>
<li>
Indirect Memo: Normal component that renders a second component that
is wrapped in React.memo
</li>
</ul>
</div>
<p>
    When you hit the "router.refresh()" button, you can observe that the
    "Memo" component is remounted (see console and reset state), which is
    unexpected behavior.
</p>
<p>
    You can observe that this is not happening for "Indirect Memo". It only
    seems to affect components wrapped in React.memo that are directly
    rendered from a server component.
</p>
### Current vs. Expected behavior
The component wrapped in React.memo remounts when the server component rerenders, here triggered with router.refresh().
The expected behavior would be that it behaves like the other components and just rerenders rather than remounting.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 8198
Available CPU cores: 4
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.4-canary.37 // Latest available version is detected (15.0.4-canary.37).
eslint-config-next: N/A
react: 19.0.0-rc-b01722d5-20241114
react-dom: 19.0.0-rc-b01722d5-20241114
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
I tested this in next 14 and 15 (15.0.4-canary.37) and was able to reproduce it with both versions. | bug | low | Minor |
2,717,081,441 | flutter | In reverse mode, the `localToGlobal` method of `SliverMainAxisGroup`'s children returns incorrect positions. | ### Steps to reproduce
```dart
import 'package:flutter/material.dart';
GlobalKey key = GlobalKey();
void main() {
runApp(
MaterialApp(
home: Scaffold(
body: Align(
alignment: Alignment.topLeft,
child: SizedBox(
height: 400,
child: CustomScrollView(
reverse: true,
slivers: [
SliverMainAxisGroup(
slivers: [
SliverToBoxAdapter(child: Container(height: 70, color: Colors.red)),
SliverToBoxAdapter(
child: SizedBox(
height: 20,
child: Text("1", key: key),
),
),
SliverToBoxAdapter(child: Container(height: 700, color: Colors.blue)),
],
),
],
),
),
),
floatingActionButton: IconButton(
onPressed: () {
RenderBox rb = key.currentContext!.findRenderObject() as RenderBox;
print(rb.localToGlobal(Offset.zero));
},
icon: const Icon(Icons.add),
),
),
),
);
}
```
Run the code above and click the button.
### Expected results
Output: Offset(0.0, 310.0).
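Where the 310.0 comes from (a hedged back-of-the-envelope calculation, assuming a scroll offset of zero and the viewport's top-left at the global origin): in a reverse viewport the first sliver is laid out from the bottom edge upward, so the Text's global top is the viewport height minus the extents laid out below it:

```python
# Assumption: scroll offset is 0 and the viewport's top-left is the
# global origin; variable names are illustrative, not Flutter API.
viewport_height = 400.0   # SizedBox height around the CustomScrollView
preceding_extent = 70.0   # red SliverToBoxAdapter, laid out first (at the bottom)
child_extent = 20.0       # SizedBox wrapping the Text

# In reverse mode, children stack upward from the bottom edge:
expected_dy = viewport_height - preceding_extent - child_extent
print(expected_dy)  # 310.0 -- matching the expected Offset(0.0, 310.0)
```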
### Actual results
Output: Offset(0.0, 70.0).
### Screenshots or Video

### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel [user-branch], 3.27.0-1.0.pre.699, on macOS 15.1 24B83 darwin-arm64, locale zh-Hans-CN)
! Flutter version 3.27.0-1.0.pre.699 on channel [user-branch] at /Users/10906/source/repo/github/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/setup.
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[!] Xcode - develop for iOS and macOS (Xcode 16.1)
! CocoaPods 1.15.2 out of date (1.16.2 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.95.3)
[✓] VS Code (version 1.94.2)
[!] Proxy Configuration
! NO_PROXY is not set
[✓] Connected device (5 available)
[✓] Network resources
! Doctor found issues in 3 categories.
```
</details>
| framework,f: scrolling,has reproducible steps,P2,has partial patch,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27 | low | Minor |
2,717,110,677 | react | Bug: useState updater application order does not comply with the documentation | React version: 16 & 18
## Steps To Reproduce
1. Call setState(functional updater) twice
2. One of the updater will be calculated before re-render.
Link to code example: https://codesandbox.io/p/sandbox/quizzical-hugle-wsp7zh
## The expected behavior
As per the [documentation of setState](https://react.dev/reference/react/useState#setstate),
> If you pass a function as nextState, it will be treated as an updater function. It must be pure, should take the pending state as its only argument, and should return the next state.
> **React will put your updater function in a queue and re-render your component. During the next render, React will calculate the next state by applying all of the queued updaters to the previous state.**
Expected behaviour is,
1. click on the button
2. enqueue two updater actions
3. the App component re-renders
4. (when visiting useState) apply two actions
5. get the updated new value
## The current behavior
However the current behavior is,
1. click on the button
2. one of the actions is calculated IMMEDIATELY, the rest are enqueued.
3. the App component re-renders
4. (when visiting useState) apply the remaining actions
5. get the updated new value
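The documented "apply all queued updaters during the next render" semantics can be simulated in plain JavaScript (a minimal sketch for illustration only — a simple reduce over the queue, not React's actual scheduler):

```javascript
// Minimal sketch of the documented semantics (an illustration, not
// React's real internals). All queued updaters are applied together,
// in order, against the previous state at render time.
function applyQueuedUpdaters(prevState, queue) {
  return queue.reduce((state, updater) => updater(state), prevState);
}

const increment = (n) => n + 1;
// Two setState(increment) calls enqueue two updaters; per the docs,
// neither should run until the next render applies the whole queue:
console.log(applyQueuedUpdaters(0, [increment, increment])); // 2
```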

## Note
Only the *first* update fails to comply; from the second update onward it works as expected.

| Status: Unconfirmed | low | Critical |
2,717,117,425 | rustdesk | Multi-Monitor problem - clicking on first window results in click on other windows | ### Bug Description
Using Linux Mint Debian Edition (LMDE 6) with 4 monitors (all different resolutions), connecting to a Windows client with 3 monitors, also at different resolutions.
On Linux: #2-Main 2560x1440 in the middle, #1 3840x2160 scaled to 150% on the left, #3 3840x2160 scaled 150% on the right, #4 1920x1080 scaled 100% top right.
On Windows: one screen 1920x1080 on the left, the main screen 2560x1440 in the middle, and a last screen 1920x1080 on the right.
Now, controlling from the Linux side and moving a window on the (Windows) main screen to the right, at some point (~3/4 of the screen) the window flicks wildly to the second Windows monitor. I tried screenshotting the behaviour, hope this helps.
Setup Linux:

Setup-Windows:

Moving Explorer-Window on Main on Windows:
Moving the window or using the mouse works fine at first, until the mouse moves too far to the right:

it now jumps to the right monitor:

Mouse still on the middle monitor going up it jumps to the left monitor:

And as a bonus all my clicks with the mouse get magically sent to the right or left monitor.
### How to Reproduce
Unsure; it could also be a bug in Linux, but it has not occurred with any other application so far.
### Expected Behavior
The mouse should not switch to the second/third monitor while still active on the main one, I guess?
### Operating system(s) on local (controlling) side and remote (controlled) side
Linux -> Windows
### RustDesk Version(s) on local (controlling) side and remote (controlled) side
1.3.3. -> 1.3.3 (also previous versions up from 1.2.6)
### Screenshots
https://github.com/user-attachments/assets/551f2443-4316-473b-9bf4-a7a76e1da768
### Additional Context
_No response_ | bug | low | Critical |
2,717,130,233 | PowerToys | [Power toys-Image resizer]: Edit Buttons in Image Resizer Not Associated with Their Names and Dimensions. | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Image Resizer
### Steps to reproduce
**Test environment:**
OS Build: Windows 11 Enterprise Insider Preview 24H2 (26120.1340)
Power toys version: v0.86.0
Screen Reader: Narrator
**Repro Steps:**
1. Open the Power Toys app in the desktop.
2. Select the Image Resizer option from left navigation pane.
3. Turn on narrator using Ctrl + Win + Enter.
4. Now navigate to the Edit buttons located below the "Add new" button.
5. Observe the issue.
**User Experience:**
Screen reader users will be unable to understand which button controls which dimension, making it difficult or impossible to resize images effectively.
**Note:**
The same issue also reproduces for the Delete button.
**Attachments:**

https://github.com/user-attachments/assets/465eb498-0f60-413f-80a0-e66c58b38ff8
**Guideline reference:**
https://www.w3.org/WAI/WCAG22/Understanding/info-and-relationships
### ✔️ Expected Behavior
Each edit button in the image resizer should have a descriptive label, either visually or programmatically, that clearly identifies its function and the dimensions it controls (e.g., "Set Width to 500px", "Set Height to 300px").
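One common fix pattern (a sketch only — the element names and bindings below are assumptions, not PowerToys' actual markup) is to give each per-row button an accessible name that carries the preset's name and dimensions, e.g. via `AutomationProperties.Name` in WinUI/XAML:

```xml
<!-- Hedged sketch: EditLabel is a hypothetical view-model property that
     would resolve to e.g. "Edit Small (854 x 480)". -->
<Button AutomationProperties.Name="{x:Bind EditLabel}"
        ToolTipService.ToolTip="{x:Bind EditLabel}">
    <SymbolIcon Symbol="Edit" />
</Button>
```

A screen reader would then announce the preset and its dimensions instead of a bare "button"; the same pattern would apply to the Delete button.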
### ❌ Actual Behavior
The edit buttons in the image resizer tool are not properly labeled or associated with the corresponding image dimensions (e.g., width, height).
### Other Software
_No response_ | Issue-Bug,Resolution-Fix Committed,Product-Image Resizer,Priority-2,Area-Accessibility,A11yE+D,A11ySev2,A11yWCAG | low | Major |
2,717,162,736 | yt-dlp | [ximalaya] The audio related to newspaper radio is unavailable. | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
China
### Provide a description that is worded well enough to be understood
The audio data under the audiobook category is functioning normally, but the audio data related to newspaper radio is unavailable, possibly due to an issue with the final parsed video format. Specifically, the audiobook category's audio content, obtained through F12 inspection, is:
https://a.xmcdn.com/group40/M0A/91/86/wKgJVFq3OXGDkz9VADwjXjjdkh4478.m4a
For the hotspot category (newspaper-related radio audio), the audio content obtained through F12 inspection is:
https://audiopay.cos.tx.xmcdn.com/download/1.0.0/storages/b6e0-audiopay/73/82/GAqhQ6cLH1IqAFAAAAM34iN2.mp3
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU --cookies cookies.txt https://www.ximalaya.com/sound/779392260
[debug] Command-line config: ['-vU', '--cookies', 'hlq_cookies.txt', 'https://www.ximalaya.com/sound/779392260']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [2b67ac300] (pip)
[debug] Python 3.9.20 (CPython x86_64 64bit) - Linux-5.4.160-3.el7.x86_64-x86_64-with-glibc2.31 (OpenSSL 3.0.15 3 Sep 2024, glibc 2.31)
[debug] exe versions: ffmpeg 4.2.7, ffprobe 4.2.7
[debug] Optional libraries: sqlite3-3.45.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[debug] Using fake IP 36.150.99.250 (CN) as X-Forwarded-For
[ximalaya] Extracting URL: https://www.ximalaya.com/sound/779392260
[ximalaya] 779392260: Downloading info json
ERROR: [ximalaya] 779392260: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "/opt/conda/envs/ximalaya/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
File "/opt/conda/envs/ximalaya/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py", line 1780, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/opt/conda/envs/ximalaya/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py", line 1839, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/opt/conda/envs/ximalaya/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py", line 2846, in process_video_result
self.raise_no_formats(info_dict)
File "/opt/conda/envs/ximalaya/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py", line 1121, in raise_no_formats
raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'],
yt_dlp.utils.ExtractorError: [ximalaya] 779392260: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
| site-bug,triage | low | Critical |