id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,680,286,630 |
ollama
|
Llama3.2 Safetensors adapter not supported?
|
### What is the issue?
Hi Guys,
First, thanks for creating such a wonderful tool. I have run into a problem that I believe is a bug, although I may be wrong and this may really be a feature request. I have fine-tuned a Llama 3.2 model on some local data and saved it in safetensors format.
I then create a Modelfile like the one below and point it at the safetensors adapter files. When I run the command
ollama create myllama3.2 --file myllama3.2.modelfile
I see the output below, which I believe indicates success:
transferring model data 100%
converting model
But when I run `ollama list`, my newly created model is not displayed. My Modelfile starts as follows, pointing at the directory where the safetensors adapter files are stored:
FROM llama3.2:1b
ADAPTER /home/shuaib/tmp/Llama-3.2-1B/
The Ollama docs and the Modelfile documentation (https://github.com/ollama/ollama/blob/main/docs/modelfile.md) say:
Currently supported Safetensor adapters:
Llama (including Llama 2, Llama 3, and Llama 3.1)
Is this why my attempt to create a model from my fine-tuned weights is failing, because Llama 3.2 safetensors adapters are not supported yet? Or am I missing something? Also, is there a plan to support Llama 3.2 safetensors adapters anytime soon?
If the lack of support for Llama 3.2 safetensors adapters is indeed the cause, should my workaround be to convert my fine-tuned safetensors adapter into GGUF format?
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.1
|
bug
|
low
|
Critical
|
2,680,288,618 |
deno
|
`deno compile --include-code-cache`
|
Riding on the new `deno compile --include`-style flags, it would be great if the compiled binary could have its own [code cache](https://github.com/denoland/deno/pull/26528#issuecomment-2491735952) included.
Right now, the code cache is only populated on the first execution and then reused on subsequent ones.
As I stated [here](https://github.com/denoland/deno/pull/26528#issuecomment-2455421250), this new flag would make a lot of sense in ephemeral environments like Docker containers, where the first execution is also the only execution, which renders the code cache rather useless in this scenario (to be fair, one can work around the issue by running the application once during `docker build`, but running an application there isn't always acceptable).
In my tests with one application, the code cache turned out to be very small:
```console
# let's make sure we don't have any prior cache
❯ deno clean
Removed /home/felipecrs/.cache/deno (7569 files, 86.04MB)
❯ ./pkgx-cc --sync
❯ deno clean
Removed /home/felipecrs/.cache/deno (4 files, 76.27KB)
```
Only 76.27 KB in this example, a negligible size increase considering the whole binary size.
Maybe it should even be made the default?
|
feat,compile
|
low
|
Minor
|
2,680,345,258 |
deno
|
`deno compile --code-cache-path`
|
With the new [code cache](https://github.com/denoland/deno/pull/26528), compiled programs end up writing to `~/.cache/deno`. This may not be desirable; the program may have its own cache directory, for example. It would also create this directory on systems where Deno isn't even installed, which could confuse end users, who may not even know what Deno is.
|
suggestion,compile
|
low
|
Major
|
2,680,389,508 |
vscode
|
Unable to interact with Code windows on macOS after window maximization
|
Does this issue occur when all extensions are disabled?: Yes
I have an intermittent issue that I only see in VS Code on Mac, although it may be partially due to Electron bugs (I ruled out an open-source window management utility called Rectangle that I am also using).
I am using multiple displays and using Rectangle to move application windows between them and to minimize or maximize them.
I haven't fully isolated the exact trigger, but at some point, if I try to maximize one of the VS Code windows, all VS Code windows become unresponsive. I can change window focus by clicking on the windows and I can open new windows from the Dock, but the menu bar and the application are unresponsive. I have to force quit VS Code, after which things return to normal.
I also can't update to versions newer than VS Code 1.85, since the Remote SSH extension in later versions is incompatible with the remote machines I connect to.
Version: 1.85.2 (Universal)
Commit: 8b3775030ed1a69b13e4f4c628c612102e30a681
Date: 2024-01-18T06:40:32.531Z (10 mos ago)
Electron: 25.9.7
ElectronBuildId: 26354273
Chromium: 114.0.5735.289
Node.js: 18.15.0
V8: 11.4.183.29-electron.0
OS: Darwin arm64 23.6.0
Sonoma 14.7.1
Steps to Reproduce:
1. Run VS Code, opening multiple remote windows
2. Maximize a window
|
info-needed
|
low
|
Critical
|
2,680,416,327 |
pytorch
|
Enhanced Feedback for `load_state_dict` with `strict=False`
|
### 🚀 The feature, motivation and pitch
When developers use `load_state_dict` with `strict=False`, they often face significant challenges, especially around debugging weight loading issues. The current behavior does not provide sufficient visibility into what weights were successfully loaded and which keys were unexpected or missing. This creates confusion, as many users unknowingly end up with mismatched or unloaded weights, leading to silent failures and hours of debugging.
This is particularly useful for people like me who load partial weights, especially for fine-tuning and domain adaptation. I end up writing my own script for this all the time, and I believe other researchers will find it very useful.
Why This is Critical
1. Lack of Feedback:
- Developers using `strict=False` are left in the dark about what was actually loaded.
- There’s no direct mechanism to verify mismatches or missing weights without additional manual inspection.
2. Silent Errors:
- If the weights are not loaded as intended, there’s no clear indication. Many users proceed without realizing their model isn’t correctly initialized.
3. Common Pitfall:
- This is a recurring issue for many developers. A seemingly harmless choice to use `strict=False` often results in broken workflows, especially for those less familiar with PyTorch’s internals.
### Alternatives
Introduce an argument to log detailed information when `load_state_dict` is called with `strict=False`. This feedback could include:
Input
```python
state_dict = torch.load("checkpoint.pth", map_location="cpu")
model.load_state_dict(state_dict, strict=False, verbose=True) # default verbose=False
print('Model Loaded')
```
Output
```python
Loaded Keys: ['layer1.weight', 'layer1.bias', 'layer2.weight']
Unexpected Keys in Checkpoint: ['deprecated_layer.weight', 'old_bias']
Missing Keys in Model: ['layer3.weight', 'layer3.bias']
Model Loaded
```
1. Verbosity: Provide a verbosity flag (e.g., verbose=True) to enable or disable this detailed output, ensuring backward compatibility for users who prefer silence.
2. Loaded Keys: Explicitly list the keys from the checkpoint that were successfully matched and loaded.
3. Unexpected Keys: Clearly display keys in the checkpoint that were not found in the model.
4. Missing Keys: Highlight model keys that have no corresponding weights in the checkpoint.
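For reference, here is a minimal sketch of how this feedback can be approximated today using the value `load_state_dict` already returns when `strict=False` (its `missing_keys`/`unexpected_keys` fields); the wrapper name and the printing format are hypothetical, mirroring the proposed output above:
```python
import torch

def load_state_dict_verbose(model: torch.nn.Module, state_dict: dict, verbose: bool = True):
    # load_state_dict(strict=False) already returns a named tuple with
    # missing_keys and unexpected_keys; the loaded keys can be derived from it.
    result = model.load_state_dict(state_dict, strict=False)
    loaded = [k for k in state_dict if k not in set(result.unexpected_keys)]
    if verbose:
        print("Loaded Keys:", loaded)
        print("Unexpected Keys in Checkpoint:", result.unexpected_keys)
        print("Missing Keys in Model:", result.missing_keys)
    return result

# state_dict = torch.load("checkpoint.pth", map_location="cpu")
# load_state_dict_verbose(model, state_dict)
```
Building the proposed `verbose=True` flag on top of this existing return value would keep the change backward compatible.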
### Additional context
Benefits of This Feature
- Transparency: Users can immediately identify loading issues without digging through checkpoint files or model definitions.
- Efficiency: Reduces debugging time significantly, especially for models with complex architectures or experimental workflows.
- Better Debugging: Makes it clear whether the mismatch is due to model changes, checkpoint errors, or other reasons.
- Improved User Experience: Aligns with the needs of both beginner and advanced users, ensuring everyone can leverage `load_state_dict` effectively.
I would be happy to contribute these changes to PyTorch and send a PR.
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
|
module: nn,triaged
|
low
|
Critical
|
2,680,423,505 |
yt-dlp
|
For the Bilibili website, yt-dlp is unable to obtain the uploader and upload date for videos that contain multiple parts (P).
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
China
### Example URLs
https://www.bilibili.com/video/BV1GJ41117iw
### Provide a description that is worded well enough to be understood
When downloading the video list, the `%(uploader)s` and `%(upload_date>%Y-%m-%d)s` fields for the P0 (playlist-level) JSON file cannot be obtained and are displayed as NA. For P1, P2, etc., the `%(uploader)s` and `%(upload_date>%Y-%m-%d)s` fields are obtained correctly.
> Note: this Bilibili video looks like a playlist, but it is actually a single video containing multiple parts (P), and all parts share the same `BVID`. Bilibili also has another type of video collection, one containing multiple independent videos where each video has a different `BVID`. For that type of collection, yt-dlp does correctly obtain the `%(uploader)s` and `%(upload_date>%Y-%m-%d)s` fields for the P0 JSON file.
How can I correctly get the `%(uploader)s` and `%(upload_date>%Y-%m-%d)s` fields for P0?
Here is a list of output filenames:
```
[NA]-[《狄仁杰断案传奇》86版(高清)]-P[000]-[NA]-[《狄仁杰断案传奇》86版(高清)].info.json
[一品豆腐坊少掌柜弟]-[《狄仁杰断案传奇》86版(高清)]-P[001]-[2019-12-12]-[《狄仁杰断案传奇》86版(高清) p01 第1集].info.json
[一品豆腐坊少掌柜弟]-[《狄仁杰断案传奇》86版(高清)]-P[002]-[2019-12-12]-[《狄仁杰断案传奇》86版(高清) p02 第2集].info.json
[一品豆腐坊少掌柜弟]-[《狄仁杰断案传奇》86版(高清)]-P[003]-[2019-12-12]-[《狄仁杰断案传奇》86版(高清) p03 第3集].info.json
[一品豆腐坊少掌柜弟]-[《狄仁杰断案传奇》86版(高清)]-P[004]-[2019-12-12]-[《狄仁杰断案传奇》86版(高清) p04 第4集].info.json
```
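To see where the fields diverge without re-running the full download, here is a minimal check via the Python API (a sketch only; the cookies and output-template options from the command above are omitted):
```python
import yt_dlp

# Compare the playlist-level info dict (what the P0 .info.json is written from)
# with the per-part entries for the same BVID.
url = "https://www.bilibili.com/video/BV1GJ41117iw"
with yt_dlp.YoutubeDL({"skip_download": True}) as ydl:
    info = ydl.extract_info(url, download=False)
    print("P0:", info.get("uploader"), info.get("upload_date"))  # the NA fields
    for entry in list(info.get("entries") or [])[:2]:
        print("part:", entry.get("uploader"), entry.get("upload_date"))
```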
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--output', '[%(uploader)s]-[%(playlist)s]-P[%(playlist_index)03d]-[%(upload_date>%Y-%m-%d)s]-[%(title)s].%(ext)s', '--cookies', 'C:\\Users\\username\\Downloads\\cookies.txt', '--write-info-json', '--skip-download', '--paths', 'G:\\test', 'https://www.bilibili.com/video/BV1GJ41117iw']
[debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [7ea278792] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19041-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-full_build-www.gyan.dev (setts), ffprobe 7.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1GJ41117iw
[BiliBili] 1GJ41117iw: Downloading webpage
[BiliBili] BV1GJ41117iw: Extracting videos in anthology
[BiliBili] Downloading playlist BV1GJ41117iw - add --no-playlist to download just the video BV1GJ41117iw
[download] Downloading playlist: 《狄仁杰断案传奇》86版(高清)
[info] Writing playlist metadata as JSON to: G:\test\[NA]-[《狄仁杰断案传奇》86版(高清)]-P[000]-[NA]-[《狄仁杰断案传奇》86版(高清)].info.json
[BiliBili] Playlist 《狄仁杰断案传奇》86版(高清): Downloading 14 items of 14
[download] Downloading item 1 of 14
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1GJ41117iw?p=1
[BiliBili] 1GJ41117iw: Downloading webpage
[BiliBili] BV1GJ41117iw: Extracting videos in anthology
[BiliBili] 78895467: Extracting chapters
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] BV1GJ41117iw_p1: Downloading 1 format(s): 100024+30280
[info] Writing video metadata as JSON to: G:\test\[一品豆腐坊少掌柜弟]-[《狄仁杰断案传奇》86版(高清)]-P[001]-[2019-12-12]-[《狄仁杰断案传奇》86版(高清) p01 第1集].info.json
```
|
site-enhancement,triage
|
low
|
Critical
|
2,680,455,827 |
pytorch
|
`torch.compile` fails with runtime error on Numpy scalar operations
|
### 🐛 Describe the bug
The following pattern shows up in a model from User Empathy Day: https://github.com/suno-ai/bark/blob/f4f32d4cd480dfec1c245d258174bc9bde3c2148/bark/generation.py#L602-L607
```python
@torch.compile(backend="eager")
def run(x):
n = int(round(np.floor(3.1)))
return x + n
run(torch.ones(1))
```
### Error logs
```
Traceback (most recent call last):
File "/Users/ryanguo99/Documents/work/scratch/test.py", line 26, in <module>
test()
File "/Users/ryanguo99/Documents/work/scratch/test.py", line 24, in test
run(torch.ones(1))
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 1403, in __call__
return self._torchdynamo_orig_callable(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 1187, in __call__
result = self._inner_convert(
^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 549, in __call__
return _compile(
^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 984, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 712, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 747, in _compile_inner
out_code = transform_code_object(code, transform)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 233, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/convert_frame.py", line 664, in transform
tracer.run()
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 2841, in run
super().run()
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 1032, in run
while self.step():
^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 944, in step
self.dispatch_table[inst.opcode](self, inst)
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 641, in wrapper
return inner_fn(self, inst)
^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 2314, in CALL
self._call(inst)
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 2308, in _call
self.call_function(fn, args, kwargs)
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/symbolic_convert.py", line 879, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/variables/builtin.py", line 1004, in call_function
return handler(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/variables/builtin.py", line 852, in builtin_dispatch
rv = fn(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/variables/builtin.py", line 772, in call_self_handler
result = self_handler(tx, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/variables/builtin.py", line 1235, in call_round
return round_method.call_function(tx, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/variables/misc.py", line 1032, in call_function
return self.obj.call_method(tx, self.name, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/variables/tensor.py", line 1334, in call_method
return NumpyNdarrayVariable.create(tx, proxy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/variables/tensor.py", line 1238, in create
return wrap_fx_proxy_cls(
^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/variables/builder.py", line 2182, in wrap_fx_proxy_cls
return _wrap_fx_proxy(
^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/variables/builder.py", line 2278, in _wrap_fx_proxy
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/utils.py", line 2377, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/utils.py", line 2312, in get_fake_value
ret_val = wrap_fake_exception(
^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/utils.py", line 1858, in wrap_fake_exception
return fn()
^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/utils.py", line 2313, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/utils.py", line 2445, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/utils.py", line 2427, in run_node
return node.target(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ryanguo99/Documents/work/pytorch/torch/_dynamo/utils.py", line 2770, in __call__
method_callable = getattr(obj, self.method)
^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <Wrapped method <original __round__>>(*(FakeTensor(..., size=(), dtype=torch.float64),), **{}):
'ndarray' object has no attribute '__round__'
from user code:
File "/Users/ryanguo99/Documents/work/scratch/test.py", line 22, in run
n = int(round(np.floor(3.1)))
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Versions
`7ced49d2ccf`, Python 3.12
cc @mruberry @rgommers @chauhang @penguinwu
|
triaged,module: numpy,oncall: pt2,dynamo-user-empathy-day
|
low
|
Critical
|
2,680,457,321 |
PowerToys
|
Fail to initialize plugins: OneNote
|
### Microsoft PowerToys version
v0.86.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
Power Toys Run - Plugin Loading Error Fail to load plugin: OneNote Please report the bug to https://aka.ms/powerToysReportBug. (For third-party plugins, please contact the plugin author.)
### ✔️ Expected Behavior
Pressing Alt+Space should open PowerToys Run and it should work (which it does), but this error is thrown.
### ❌ Actual Behavior
I got an error:
Power Toys Run - Plugin Loading Error Fail to load plugin: OneNote Please report the bug to https://aka.ms/powerToysReportBug. (For third-party plugins, please contact the plugin author.)
I do need the OneNote plugin to work in PowerToys Run.
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Critical
|
2,680,464,777 |
pytorch
|
Consider relaxing torch.cond's requirement that the outputs of each branch have matching requires_grad metadata
|
### 🐛 Describe the bug
Here is a surprising behavior I found while working on https://github.com/pytorch/pytorch/pull/140979
```python
import torch
torch.set_default_device("cuda")
def cond_relu(x):
return torch.cond(x > 0, lambda x: x, lambda _: torch.zeros_like(x), [x])
def imperative_relu(x):
if x > 0:
return x
else:
return torch.zeros_like(x)
x = -torch.ones((), requires_grad=False)
# Works
y1 = torch.nn.functional.relu(x)
# Works
y2 = imperative_relu(x)
# Works
y3 = cond_relu(x)
x = -torch.ones((), requires_grad=True)
# Works
y1 = torch.nn.functional.relu(x)
# Works
y2 = imperative_relu(x)
# Does not work
y3 = cond_relu(x)
```
When the input of cond_relu has requires_grad=True, we get the following error:
```
Traceback (most recent call last):
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 54, in graph_break_as_hard_error
return fn(*args, **kwargs)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 872, in call_function
unimplemented(
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/exc.py", line 313, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Expected branches to return tensors with same metadata. [(tensor_pair, difference)...]:[('pair0:', TensorMetadata(shape=torch.Size([]), dtype=torch.float32, requires_grad=True, stride=(), memory_format=None, is_quantized=False, qparams={}), TensorMetadata(shape=torch.Size([]), dtype=torch.float32, requires_grad=False, stride=(), memory_format=None, is_quantized=False, qparams={}))]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/dgalvez/scratch/code/asr/pytorch/repros/cond_quirks2.py", line 29, in <module>
y2 = cond_relu(x)
File "/home/dgalvez/scratch/code/asr/pytorch/repros/cond_quirks2.py", line 6, in cond_relu
return torch.cond(x > 0, lambda x: x, lambda _: torch.zeros_like(x), [x])
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_higher_order_ops/cond.py", line 207, in cond
return torch.compile(_cond_op_wrapper, backend=backend, fullgraph=True)(
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/convert_frame.py", line 1419, in __call__
return self._torchdynamo_orig_callable(
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/convert_frame.py", line 567, in __call__
return _compile(
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/convert_frame.py", line 1000, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/convert_frame.py", line 730, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/convert_frame.py", line 765, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/convert_frame.py", line 248, in _fn
return fn(*args, **kwargs)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/convert_frame.py", line 682, in transform
tracer.run()
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/symbolic_convert.py", line 2841, in run
super().run()
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/symbolic_convert.py", line 1032, in run
while self.step():
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/symbolic_convert.py", line 944, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/symbolic_convert.py", line 641, in wrapper
return inner_fn(self, inst)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/symbolic_convert.py", line 1714, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/symbolic_convert.py", line 879, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_dynamo/variables/higher_order_ops.py", line 57, in graph_break_as_hard_error
raise UncapturedHigherOrderOpError(reason + msg) from e
torch._dynamo.exc.UncapturedHigherOrderOpError: Cond doesn't work unless it is captured completely with torch.compile. Scroll up to find out what causes the graph break.
from user code:
File "/home/dgalvez/scratch/code/asr/pytorch/torch/_higher_order_ops/cond.py", line 197, in _cond_op_wrapper
return cond_op(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
The important part is this:
```
torch._dynamo.exc.Unsupported: Expected branches to return tensors with same metadata. [(tensor_pair, difference)...]:[('pair0:', TensorMetadata(shape=torch.Size([]), dtype=torch.float32, requires_grad=True, stride=(), memory_format=None, is_quantized=False, qparams={}), TensorMetadata(shape=torch.Size([]), dtype=torch.float32, requires_grad=False, stride=(), memory_format=None, is_quantized=False, qparams={}))]
```
You can see that the only difference between the branches is that one output has requires_grad=True, while the other has requires_grad=False.
This is a little unfortunate, since it doesn't match some other existing behaviors.
I don't think that simply promoting a tensor with requires_grad=False to requires_grad=True, when the corresponding output from the other branch has requires_grad=True, would be problematic. At the very least, this would not affect the ability to run the backward pass via conditional nodes in CUDA graphs. It would also match the behavior of existing code, where sometimes a gradient is simply not backpropagated based on runtime values. What do you think?
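As a data point, a possible user-side workaround (an untested sketch, assuming the branch-metadata check is the only blocker) is to have the zero branch derive its output from the operand so both branches report `requires_grad=True`:
```python
import torch

def cond_relu_workaround(x):
    # x * 0 inherits requires_grad from x, so both branch outputs should
    # carry matching requires_grad metadata, unlike torch.zeros_like(x).
    return torch.cond(x > 0, lambda x: x, lambda x: x * 0, [x])

x = -torch.ones((), requires_grad=True)
y = cond_relu_workaround(x)
```
Promoting `requires_grad` automatically would make this kind of rewrite unnecessary.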
@bohnstingl @ydwu4 @eellison
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0a0+git13535e2
Is debug build: True
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.27.7
Libc version: glibc-2.39
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 45 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 16
Stepping: 4
BogoMIPS: 4190.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 528 MiB (16 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] cudnn==1.1.0
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.3.101
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] nvidia-pytriton==0.5.3
[pip3] nvtx==0.2.10
[pip3] onnx==1.17.0
[pip3] onnx-graphsurgeon==0.3.27
[pip3] onnxruntime==1.16.3
[pip3] onnxruntime-gpu==1.16.3
[pip3] onnxscript==0.1.0.dev20240817
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.13.0
[pip3] pytorch-lightning==2.2.1
[pip3] pytorch-triton==3.0.0+901819d2b6
[pip3] torch==2.6.0a0+git13535e2
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.2.0
[pip3] torchsde==0.2.6
[pip3] triton==3.1.0
[pip3] triton-model-navigator==0.7.5
[pip3] tritonclient==2.43.0
[conda] blas 1.0 mkl
[conda] cudnn 1.1.0 pypi_0 pypi
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.10 py310h5eee18b_0
[conda] mkl_random 1.2.7 py310h1128e8f_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.3.101 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-pytriton 0.5.3 pypi_0 pypi
[conda] nvtx 0.2.10 pypi_0 pypi
[conda] open-clip-torch 2.24.0 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-lightning 2.2.1 pypi_0 pypi
[conda] pytorch-triton 3.0.0+901819d2b6 pypi_0 pypi
[conda] torch 2.6.0a0+git13535e2 dev_0 <develop>
[conda] torchdiffeq 0.2.3 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
[conda] triton-model-navigator 0.7.5 pypi_0 pypi
[conda] tritonclient 2.43.0 pypi_0 pypi
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @ydwu4 @bdhirsh @yf225
|
triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher
|
low
|
Critical
|
2,680,551,473 |
pytorch
|
Use expecttest in test_compiled_optimizers.py
|
### 🐛 Describe the bug
@ezyang pointed out that it's kind of annoying to update the kernel counts in https://github.com/pytorch/pytorch/blob/723498aab8803702dfd508be05a1ebe525112ebd/test/inductor/test_compiled_optimizers.py#L4
We should update this to use expecttest like our other tests.
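A rough sketch of what the expecttest pattern looks like (illustrative only; this is not the actual kernel-count check, and it assumes PyTorch's usual `assertExpectedInline` helper from `torch.testing._internal.common_utils`):
```python
from torch.testing._internal.common_utils import TestCase, run_tests

class TestCompiledOptimizerKernelCounts(TestCase):
    def test_kernel_count(self):
        kernel_count = 3  # stand-in for the real counter
        # With expecttest, the expected value below can be regenerated by
        # rerunning the test with EXPECTTEST_ACCEPT=1 instead of hand-editing.
        self.assertExpectedInline(str(kernel_count), """3""")

if __name__ == "__main__":
    run_tests()
```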
### Versions
N/A
cc @mruberry @ZainRizvi @chauhang @penguinwu
|
good first issue,module: tests,triaged,enhancement,actionable,oncall: pt2
|
low
|
Critical
|
2,680,578,882 |
ui
|
[bug]: dialog was not found
|
### Describe the bug
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
The component at https://ui.shadcn.com/r/styles/default/dalog.json was not found.
It may not exist at the registry. Please make sure it is a valid component.
### Affected component/components
Dialog and Alert dialog
### How to reproduce
npx shadcn@latest add dalog
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues
|
bug
|
low
|
Critical
|
2,680,584,110 |
pytorch
|
rope perf improvement
|
Tracking issue for improving perf of this kernel.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
triaged,oncall: pt2,module: inductor
|
low
|
Minor
|
2,680,586,087 |
material-ui
|
Select onFocus called way too many times
|
### Steps to reproduce
Steps:
1. Open this link to live example: https://codesandbox.io/p/sandbox/q649qz
2. Open console to see logs
3. Click into the select box
4. Click off the select box
5. Click off the select box again
### Current behavior
On the initial focus, onFocus is called 4 times. On initial blur, onFocus is called 2 more times and onBlur is not called. On second blur, onBlur is called.
### Expected behavior
On initial focus, onFocus should be called once and not again until component is blurred and refocused. onBlur should be called when input is initially blurred.
### Context
I have code associated with the onFocus event that is being called 4 times when I expect it to be called once.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: mui select onfocus multiple times
|
performance,component: select,enhancement
|
low
|
Minor
|
2,680,607,003 |
vscode
|
Terminal suggest: Make sure `cd -` completions play nice in pwsh
|
I see this coming in https://github.com/microsoft/vscode/pull/234363:
```js
{
name: '-',
description: 'Switch to the last used folder',
hidden: true,
}
```
In pwsh, `-` is actually one of `-`/`+`, and these navigate a history stack rather than just toggling to the last directory (`$OLDPWD` in bash). We could improve the wording for pwsh and add `cd +` as well (especially if `hidden` does what I think it does and only shows the entry once typed), similar to the pwsh support from the script.
When I looked into it, this history stack is internal, so unfortunately we can't resolve the folder and stick it on the right for pwsh; we might be able to do it on shells that use `$OLDPWD`.
|
feature-request,terminal-suggest
|
low
|
Minor
|
2,680,675,181 |
kubernetes
|
[FG:InPlacePodVerticalScaling] Revisit ResizeStatus and state transitions
|
/kind feature
There are several problems with the current approach to pod ResizeStatus, including:
1. Race condition setting the status: the kubelet can overwrite the Proposed status set by the API server (https://github.com/kubernetes/kubernetes/issues/125394)
2. Can technically have 2 parallel resize statuses: a {Proposed/Deferred/Infeasible} resize when desired resources != allocated resources, and an InProgress resize when allocated resources != actual resources. Currently the former just overrides the latter.
3. Separate mechanism for surfacing details related to error states (Deferred, Infeasible, stuck InProgress)
4. No status to capture desired memory limit < actual memory usage, which can lead to a resize stuck in InProgress (maybe this should move to the deferred state?)
Before graduating to Beta, we should revisit the design of ResizeStatus, and take a holistic look at the user experience.
/sig node
/priority important-soon
/milestone v1.33
/triage accepted
/cc @yujuhong
|
priority/important-soon,sig/node,kind/feature,triage/accepted
|
low
|
Critical
|
2,680,705,641 |
go
|
encoding/binary: Write does not optimise the case of native-format slices
|
### Go version
go version go1.23.3 linux/amd64
### Output of `go env` in your module/workspace:
```shell
Irrelevant, but happy to provide.
```
### What did you do?
I wrote a (large) number of slices of little-endian IEEE floats using `binary.Write` on a little-endian machine. I was expecting the library to recognise that I was writing in the native format and special-case the write. However, it did not, and the performance was about 3000× slower than expected.
https://go.dev/play/p/wNWegxHXjlG
### What did you see happen?
```
BenchmarkPutFloats-8 196766 5958 ns/op 671.41 MB/s
BenchmarkPutFloatsUnsafe-8 465501314 2.246 ns/op 1781336.18 MB/s
```
### What did you expect to see?
I was expecting the performance of the two implementations to be similar.
|
Performance,NeedsInvestigation
|
low
|
Major
|
2,680,707,844 |
ui
|
[bug]: Inconsistent Scrolling Behavior with Dialog and ScrollArea
|
### Describe the bug
When using a dialog containing a scroll area component, the expected behavior is that the scroll area should handle scrolling when the dialog is active. However, the following issues occur:
1. The scroll area within the dialog does not respond to scroll interactions.
2. In some cases, the background page (outside the dialog) scrolls instead.
3. This behavior often disrupts the overall scroll functionality of the page, rendering it inconsistent.
This issue appears to occur specifically when combining `Dialog` and `ScrollArea` components, leading to broken scroll behavior for both the dialog and the page.
### Affected component/components
Dialog, ScrollArea
### How to reproduce
here's a quick v0 code for the same: https://v0.dev/chat/8TUcPpED3by?b=b_XFselDzUnJH
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Win11, Edge.
(Note: am explicitly using JSX with vite & not TSX)
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues
|
bug
|
low
|
Critical
|
2,680,782,048 |
pytorch
|
Multiple torch.{*}_copy variants are not in generated docs/docs skiplist
|
### 🐛 Describe the bug
to track https://github.com/pytorch/pytorch/pull/140045#discussion_r1852591819
`torch.{*}_copy` variants of functions are not in the generated docs despite being public.
Some examples are `torch.expand_copy`, `torch.diagonal_copy`, etc.; see [here](https://github.com/twaclaw/pytorch/blob/9a30f686fe5496f3562e98b15b7ab15c26b5db11/torch/_torch_docs.py#L13805-L13843).
1. Is not having these in the generated docs intentional?
2. I don't see them on the [coverage test skip list](https://github.com/pytorch/pytorch/blob/main/docs/source/conf.py); are our coverage tests working as expected here?
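For quick triage, a small script that enumerates the public `*_copy` callables exposed on the `torch` module (introspection only; which of these should appear in the docs is exactly the question above):
```python
import torch

# List public torch.*_copy functions so they can be diffed against the
# generated docs and the conf.py coverage skip list.
copy_variants = sorted(
    name for name in dir(torch)
    if name.endswith("_copy") and not name.startswith("_") and callable(getattr(torch, name))
)
print(len(copy_variants))
print(copy_variants[:10])
```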
Please close if this was intentional :)
### Versions
main
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke
|
module: docs,triaged
|
low
|
Critical
|
2,680,875,769 |
go
|
crypto/tls: re-enable two TLS tests with fips140tls.Required
|
Two small TODOs that came out of [CR 629736](https://go-review.googlesource.com/c/go/+/629736):
1. In `handshake_server_test.go`, the "RSA" and "RSA with ec_point_format" subtests of `TestTLSPointFormats` are skipped when `fips140tls.Required()` is enabled, otherwise a handshake failure error is observed. This should be debugged (I suspect I'm overlooking something small) and the subtests re-enabled for FIPS TLS mode.
2. The `TestRenegotiationExtension` test is skipped when `fips140tls.Required()` is enabled due to its use of RC4 ciphersuites and the RSA 1024 test certificate hierarchy. This _should_ be possible to enable in FIPS TLS mode by replacing RC4 with an AES ciphersuite and using RSA 2048 test certs. Doing so was giving a "Server returned short message of length 7" error. This should be debugged and the test re-enabled for FIPS TLS mode.
|
NeedsFix
|
low
|
Critical
|
2,680,906,786 |
node
|
Add zlib/promise module
|
### What is the problem this feature will solve?
To work with promises, we currently need to use the `promisify` util to convert the callback-based functions.
### What is the feature you are proposing to solve the problem?
I would like to add this new module to use promises instead of callbacks, similar to the `fs/promises` module.
### What alternatives have you considered?
_No response_
|
feature request
|
low
|
Minor
|
2,680,947,072 |
deno
|
deno test `--require` or `--import` (setup file)
|
I can't really figure out how to do browser mocking in Deno.
I'm using the node command like this, which works:
`node --experimental-strip-types --test --import ./test/mock-browser.ts` ([link to code](https://github.com/birkskyum/maplibre-gl-draw/blob/01cc0dcecea8e280b9d931ca27727da812639f27/package.json#L31))
But with Deno I get:
`deno test --no-check`:
```
error: ReferenceError: document is not defined
container: document.createElement("div"),
```
I have this in my deno.json:
```
"lib": [
"deno.window",
"dom"
]
```
The mock-browser.ts being:
```ts
import MockBrowser from "mock-browser";
const mock = new MockBrowser.mocks.MockBrowser();
global.document = mock.getDocument();
global.window = {};
// Polyfill based on https://gist.github.com/paulirish/1579671
let lastTime = 0;
global.requestAnimationFrame = function (fn) {
const now = Date.now();
const nextTime = Math.max(lastTime + 16, now);
setTimeout(() => fn(lastTime = nextTime), nextTime - now);
};
```
For Node compat, I believe I need a way to do a similar `--import` in `deno test`, or another way to provide a setup file (which I believe is what [Vitest calls it](https://vitest.dev/config/#setupfiles)).
|
suggestion,testing
|
low
|
Critical
|
2,680,973,456 |
go
|
x/mobile: seccomp prevented call to disallowed arm64 system call 434
|
### Go version
go version 1.22.9 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/azaza/Library/Caches/go-build'
GOENV='/Users/azaza/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/azaza/go/pkg/mod'
GONOPROXY='github.com/anyproto/*'
GONOSUMDB='github.com/anyproto/*'
GOOS='darwin'
GOPATH='/Users/azaza/go'
GOPRIVATE='github.com/anyproto/*'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/[email protected]/1.22.9/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/[email protected]/1.22.9/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.22.9'
GCCGO='gccgo'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/Users/azaza/anytype-heart/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/9v/ytsyk_250mg_q5dlkzlg_85c0000gn/T/go-build314952532=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I am attempting to build our project library for the Android team using Go 1.23.3.
You can refer to the revert of the changes here: [Pull Request #1856](https://github.com/anyproto/anytype-heart/pull/1856/files).
### What did you see happen?
I've noticed crashes when running on Android versions below 12.
It seems this issue should have been resolved by this commit:
https://github.com/golang/go/commit/9563300f6e262589ae25c71d778bfcd646d4a750
According to the syscall table at https://gpages.juszkiewicz.com.pl/syscalls-table/syscalls.html, syscall 434 appears to be pidfd_open: https://www.man7.org/linux/man-pages/man2/pidfd_open.2.html
Android log
```
*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
2024-11-21 11:15:03.968 DEBUG Build fingerprint: 'google/sdk_gphone64_arm64/emulator64_arm64:10/QSR1.210802.001/7603624:user/release-keys'
2024-11-21 11:15:03.968 DEBUG Revision: '0'
2024-11-21 11:15:03.968 DEBUG ABI: 'arm64'
2024-11-21 11:15:03.968 DEBUG Timestamp: 2024-11-21 11:15:03+0100
2024-11-21 11:15:03.968 DEBUG pid: 2169, tid: 2234, name: ytype.app.debug >>> io.anytype.app.debug <<<
2024-11-21 11:15:03.968 DEBUG uid: 10148
2024-11-21 11:15:03.968 DEBUG signal 31 (SIGSYS), code 1 (SYS_SECCOMP), fault addr --------
2024-11-21 11:15:03.968 DEBUG Cause: seccomp prevented call to disallowed arm64 system call 434
2024-11-21 11:15:03.968 DEBUG x0 0000000000000879 x1 0000000000000000 x2 0000000000000000 x3 0000000000000000
2024-11-21 11:15:03.968 DEBUG x4 0000000000000000 x5 0000000000000000 x6 0000000000000000 x7 0000000000000002
2024-11-21 11:15:03.968 DEBUG x8 00000000000001b2 x9 0000000000000002 x10 0000000000000000 x11 0000000000000000
2024-11-21 11:15:03.968 DEBUG x12 0000000000000001 x13 0000000000000010 x14 0000000000000168 x15 0000000000000169
2024-11-21 11:15:03.968 DEBUG x16 00000040006803a0 x17 000000400068f780 x18 0000007d958a4000 x19 0000000000000070
2024-11-21 11:15:03.968 DEBUG x20 000000400068f960 x21 0000004000255980 x22 0000000000000001 x23 7a696d6974706f20
2024-11-21 11:15:03.968 DEBUG x24 0000007d9a13cbe0 x25 ffffffffffffffff x26 0000007d9a536478 x27 0000000000000000
2024-11-21 11:15:03.968 DEBUG x28 00000040000021c0 x29 000000400068f608
2024-11-21 11:15:03.968 DEBUG sp 000000400068f610 lr 0000007d97b5ea7c pc 0000007d97b49b10
2024-11-21 11:15:03.969 DEBUG
backtrace:
2024-11-21 11:15:03.969 DEBUG #00 pc 0000000001588b10 /data/app/io.anytype.app.debug-AmXN-qfNKsfLdIRgZoRbZw==/base.apk (offset 0x207c000)
2024-11-21 11:15:03.982 ConnectivityService requestNetwork for uid/pid:10148/2169 NetworkRequest [ TRACK_DEFAULT id=160, [ Capabilities: INTERNET&NOT_RESTRICTED&TRUSTED Uid: 10148] ]
2024-11-21 11:15:03.983 WifiNetworkFactory got request NetworkRequest [ TRACK_DEFAULT id=160, [ Capabilities: INTERNET&NOT_RESTRICTED&TRUSTED Uid: 10148] ] with score 60 and serial -1
2024-11-21 11:15:03.983 UntrustedWifiNetworkFactory got request NetworkRequest [ TRACK_DEFAULT id=160, [ Capabilities: INTERNET&NOT_RESTRICTED&TRUSTED Uid: 10148] ] with score 60 and serial -1
2024-11-21 11:15:03.983 PhoneSwitcherNetworkRequstListener got request NetworkRequest [ TRACK_DEFAULT id=160, [ Capabilities: INTERNET&NOT_RESTRICTED&TRUSTED Uid: 10148] ] with score 60 and serial -1
2024-11-21 11:15:04.176 /system/bin/tombstoned Tombstone written to: /data/tombstones/tombstone_01
```
### What did you expect to see?
When I use Go 1.22.9, everything works as expected (see the PR above).
TL;DR: The fix for the incorrect syscall in Go 1.23.3 doesn't seem to work.
|
NeedsInvestigation,mobile
|
low
|
Critical
|
2,680,980,753 |
pytorch
|
MPS operator coverage tracking issue (2.6+ version)
|
### 🐛 Describe the bug
### This issue is to have a centralized place to list and track work on adding support to new ops for the MPS backend.
[**PyTorch MPS Ops Project**](https://github.com/users/kulinseth/projects/1/views/1): Project to track all the ops for the MPS backend. There is a very large number of operators in PyTorch, so they are not all implemented yet. We will be prioritizing new operators based on user feedback. If possible, please also provide a link to the network or use case where the op is used.
As ops are requested, we will add them to the "*To Triage*" pool. If we get 3+ requests for an operation, then depending on its complexity and need it will be moved to the "*To be implemented*" pool. If you want to work on adding support for such an op, feel free to comment below to get assigned one. Please avoid picking up an op that is already being worked on (tracked in the "*In progress*" pool).
[Link to the wiki for details](https://github.com/pytorch/pytorch/wiki/MPS-Backend) on how to add these ops and example PRs.
[**MPS operators coverage matrix**](https://qqaatw.github.io/pytorch-mps-ops-coverage/) - The matrix covers most of the supported operators but is not exhaustive. **Please look at the `In vx.x.x` column: if the box is green, the op implementation is included in the latest release; if the box is yellow, the op implementation is in the nightly and has not yet been included in the latest release.** Before you comment below, please take a look at this matrix to make sure the operator you're requesting has not already been implemented in the nightly. More details can be found in the [readme](https://github.com/qqaatw/pytorch-mps-ops-coverage).
This is a spiritual successor to https://github.com/pytorch/pytorch/issues/77764; hopefully it will not receive requests for ops that have already been added between 1.13 and 2.6.
cc @kulinseth @albanD @DenisVieriu97 @jhavukainen
|
feature,triaged,tracker,module: mps
|
low
|
Critical
|
2,680,984,962 |
pytorch
|
Enhancement Request: improve this error message
|
### 🚀 The feature, motivation and pitch
Currently this error message confuses me, and the message itself says I could submit an enhancement request:
ERROR 2024-11-21T21:14:00.063730401Z [resource.labels.containerName: pytorch] [rank14]: File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 4047, in interpolate
ERROR 2024-11-21T21:14:00.063732261Z [resource.labels.containerName: pytorch] [rank14]: return torch._C._nn.upsample_nearest3d(input, output_size, scale_factors)
ERROR 2024-11-21T21:14:00.063734879Z [resource.labels.containerName: pytorch] [rank14]: RuntimeError: Expected output.numel() <= std::numeric_limits<int32_t>::max() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
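For context, the limit the message refers to is the int32 element-count ceiling of 2**31 - 1; a back-of-the-envelope check with a hypothetical output shape shows how easily a 3D upsample can cross it:
```python
# Hypothetical upsample_nearest3d output shape (N, C, D, H, W); any output
# with more than 2**31 - 1 elements triggers the error above.
numel = 1 * 8 * 256 * 1024 * 1024
print(numel, numel > 2**31 - 1)  # 2147483648 True
```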
### Alternatives
I mean, it's not the end of the world if no one changes this; it's a nice-to-have.
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki
|
module: nn,triaged
|
low
|
Critical
|
2,680,997,073 |
flutter
|
[in_app_purchase_storekit] Audit import headers to prevent unneeded headers from being public
|
See here: https://github.com/flutter/packages/pull/7912#discussion_r1814265121
> https://github.com/flutter/packages/blob/a87eddf0c281d5d017093125c1a8d9161257741f/packages/in_app_purchase/in_app_purchase_storekit/darwin/in_app_purchase_storekit.podspec#L21
>
> https://guides.cocoapods.org/syntax/podspec.html#public_header_files
>
> Public headers should not use file (quote) imports, but should use angle brackets. https://developer.apple.com/documentation/xcode/build-settings-reference#Quoted-Include-In-Framework-Header
>
> Also the [import order](https://google.github.io/styleguide/objcguide.html#order-of-includes) should be
>
> ```
> #import <Foundation/Foundation.h>
> #import <StoreKit/StoreKit.h>
> ```
>
> then
>
> When in doubt, open up an Apple header and copy their pattern. For example, this is StoreKit's `SKAdImpression` header:
>
> ```objc
> //
> // SKAdImpression.h
> // StoreKit
> //
> // Copyright © 2020 Apple Inc. All rights reserved.
> //
>
> #import <Foundation/Foundation.h>
> #import <StoreKit/StoreKitDefines.h>
> ```
>
|
P2,team-ios,triaged-ios
|
low
|
Major
|
2,681,009,483 |
flutter
|
Loading the same asset between several widget tests loads infinitely
|
### Steps to reproduce
1. Create a test file
2. Create a widget test and call `await rootBundle.loadString(...);`
3. Create a second widget test and call the same method **with the same asset path**
### Expected results
Both tests pass
### Actual results
The second test never completes
### Code sample
<details open><summary>Code sample</summary>
```dart
void main() {
testWidgets('load assets 1', (tester) async {
await rootBundle.loadString('assets/flutter_logo.png');
});
testWidgets('load assets 2', (tester) async {
await rootBundle.loadString('assets/flutter_logo.png');
});
}
```
</details>
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel master, 3.27.0-1.0.pre.111, on macOS 14.7.1 23H222 darwin-arm64, locale en-US)
• Flutter version 3.27.0-1.0.pre.111 on channel master at /Users/.../fvm/versions/master
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 32cb866d3a (5 weeks ago), 2024-10-18 21:33:22 +0200
• Engine revision 76d310e42c
• Dart version 3.7.0 (build 3.7.0-34.0.dev)
• DevTools version 2.40.1
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/.../Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
[✓] Connected device (6 available)
• iPhone 16 Pro (mobile) • EB579C74-6193-4E82-9ADD-20E2908F59FD • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-1 (simulator)
• iPhone 16 Pro Max (mobile) • DE1E2B51-91C5-48F3-8E5A-03FD5834F7EF • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-1 (simulator)
• iPad Pro 13-inch (M4) (mobile) • 1F462CE1-F1CE-43D4-AC7F-4A7235A3F31C • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-1 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.7.1 23H222 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.7.1 23H222 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.85
[✓] Network resources
• All expected network resources are available.
```
</details>
|
a: tests,framework,a: assets,has reproducible steps,P3,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27
|
low
|
Major
|
2,681,064,222 |
electron
|
Ability to set ELECTRON_NO_ATTACH_CONSOLE from app
|
### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
In a previous post (https://github.com/electron/electron/issues/20266) I incorrectly filed this as an Issue and not a Feature Request.
So here it is!
I'm about to launch an Electron app, and I've just discovered that when you run the app from the command line on Windows, the terminal session attaches to the log output of the Electron app. You can't quit the terminal session because doing so kills the app.
I need my app to be launched on startup of Windows without the ugly terminal window there. I would also like people to be able to script its start without needing the terminal to stay alive.
I know I can do this with the ELECTRON_NO_ATTACH_CONSOLE env var, but there doesn't seem to be a way to force this to be always on. I don't want to tell people they need to set ELECTRON_NO_ATTACH_CONSOLE=true before running the app; that's ugly.
I'm using electron-builder to build the production app. I have tried forcing `ELECTRON_NO_ATTACH_CONSOLE=true` on `process.env` at the top of my main file, before anything else loads, but that seems to do nothing.
As far as I can tell, it's not possible right now (really happy if someone can prove me wrong).
### Proposed Solution
Have the ability to configure this option (ELECTRON_NO_ATTACH_CONSOLE) from within the app without the need for an environment variable.
### Alternatives Considered
The alternative is to have the user
set ELECTRON_NO_ATTACH_CONSOLE=true in their batch files before running the app
This is not intuitive and I've never seen any other apps besides electron do this.
The end user should not have to make changes for a production app if they use batch files in their workflow.
### Additional Information
Because it is currently an environment variable, if more than one Electron app is launched from the same batch file, setting it affects the others as well, so the user must be mindful of which apps need to attach to the console and which don't.
|
enhancement :sparkles:
|
low
|
Minor
|
2,681,066,362 |
pytorch
|
DISABLED test_linear_backward_memory_usage_cuda_float32 (__main__.TestNestedTensorSubclassCUDA)
|
Platforms: rocm, linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_linear_backward_memory_usage_cuda_float32&suite=TestNestedTensorSubclassCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/33336917353).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_linear_backward_memory_usage_cuda_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_nestedtensor.py", line 3819, in test_linear_backward_memory_usage
self.assertEqual(max_after_gb, 0)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3999, in assertEqual
raise error_metas.pop()[0].to_error(
AssertionError: Scalars are not equal!
Expected 0 but got 1.
Absolute difference: 1
Relative difference: inf
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_nestedtensor.py TestNestedTensorSubclassCUDA.test_linear_backward_memory_usage_cuda_float32
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_nestedtensor.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
|
triaged,module: flaky-tests,module: nestedtensor,skipped
|
low
|
Critical
|
2,681,115,955 |
go
|
cmd/compile: cannot build runtime in coverage mode on Wasm
|
### Go version
go version go1.23.3 linux/s390x
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='s390x'
GOBIN=''
GOCACHE='/home/linux1/.cache/go-build'
GOENV='/home/linux1/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='s390x'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/linux1/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/linux1/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/snap/go/10737'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/snap/go/10737/pkg/tool/linux_s390x'
GOVCS=''
GOVERSION='go1.23.3'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/linux1/.config/go/telemetry'
GCCGO='gccgo'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -march=z196 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3325072388=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Run the following commands:
```
git clone https://github.com/Zxilly/go-size-analyzer.git
GOOS=js GOARCH=wasm go test -v -covermode=atomic -cover -coverpkg=../../...
```
The build then failed.
### What did you see happen?
```
# runtime
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:652:10: write barrier prohibited by caller; preprintpanics
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:802:77: called by gopanic
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:171:7: called by goPanicSlice3AlenU
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:253:71: called by (*bucket).stk
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:309:68: called by stkbucket
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:854:16: called by saveBlockEventStack
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:840:21: called by (*mLockProfile).store
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:773:13: called by (*mLockProfile).recordUnlock
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:819:12: write barrier prohibited by caller; (*_panic).start
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:779:9: called by gopanic
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:171:7: called by goPanicSlice3AlenU
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:253:71: called by (*bucket).stk
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:309:68: called by stkbucket
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:854:16: called by saveBlockEventStack
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:840:21: called by (*mLockProfile).store
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:773:13: called by (*mLockProfile).recordUnlock
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:872:69: write barrier prohibited by caller; (*_panic).nextDefer
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:781:24: called by gopanic
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:171:7: called by goPanicSlice3AlenU
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:253:71: called by (*bucket).stk
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:309:68: called by stkbucket
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:854:16: called by saveBlockEventStack
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:840:21: called by (*mLockProfile).store
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:773:13: called by (*mLockProfile).recordUnlock
../../_tool/go/1.23.2/s390x/src/runtime/malloc.go:1187:19: write barrier prohibited by caller; mallocgc
../../_tool/go/1.23.2/s390x/src/runtime/iface.go:360:74: called by convTnoptr
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:171:19: called by goPanicSlice3AlenU
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:253:71: called by (*bucket).stk
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:309:68: called by stkbucket
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:854:16: called by saveBlockEventStack
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:840:21: called by (*mLockProfile).store
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:773:13: called by (*mLockProfile).recordUnlock
../../_tool/go/1.23.2/s390x/src/runtime/runtime.go:172:8: write barrier prohibited by caller; (*godebugInc).IncNonDefault
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:740:26: called by gopanic
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:171:7: called by goPanicSlice3AlenU
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:253:71: called by (*bucket).stk
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:309:68: called by stkbucket
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:854:16: called by saveBlockEventStack
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:840:21: called by (*mLockProfile).store
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:773:13: called by (*mLockProfile).recordUnlock
../../_tool/go/1.23.2/s390x/src/runtime/trace.go:493:17: write barrier prohibited by caller; traceAdvance
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:795:15: called by gopanic
../../_tool/go/1.23.2/s390x/src/runtime/panic.go:171:7: called by goPanicSlice3AlenU
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:253:71: called by (*bucket).stk
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:309:68: called by stkbucket
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:854:16: called by saveBlockEventStack
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:840:21: called by (*mLockProfile).store
../../_tool/go/1.23.2/s390x/src/runtime/mprof.go:773:13: called by (*mLockProfile).recordUnlock
FAIL github.com/Zxilly/go-size-analyzer [build failed]
```
### What did you expect to see?
The build to succeed and the tests to run correctly.
|
NeedsInvestigation,arch-wasm,compiler/runtime
|
low
|
Critical
|
2,681,136,885 |
kubernetes
|
Empty pod env var always causes server-side apply conflict
|
### What happened?
I applied a manifest that contains an empty env var in a `PodTemplate` (like `env: [{ name: foo, value: "" }]`). Then, I tried to apply the same spec, with a different field-manager. This produced a conflict:
```
error: Apply failed with 1 conflict: conflict with "kubectl": .spec.containers[name="foo"].env[name="foo"].value
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
```
### What did you expect to happen?
I expect the apply to succeed without a conflict, and the field to be co-owned by both managers (since they are both applying the same value).
### How can we reproduce it (as minimally and precisely as possible)?
Apply the following manifest twice:
1. `kubectl apply --server-side -f ./repro.yaml`
2. `kubectl apply --server-side -f ./repro.yaml --field-manager test`
```yaml
apiVersion: v1
kind: Pod
metadata:
name: foo
spec:
containers:
- name: foo
image: busybox
args: ["sleep", "1000"]
env:
- name: foo
value: "" # works as expected if this field is removed, or set to a non-empty value
```
### Anything else we need to know?
I don't think this is `kubectl`-specific, it happens when using `client-go` directly as well.
The behaviour around env values is a bit strange in general - if I specify an empty env var value, the value is listed under `managedFields`, but not actually returned in the response object:
```yaml
apiVersion: v1
kind: Pod
metadata:
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:containers:
k:{"name":"foo"}:
.: {}
f:args: {}
f:env:
k:{"name":"foo"}:
.: {}
f:name: {}
f:value: {} # value is a managed field
f:image: {}
f:name: {}
name: foo
namespace: default
spec:
containers:
- args:
- sleep
- "1000"
env:
- name: foo # but there is no value in the returned object!
image: busybox
name: foo
```
### Kubernetes version
<details>
```console
> kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.31.0
```
</details>
### Cloud provider
<details>
N/A, using kind
</details>
### OS version
<details>
N/A, using kind
</details>
### Install tools
<details>
kind
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
|
kind/bug,sig/api-machinery,triage/accepted
|
low
|
Critical
|
2,681,137,485 |
pytorch
|
Memory efficient reshape for stride-0 tensors
|
### 🚀 The feature, motivation and pitch
We have many tensors with large shape, but several stride=0 dimensions, i.e. expanded/broadcasted dimensions.
If a reshape that cannot be done as a view is performed, the whole tensor is made full sized and contiguous, resulting in a huge allocation.
It would be nice if the expanded dimensions could be kept where possible.
Example:
```python
import torch
a=torch.rand(1,2,1).expand(100,2,2)
print("a", a.shape, "stride:", a.stride())
print("storage a", a.untyped_storage().nbytes(), "bytes")
b=a.reshape(-1,4)
print("b", b.shape, "stride:", b.stride())
print("storage b", b.untyped_storage().nbytes(), "bytes")
c=a[:1].reshape(-1,4).expand(100,-1)
print("c", c.shape, "stride:", c.stride())
print("storage c", c.untyped_storage().nbytes(), "bytes")
torch.testing.assert_close(b,c)
```
```
a torch.Size([100, 2, 2]) stride: (0, 1, 0)
storage a 8 bytes
b torch.Size([100, 4]) stride: (4, 1)
storage b 1600 bytes
c torch.Size([100, 4]) stride: (0, 1)
storage c 16 bytes
```
The same would apply if `a` were non-contiguous in some dimensions involved in the reshape.
Is there a smart way to do the reshape?
### Alternatives
We can manually figure out which axes will be joined or split in the reshape and undo the expand in all other stride-0 dimensions before the reshape, and expand after the reshape.
It would be nice to have this as an efficient C function available.
Unfortunately, I think changing the default reshape behaviour might be considered a breaking change, as it would then return non-contiguous tensors where it previously did not.
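For reference, a minimal sketch of that manual workaround for the pattern in the example above; it assumes the expanded leading dimension (stride 0) is kept as the first dimension of the target shape, and the helper name is just illustrative:
```python
import torch

def reshape_keep_leading_expand(t, *shape):
    # only handles the pattern above: dim 0 is expanded (stride 0) and is kept
    # as the first dimension of the target shape, so we reshape a size-1 slice
    # and re-expand instead of materializing the full tensor
    n = t.shape[0]
    if t.stride(0) == 0 and shape[0] in (-1, n):
        small = t[:1].reshape(1, *shape[1:])
        return small.expand(n, *small.shape[1:])
    return t.reshape(*shape)

a = torch.rand(1, 2, 1).expand(100, 2, 2)
b = reshape_keep_leading_expand(a, -1, 4)
print(b.shape, b.stride(), b.untyped_storage().nbytes())  # (100, 4), (0, 1), 16 bytes
```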
### Additional context
_No response_
cc @albanD
|
triaged,enhancement,module: python frontend
|
low
|
Minor
|
2,681,138,074 |
flutter
|
iOS simulator tests have confusing logspam.
|
The following logs are emitted constantly but the results seem to be benign:
```
2024-11-21 14:23:13.025619-0800 IosUnitTests[79897:22772182] [ERROR:flutter/runtime/isolate_configuration.cc(223)] application_kernel_asset or application_kernel_list_asset must be set
2024-11-21 14:23:13.025783-0800 IosUnitTests[79897:22772182] [ERROR:flutter/shell/common/engine.cc(215)] Engine run configuration was invalid.
2024-11-21 14:23:13.025912-0800 IosUnitTests[79897:22772182] [ERROR:flutter/shell/common/shell.cc(675)] Could not launch engine with configuration.
```
Invocation:
```
./flutter/testing/run_tests.py --type objc --variant host_debug_unopt_arm64 --ios-variant ios_debug_sim_unopt_arm64
```
|
a: tests,engine,P3,team-ios,triaged-ios
|
low
|
Critical
|
2,681,181,475 |
pytorch
|
Pytorch notifications
|
### 🚀 The feature, motivation and pitch
**Note: this feature is under review, this is a first rough proposal, looking for user feedback right now whether this is needed or not.**
Developers within the PyTorch ecosystem have quite some signal to deal with:
- Github notifications for cc'ed issues
- failing CI tests on HUD / monitoring for successful tests to merge
- PR reverts
- Infra failures (infra teams)
- oncall health for some repos (TTR for issues/reviews), regression for i.e. daily benchmarks
This leads to delays in merging, late response times, and frustration with the build and test system.
Additionally, some PRs fall off peoples review list, and developers end up using Dev Infra Office Hours to request reviews.
Some people rely on e-mail notifications and custom mail rules to triage their signals, some people use the Github notifications, others rely on manual pings in slack/other chat tools from colleagues.
The proposal here is to look into a general purpose notification system that users can subscribe to, that can ping either oncalls for certain metrics (TTR too long, regression detected, infra down), or individuals (your PR is ready to merge, you have failing tests to look into).
### Alternatives
- we could stick to Github notifications, ccbot, and [pytorch-probot](https://github.com/pytorch/test-infra/torchci), which subscribes people to labels
- We could split this up in multiple use cases, and make a 'stale PR' dashboard per repo, develop an infra-failure solution, and write a simple @notify bot for benchmark regressions
### Additional context
Right now we're looking to interview developers on how they deal with this workflow, how they experience developing PyTorch, and how they deal with the above-mentioned challenges. If you're interested in giving feedback, feel free to do so as a comment to this issue, or contact @wdvr.
cc @ZainRizvi @kit1980 @huydhn @clee2000
|
feature,triaged,module: devx
|
low
|
Critical
|
2,681,224,486 |
langchain
|
PydanticUndefinedAnnotation: name 'SafetySetting' is not defined using ChatVertexAI
|
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Ensure your VertexAI credentials are configured
from langchain_google_vertexai import ChatVertexAI
model = ChatVertexAI(model="gemini-1.5-flash")
model.invoke("Hello, world!")
```
### Error Message and Stack Trace (if applicable)
```python
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:815, in GenerateSchema._resolve_forward_ref(self, obj)
814 try:
--> 815 obj = _typing_extra.eval_type_backport(obj, *self._types_namespace)
816 except NameError as e:
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_typing_extra.py:534, in eval_type_backport(value, globalns, localns, type_params)
533 try:
--> 534 return _eval_type_backport(value, globalns, localns, type_params)
535 except TypeError as e:
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_typing_extra.py:558, in _eval_type_backport(value, globalns, localns, type_params)
557 try:
--> 558 return _eval_type(value, globalns, localns, type_params)
559 except TypeError as e:
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_typing_extra.py:592, in _eval_type(value, globalns, localns, type_params)
591 else:
--> 592 return typing._eval_type( # type: ignore
593 value, globalns, localns
594 )
File /opt/conda/lib/python3.10/typing.py:327, in _eval_type(t, globalns, localns, recursive_guard)
326 if isinstance(t, ForwardRef):
--> 327 return t._evaluate(globalns, localns, recursive_guard)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
File /opt/conda/lib/python3.10/typing.py:699, in ForwardRef._evaluate(self, globalns, localns, recursive_guard)
693 type_ = _type_check(
694 eval(self.__forward_code__, globalns, localns),
695 "Forward references must evaluate to types.",
696 is_argument=self.__forward_is_argument__,
697 allow_special_forms=self.__forward_is_class__,
698 )
--> 699 self.__forward_value__ = _eval_type(
700 type_, globalns, localns, recursive_guard | {self.__forward_arg__}
701 )
702 self.__forward_evaluated__ = True
File /opt/conda/lib/python3.10/typing.py:329, in _eval_type(t, globalns, localns, recursive_guard)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
--> 329 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
330 if ev_args == t.__args__:
File /opt/conda/lib/python3.10/typing.py:329, in <genexpr>(.0)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
--> 329 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
330 if ev_args == t.__args__:
File /opt/conda/lib/python3.10/typing.py:327, in _eval_type(t, globalns, localns, recursive_guard)
326 if isinstance(t, ForwardRef):
--> 327 return t._evaluate(globalns, localns, recursive_guard)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
File /opt/conda/lib/python3.10/typing.py:699, in ForwardRef._evaluate(self, globalns, localns, recursive_guard)
693 type_ = _type_check(
694 eval(self.__forward_code__, globalns, localns),
695 "Forward references must evaluate to types.",
696 is_argument=self.__forward_is_argument__,
697 allow_special_forms=self.__forward_is_class__,
698 )
--> 699 self.__forward_value__ = _eval_type(
700 type_, globalns, localns, recursive_guard | {self.__forward_arg__}
701 )
702 self.__forward_evaluated__ = True
File /opt/conda/lib/python3.10/typing.py:329, in _eval_type(t, globalns, localns, recursive_guard)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
--> 329 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
330 if ev_args == t.__args__:
File /opt/conda/lib/python3.10/typing.py:329, in <genexpr>(.0)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
--> 329 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
330 if ev_args == t.__args__:
File /opt/conda/lib/python3.10/typing.py:329, in _eval_type(t, globalns, localns, recursive_guard)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
--> 329 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
330 if ev_args == t.__args__:
File /opt/conda/lib/python3.10/typing.py:329, in <genexpr>(.0)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
--> 329 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
330 if ev_args == t.__args__:
File /opt/conda/lib/python3.10/typing.py:327, in _eval_type(t, globalns, localns, recursive_guard)
326 if isinstance(t, ForwardRef):
--> 327 return t._evaluate(globalns, localns, recursive_guard)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
File /opt/conda/lib/python3.10/typing.py:694, in ForwardRef._evaluate(self, globalns, localns, recursive_guard)
690 globalns = getattr(
691 sys.modules.get(self.__forward_module__, None), '__dict__', globalns
692 )
693 type_ = _type_check(
--> 694 eval(self.__forward_code__, globalns, localns),
695 "Forward references must evaluate to types.",
696 is_argument=self.__forward_is_argument__,
697 allow_special_forms=self.__forward_is_class__,
698 )
699 self.__forward_value__ = _eval_type(
700 type_, globalns, localns, recursive_guard | {self.__forward_arg__}
701 )
File <string>:1
NameError: name 'SafetySetting' is not defined
The above exception was the direct cause of the following exception:
PydanticUndefinedAnnotation Traceback (most recent call last)
Cell In[15], line 1
----> 1 from langchain_google_vertexai import ChatVertexAI
3 model = ChatVertexAI(model="gemini-1.5-flash")
5 model.invoke("Hello, world!")
File ~/.local/lib/python3.10/site-packages/langchain_google_vertexai/__init__.py:16
14 from langchain_google_vertexai.chains import create_structured_runnable
15 from langchain_google_vertexai.chat_models import ChatVertexAI
---> 16 from langchain_google_vertexai.embeddings import VertexAIEmbeddings
17 from langchain_google_vertexai.evaluators.evaluation import (
18 VertexPairWiseStringEvaluator,
19 VertexStringEvaluator,
20 )
21 from langchain_google_vertexai.functions_utils import (
22 PydanticFunctionsOutputParser,
23 )
File ~/.local/lib/python3.10/site-packages/langchain_google_vertexai/embeddings.py:544
540 embeddings.append(result.image_embedding)
541 return embeddings
--> 544 VertexAIEmbeddings.model_rebuild()
File /opt/conda/lib/python3.10/site-packages/pydantic/main.py:589, in BaseModel.model_rebuild(cls, force, raise_errors, _parent_namespace_depth, _types_namespace)
587 # manually override defer_build so complete_model_class doesn't skip building the model again
588 config = {**cls.model_config, 'defer_build': False}
--> 589 return _model_construction.complete_model_class(
590 cls,
591 cls.__name__,
592 _config.ConfigWrapper(config, check=False),
593 raise_errors=raise_errors,
594 ns_resolver=ns_resolver,
595 )
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_model_construction.py:658, in complete_model_class(cls, cls_name, config_wrapper, raise_errors, ns_resolver, create_model_module)
651 handler = CallbackGetCoreSchemaHandler(
652 partial(gen_schema.generate_schema, from_dunder_get_core_schema=False),
653 gen_schema,
654 ref_mode='unpack',
655 )
657 try:
--> 658 schema = cls.__get_pydantic_core_schema__(cls, handler)
659 except PydanticUndefinedAnnotation as e:
660 if raise_errors:
File /opt/conda/lib/python3.10/site-packages/pydantic/main.py:697, in BaseModel.__get_pydantic_core_schema__(cls, source, handler)
694 if not cls.__pydantic_generic_metadata__['origin']:
695 return cls.__pydantic_core_schema__
--> 697 return handler(source)
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_schema_generation_shared.py:84, in CallbackGetCoreSchemaHandler.__call__(self, source_type)
83 def __call__(self, source_type: Any, /) -> core_schema.CoreSchema:
---> 84 schema = self._handler(source_type)
85 ref = schema.get('ref')
86 if self._ref_mode == 'to-def':
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:612, in GenerateSchema.generate_schema(self, obj, from_dunder_get_core_schema)
609 schema = from_property
611 if schema is None:
--> 612 schema = self._generate_schema_inner(obj)
614 metadata_js_function = _extract_get_pydantic_json_schema(obj, schema)
615 if metadata_js_function is not None:
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:881, in GenerateSchema._generate_schema_inner(self, obj)
879 if lenient_issubclass(obj, BaseModel):
880 with self.model_type_stack.push(obj):
--> 881 return self._model_schema(obj)
883 if isinstance(obj, PydanticRecursiveRef):
884 return core_schema.definition_reference_schema(schema_ref=obj.type_ref)
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:693, in GenerateSchema._model_schema(self, cls)
681 model_schema = core_schema.model_schema(
682 cls,
683 inner_schema,
(...)
689 ref=model_ref,
690 )
691 else:
692 fields_schema: core_schema.CoreSchema = core_schema.model_fields_schema(
--> 693 {k: self._generate_md_field_schema(k, v, decorators) for k, v in fields.items()},
694 computed_fields=[
695 self._computed_field_schema(d, decorators.field_serializers)
696 for d in computed_fields.values()
697 ],
698 extras_schema=extras_schema,
699 model_name=cls.__name__,
700 )
701 inner_schema = apply_validators(fields_schema, decorators.root_validators.values(), None)
702 new_inner_schema = define_expected_missing_refs(inner_schema, recursively_defined_type_refs())
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:693, in <dictcomp>(.0)
681 model_schema = core_schema.model_schema(
682 cls,
683 inner_schema,
(...)
689 ref=model_ref,
690 )
691 else:
692 fields_schema: core_schema.CoreSchema = core_schema.model_fields_schema(
--> 693 {k: self._generate_md_field_schema(k, v, decorators) for k, v in fields.items()},
694 computed_fields=[
695 self._computed_field_schema(d, decorators.field_serializers)
696 for d in computed_fields.values()
697 ],
698 extras_schema=extras_schema,
699 model_name=cls.__name__,
700 )
701 inner_schema = apply_validators(fields_schema, decorators.root_validators.values(), None)
702 new_inner_schema = define_expected_missing_refs(inner_schema, recursively_defined_type_refs())
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:1073, in GenerateSchema._generate_md_field_schema(self, name, field_info, decorators)
1066 def _generate_md_field_schema(
1067 self,
1068 name: str,
1069 field_info: FieldInfo,
1070 decorators: DecoratorInfos,
1071 ) -> core_schema.ModelField:
1072 """Prepare a ModelField to represent a model field."""
-> 1073 common_field = self._common_field_schema(name, field_info, decorators)
1074 return core_schema.model_field(
1075 common_field['schema'],
1076 serialization_exclude=common_field['serialization_exclude'],
(...)
1080 metadata=common_field['metadata'],
1081 )
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:1261, in GenerateSchema._common_field_schema(self, name, field_info, decorators)
1257 schema = self._apply_annotations(
1258 source_type, annotations + validators_from_decorators, transform_inner_schema=set_discriminator
1259 )
1260 else:
-> 1261 schema = self._apply_annotations(
1262 source_type,
1263 annotations + validators_from_decorators,
1264 )
1266 # This V1 compatibility shim should eventually be removed
1267 # push down any `each_item=True` validators
1268 # note that this won't work for any Annotated types that get wrapped by a function validator
1269 # but that's okay because that didn't exist in V1
1270 this_field_validators = filter_field_decorator_info_by_field(decorators.validators.values(), name)
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:2051, in GenerateSchema._apply_annotations(self, source_type, annotations, transform_inner_schema)
2046 continue
2047 get_inner_schema = self._get_wrapped_inner_schema(
2048 get_inner_schema, annotation, pydantic_js_annotation_functions
2049 )
-> 2051 schema = get_inner_schema(source_type)
2052 if pydantic_js_annotation_functions:
2053 core_metadata = schema.setdefault('metadata', {})
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_schema_generation_shared.py:84, in CallbackGetCoreSchemaHandler.__call__(self, source_type)
83 def __call__(self, source_type: Any, /) -> core_schema.CoreSchema:
---> 84 schema = self._handler(source_type)
85 ref = schema.get('ref')
86 if self._ref_mode == 'to-def':
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:2032, in GenerateSchema._apply_annotations.<locals>.inner_handler(obj)
2030 from_property = self._generate_schema_from_property(obj, source_type)
2031 if from_property is None:
-> 2032 schema = self._generate_schema_inner(obj)
2033 else:
2034 schema = from_property
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:875, in GenerateSchema._generate_schema_inner(self, obj)
872 obj = ForwardRef(obj)
874 if isinstance(obj, ForwardRef):
--> 875 return self.generate_schema(self._resolve_forward_ref(obj))
877 BaseModel = import_cached_base_model()
879 if lenient_issubclass(obj, BaseModel):
File /opt/conda/lib/python3.10/site-packages/pydantic/_internal/_generate_schema.py:817, in GenerateSchema._resolve_forward_ref(self, obj)
815 obj = _typing_extra.eval_type_backport(obj, *self._types_namespace)
816 except NameError as e:
--> 817 raise PydanticUndefinedAnnotation.from_name_error(e) from e
819 # if obj is still a ForwardRef, it means we can't evaluate it, raise PydanticUndefinedAnnotation
820 if isinstance(obj, ForwardRef):
PydanticUndefinedAnnotation: name 'SafetySetting' is not defined
For further information visit https://errors.pydantic.dev/2.10/u/undefined-annotation
```
### Description
Using the exact example from the [documentation](https://python.langchain.com/docs/integrations/chat/), the error shown above occurs.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Debian 5.10.223-1 (2024-08-10)
> Python Version: 3.10.15 | packaged by conda-forge | (main, Sep 20 2024, 16:37:05) [GCC 13.3.0]
Package Information
-------------------
> langchain_core: 0.3.19
> langchain: 0.3.7
> langchain_community: 0.3.5
> langsmith: 0.1.144
> langchain_google_genai: 2.0.5
> langchain_google_vertexai: 2.0.7
> langchain_openai: 0.2.5
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> google-cloud-aiplatform: 1.73.0
> google-cloud-storage: 2.18.2
> google-generativeai: 0.8.3
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langchain-mistralai: Installed. No version info available.
> numpy: 1.25.2
> openai: 1.55.0
> orjson: 3.10.11
> packaging: 24.1
> pydantic: 2.10.0
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
|
🤖:bug,investigate
|
medium
|
Critical
|
2,681,233,440 |
pytorch
|
Consider making torch.cond return zero rather than None for the gradients of tensors that are in the not-taken branch of the if-else.
|
### 🐛 Describe the bug
While working on #140979, I noticed that torch.cond()'s current backward pass will return None for the gradients of inputs that are not used in the branch selected by `pred`.
There are a few challenges with this:
1. This makes it basically impossible to capture torch.cond with a cuda graph. By specifying that a tensor's gradient is None rather than a tensor of zeros, we cannot pass this gradient into a kernel. Therefore, we cannot capture a single static cuda graph that can handle both sides of a torch.cond, since the arguments to later kernels would depend on whether pred is true or false. We would be forced to fall back to cuda graph trees rather than using a single cuda graph for workloads that use torch.cond and need to compute gradients, which is disappointing. This argument likely also applies to other accelerators like TPUs.
Here is an example that demonstrates this point:
```python
import torch
torch.set_default_device("cuda")
def cond_branch(x, w1, w2):
return torch.cond(x > 0, lambda x: w1 * x, lambda x: w2 * x, [x])
def imperative_branch(x, w1, w2):
if x > 0:
return w1 * x
else:
return w2 * x
x = torch.ones((), requires_grad=False)
neg_x = -torch.ones((), requires_grad=False)
w1 = torch.zeros((), requires_grad=True)
w2 = torch.zeros((), requires_grad=True)
y2 = cond_branch(x, w1, w2)
grad_w = torch.autograd.grad(y2, [w1, w2], allow_unused=True)
print("GALVEZ:grads=", grad_w)
```
Prints:
```
GALVEZ:grads= (tensor(1., device='cuda:0'), None)
```
Rather than
```
GALVEZ:grads= (tensor(1., device='cuda:0'), tensor(0., device='cuda:0'))
```
2. This does not match the behavior of torch.vmap(torch.cond()), which does return a tensor of zeros. There are a few optimizers whose updates are not no-ops when the gradient is 0; one obvious example is Adam. Therefore, I think we may encounter subtle differences under a transformation that *should* retain parity in terms of semantics.
Here is an example.
```python
import torch
torch.set_default_device("cuda")
def cond_branch(x, w1, w2):
return torch.cond(x > 0, lambda x: w1 * x, lambda x: w2 * x, [x])
def imperative_branch(x, w1, w2):
if x > 0:
return w1 * x
else:
return w2 * x
x = torch.ones((), requires_grad=False)
neg_x = -torch.ones((), requires_grad=False)
w1 = torch.zeros((), requires_grad=True)
w2 = torch.zeros((), requires_grad=True)
y2 = torch.vmap(cond_branch)(torch.unsqueeze(x, 0), torch.unsqueeze(w1, 0), torch.unsqueeze(w2, 0))
grad_w = torch.autograd.grad(y2, [w1, w2], allow_unused=True)
print("GALVEZ:grads=", grad_w)
```
Prints:
```
GALVEZ:grads= (tensor(1., device='cuda:0'), tensor(0., device='cuda:0'))
```
3. Some may not care about this, but it doesn't match the behavior of jax.lax.cond. I surmise that the reason for jax.lax.cond's behavior is that TPU is a target which cannot handle gradients being set to None rather than 0, for reasons similar to cuda graphs: it would not be able to create a single static compute graph.
```python
import jax
import jax.numpy as jnp
# Define the conditional branch function using jax.lax.cond
def cond_branch(x, w1, w2):
return jax.lax.cond(
x > 0,
lambda x: w1 * x,
lambda x: w2 * x,
x,
)
# Inputs
x = jnp.ones(())
neg_x = -jnp.ones(())
w1 = jnp.zeros(())
w2 = jnp.zeros(())
w_grads = jax.grad(cond_branch, [1, 2])(x, w1, w2)
print("GALVEZ:", w_grads[0], w_grads[1])
```
Prints:
```
GALVEZ: 1.0 0.0
```
(Rather than 1.0 None)
Now, there is a "problem" with my proposal, which is that currently torch.cond does match the behavior of imperative if-else in python. See here:
```python
import torch
torch.set_default_device("cuda")
def cond_branch(x, w1, w2):
return torch.cond(x > 0, lambda x: w1 * x, lambda x: w2 * x, [x])
def imperative_branch(x, w1, w2):
if x > 0:
return w1 * x
else:
return w2 * x
x = torch.ones((), requires_grad=False)
neg_x = -torch.ones((), requires_grad=False)
w1 = torch.zeros((), requires_grad=True)
w2 = torch.zeros((), requires_grad=True)
y2 = imperative_branch(x, w1, w2)
y2.backward()
print(w1.grad)
print(w2.grad)
```
```
tensor(1., device='cuda:0')
None
```
I hope that this isn't considered to be particularly problematic, though. I think the benefits of being able to build a single static cuda graph for your workload outweigh the benefits of matching the imperative behavior.
Another option, of course, is to add a new flag to torch.cond() to make it output torch.zeros() rather than None in the situations I have described, if backwards compatibility is considered important.
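For what it's worth, an eager-mode stopgap on the user side is to materialize the missing gradients as explicit zeros after the fact. This is only a sketch reusing the setup from the first example; it does not help cuda graph capture (the motivation above), since the None still appears inside the region that would be captured, but it shows the semantics I'm proposing as the default:
```python
import torch
torch.set_default_device("cuda")

def cond_branch(x, w1, w2):
    return torch.cond(x > 0, lambda x: w1 * x, lambda x: w2 * x, [x])

x = torch.ones((), requires_grad=False)
w1 = torch.zeros((), requires_grad=True)
w2 = torch.zeros((), requires_grad=True)

y = cond_branch(x, w1, w2)
params = [w1, w2]
grads = torch.autograd.grad(y, params, allow_unused=True)
# replace the None gradient of the not-taken branch with an explicit zero tensor
grads = [g if g is not None else torch.zeros_like(p) for g, p in zip(grads, params)]
print(grads)  # [tensor(1., device='cuda:0'), tensor(0., device='cuda:0')]
```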
@ydwu4 @eellison @bohnstingl
### Versions
Not important in this case, but same as #141259
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @ydwu4 @bdhirsh @yf225
|
high priority,triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher
|
low
|
Critical
|
2,681,236,448 |
rust
|
Unexpected error when resolving bounds involving associated types
|
<!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I am trying to compile the following code
```rust
struct MyBlock;
trait Block {
type Header;
}
impl Block for MyBlock {
type Header = ();
}
trait Header {}
impl Header for () {}
trait FullBlock: Block<Header: Header> {}
impl<T> FullBlock for T where T: Block<Header: Header> {}
trait Primitives {
type Block;
}
trait FullPrimitives: Primitives<Block: FullBlock<Header = Self::Header>> {
type Header;
}
impl<T> FullPrimitives for T where T: Primitives<Block: FullBlock> {
type Header = <<T as Primitives>::Block as Block>::Header;
}
fn test<P: FullPrimitives<Block = MyBlock>>() {}
```
The compilation is failing with:
```
error[E0283]: type annotations needed: cannot satisfy `MyBlock: FullBlock`
--> src/main.rs:30:12
|
30 | fn test<P: FullPrimitives<Block = MyBlock>>() {}
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: cannot satisfy `MyBlock: FullBlock`
note: required by a bound in `FullPrimitives`
--> src/main.rs:22:41
|
22 | trait FullPrimitives: Primitives<Block: FullBlock<Header = Self::Header>> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `FullPrimitives`
For more information about this error, try `rustc --explain E0283`.
error: could not compile `my_project` (bin "my") due to 1 previous error
```
However, if I relax the `FullBlock` trait requirements it compiles fine
```diff
-trait FullBlock: Block<Header: Header> {}
-impl<T> FullBlock for T where T: Block<Header: Header> {}
+trait FullBlock: Block {}
+impl<T> FullBlock for T where T: Block {}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (c1db4dc24 2024-10-25)
binary: rustc
commit-hash: c1db4dc24267a707409c9bf2e67cf3c7323975c8
commit-date: 2024-10-25
host: aarch64-apple-darwin
release: 1.84.0-nightly
LLVM version: 19.1.1
```
|
A-trait-system,T-compiler,A-impl-trait,C-bug,fixed-by-next-solver
|
low
|
Critical
|
2,681,267,333 |
rust
|
`if` and `else` have incompatible types in a `let` statement, where `else` block's evaluation will never be assigned
|
<!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I apologize if this was already reported in a separate issue, or if this is a known issue - I wasn't sure how to best search for previous issues like this.
I also realize this might not be a "bug" per se, but the other issue templates didn't seem to quite fit either.
I tried this code:
```rust
enum Cause { Cause1, Cause2 }
struct MyErr { x: Cause }
fn main() {
_ = f();
}
fn f() -> Result<i32, MyErr> {
let res = could_fail();
let x = if let Ok(x) = res {
x
} else if let Err(e) = res {
cleanup();
return Err(e);
};
Ok(x)
}
fn could_fail() -> Result<i32, MyErr> {
// ... code that could fail and return an Err ...
Ok(0)
}
fn cleanup() {
// ... cleanup code ...
}
```
Playground: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=60acf4e59e3c6403104e01aa409aa395
I expected the code to compile successfully, since the `else if` branch unconditionally returns. Because the `else if` branch always returns, `x` will always be an `i32`.
Instead, I get this compiler error:
```
error[E0308]: `if` and `else` have incompatible types
--> src/main.rs:12:12
|
10 | let x = if let Ok(x) = res {
| ______________-
11 | | x
| | - expected because of this
12 | | } else if let Err(e) = res {
| | ____________^
13 | || cleanup();
14 | || return Err(e);
15 | || };
| || ^
| ||_____|
| |_____`if` and `else` have incompatible types
| expected `i32`, found `()`
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
|
A-diagnostics,T-compiler,D-confusing,D-terse,A-control-flow
|
low
|
Critical
|
2,681,272,541 |
rustdesk
|
Online status indicator doesn't work for remotes added via `id@server/r`
|
### Bug Description
I have a remote client that I usually connect to. The remote client is using a relay server and an ID server on its end. When I want to connect to that specific remote via its pure `<id>`, I have to change the networking settings on my client to add that relay and ID server globally. When I do that, everything works just fine and the online status indicator for that remote updates correctly.
Since the adding and deleting of relay and ID servers on my client for connecting to that remote client was becoming cumbersome, I researched this repository and found out that [an alternative method has been added in v1.3.0](https://github.com/rustdesk/rustdesk/issues/6198#issuecomment-1794694273) that allows connecting to remotes behind an ID and a relay server without having to change these settings globally.
Therefore, I gave it a shot and am now able to connect to my remote client via `<id>@<server>/r` without having to set the relay and ID servers globally on my client. Although this works great, the issue with this method is that the small online status indicator for that remote no longer updates correctly and always stays orange, even though I can connect to it just fine; with the old method the indicator correctly detects when the remote is online and turns green.
### How to Reproduce
1. Install RustDesk on 2 machines (I tested on 2 Windows machines).
2. Open up the settings on machine 1 and go to networking and assign it to a custom (public) ID and relay server.
3. Now try connecting to machine 1 from machine 2 via `<id>@<server>/r` without changing the global networking settings on machine 2.
4. You should be able to connect to machine 1 but the online status indicator won't work and it'll stay orange.
### Expected Behavior
The online status indicator should turn green when the remote behind the ID and relay server is ready to receive connections.
### Operating system(s) on local (controlling) side and remote (controlled) side
Windows 11 -> Windows 11
### RustDesk Version(s) on local (controlling) side and remote (controlled) side
1.3.0 -> 1.3.0
### Screenshots


### Additional Context
_No response_
<!-- POLAR PLEDGE BADGE START -->
## Upvote & Fund
- We're using [Polar.sh](https://polar.sh/rustdesk) so you can upvote and help fund this issue.
- We receive the funding once the issue is completed & confirmed by you.
- Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/rustdesk/rustdesk/issues/10005">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/rustdesk/rustdesk/issues/10005/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/rustdesk/rustdesk/issues/10005/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
|
bug,Fund
|
low
|
Critical
|
2,681,282,748 |
vscode
|
Code crashes after about 15 minutes
|
Type: <b>Performance Issue</b>
Hello,
I am trying to code a Python application, but VS Code keeps consistently crashing after about 15 minutes. I have a core dump from journalctl. All extensions (except the Python ones) are disabled.
Here is the beginning of the Core Dump:
Nov 22 00:19:11 debian plasmashell[79249]: <--- Last few GCs --->
Nov 22 00:19:11 debian plasmashell[79249]: [79249:0x38dc0011c000] 1375313 ms: Mark-Compact 3664.0 (4086.1) -> 3658.4 (4087.6) MB, pooled: 1 MB, 35.18 / 0.00 ms (average mu = 0.995, current mu = 0.766) task; scavenge might not succeed
Nov 22 00:19:11 debian plasmashell[79249]: [79249:0x38dc0011c000] 1375467 ms: Mark-Compact 3665.7 (4087.6) -> 3659.9 (4089.3) MB, pooled: 0 MB, 39.36 / 0.00 ms (average mu = 0.990, current mu = 0.745) task; scavenge might not succeed
Nov 22 00:19:11 debian plasmashell[79249]: <--- JS stacktrace --->
Nov 22 00:19:11 debian plasmashell[79249]: FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
Nov 22 00:19:11 debian plasmashell[79249]: ----- Native stack trace -----
Nov 22 00:19:11 debian plasmashell[79275]: [1122/001911.776191:ERROR:elf_dynamic_array_reader.h(64)] tag not found
Nov 22 00:19:11 debian plasmashell[79275]: [1122/001911.776530:ERROR:elf_dynamic_array_reader.h(64)] tag not found
Nov 22 00:19:11 debian plasmashell[79275]: [1122/001911.776682:ERROR:elf_dynamic_array_reader.h(64)] tag not found
Nov 22 00:19:11 debian plasmashell[79275]: [1122/001911.792050:ERROR:directory_reader_posix.cc(43)] opendir /home/marvin/.config/Code/Crashpad/attachments/a2b04a1d-7e42-44f8-9285-1bc787e4b08c: No such file or directory (2)
Nov 22 00:19:19 debian systemd-coredump[83388]: [🡕] Process 79249 (code) of user 1000 dumped core.
If possible, I can send the rest of the dumped core too.
This happens at almost exactly 15 minutes every time.
Last one: Consumed 14min 55.100s CPU time
Before: Consumed 16min 43.285s CPU time.
And before that: Consumed 14min 27.663s CPU time.
If you need more information, I can provide everything you need.
Thanks in advance
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Linux x64 6.1.0-27-amd64
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 3800X 8-Core Processor (16 x 2199)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|1, 2, 2|
|Memory (System)|46.96GB (38.32GB free)|
|Process Argv|--crash-reporter-id 9e41d979-2ff7-435d-b01a-c6add81964e1|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|plasma|
|XDG_CURRENT_DESKTOP|KDE|
|XDG_SESSION_DESKTOP|KDE|
|XDG_SESSION_TYPE|x11|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
0 192 83645 code main
0 0 83649 zygote
0 240 83686 gpu-process
0 0 83746 broker
0 0 83650 zygote
0 0 83652 zygote
0 337 83707 window [1] (camper.py - camper - Visual Studio Code)
0 48 83691 utility-network-service
0 192 83784 extensionHost [1]
0 0 83882 /home/marvin/.vscode/extensions/ms-python.python-2024.20.0-linux-x64/python-env-tools/bin/pet server
0 144 83796 shared-process
0 0 84070 /bin/sh -c /usr/bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
0 96 83797 fileWatcher [1]
0 96 83851 ptyHost
0 0 83868 /usr/bin/bash --init-file /usr/share/code/resources/app/out/vs/workbench/contrib/terminal/common/scripts/shellIntegration-bash.sh
0 0 83973 /usr/bin/bash --init-file /usr/share/code/resources/app/out/vs/workbench/contrib/terminal/common/scripts/shellIntegration-bash.sh
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (camper.py - camper - Visual Studio Code)
| Folder (camper): 42 files
| File types: png(19) py(8) test(3) suo(1) json(1) md(1) old(1) sh(1)
| Conf files:;
```
</details>
<details><summary>Extensions (3)</summary>
Extension|Author (truncated)|Version
---|---|---
debugpy|ms-|2024.12.0
python|ms-|2024.20.0
vscode-pylance|ms-|2024.11.2
</details>
<!-- generated by issue reporter -->
|
bug,freeze-slow-crash-leak,terminal-process
|
low
|
Critical
|
2,681,326,368 |
godot
|
Dance pad buttons detected as Joypad D-pad prevent left+right and up+down
|
### Tested versions
Reproducible in:
v4.3.stable.official.77dcf97d8
v4.2.2.stable.official.15073afe3
### System information
MXLinux 23.3; 6.1.0-11-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.38-4 (2023-08-08) x86_64 GNU/Linux
### Issue description
I'm using a Mayflash GameCube controller adapter to connect a Wii dance pad.

I mapped my inputs as seen here. Using the keyboard input, I have no problem generating left + right at same time and up + down at same time.

However, when using the dance pad:
```gdscript
var pressed = ""
if Input.is_action_pressed("Up"): pressed += "u"
if Input.is_action_pressed("Down"): pressed += "d"
if Input.is_action_pressed("Left"): pressed += "l"
if Input.is_action_pressed("Right"): pressed += "r"
print ("Pressed: " + pressed)
```
Shows that up + down and left + right will not work.
up + down ==> down. left + right ==> right.
I think this is because Godot thinks it's a d-pad and doesn't allow these combinations, while SDL and the OS level otherwise show things working.
But jstest-gtk [is this using SDL?] and sdl-jstest show all buttons working together:
https://github.com/Grumbel/sdl-jstest

### Steps to reproduce
Use any(?) dance pad. I imagine any d-pad controller would not be physically able to reproduce this. (I did try re-mapping buttons as a workaround, but failed --> perhaps you can re-map buttons the other way to succeed in reproducing if you don't have a dance pad?).
Otherwise I used: https://www.amazon.com/gp/product/B00RSXRLUE
Konami Wii dance pad. RU054

As a side note, I could not get all "directions" pressed at the same time using my keyboard arrow keys either, only two. Using WASD allowed me to get them all pressed.
Of course a dance pad should allow all buttons to be pressed at the same time.
### Minimal reproduction project (MRP)
This simple application will allow you to see the udlr (up, down, left, right) output printed in the console.
[dance_pad_test.zip](https://github.com/user-attachments/files/17853242/dance_pad_test.zip)
There's also:
Joypads (Gamepads) Demo: https://godotengine.org/asset-library/asset/2785
|
bug,needs testing,topic:input
|
medium
|
Critical
|
2,681,423,521 |
go
|
crypto/tls: reject TLS 1.3 empty NewSessionTicket messages
|
We are currently failing the new `SendEmptySessionTicket-TLS13` BoGo test. Harmless but worth fixing.
|
NeedsFix
|
low
|
Minor
|
2,681,429,030 |
deno
|
`deno task` should support caching based on input/output
|
I really like that I'm able to remove more and more dependencies using Deno!
The v2.1 release is awesome. Perhaps this has already been discussed internally, but I couldn't find any tickets here, so here we go.
With the help of task inputs and outputs it will be possible to cache task results. [NX](https://nx.dev/recipes/running-tasks/configure-inputs) and [Turbo repo](https://turbo.build/repo/docs/crafting-your-repository/caching) uses similar model to achieve task caching.
The high-level API looks like this (a rough sketch of the flow follows the lists below):
1. User defines task inputs and outputs
2. Deno hashes task inputs and outputs
3. Deno saves outputs in internal cache
Next time task is run:
1. Deno hashes input files again
2. Deno compares new input hash with the previous input hash.
3a. If the input hashes don't match, Deno clears previous cached content (output) and runs task again.
3b. If the input hashes match, Deno hashes the output files and compares them with the previous output hash. If the output hashes don't match, restore the output contents from the cache. If the output hashes match, do nothing.
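To make the model concrete, here is a rough sketch of the caching decision described above (Python is used purely for illustration; the names, cache layout, and API are made up and not an existing Deno feature):
```python
import hashlib
import json
from pathlib import Path

CACHE = Path(".task-cache")  # hypothetical cache location

def hash_files(paths):
    # stable content hash over a set of files (the "input hash" / "output hash" above)
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(p.encode())
        h.update(Path(p).read_bytes())
    return h.hexdigest()

def run_cached(task_name, inputs, outputs, run):
    CACHE.mkdir(exist_ok=True)
    state_file = CACHE / "state.json"
    state = json.loads(state_file.read_text()) if state_file.exists() else {}
    in_hash = hash_files(inputs)
    prev = state.get(task_name)
    if prev and prev["inputs"] == in_hash:
        # step 3b: inputs unchanged; outputs would be restored from the cache here
        print(f"{task_name}: cache hit, skipping")
        return
    # step 3a: inputs changed (or first run); drop the stale entry and run the task
    print(f"{task_name}: cache miss, running")
    run()  # the actual task command
    state[task_name] = {"inputs": in_hash, "outputs": hash_files(outputs)}
    state_file.write_text(json.dumps(state))
```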
|
suggestion,task runner
|
low
|
Minor
|
2,681,433,913 |
deno
|
[CLI] Network access can support representing IP segments
|
For example, I want to prohibit scripts from accessing the local area network
```bash
deno run --allow-net --deny-net=127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 script.ts
```
|
permissions,suggestion
|
low
|
Minor
|
2,681,458,020 |
tauri
|
[bug] Cannot connect android device to local nor remote server
|
### Describe the bug
The Android app is calling localhost instead of my local server's IP. I followed the Tauri documentation and set up the frontend, vite.config, and tauri.conf.json based on it. I'm running with `pnpm tauri android dev --host`. Calling the remote server gives me another error even though I allowed the `tauri:localhost` origin.
`Access to XMLHttpRequest at 'https://api.redacted' from origin 'http://tauri.localhost/' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.`
On iOS everything works fine. None of the relevant issues provide a solution.
### Reproduction
_No response_
### Expected behavior
Calls to localhost should be routed to my local server's IP
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 15.1.1 x86_64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-apple-darwin (default)
- node: 22.11.0
- pnpm: 9.13.2
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-store 🦀: 2.1.0
- @tauri-apps/plugin-store : 2.1.0
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
- tauri-plugin-fs 🦀: 2.0.3
- @tauri-apps/plugin-fs : 2.0.2
- tauri-plugin-log 🦀: 2.0.2
- @tauri-apps/plugin-log : 2.0.0
- tauri-plugin-dialog 🦀: 2.0.3
- @tauri-apps/plugin-dialog : 2.0.1
- tauri-plugin-global-shortcut 🦀: 2.0.1
- @tauri-apps/plugin-global-shortcut : 2.0.0
[-] App
- build-type: bundle
- CSP:
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
Device running Android 14
|
type: bug,status: needs triage,platform: Android
|
low
|
Critical
|
2,681,460,070 |
langchain
|
langchain-chroma== 0.1.4 method get_by_ids is listed in documentation BUT I am getting NotImplementedError
|
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
#----------------
# HuggingFace embedding (no issue)
from langchain_huggingface import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(model="sentence-transformers/all-mpnet-base-v2")
#----------------
# create a langchain-chroma persistent client with collection name 'example_collection' (no issue)
from langchain_chroma import Chroma
vector_store = Chroma(
    collection_name="example_collection",  # the collection is the "table" in the vector store
    embedding_function=embeddings,  # the HuggingFace embeddings created in the previous step
    persist_directory="./vectorstore/chroma_langchain_db",  # where to save data locally; remove if not necessary
)
#----------------
# add at least one document into the vector collection (no issue)
from uuid import uuid4
from langchain_core.documents import Document
document_1 = Document(
    page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
    metadata={"source": "tweet"},
    id=1,
)
documents = [
    document_1,
]
uuids = [str(uuid4()) for _ in range(len(documents))]
vector_store.add_documents(documents=documents, ids=uuids)
#---------------- ERROR ENCOUNTERED when running get_by_ids
# attempting to run get_by_ids yields NotImplementedError
vector_store.get_by_ids(['6314982d-455f-47cc-bf97-6e5324f6af62'])
```
### Error Message and Stack Trace (if applicable)
{
"name": "NotImplementedError",
"message": "Chroma does not yet support get_by_ids.",
"stack": "---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[87], line 3
1 # testing get the first two document ids
2 # ids = ['db1e5f74-f18d-4765-a193-d30eaed7552f', '12861b34-df54-4e40-8e1e-ae9ea901d378']
----> 3 vector_store.get_by_ids(['6314982d-455f-47cc-bf97-6e5324f6af62'])
5 # get_by_ids() functionality is not avaiable until v0.2.11
File ~/Documents/0_-_Python_Projects/05_Gen_AI/venv_3_11/lib/python3.11/site-packages/langchain_core/vectorstores/base.py:164, in VectorStore.get_by_ids(self, ids)
140 \"\"\"Get documents by their IDs.
141
142 The returned documents are expected to have the ID field set to the ID of the
(...)
161 .. versionadded:: 0.2.11
162 \"\"\"
163 msg = f\"{self.__class__.__name__} does not yet support get_by_ids.\"
--> 164 raise NotImplementedError(msg)
NotImplementedError: Chroma does not yet support get_by_ids."
}
### Description
I am just trying to run the vector_store method `get_by_ids`; it is listed as one of the available methods [here](https://python.langchain.com/api_reference/chroma/vectorstores/langchain_chroma.vectorstores.Chroma.html).
### System Info
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:00 PDT 2024; root:xnu-10063.141.2~1/RELEASE_X86_64
> Python Version: 3.11.10 (main, Nov 19 2024, 15:24:32) [Clang 12.0.0 (clang-1200.0.32.29)]
Package Information
-------------------
> langchain_core: 0.3.19
> langchain: 0.3.7
> langchain_community: 0.3.4
> langsmith: 0.1.143
> langchain_chroma: 0.1.4
> langchain_experimental: 0.3.3
> langchain_groq: 0.2.1
> langchain_huggingface: 0.1.2
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.6
> async-timeout: Installed. No version info available.
> chromadb: 0.5.20
> dataclasses-json: 0.6.7
> fastapi: 0.115.5
> groq: 0.12.0
> httpx: 0.27.2
> httpx-sse: 0.4.0
> huggingface-hub: 0.26.2
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.11
> packaging: 24.2
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> sentence-transformers: 3.3.1
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tokenizers: 0.20.3
> transformers: 4.46.3
> typing-extensions: 4.12.2
|
Ɑ: vector store
|
low
|
Critical
|
2,681,508,371 |
tauri
|
[feat] iOS: theme resolution / subscription
|
### Describe the problem
Currently, there is no way to get the current theme (color mode) on iOS.
The `Webview` object doesn't have the same methods as on desktop, e.g. `onThemeChanged()` and `theme()` (and I'm not sure it reacts to the `tauri://theme-changed` event; I couldn't reproduce it on the simulator).
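For reference, a minimal sketch of the desktop-style usage that has no iOS counterpart today (assuming the `@tauri-apps/api` v2 window module; the logging is only for illustration):
```typescript
import { getCurrentWindow } from "@tauri-apps/api/window";

const appWindow = getCurrentWindow();

// Resolve the current theme once ("light" | "dark" | null).
console.log("current theme:", await appWindow.theme());

// Subscribe to theme changes; on iOS this whole path is unavailable because
// the Window API throws "Window does not exist on Mobile client".
const unlisten = await appWindow.onThemeChanged(({ payload: theme }) => {
  console.log("theme changed to:", theme);
});
// Call unlisten() later to stop listening.
```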
### Describe the solution you'd like
`Window` should not throw `Window does not exist on Mobile client` / `Webview` should offer theme-related methods.
### Alternatives considered
No "system" theme on iOS.
### Additional context
_No response_
|
type: feature request,platform: Linux,platform: Android
|
low
|
Minor
|
2,681,523,294 |
PowerToys
|
Administrator protection/adminless mode breaks relaunch
|
### Microsoft PowerToys version
0.86.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
1. Install a 27* series canary build of Windows
2. Enable admin protection as detailed [here](https://techcommunity.microsoft.com/blog/windows-itpro-blog/administrator-protection-on-windows-11/4303482)
3. Launch PowerToys as admin and set it to always launch as admin
4. Reboot
### ✔️ Expected Behavior
PowerToys launches automatically at startup
### ❌ Actual Behavior
Nothing happens; PowerToys has to be launched manually
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Minor
|
2,681,578,290 |
go
|
crypto: test fips140=only mode
|
`fips140=only` from #70123 breaks any non-FIPS cryptography. Testing a mode designed to break things is tricky.
Running the whole test suite is prohibitive. Instead, we should probably write a dedicated test that goes through things that are expected to work, and things that are not expected to work.
|
Testing,NeedsFix
|
low
|
Minor
|
2,681,579,961 |
tensorflow
|
Floating point exception (core dumped) in `tf.raw_ops.Reshape`
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.17.0
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Under specific inputs, `tf.raw_ops.Reshape` triggered a crash.
### Standalone code to reproduce the issue
```python
import tensorflow as tf
tensor = tf.constant(-3.5e+35, shape=[5], dtype=tf.float32)
shape = tf.constant([0, 1879048192, 100000000, 1610612736, -1], dtype=tf.int32)
tf.raw_ops.Reshape(tensor=tensor, shape=shape)
```
### Relevant log output
```shell
Floating point exception (core dumped)
```
|
stat:awaiting tensorflower,type:bug,comp:ops,2.17
|
medium
|
Critical
|
2,681,590,573 |
go
|
net/http: TestTransportRemovesH2ConnsAfterIdle/h2 failures
|
```
#!watchflakes
default <- pkg == "net/http" && test == "TestTransportRemovesH2ConnsAfterIdle/h2"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730607166746745185)):
=== RUN TestTransportRemovesH2ConnsAfterIdle/h2
2024/11/22 02:11:36 Error enabling Transport HTTP/2 support: protocol https already registered
transport_test.go:4259: got error: Get "https://127.0.0.1:46039": http2: client conn could not be established
--- FAIL: TestTransportRemovesH2ConnsAfterIdle/h2 (0.01s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
|
NeedsInvestigation
|
low
|
Critical
|
2,681,639,444 |
ant-design
|
rc-slider supports the startPoint prop, so why is it missing from antd Slider's TypeScript definitions and documentation? It actually works.
|
### Reproduction link
[](https://codesandbox.io/p/sandbox/ji-ben-antd-5-21-6-forked-tnjzqf)
### Steps to reproduce
```tsx
<Slider
min={-50}
max={50}
startPoint={0}
/>
```
### What is expected?
TypeScript should not report an error for the startPoint prop, and the documentation should be updated.
### What is actually happening?
TypeScript reports an error, and the documentation does not mention this prop.
| Environment | Info |
| --- | --- |
| antd | undefined |
| React | 18 |
| System | windows 11 |
| Browser | chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
🗣 Discussion,Inactive
|
low
|
Minor
|
2,681,663,533 |
neovim
|
vim.lsp.rpc.request is nil
|
### Problem
Running `:h vim.lsp.rpc.request` shows the documentation for the function.
However, I cannot use it because `:lua =vim.lsp.rpc.request` outputs `nil`.
### Steps to reproduce
```bash
nvim --clean
```
```vim
:h vim.lsp.rpc.request " Shows the documentation for the function
:lua =vim.lsp.rpc.request " Returns nil
```
### Expected behavior
`vim.lsp.rpc.request` should be a function, I guess, instead of `nil`
### Nvim version (nvim -v)
NVIM v0.11.0-dev-1205+g07db909eb5 Build type: RelWithDebInfo LuaJIT 2.1.1731601260
### Vim (not Nvim) behaves the same?
No, Vim doesn't have built-in LSP client
### Operating system/version
Kubuntu 24.04.1
### Terminal name/version
Konsole 23.08.5
### $TERM environment variable
xterm-256colors
### Installation
snap
|
bug,documentation,lsp
|
low
|
Minor
|
2,681,667,391 |
pytorch
|
[Inductor] Different results with Conv2d and BN2d not in `eval mode`
|
### 🐛 Describe the bug
If I use Inductor to compile `Conv2d` and `BN2d` **without** `eval mode`, the results are inconsistent.
The problem seems to be with BN2d.
```python
import torch
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=2, dilation=2, groups=1)
self.bn = nn.BatchNorm2d(16)
self.global_avg_pool = nn.AdaptiveAvgPool2d((1, 1))
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
x = self.global_avg_pool(x)
return x
m = Model()
x = torch.randn(1, 3, 128, 128) # With the increase of height and width, the error becomes more obvious
output = m(x)
compiled_model = torch.compile(m)
c_output = compiled_model(x)
print(output)
print(c_output)
res = torch.allclose(output, c_output)
print(res)
```
<details>
<summary>click here to see the err log</summary>
```
tensor([[[[-8.6147e-09]],
[[ 9.8953e-10]],
[[-3.4925e-10]],
[[ 4.0745e-09]],
[[-3.9581e-09]],
[[-1.3970e-09]],
[[ 1.2107e-08]],
[[ 4.6566e-10]],
[[-2.2119e-09]],
[[ 5.5879e-09]],
[[ 9.5170e-09]],
[[-6.9849e-10]],
[[ 1.1525e-08]],
[[-1.6298e-09]],
[[-2.1537e-09]],
[[ 5.1223e-09]]]], grad_fn=<MeanBackward1>)
```
```
tensor([[[[ 2.1268e-08]],
[[ 5.0095e-08]],
[[ 4.3936e-08]],
[[-1.0681e-07]],
[[-5.3694e-07]],
[[-1.0622e-07]],
[[-1.0726e-06]],
[[-9.5417e-08]],
[[ 4.2513e-07]],
[[-8.6298e-08]],
[[ 2.0579e-07]],
[[ 2.8708e-08]],
[[ 1.4902e-07]],
[[-5.8193e-08]],
[[-1.7331e-07]],
[[-2.9289e-08]]]], grad_fn=<CompiledFunctionBackward>)
```
```
False
```
</details>
If there is a problem with how I'm using this, feel free to let me know :)
### Versions
<details>
<summary>click here to view the version env</summary>
Collecting environment information...
PyTorch version: 2.6.0.dev20241115+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.994
BogoMIPS: 4999.98
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241115+cu124
[pip3] torchaudio==2.5.0.dev20241115+cu124
[pip3] torchvision==0.20.0.dev20241115+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241115+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241115+cu124 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241115+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
</details>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
|
triaged,oncall: pt2,module: inductor
|
low
|
Critical
|
2,681,667,499 |
angular
|
Need a splice like method in Reactive Form Arrays.
|
### Which @angular/* package(s) are relevant/related to the feature request?
forms
### Description
There should be a way to delete a range of elements, from one index up to another specific index.
For example, I have 100 elements in a reactive FormArray and need to remove the elements at indices 30 to 60; currently I have to handle this in a for loop (see the sketch below).
Since removeAt() is already provided, it would be very helpful to also provide this feature.
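For reference, a minimal sketch of the loop-based workaround needed today (the helper name `removeRange` is made up; it only wraps the existing `removeAt()`):
```typescript
import { FormArray } from '@angular/forms';

// Remove the elements at indices start..end (inclusive) from a FormArray.
// Iterating backwards keeps the remaining indices stable during removal.
function removeRange(array: FormArray, start: number, end: number): void {
  for (let i = end; i >= start; i--) {
    array.removeAt(i);
  }
}
```
A built-in range removal could presumably also avoid emitting a separate value-change event for every individual `removeAt()` call.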
### Proposed solution
FormArray.removeSplice(start, end);
something like that.
### Alternatives considered
None for now.
|
area: forms
|
low
|
Minor
|
2,681,672,192 |
godot
|
Autowrapping Label within AspectRatioContainer causes crash
|
### Tested versions
Godot_v4.3-stable_mono_win64
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6081) - Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz (16 Threads)
### Issue description
A Label with Autowrap inside an AspectRatioContainer too small to fit it will crash not only a project but the Godot editor itself.
If you're experiencing this issue and it hasn't been resolved yet, setting the Text Overrun Behavior of the label to anything other than "Trim Nothing" will prevent the crash. This is not a full workaround though, as it forces the label to shrink to its minimum size when in a container with other controls, which may not be your desired behavior.
### Steps to reproduce
1. Create a new project
2. Create a new scene with root node of type AspectRatioContainer and set its anchor preset to full rect
3. Nest a Label node inside the AspectRatioContainer node
4. Set the Label's Custom Minimum Size to (32,32)
5. Set the Label's Autowrap Mode to Arbitrary (also occurs on Word and Word (Smart))
6. Set the Label's 'Text' property to a string long enough such that it overflows the bounds of the container
-Expected Result-
The label expands the bounds of the container to accommodate, similarly to how a label with Autowrap off will expand the container.
-Result-
The Godot editor stops responding and crashes
### Minimal reproduction project (MRP)
[aspectratiocontainerissue.zip](https://github.com/user-attachments/files/17856666/aspectratiocontainerissue.zip)
|
bug,crash,topic:gui
|
low
|
Critical
|
2,681,723,790 |
kubernetes
|
scheduler: removed the deprecated metric scheduler_cache_size in v1.33
|
scheduler: removed the deprecated metric scheduler_scheduler_cache_size in v1.33
more detail: https://github.com/kubernetes/kubernetes/pull/128810#discussion_r1851486291
|
kind/cleanup,sig/scheduling,needs-triage
|
low
|
Minor
|
2,681,818,678 |
go
|
crypto/internal/fips140test:exe_external: unrecognized failures
|
```
#!watchflakes
default <- pkg == "crypto/internal/fips140test:exe_external" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730628651224099649)):
FAIL crypto/internal/fips140test [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
|
NeedsInvestigation,arch-riscv
|
medium
|
Critical
|
2,681,836,643 |
rust
|
ICE: unstable fingerprints for evaluate_obligation
|
### Code
I don't know what I was writing when it happened, and I made many changes afterwards without noticing. However, the code I was writing wasn't anything crazy like 1035935 repeated vec! macros or something; I was editing my RNG crate.
### Affected release channels
- [ ] Previous Stable
- [ ] Current Stable
- [ ] Current Beta
- [x] Current Nightly
### Rust Version
```Shell
rustc 1.84.0-nightly (8adb4b30f 2024-11-13)
binary: rustc
commit-hash: 8adb4b30f40e6fbd21dc1ba26c3301c7eeb6de3c
commit-date: 2024-11-13
host: x86_64-pc-windows-msvc
release: 1.84.0-nightly
LLVM version: 19.1.3
```
### Current error output
```Shell
-
```
### Backtrace
```Shell
-
```
### Anything else?
[rustc-ice-2024-11-22T03_08_30-5056.txt](https://github.com/user-attachments/files/17861764/rustc-ice-2024-11-22T03_08_30-5056.txt)
[rustc-ice-2024-11-22T03_08_30-11016.txt](https://github.com/user-attachments/files/17861763/rustc-ice-2024-11-22T03_08_30-11016.txt)
|
I-ICE,T-compiler,A-incr-comp,C-bug,S-needs-repro
|
low
|
Critical
|
2,681,937,344 |
go
|
os/user: TestLookupGroupIdServiceAccount failures
|
```
#!watchflakes
default <- pkg == "os/user" && test == "TestLookupGroupIdServiceAccount"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8730598809761491761)):
=== RUN TestLookupGroupIdServiceAccount
=== PAUSE TestLookupGroupIdServiceAccount
— [watchflakes](https://go.dev/wiki/Watchflakes)
|
OS-Windows,NeedsInvestigation
|
low
|
Critical
|
2,682,061,679 |
next.js
|
notfound() uses fallback instead of app/not-found.tsx with route group (turbopack)
|
### Link to the code that reproduces this issue
https://github.com/twillhorn/next_not_found_bug
### To Reproduce
1. next dev --turbopack
2. go to localhost:3000 and notice that the fallback 404 page is shown instead of not-found.tsx
### Current vs. Expected behavior
Expected: not-found.tsx to be used
Actual: fallback not found page is used
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:00:32 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6030
Available memory (MB): 18432
Available CPU cores: 11
Binaries:
Node: 20.16.0
npm: 10.8.1
Yarn: N/A
pnpm: 9.12.3
Relevant Packages:
next: 15.0.4-canary.23 // Latest available version is detected (15.0.4-canary.23).
eslint-config-next: N/A
react: 19.0.0-rc-380f5d67-20241113
react-dom: 19.0.0-rc-380f5d67-20241113
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation, Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_
|
bug,Navigation,Turbopack
|
low
|
Critical
|
2,682,127,836 |
ollama
|
Don't try to parse images property for non image model
|
### What is the issue?
Some clients misbehave, as demonstrated in this third-party client: https://github.com/longy2k/obsidian-bmo-chatbot/issues/105#issuecomment-2487862451. They send image paths instead of a base64 representation of the image.
While this client misbehaves and the issue should be fixed on their side, this is also something I expect Ollama to catch: the linked comment shows that llama3.2 is used, which is not an image-capable model. Therefore the images property should not be evaluated and no error messages should be returned.
This is a low-priority nice-to-have and in no way required.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.2
|
bug
|
low
|
Critical
|
2,682,173,975 |
opencv
|
Refresh stitching module
|
### Describe the feature and motivation
1. Update features after features2d reorg. Use SIFT by default.
2. Use modern LevMarq implementation from 5.x instead of CvLevMarq.
3. Handle bool masks
4. Use USAC
5. DNN-based features and matching (?)
### Additional context
_No response_
|
feature,category: stitching
|
low
|
Minor
|
2,682,202,626 |
godot
|
`CTRL+C` and `CTRL+V` of `EditorProperty::shortcut_input` hide shortcuts of `SceneTreeDock`
|
### Tested versions
- Reproducible in 4.4.dev5, 4.4.dev4
### System information
Windows 11 - Godot v4.4.dev5 - Windows 10.0.22631 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 32.0.15.6094) - 13th Gen Intel(R) Core(TM) i5-13600KF (20 threads)
### Issue description
1. Click a node in SceneTreeDock
2. Click a property name of the node in Inspector
3. Click this node in SceneTreeDock again (you can click on the icon or hide button to avoid renaming)
4. CTRL+C
5. CTRL+V (you will notice there is a message in the output and the node is not duplicated)
https://github.com/user-attachments/assets/499da9a0-f248-426e-a873-1abf0adf4d67
In the video, CTRL+V is triggered at 10s, 11s, 13s, and 23s.
### Steps to reproduce
See above
### Minimal reproduction project (MRP)
NA
|
bug,topic:editor,usability
|
low
|
Minor
|
2,682,245,032 |
vscode
|
Formatter messes up indentation (even its own)
|
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.95.3
Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813
Date: 2024-11-13T14:50:04.152Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0
Steps to Reproduce:
1. Open the attached TypeScript file in VS Code. Or not, since this site inexplicably won't let me attach the source file. So put the code below in a .ts file and open it in VS Code.
2. Set the default formatter to the built-in JavaScript/TypeScript one.
3. In the formatting settings, have "format on paste" off.
4. Select a full `if` statement and right-click, then say "format selection." Note that the resulting _indentation_ is correct.
5. Put the insertion point in front of the brace on the first line of the `if` statement and press Return. Note incorrect indentation.
6. Undo that and highlight a whole `if` statement, and then cut it.
7. Paste. Note incorrect indentation of the braces and body. If you turn "format on paste" **_on_**, this erroneous pasting behavior stops. But it shouldn't happen with "format on paste" **_off_** either, because you're cutting and pasting entire lines, all the way from the left margin. No reformatting is necessary.
Screen grab attached: https://github.com/user-attachments/assets/b52f28cd-4163-4921-97c0-40e8a88ab7a3
I'm not using any third-party formatter. Also, the JavaScript/TypeScript formatting options are being ignored. Screen shot of those options attached too.
This page won't let me attach the source file (fix that), so I'll paste its contents here:
```typescript
class CommManager
{
public sendOTP(OTP: string, EMail: string, phoneNbr: string): void
{
if (!OTP || (!EMail && !phoneNbr))
{
console.error("Missing OTP or E-mail & phone number for user verification.");
return;
}
// This doesn't preclude sending to both, if we want to someday.
if(EMail)
{
console.log(`Sending OTP to ${EMail}.`);
}
if(phoneNbr)
{
console.log(`Sending OTP to ${phoneNbr}.`);
}
}
}
export const commMgr = new CommManager();
```
|
bug,typescript,javascript,editor-autoindent
|
low
|
Critical
|
2,682,254,414 |
vscode
|
Allow selecting extensions when copying a profile
|
Type: <b>Feature Request</b>
v:1.95.3
When creating a new profile and choosing to copy from the default profile, there is no option to select which individual extensions to include in the Extensions section; the only choices are all or none. What, then, is the difference between a new profile created this way and the original one?
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.22631
Modes:

<!-- generated by issue reporter -->
|
feature-request,user-profiles
|
medium
|
Major
|
2,682,259,723 |
flutter
|
[google_maps_flutter] Maps clustering won't cluster items on mobile
|
### What package does this bug report belong to?
google_maps_flutter
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
_flutterfire_internals:
dependency: transitive
description:
name: _flutterfire_internals
sha256: "71c01c1998c40b3af1944ad0a5f374b4e6fef7f3d2df487f3970dbeadaeb25a1"
url: "https://pub.dev"
source: hosted
version: "1.3.46"
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
csslib:
dependency: transitive
description:
name: csslib
sha256: "09bad715f418841f976c77db72d5398dc1253c21fb9c0c7f0b0b985860b2d58e"
url: "https://pub.dev"
source: hosted
version: "1.0.2"
cupertino_icons:
dependency: "direct main"
description:
name: cupertino_icons
sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
url: "https://pub.dev"
source: hosted
version: "1.0.8"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
firebase_auth:
dependency: "direct main"
description:
name: firebase_auth
sha256: "49c356bac95ed234805e3bb928a86d5b21a4d3745d77be53ecf2d61409ddb802"
url: "https://pub.dev"
source: hosted
version: "5.3.3"
firebase_auth_platform_interface:
dependency: transitive
description:
name: firebase_auth_platform_interface
sha256: "9bc336ce673ea90a9dbdb04f0e9a3e52a32321898dc869cdefe6cc0f0db369ed"
url: "https://pub.dev"
source: hosted
version: "7.4.9"
firebase_auth_web:
dependency: transitive
description:
name: firebase_auth_web
sha256: "56dcce4293e2a2c648c33ab72c09e888bd0e64cbb1681a32575ec9dc9c2f67f3"
url: "https://pub.dev"
source: hosted
version: "5.13.4"
firebase_core:
dependency: "direct main"
description:
name: firebase_core
sha256: "2438a75ad803e818ad3bd5df49137ee619c46b6fc7101f4dbc23da07305ce553"
url: "https://pub.dev"
source: hosted
version: "3.8.0"
firebase_core_platform_interface:
dependency: transitive
description:
name: firebase_core_platform_interface
sha256: e30da58198a6d4b49d5bce4e852f985c32cb10db329ebef9473db2b9f09ce810
url: "https://pub.dev"
source: hosted
version: "5.3.0"
firebase_core_web:
dependency: transitive
description:
name: firebase_core_web
sha256: f967a7138f5d2ffb1ce15950e2a382924239eaa521150a8f144af34e68b3b3e5
url: "https://pub.dev"
source: hosted
version: "2.18.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_plugin_android_lifecycle:
dependency: transitive
description:
name: flutter_plugin_android_lifecycle
sha256: "9b78450b89f059e96c9ebb355fa6b3df1d6b330436e0b885fb49594c41721398"
url: "https://pub.dev"
source: hosted
version: "2.0.23"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
google_maps:
dependency: transitive
description:
name: google_maps
sha256: "4d6e199c561ca06792c964fa24b2bac7197bf4b401c2e1d23e345e5f9939f531"
url: "https://pub.dev"
source: hosted
version: "8.1.1"
google_maps_flutter:
dependency: "direct main"
description:
name: google_maps_flutter
sha256: "209856c8e5571626afba7182cf634b2910069dc567954e76ec3e3fb37f5e9db3"
url: "https://pub.dev"
source: hosted
version: "2.10.0"
google_maps_flutter_android:
dependency: transitive
description:
name: google_maps_flutter_android
sha256: bccf64ccbb2ea672dc62a61177b315a340af86b0228564484b023657544a3fd5
url: "https://pub.dev"
source: hosted
version: "2.14.11"
google_maps_flutter_ios:
dependency: transitive
description:
name: google_maps_flutter_ios
sha256: "6f798adb0aa1db5adf551f2e39e24bd06c8c0fbe4de912fb2d9b5b3f48147b02"
url: "https://pub.dev"
source: hosted
version: "2.13.2"
google_maps_flutter_platform_interface:
dependency: "direct main"
description:
name: google_maps_flutter_platform_interface
sha256: a951981c22d790848efb9f114f81794945bc5c06bc566238a419a92f110af6cb
url: "https://pub.dev"
source: hosted
version: "2.9.5"
google_maps_flutter_web:
dependency: transitive
description:
name: google_maps_flutter_web
sha256: ff39211bd25d7fad125d19f757eba85bd154460907cd4d135e07e3d0f98a4130
url: "https://pub.dev"
source: hosted
version: "0.5.10"
html:
dependency: transitive
description:
name: html
sha256: "1fc58edeaec4307368c60d59b7e15b9d658b57d7f3125098b6294153c75337ec"
url: "https://pub.dev"
source: hosted
version: "0.15.5"
http_parser:
dependency: transitive
description:
name: http_parser
sha256: "2aa08ce0341cc9b354a498388e30986515406668dbcc4f7c950c3e715496693b"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
sanitize_html:
dependency: transitive
description:
name: sanitize_html
sha256: "12669c4a913688a26555323fb9cec373d8f9fbe091f2d01c40c723b33caa8989"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
stream_transform:
dependency: transitive
description:
name: stream_transform
sha256: "14a00e794c7c11aa145a170587321aedce29769c08d7f58b1d141da75e3b1c6f"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
typed_data:
dependency: transitive
description:
name: typed_data
sha256: f9049c039ebfeb4cf7a7104a675823cd72dba8297f264b6637062516699fa006
url: "https://pub.dev"
source: hosted
version: "1.4.0"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
sdks:
dart: ">=3.5.3 <4.0.0"
flutter: ">=3.24.0"
```
</details>
### Steps to reproduce
1. Open the app with some markers
2. Zoom out and see that they're not getting clustered
The same code works on web, but not on Android or iOS.
According to [android maps utils](https://github.com/googlemaps/android-maps-utils), which was referenced in https://github.com/flutter/packages/pull/4319, I added the `implementation 'com.google.maps.android:android-maps-utils:3.9.0'` dependency to `build.gradle`, but that does not seem to change anything.
### Expected results
The markers should be clustered.
Running the same code on Chrome clusters them; see the screenshot below.

### Actual results
The markers are not getting clustered
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';
const clusterManagerID = ClusterManagerId('1');
class GoogleMapScreen extends StatelessWidget {
const GoogleMapScreen({super.key});
@override
Widget build(BuildContext context) {
final Set<Marker> markers = {
Marker(
markerId: const MarkerId('1'),
clusterManagerId: clusterManagerID,
position: const LatLng(1.2830173, 103.8513365),
onTap: () {},
),
Marker(
markerId: const MarkerId('2'),
clusterManagerId: clusterManagerID,
position: const LatLng(1.2830173, 103.8413365),
onTap: () {},
),
};
return GoogleMap(
initialCameraPosition: const CameraPosition(target: LatLng(1.2830173, 103.8413365), zoom: 14),
clusterManagers: {
ClusterManager(
clusterManagerId: clusterManagerID,
onClusterTap: (cluster) {},
),
},
onMapCreated: (controller) {},
markers: markers,
);
}
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
Initial Map load all looks ok

When scrolling out it won't cluster the 2 markers

</details>
### Logs
<details open><summary>Logs</summary>
```console
D/zzcc ( 4604): preferredRenderer: null
I/Google Android Maps SDK( 4604): Google Play services package version: 244433035
I/Google Android Maps SDK( 4604): Google Play services maps renderer version(maps_core): 244125202
I/m.ccu ( 4604): FpsProfiler MAIN created on main
I/m.dsu ( 4604): Map using legacy labeler
I/m.drw ( 4604): Network fetching: false
I/m.drw ( 4604): requestDrawingConfig for epoch 713 legend ROADMAP
I/m.drw ( 4604): Network fetching: true
I/m.ekp ( 4604): Found 43 zoom mappings
I/m.ekp ( 4604): Zoom tables loaded
I/m.drw ( 4604): requestDrawingConfig for epoch 713 legend ROADMAP
I/m.ekp ( 4604): Found 43 zoom mappings
I/m.ekp ( 4604): Zoom tables loaded
I/m.drw ( 4604): Network fetching: true
I/m.drw ( 4604): requestDrawingConfig for epoch 713 legend ROADMAP
3
I/m.drw ( 4604): Network fetching: true
I/PlatformViewsController( 4604): Hosting view in view hierarchy for platform view: 0
I/PlatformViewsController( 4604): PlatformView is using SurfaceProducer backend
I/GoogleMapController( 4604): Installing custom TextureView driven invalidator.
E/GoogleMapController( 4604): Cannot enable MyLocation layer as location permissions are not granted
I/le.expenses_app( 4604): Background concurrent mark compact GC freed 29MB AllocSpace bytes, 169(6756KB) LOS objects, 34% free, 46MB/70MB, paused 494us,3.411ms total 216.329ms
I/m.bzu ( 4604): Initial labeling completed.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.5, on macOS 15.0 24A335 darwin-arm64, locale en-US)
• Flutter version 3.24.5 on channel stable at /Users/zilvinasskuodys/Desktop/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (9 days ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/zilvinasskuodys/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.95.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.100.0
[✓] Connected device (4 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 • android-arm64 • Android 15 (API 35) (emulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0 24A335 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0 24A335 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.86
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
|
platform-android,platform-ios,p: maps,package,team-ecosystem,has reproducible steps,P2,triaged-ecosystem,found in release: 3.24,found in release: 3.27
|
low
|
Critical
|
2,682,278,064 |
vscode
|
Javascript/Typescript formatting options are ignored
|

https://github.com/user-attachments/assets/89f74df8-9c1a-482b-962d-25b3bcaf8a12
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
Version: 1.95.3
Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813
Date: 2024-11-13T14:50:04.152Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Darwin arm64 24.1.0
Steps to Reproduce:
1. Open Settings and search for "Javascript."
2. Scroll about halfway down until you find "JavaScript > Format: Place open brace..." and turn both those options on.
3. Perform the steps in https://github.com/microsoft/vscode/issues/234401 and note the disregard for the above settings.
|
bug,javascript,formatting
|
low
|
Critical
|
2,682,303,987 |
rust
|
"macabi" is not an "ABI" in the same sense as e.g. "gnueabihf"
|
I know it has the literal characters "abi" in it, but it's not an "ABI" in the sense of "gnueabihf", where the HF and SF targets use genuinely different calling conventions. It should probably have been a `target_env`.
Same story with the Apple "sim" targets.
|
O-macos,O-ios,T-compiler,C-bug,A-targets
|
low
|
Minor
|
2,682,353,547 |
godot
|
Changing texture width height on Line2D with tiled texture doesn't update properly automatically
|
### Tested versions
Reproducible in:
v4.2.2.stable.official [15073afe3]
v4.3.stable.official.77dcf97d8
v4.4.dev5.official.9e6098432
### System information
MX Linux 23.3, 6.1.0-11-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.38-4 (2023-08-08) x86_64 GNU/Linux, OpenGL API 3.3.0 NVIDIA 535.183.01 - Compatibility - Using Device: NVIDIA - NVIDIA GeForce RTX 3060 Ti
### Issue description
Here is a view of the Line2D with a tiled texture.

Here I change the height from 512 to 256. While the texture updates, it doesn't properly reflect how it should be textured.

After toggling visibility, texture updates properly.

Changing the texture size should trigger the same kind of update as toggling visibility (or changing the line width), which also causes the texture to update properly.
---
Also see my document feedback where I encountered this: https://github.com/godotengine/godot-docs/issues/10297
### Steps to reproduce
Described above.
### Minimal reproduction project (MRP)
[mrp-line2d-tiledtexture-texturesizechanged.zip](https://github.com/user-attachments/files/17867316/mrp-line2d-tiledtexture-texturesizechanged.zip)
|
bug,topic:2d
|
low
|
Minor
|
2,682,430,199 |
tensorflow
|
No GPU detected using tensorflow/tensorflow:latest-gpu Docker image
|
### Issue type
Support
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tensorflow/tensorflow:latest-gpu
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
sudo docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu python -c "import tensorflow as tf; print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))"
2024-11-22 08:20:55.695121: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1732263655.707089 1 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1732263655.710704 1 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-11-22 08:20:55.722395: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-11-22 08:20:57.119693: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE: forward compatibility was attempted on non supported HW
2024-11-22 08:20:57.119716: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 81f8d81af78d
2024-11-22 08:20:57.119720: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 81f8d81af78d
2024-11-22 08:20:57.119789: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 545.23.6
2024-11-22 08:20:57.119804: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 470.256.2
2024-11-22 08:20:57.119808: E external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:262] kernel version 470.256.2 does not match DSO version 545.23.6 -- cannot find working devices in this configuration
Num GPUs Available: 0
### Standalone code to reproduce the issue
```shell
sudo docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu python -c "import tensorflow as tf; print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))"
I've also followed this recommendation. https://stackoverflow.com/questions/79127647/tensorflow-docker-not-using-gpu/79214187#79214187
```
### Relevant log output
```shell
root@35b972e97b30:/# nvidia-smi
Fri Nov 22 08:44:56 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.256.02 Driver Version: 470.256.02 CUDA Version: 12.3 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| N/A 38C P0 22W / N/A | 10MiB / 5946MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
root@35b972e97b30:/# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Wed_Nov_22_10:17:15_PST_2023
Cuda compilation tools, release 12.3, V12.3.107
Build cuda_12.3.r12.3/compiler.33567101_0
```
|
stat:awaiting tensorflower,type:support,comp:gpu,TF 2.18
|
low
|
Critical
|
2,682,441,507 |
rust
|
Tracking Issue for `const_array_as_mut_slice`
|
Add the `const` specifier to `<[T; N]>::as_mut_slice`.
### Public API
```rust
impl<T, const N: usize> {
pub const fn as_mut_slice(&mut self) -> &mut [T];
}
```
### Steps / History
- [x] Implementation: #133332
- [ ] Final comment period (FCP)
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
|
T-libs-api,C-tracking-issue
|
low
|
Major
|
2,682,450,804 |
pytorch
|
Rounding issue of tensor sizes when sharding
|
### 🐛 Describe the bug
I have noticed `test_linear_row_wise_parallel` fails when run with 6 GPUs
https://github.com/pytorch/pytorch/blob/f2f7ef9d5908f53a5fd7991ed0a1ef99069813a1/test/distributed/tensor/parallel/test_parallelize_api.py#L137
it raises 2 kinds of errors (the first from 5 processes, the last from 1 process):
```
RuntimeError: a and b must have same reduction dim, but got [9, 18] X [16, 10].
RuntimeError: a and b must have same reduction dim, but got [9, 6] X [16, 10].
```
The issue is the sharding of the input tensor of `inp_size = [9, 16]`.
It results (correctly) in a sharding of `[3, 3, 3, 3, 3, 1]` but at some point the code assumes even sharding leading to `3 * 6 = 18` and `1 * 6 = 6` as the input size to the linear layer.
This can be reproduced with this code:
```python
import torch
import torch.distributed as dist
from torch.distributed._tensor import DeviceMesh
from torch.distributed.tensor.parallel.api import parallelize_module
from torch.distributed.tensor.parallel.style import RowwiseParallel
import tempfile
WORLD_SIZE = 6
def run(rank, file_name):
dist.init_process_group(
backend="gloo",
world_size=WORLD_SIZE,
rank=rank,
init_method=f"file://{file_name}",
)
inp_size = [9, 8*3]
model = torch.nn.Linear(inp_size[-1], 10)
device_mesh = DeviceMesh("cpu", list(range(WORLD_SIZE)))
model = parallelize_module(model, device_mesh, RowwiseParallel())
inp = torch.rand(*inp_size)
inp = inp.chunk(WORLD_SIZE, dim=-1)[rank]
model(inp)
dist.barrier()
dist.destroy_process_group()
if __name__ == '__main__':
file_name = tempfile.NamedTemporaryFile(delete=False).name
torch.multiprocessing.spawn(run, args=(file_name,), nprocs=WORLD_SIZE, join=True, daemon=False)
```
### Versions
```
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Linux Mint 21.3 (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
```
Note that this could "just" be a design issue of the test:
- for the row-wise distribution it uses a size of [`16`](https://github.com/pytorch/pytorch/blob/f2f7ef9d5908f53a5fd7991ed0a1ef99069813a1/test/distributed/tensor/parallel/test_parallelize_api.py#L139)
- for the col-wise distribution it uses a size of [`12`](https://github.com/pytorch/pytorch/blob/f2f7ef9d5908f53a5fd7991ed0a1ef99069813a1/test/distributed/tensor/parallel/test_parallelize_api.py#L118)
It uses ALL GPUs if there is an even number of GPUs, else only 4: https://github.com/pytorch/pytorch/blob/f2f7ef9d5908f53a5fd7991ed0a1ef99069813a1/test/distributed/tensor/parallel/test_parallelize_api.py#L35
From the sizes it is clear that the first test works only for 2,4,8,16 GPUs and the second only for 2,3,4,6,12 GPUs so both together only work for 2 or 4 GPUs
--> The test should run ONLY with a world_size of 4 or the implementation fixed if it is intended to work
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
|
oncall: distributed,triaged
|
low
|
Critical
|
2,682,473,001 |
flutter
|
[asset_transformers] Provide the current target platform in the asset transformer
|
### Use case
Developers wanting to transform assets, based on the current target platform (i.e. `flutter build web` -> web; `flutter build apk` -> android), currently cannot infer the target platform during the asset transformation.
### Proposal
I propose that the target platform of the `flutter build` invocation is provided as either an argument, or in `Platform.environment`. I initially tried checking what is inside `Platform.environment` and the `FLUTTER_BUILD_MODE` seems to be there already?
Then I think the easiest thing to do is to also add something like `FLUTTER_BUILD_TARGET_PLATFORM` to `Platform.environment` or the args that are passed to the asset transformer, whichever makes more sense / is cleaner from a tool perspective.
Context: I was initially exploring if I could work around the absence of https://github.com/flutter/flutter/issues/141371 using an asset transformer
|
c: new feature,tool,a: assets,c: proposal,a: build,P2,team-tool,triaged-tool
|
low
|
Minor
|
2,682,485,861 |
deno
|
fmt(svelte): formatter error on dynamic style in svelte
|
Version: Deno 2.1.1
```svelte
<script lang="ts">
const { color }: { color: string | null } = $props();
</script>
<div
style="background-color: {color || 'blueviolet'};"
>
</div>
```
```txt
deno fmt --unstable-component
Error formatting: C:\temp\deno-init\test.svelte
syntax error at line 6, col 28: expect token `<ident>`, but found `{`
Checked 4 files
```
|
bug,upstream,deno fmt
|
low
|
Critical
|
2,682,562,136 |
ollama
|
LLM(vision) GGUF Recommendation: Is there any LLM(vision) with great performance in GGUF format?
|
### Disappointing Performance
It's really strange: I have tried many **LLMs with vision** in `GGUF` format listed on the official website, such as `Llama3.2-vision`, `llava`, `llava-llama3`, `llava-phi3`. However, all of them perform disappointingly on the **vision** side, even on a simple task like recognizing an image of an apple.
### GGUF format
By the way, to get started quickly, I tried all LLMs in `GGUF` format. I don't know whether that is a hindrance to good performance.
### Help
So, I am posting this issue to ask for help and for a recommendation of a good vision LLM in `GGUF` format, or **any other import method** that gets good performance. Thanks for your help!
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.4.2
|
bug
|
low
|
Major
|
2,682,628,005 |
kubernetes
|
bug: RPM repo PGP check fails
|
**What happened**:
When trying to upgrade package using official Yum repo this occurs:
```
Running transaction
Transaction failed: Signature verification failed.
PGP check for package "kubectl-1.31.3-150500.1.1.x86_64" (/var/cache/libdnf5/kubernetes-3d554b2ea1b53740/packages/kubectl-1.31.3-150500.1.1.x86_64.rpm) from repo "kubernetes" has failed: Problem occurred when opening the package.
```
**What you expected to happen**:
Update succeeds and package installed.
**How to reproduce it (as minimally and precisely as possible)**:
Add repo following https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management and try to update RPM package.
**Anything else we need to know?**: It worked until now.
**Environment**:
- Kubernetes client and server versions (use `kubectl version`): kubectl-1.31.3-150500.1.1.x86_64
- Cloud provider or hardware configuration: n/a
- OS (e.g: `cat /etc/os-release`): Fedora Linux 41
|
kind/bug,sig/docs,sig/release,needs-triage
|
medium
|
Critical
|
2,682,628,604 |
next.js
|
Inconsistent behavior of `usePathname` with static rendering during build
|
### Link to the code that reproduces this issue
https://github.com/amannn/nextjs-bug-repro-usepathnamessg/commit/c6b084df43e08a9b043548fb577be8db5f059bde
### To Reproduce
Compare the output of `usePathname` in development vs during the static prerender when accessing the route `/test` which uses a rewrite in the middleware.
### Current vs. Expected behavior
`usePathname` seems to pick up a request-time pathname from a middleware rewrite. However, as this information is not available during a static build, `usePathname` will return a different value in this case. Then again, when running the app in production, `usePathname` will return the pathname that's displayed in the browser (i.e. after a rewrite), but only on the client side.
In the reproduction this leads to a text content mismatch, but more generally this breaks a dev/prod guarantee. I'd expect a static prerender to only be a performance optimization and not to change application logic.
I guess some unification would be necessary here.
Note that this only applies for static prerendering, if you use SSR in production then the behavior is the same as in dev.
A more real world use case where this behavior was encountered: https://github.com/amannn/next-intl/issues/1568
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:00 PDT 2024; root:xnu-10063.141.2~1/RELEASE_X86_64
Available memory (MB): 16384
Available CPU cores: 12
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.22
pnpm: 9.14.2
Relevant Packages:
next: 15.0.4-canary.23 // Latest available version is detected (15.0.4-canary.23).
eslint-config-next: 15.0.4-canary.23
react: 19.0.0-rc-380f5d67-20241113
react-dom: 19.0.0-rc-380f5d67-20241113
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
Maybe a hook like `useInternalPathname()` (naming TBD) could help that always reliably returns the pathname that renders in Next.js, regardless of any rewrites—essentially matching the directory structure in `src/app`.
|
bug,Middleware
|
low
|
Critical
|
2,682,642,953 |
rust
|
Tracking Issue for relnotes interest group ping group
|
Tracking issue for creating a ping group for relnotes PRs so interested contributors can get pinged for that specific purpose (instead of pinging whole teams like T-compiler, which has idk 50+ members). Note that the following steps are derived from the [T-compiler notification group procedure](https://forge.rust-lang.org/compiler/notification-groups.html) but adjusted, because those describe compiler-specific ping groups.
### Implementation steps
- [x] ~~File a tracking issue in the [rust-lang/compiler-team](https://github.com/rust-lang/compiler-team) repository to collect your progress.~~ Not compiler-specific ping group.
- [x] Create a PR against the [rust-lang/team](https://github.com/rust-lang/team) repository adding the notification group. https://github.com/rust-lang/team/pull/1613
- [x] Configure the [rust-lang/rust](https://github.com/rust-lang/rust) repository to accept triagebot commands for this group. https://github.com/rust-lang/rust/pull/135032
- [ ] Configure the [rust-lang/blog.rust-lang.org](https://github.com/rust-lang/blog.rust-lang.org) repository to accept triagebot commands for this group. https://github.com/rust-lang/blog.rust-lang.org/pull/1453
- [x] ~~Create a PR for the rustc-dev-guide amending [the notification group section](https://rustc-dev-guide.rust-lang.org/notification-groups/about.html) to mention your group.~~ T-release mostly, not compiler.
- [x] Create a sample PR for the [rust-lang/team](https://github.com/rust-lang/team) repository showing how one can add oneself. This will be referenced by your blog post to show people how to join. See https://github.com/rust-lang/team/pull/1613.
- [ ] Create a PR for the forge release notes section to describe the relnotes interest group ping group.
- [x] ~~Create a Zulip stream for the notification group. If you don’t have the permission to do, you can ask on [#t-compiler/wg-meta](https://rust-lang.zulipchat.com/#narrow/stream/185694-t-compiler.2Fwg-meta).~~ Unclear if needed, probably better to discuss on the relnotes PR itself.
- [ ] Write an announcement blog post for Inside Rust and open a PR against [blog.rust-lang.org](https://github.com/rust-lang/blog.rust-lang.org).
### Relevant context
- Zulip discussion: https://rust-lang.zulipchat.com/#narrow/channel/241545-t-release/topic/Please.20CC.20lang
|
C-tracking-issue,T-release,A-meta
|
low
|
Minor
|
2,682,656,292 |
pytorch
|
[export] run_decompositions fails on `torch.ops.aten.index_put_`
|
### 🐛 Describe the bug
This example started to fail yesterday with the nightly build.
```python
import torch
class UpdateModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.params = torch.zeros((4, 4, 10))
def forward(self, update, index1, index2):
copy = self.params.clone()
copy[index1, torch.tensor([1, 2], dtype=torch.int64), index2] = update
return copy
model = UpdateModel()
update = (torch.arange(2) + 10).reshape((2,)).to(torch.float32)
index1 = torch.tensor([1, 2]).to(torch.int64)
index2 = torch.tensor([7, 8]).to(torch.int64)
model(update, index1, index2)
ep = torch.export.export(model, (update, index1, index2))
print(ep.graph)
ep.run_decompositions() # Fails here
```
```
graph():
%c_params : [num_users=1] = placeholder[target=c_params]
%c_lifted_tensor_0 : [num_users=1] = placeholder[target=c_lifted_tensor_0]
%update : [num_users=1] = placeholder[target=update]
%index1 : [num_users=1] = placeholder[target=index1]
%index2 : [num_users=1] = placeholder[target=index2]
%clone : [num_users=1] = call_function[target=torch.ops.aten.clone.default](args = (%c_params,), kwargs = {})
%lift_fresh_copy : [num_users=1] = call_function[target=torch.ops.aten.lift_fresh_copy.default](args = (%c_lifted_tensor_0,), kwargs = {})
%detach_ : [num_users=1] = call_function[target=torch.ops.aten.detach_.default](args = (%lift_fresh_copy,), kwargs = {})
%index_put_ : [num_users=1] = call_function[target=torch.ops.aten.index_put_.default](args = (%clone, [%index1, %detach_, %index2], %update), kwargs = {})
return (index_put_,)
File "site-packages/torch/fx/interpreter.py", line 228, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "site-packages/torch/fx/interpreter.py", line 308, in call_function
return target(*args, **kwargs)
File "site-packages/torch/_ops.py", line 723, in __call__
return self._op(*args, **kwargs)
File "site-packages/torch/_subclasses/functional_tensor.py", line 545, in __torch_dispatch__
outs_unwrapped = func._op_dk(
RuntimeError: false INTERNAL ASSERT FAILED at "/pytorch/build/aten/src/ATen/RegisterFunctionalization_1.cpp":5939, please report a bug to PyTorch. mutating a non-functional tensor with a functional tensor is not allowed. Please ensure that all of your inputs are wrapped inside of a functionalize() call.
While executing %index_put_ : [num_users=1] = call_function[target=torch.ops.aten.index_put_.default](args = (%clone, [%index1, %detach_, %index2], %update), kwargs = {})
Original traceback:
File "test_issue_pytorch_2024_export.py", line 32, in forward
copy[index1, torch.tensor([1, 2], dtype=torch.int64), index2] = update
```
### Versions
PyTorch version: 2.6.0.dev20241121+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4060 Laptop GPU
Nvidia driver version: 538.92
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13800H
CPU family: 6
Model: 186
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
BogoMIPS: 5836.79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 480 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 12.5 MiB (10 instances)
L3 cache: 24 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] clip-anytorch==2.6.0
[pip3] CoCa-pytorch==0.1.0
[pip3] dalle2-pytorch==1.15.6
[pip3] ema-pytorch==0.7.0
[pip3] executorch==0.4.0
[pip3] flake8==7.1.1
[pip3] mypy==1.11.2
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.18.0
[pip3] onnx-extended==0.3.0
[pip3] onnxconverter-common==1.14.0
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxruntime-training==1.21.0+cu121
[pip3] onnxscript==0.1.0.dev20240905
[pip3] open_clip_torch==2.26.1
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] pytorch-warmup==0.1.1
[pip3] rotary-embedding-torch==0.8.4
[pip3] torch==2.6.0.dev20241121+cu124
[pip3] torch-fidelity==0.3.0
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.5.0
[pip3] torchaudio==2.5.0.dev20241121+cu124
[pip3] torchmetrics==1.4.3
[pip3] torchvision==0.20.0.dev20241121+cu124
[pip3] triton==3.1.0
[pip3] vector-quantize-pytorch==1.18.1
[conda] Could not collect
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
oncall: pt2,oncall: export
|
low
|
Critical
|
2,682,657,347 |
godot
|
[3.x] Particles2D and Multithreaded rendering throws error (Rare)
|
### Tested versions
Tested in Godot 3.5.1, GLES2
[edit] Tested in Godot 3.6, GLES2
### System information
Ubuntu 24.04.1, 64bit, NVIDIA GeForce GTX 1050 Ti
### Issue description
When toggling emission of a CPUParticles2D node it will sometimes throw the error:
`emit_signal: Error calling method from signal 'frame_pre_draw': 'CPUParticles2D::': Method not found..`
This seems to only occur when multithreaded render mode is enabled.
The bug occurs quite rarely, and with varying behavior. Sometimes the error is just printed once, sometimes it keeps spamming very quickly, and sometimes it crashes the game process.
### Steps to reproduce
- Make a particle system and an animation that toggles it on and off (no other particle system settings are required, although the issue MRP has 'one-shot' enabled)
- Make a script that plays the animation repeatedly with a small random delay between plays.
- Duplicate it a few hundred times, to get the error to reproduce without having to wait for hours
- To speed up the reproduction, you can also increase engine_time_scale
Note how the error is printed when render mode in the project settings is "Multi-Threaded", and does not occur with "Single-Safe" render mode.
### Minimal reproduction project (MRP)
Reproduction project available at
github.com/Tugsav/godot-particle2d-multithreading-bug
Reproduction using time scale 50 should be pretty quick.
Reproduction without modifying the time scale can take a while (I had the error occur after ~17 min)
|
bug,topic:rendering,needs testing
|
low
|
Critical
|
2,682,663,563 |
ui
|
[feat]: Add support for react 19
|
### Feature description
Is there any timeframe or plan to migrate components to react 19?
### Affected component/components
Most if not all
### Additional Context
The use of forwardRef on same/most components for example is forcing the need to implement some change on the component definition and integration.
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs
|
area: request
|
low
|
Major
|
2,682,676,995 |
kubernetes
|
Failure cluster [34ae9536...] [sig-node] [sig-windows] Container runtime failed test
|
### Failure cluster [34ae95361eb5abbcb926](https://go.k8s.io/triage#34ae95361eb5abbcb926)

##### Error text:
```
[FAILED] Timed out after 300.000s.
Expected
<v1.PodPhase>: Failed
to equal
<v1.PodPhase>: Succeeded
In [It] at: k8s.io/kubernetes/test/e2e/common/node/runtime.go:157 @ 11/13/24 11:24:41.726
```
#### Recent failures:
[11/22/2024, 4:01:07 AM ci-kubernetes-e2e-windows-containerd-gce-master](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-windows-containerd-gce-master/1859794199298183168)
[11/22/2024, 3:14:07 AM ci-kubernetes-e2e-windows-win2022-containerd-gce-master](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-windows-win2022-containerd-gce-master/1859782365962833920)
[11/21/2024, 11:14:03 PM ci-kubernetes-e2e-windows-win2022-containerd-gce-master](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-windows-win2022-containerd-gce-master/1859721958103453696)
[11/21/2024, 8:00:04 PM ci-kubernetes-e2e-windows-containerd-gce-master](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-windows-containerd-gce-master/1859673139961663488)
[11/21/2024, 7:14:04 PM ci-kubernetes-e2e-windows-win2022-containerd-gce-master](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-windows-win2022-containerd-gce-master/1859661559568011264)
/kind failing-test
/kind flake
Flake > 65% occurrence
/sig node
/sig windows
|
sig/node,kind/flake,sig/windows,kind/failing-test,needs-triage
|
low
|
Critical
|
2,682,679,379 |
pytorch
|
`torch.det` with python 3.10 and torch 2.5.0+cu121 raises RuntimeError
|
### 🐛 Describe the bug
With python 3.10 and torch 2.5.0+cu121, `torch.det` on a cuda tensor raises the following error:
```
RuntimeError: Error in dlopen for library libnvrtc.so.12and libnvrtc-XXXXXXXX.so.12
```
Code to reproduce:
```python
import torch
torch.det(torch.rand(4, 4).cuda())
```
### Versions
PyTorch version: 2.5.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.9 (main, Feb 8 2023, 09:53:56) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 with Max-Q Design
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 10
CPU max MHz: 4500,0000
CPU min MHz: 800,0000
BogoMIPS: 5199.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 1,5 MiB (6 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0+cu121
[conda] Could not collect
cc @seemethere @malfet @osalpekar @atalman @ptrblck @msaroufim
|
module: binaries,module: cuda,triaged
|
low
|
Critical
|
2,682,723,393 |
rust
|
Tracking issue for release notes of #133293: Updates Solaris target information, adds Solaris maintainer
|
This issue tracks the release notes text for #133293.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Compatibility notes
- [Increase `sparcv9-sun-solaris` and `x86_64-pc-solaris` Solaris baseline to 11.4.](https://github.com/rust-lang/rust/pull/133293)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @psumbera, @Noratrieb -- origin issue/PR authors and assignees for starting to draft text
|
T-compiler,relnotes,O-solaris,relnotes-tracking-issue
|
low
|
Minor
|
2,682,734,858 |
next.js
|
Turbopack does not respect forceSwcTransforms
|
### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/peaceful-bhabha-n67k3g
### To Reproduce
1. Have a next config:
```
const nextConfig = {
experimental: {
forceSwcTransforms: true,
turbo: {
rules: {
"*.svg": {
loaders: ["@svgr/webpack"],
exclude: /node_modules/,
as: "*.js",
},
},
},
},
};
```
2. Have a babel file in root
3. Run `next dev --turbopack`
### Current vs. Expected behavior
```txt
You are using configuration and/or tools that are not yet
supported by Next.js with Turbopack:
Babel detected (babel.config.js)
Babel is not yet supported. To use Turbopack at the moment,
you'll need to remove your usage of Babel.
- Unsupported Next.js configuration option(s) (next.config.js)
To use Turbopack, remove the following configuration options:
- experimental.forceSwcTransforms
If you cannot make the changes above, but still want to try out
Next.js with Turbopack, create the Next.js playground app
by running the following commands:
yarn create next-app --example with-turbopack with-turbopack-app
cd with-turbopack-app
yarn run dev
⚠ Learn more about Next.js and Turbopack: https://nextjs.link/with-turbopack
```
Expected: The build should succeed, same as it does without `--turbopack`.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Mon Aug 12 20:51:54 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.11.0
npm: 10.2.4
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_
|
bug,Turbopack
|
low
|
Minor
|
2,682,752,756 |
ui
|
[bug]: cli does not work for simple javascript react code.
|
### Describe the bug
I have created a React project without TypeScript in `Vite`. When I try to install shadcn using the CLI, it does not work.
```cmd
PS D:\User\code\react-test> npx shadcn@latest init
✔ Preflight checks.
✔ Verifying framework. Found Vite.
✔ Validating Tailwind CSS.
✖ Validating import alias.
No import alias found in your tsconfig.json file.
Visit https://ui.shadcn.com/docs/installation/vite to learn how to set an import alias.
```
`tailwind.config.js`:
```js
/** @type {import('tailwindcss').Config} */
export default {
content: ['./index.html', './src/**/*.{js,jsx}'],
theme: {
extend: {},
},
plugins: [],
};
```
`vite.config.js`:
```js
import path from 'path';
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
// https://vite.dev/config/
export default defineConfig({
plugins: [react()],
resolve: {
alias: {
'@': path.resolve(__dirname, './src'),
},
},
});
```
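For reference, the import alias the CLI complains about would, in a plain JavaScript project, presumably live in a `jsconfig.json` like the following (my reading of the linked Vite guide, not verified against the CLI):
```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./src/*"]
    }
  }
}
```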
### Affected component/components
cli
### How to reproduce
1. Create a new project with vite without ts.
2. Try to add shadcn via the CLI after adding Tailwind CSS.
### Codesandbox/StackBlitz link
https://github.com
### Logs
_No response_
### System Info
```bash
Nodejs: 20, React: 18.3.1
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues
|
bug
|
low
|
Critical
|
2,682,841,285 |
godot
|
Godot4.4-dev5: `ClassDB` provides classes AnimationNodeStartState and AnimationNodeEndState which are not registered
|
### Tested versions
v4.4.dev5.mono.official [9e6098432]
### System information
all
### Issue description
I iterate over all classes provided by `ClassDB` for testing, using `ClassDB.get_class_list()`.
With Godot 4.4.dev5 there are now “AnimationNodeStartState” and “AnimationNodeEndState”; they are not documented and cannot be instantiated.
### Steps to reproduce
Can be verified by
```
extends Node
func _ready() -> void:
prints("AnimationNodeStartState class_exists:", ClassDB.class_exists("AnimationNodeStartState"))
prints("AnimationNodeStartState can_instantiate:", ClassDB.can_instantiate("AnimationNodeStartState"))
# compile error
AnimationNodeStartState.new()
```
output:
```
AnimationNodeStartState class_exists: true
AnimationNodeStartState can_instantiate: true
```
errors:
```
res://test.gd:7 - Parse Error: Native class GDScriptNativeClass used in script doesn't exist or isn't exposed.
res://test.gd:7 - Parse Error: Static function "new()" not found in base "GDScriptNativeClass".
```
### Minimal reproduction project (MRP)
n/a
|
bug,topic:core,regression,topic:animation
|
low
|
Critical
|
2,682,847,202 |
react
|
[React 19] ForwardRef props are not referentially stable, breaking downstream memoizations
|
## Summary
I don't know if this is a known behaviour or a bug, but something that's worth highlighting at least.
ForwardRef components are not deprecated, but they're not perfectly backwards compatible either.
The mere existence of a `ref` prop on a ForwardRef component, even if `undefined`, makes the component's `props` referentially unstable, breaking a whole host of downstream memoizations.
Reproduction:
https://codesandbox.io/p/sandbox/youthful-faraday-8m98cd?file=%2Fsrc%2FApp.tsx%3A34%2C23
React 18:
https://codesandbox.io/p/sandbox/youthful-faraday-forked-32lrwm?workspaceId=8a8eeabe-fead-479a-a993-25a9868e8015
If it's a known behaviour – please highlight this in the docs. I don't think we would have gone ahead with upgrading yet, until more of our dependencies supported React 19, because a whole host of performance-sensitive packages (charting, datagrids, etc.) suddenly had performance issues.
If not, it needs some attention, as it has pretty bad performance implications.
I believe a lot of maintainers are not dropping ForwardRef yet, because they think it's perfectly backwards compatible, and it's easier to maintain backwards compatibility for them.
|
React 19
|
low
|
Critical
|
2,682,857,646 |
transformers
|
SmolLM is ExecuTorch Compatible
|
### Feature request
Enable SmolLM to ["Export to ExecuTorch"](https://github.com/huggingface/transformers/issues/32253) workflow.
#### Instructions
Instructions of how to enable this model for ExecuTorch:
1. Export the model to ExportIR. For an LLM, to run with good performance, you will typically need to export the model with cache. #34101 is a reference for how to export and validate the model. Note that you may run into export issues that require fixes in the modeling code.
2. Lower the model to ExecuTorch (to generate a .pte file). You will need to clone the github repo and create a recipe to lower the model. For example, lowering to XNNPACK is the simplest way. See the example code here: https://github.com/pytorch/executorch/blob/release/0.4/extension/export_util/export_hf_model.py#L89L106
3. Run the model with ExecuTorch. You can follow these instructions to build and run the executor runtime for llama: https://github.com/pytorch/executorch/tree/release/0.4/examples/models/llama2#step-4-run-on-your-computer-to-validate
(Optional) Congrats! Once you complete steps 1-3, you will be able to run the model on a host machine. Now, if you would like to go further, like making the model faster, smaller, and cheaper for your use-case, you can create more complicated recipes with quantizations and delegations for different HW accelerators. You can find more tutorials on our website, for example to optimize and run the model with Core ML on Apple's platform: https://pytorch.org/executorch/stable/build-run-coreml.html
### Motivation
See details in #32253
### Your contribution
TBD
|
Feature request,ExecuTorch
|
low
|
Major
|
2,682,887,332 |
next.js
|
`webpackPrefetch: true` magic comment doesn't work in next v15.0.3
|
### Link to the code that reproduces this issue
https://github.com/gavrilikhin-d/repro
### To Reproduce
Test prefetching in production mode
1. ```bash
npm install
npm run build
npm run start
```
2. Open http://localhost:3000
3. Open the Network tab in the browser's developer tools
4. Click on the "Load Dynamic Component"
5. See no prefetching happening
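For context, the dynamic import in the repro looks roughly like this (the names and path below are placeholders, see the repro for the exact code):
```js
// Somewhere in a client component ("use client"); names here are placeholders.
async function loadDynamicComponent() {
  // The magic comment should make webpack emit a <link rel="prefetch"> for this
  // chunk up front; in 15.0.3 no prefetch request shows up in the Network tab.
  const mod = await import(/* webpackPrefetch: true */ "./DynamicComponent");
  return mod.default;
}
```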
### Current vs. Expected behavior
Prefetching should work
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:05:14 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8103
Available memory (MB): 8192
Available CPU cores: 8
Binaries:
Node: 22.9.0
npm: 10.8.3
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: N/A
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Lazy Loading, Webpack
### Which stage(s) are affected? (Select all that apply)
next build (local), next start (local)
### Additional context
Broken since `15.0.3-canary.7`, because on `15.0.3-canary.6` it works
|
bug,Webpack,Lazy Loading
|
low
|
Critical
|
2,683,019,203 |
next.js
|
`“use cache”` does not serialize the serializable value (but `unstable_cache` serializes)
|
### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/keen-cloud-gggnf4?embed=1
### To Reproduce
unstable_cache serializes everything and everything works.
All logs are output to the console
1. Try entering an email in the field with `unstable_cache`
2. Try entering the email again in the field with `unstable_cache` and look at the logs (make sure there are no errors)
3. Try entering the email again in the “use cache” field and make sure that the error pops up.
```
Error: Only plain objects, and a few built-ins, can be passed to Client Components from Server Components. Classes or null prototypes are not supported.
[{}]
^^
at stringify (<anonymous>)
Error while saving cache key: ["development","c04d358ed385c8992fa9fcf45e6ff96556234302f3",[{"ctx":{},"type":"query","path":"user.getByEmail","rawInput":"[email protected]","meta":"$undefined","input":"[email protected]","next":"$T"}]] Error: Only plain objects, and a few built-ins, can be passed to Client Components from Server Components. Classes or null prototypes are not supported.
[{}]
^^
```
BONUS:
1. Try replacing `<main>` in `page.tsx` with a fragment (empty tag) and make sure the error does not correspond to reality
### Current vs. Expected behavior
Current: error
Expected: no error
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 16320
Available CPU cores: 6
Binaries:
Node: 22.10.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.12.2
Relevant Packages:
next: 15.0.4-canary.22 // Codesandbox: (15.0.4-canary.23)
eslint-config-next: 15.0.0
react: 19.0.0-rc-380f5d67-20241113
react-dom: 19.0.0-rc-380f5d67-20241113
typescript: 5.7.1-rc
Next.js Config:
output: standalone
```
### Which area(s) are affected? (Select all that apply)
dynamicIO
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
An unexplained error is displayed when trying to serialize a value. But `“use cache”` cannot serialize it, although `unstable_cache` serializes everything fine
|
bug,dynamicIO
|
low
|
Critical
|
2,683,055,870 |
go
|
proposal: net/http: add methods to Request for setting and getting Bearer tokens
|
### Proposal Details
I propose the addition of the following two methods:
```go
// SetBearerAuth, if the provided token is valid, sets the request's
// Authorization header to use the Bearer authentication scheme with that token
// and returns true.
// Otherwise, it leaves the request unchanged and returns false.
// See RFC 6750, Section 2.1.
func (*Request) SetBearerAuth(token string) bool
// BearerAuth returns the token provided in the request's
// Authorization header, if the request uses the Bearer authentication scheme.
func (r *Request) BearerAuth() (token string, ok bool)
```
---
Those methods parallel ones related to HTTP Basic Authentication already exported by net/http:
- https://pkg.go.dev/net/http#Request.SetBasicAuth
- https://pkg.go.dev/net/http#Request.BasicAuth
At first, you may think that the logic for getting/setting a Bearer token is so trivial that it doesn't deserve its own methods in the standard library. However, I've come to realise that many implementations out there suffer from correctness issues and/or performance issues; here are two examples (among others):
- [Many](https://github.com/search?q=language%3AGo+%2F%22Bearer+%22%2F+%2F%5C.Header%5C.Get%5C%28%22Authorization%22%5C%29%2F&type=code) implementations [mistakenly](https://auth0.com/blog/the-bearer-token-case/) parse "Bearer" as case-sensitive, which may cause interoperability issues.
- [Many](https://github.com/search?q=language%3AGo+%2Fhttp%5C.Request%2F+%2Fstrings%5C.Split%5C%28.*%2C+%22+%22%5C%29%2F+%2F%5C.Header%5C.Get%5C%28%22Authorization%22%5C%29%2F&type=code) implementations naively rely on [`strings.Split`](https://pkg.go.dev/strings#Split), thereby facilitating [denial-of-service attacks](https://owasp.org/www-community/attacks/Denial_of_Service); see also [this tangentially related issue](https://github.com/rs/cors/issues/170) (now resolved) in [github.com/rs/cors](https://github.com/rs/cors).
Moreover, and despite my lack of data to back up the following claim, I believe that Bearer is one of the most popular authentication schemes nowadays (likely even more popular than Basic), given the prominent role it plays in OAuth 2.x and OpenID Connect. Therefore, the logic required for parsing a request's Authorization header that uses Bearer arguably deserves to be enshrined in the standard library.
[This playground](https://go.dev/play/p/X3VaUBsWgpm) contains a standalone implementation as well as a test suite. For convenience, `SetBearerAuth` and `BearerAuth` are presented there as package-level functions rather than as `*Request` methods.
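To make the intended behaviour concrete, here is a rough standalone sketch of what the two methods could do (this is not the playground code verbatim, and the token validation below is a simplified assumption based on RFC 6750):
```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// isValidToken loosely checks the b64token grammar of RFC 6750, Section 2.1
// (simplified: "=" is accepted anywhere, not just as trailing padding).
func isValidToken(tok string) bool {
	if tok == "" {
		return false
	}
	for _, c := range tok {
		switch {
		case 'a' <= c && c <= 'z', 'A' <= c && c <= 'Z', '0' <= c && c <= '9':
		case c == '-', c == '.', c == '_', c == '~', c == '+', c == '/', c == '=':
		default:
			return false
		}
	}
	return true
}

// setBearerAuth mirrors the proposed Request.SetBearerAuth.
func setBearerAuth(r *http.Request, token string) bool {
	if !isValidToken(token) {
		return false
	}
	r.Header.Set("Authorization", "Bearer "+token)
	return true
}

// bearerAuth mirrors the proposed Request.BearerAuth: the scheme is matched
// case-insensitively and no strings.Split over the header value is needed.
func bearerAuth(r *http.Request) (token string, ok bool) {
	const prefix = "Bearer "
	auth := r.Header.Get("Authorization")
	if len(auth) < len(prefix) || !strings.EqualFold(auth[:len(prefix)], prefix) {
		return "", false
	}
	if !isValidToken(auth[len(prefix):]) {
		return "", false
	}
	return auth[len(prefix):], true
}

func main() {
	req, _ := http.NewRequest("GET", "https://example.com", nil)
	setBearerAuth(req, "mF_9.B5f-4.1JqM")
	fmt.Println(bearerAuth(req)) // mF_9.B5f-4.1JqM true
}
```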
|
Proposal
|
medium
|
Major
|
2,683,058,030 |
pytorch
|
numpy's division compiled but returning Different results
|
### 🐛 Describe the bug
I want to compile a model with torch.compile
One of the snippets I've used is
```python
print(type(self.upsample_scale), self.upsample_scale, "self.upsample_scale")
print(1/self.upsample_scale, type(1/self.upsample_scale), "1/self.upsample_scale")
rad_values = torch.nn.functional.interpolate(rad_values.transpose(1, 2),
scale_factor=1/self.upsample_scale,
mode="linear").transpose(1, 2)
```
And this part is actually raising the issue.
Is it an issue with the compilation of numpy's division? As I can see, the type changes before and after compilation.
Or is it an issue with the interpolate function not getting proper arguments?
I further tried a simple code
```python
def foo(x):
return 1/x
x = np.float64(12)
print(foo(x))
print(type(foo(x)))
# Output
# 0.08333333333333333
# <class 'numpy.float64'>
```
and
```python
@torch.compile
def foo(x):
return 1/x
x = np.float64(12)
print(foo(x))
print(type(foo(x)))
# Output
# 0.08333333333333333
# <class 'numpy.ndarray'>
```
Is it an issue with how numpy's division is compiled, or should passing this value to interpolate not cause an issue in the first place?
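One workaround sketch worth trying (an assumption on my part, not a confirmed fix) is to force a plain Python float before calling interpolate, so the compiled path sees the same scalar type as eager:
```python
import numpy as np
import torch
import torch.nn.functional as F

class Resample(torch.nn.Module):
    def __init__(self, upsample_scale):
        super().__init__()
        self.upsample_scale = upsample_scale  # e.g. np.int64(300), as in the report

    def forward(self, rad_values):
        # float(...) yields a built-in float, so scale_factor is not an ndarray
        # even if 1 / self.upsample_scale gets traced as one under torch.compile.
        scale = float(1 / self.upsample_scale)
        return F.interpolate(
            rad_values.transpose(1, 2), scale_factor=scale, mode="linear"
        ).transpose(1, 2)

model = torch.compile(Resample(np.int64(300)))
print(model(torch.rand(1, 600, 9)).shape)  # hypothetical (batch, time, dim) input
```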
### Error logs
Before compile
<class 'numpy.int64'> 300 self.upsample_scale
0.0033333333333333335 <class 'numpy.float64'> 1/self.upsample_scale
And everything works
After compile, the same code returns
<class 'numpy.int64'> 300 self.upsample_scale
0.0033333334 <class 'numpy.ndarray'> 1/self.upsample_scale
```txt
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[11], line 33
31 for text in texts:
32 start_time = time.time()
---> 33 out = infer(text, inp_audio)
34 print(f"Time taken: {time.time() - start_time}")
35 ipd.display(ipd.Audio(out, rate=24000))
Cell In[11], line 18, in infer(text, audio)
15 wave_tensor = torch.from_numpy(wave).float().to(device)
16 encoded = encoder(wave_tensor)
---> 18 outs = model(tokens, input_lengths, encoded).cpu().numpy()[..., :-50]
19 return outs
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py:465, in _TorchDynamoContext.__call__.<locals>._fn(*args, **kwargs)
460 saved_dynamic_layer_stack_depth = (
461 torch._C._functorch.get_dynamic_layer_stack_depth()
462 )
464 try:
--> 465 return fn(*args, **kwargs)
466 finally:
467 # Restore the dynamic layer stack depth if necessary.
468 torch._C._functorch.pop_dynamic_layer_stack_and_undo_to_depth(
469 saved_dynamic_layer_stack_depth
470 )
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:3042, in StyleTTS2Catalan.forward(self, tokens, input_lengths, ref_s, alpha, beta, diffusion_steps, embedding_scale, device)
3039 self.text_aligner = ASRCNN(**args.asr_config)
3040 self.pitch_extractor = JDCNet(num_class=1, seq_len=192)
-> 3042 def forward(self, tokens, input_lengths, ref_s, alpha=0.3, beta=0.7,
3043 diffusion_steps=5, embedding_scale=1, device='cuda'):
3044 # Prepare text mask
3045 text_mask = length_to_mask(input_lengths).to(device)
3047 # Text encoding
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:3045, in torch_dynamo_resume_in_forward_at_3045(___stack0, self, tokens, input_lengths, ref_s, alpha, beta, diffusion_steps, embedding_scale, device)
3042 def forward(self, tokens, input_lengths, ref_s, alpha=0.3, beta=0.7,
3043 diffusion_steps=5, embedding_scale=1, device='cuda'):
3044 # Prepare text mask
-> 3045 text_mask = length_to_mask(input_lengths).to(device)
3047 # Text encoding
3048 t_en = self.text_encoder(tokens, input_lengths, text_mask)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:3048, in torch_dynamo_resume_in_forward_at_3048(___stack0, self, tokens, input_lengths, ref_s, alpha, beta, diffusion_steps, embedding_scale, device, text_mask)
3045 text_mask = length_to_mask(input_lengths).to(device)
3047 # Text encoding
-> 3048 t_en = self.text_encoder(tokens, input_lengths, text_mask)
3050 # Duration prediction with BERT and subsequent processing
3051 bert_dur = self.bert(tokens, attention_mask=(~text_mask).int())
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:3055, in torch_dynamo_resume_in_forward_at_3055(___stack0, self, input_lengths, ref_s, alpha, beta, device, text_mask, t_en, d_en)
3052 d_en = self.bert_encoder(bert_dur).transpose(-1, -2)
3054 # Sample from the latent space using diffusion
-> 3055 s_pred = self.sampler(
3056 noise=torch.randn((1, 256)).unsqueeze(1).to(device),
3057 embedding=bert_dur,
3058 embedding_scale=embedding_scale,
3059 features=ref_s,
3060 num_steps=diffusion_steps
3061 ).squeeze(1)
3063 # Split latent variables into `s` and `ref`
3064 s = s_pred[:, 128:]
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:3104, in torch_dynamo_resume_in_forward_at_3078(___stack0, self, input_lengths, device, t_en, s, ref)
3101 asr = asr_new
3103 # Decoder output
-> 3104 out = self.decoder(asr, F0_pred, N_pred, ref.squeeze().unsqueeze(0))
3106 return out.squeeze()
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:2826, in Decoder.forward(self, asr, F0_curve, N, s)
2818 self.asr_res = nn.Sequential(
2819 weight_norm(nn.Conv1d(512, 64, kernel_size=1)),
2820 )
2823 self.generator = Generator(style_dim, resblock_kernel_sizes, upsample_rates, upsample_initial_channel, resblock_dilation_sizes, upsample_kernel_sizes)
-> 2826 def forward(self, asr, F0_curve, N, s):
2827 if self.training:
2828 downlist = [0, 3, 7]
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:2759, in Generator.forward(self, x, s, f0)
2756 self.ups.apply(init_weights)
2757 self.conv_post.apply(init_weights)
-> 2759 def forward(self, x, s, f0):
2761 f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t
2763 har_source, noi_source, uv = self.m_source(f0)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:2695, in SourceModuleHnNSF.forward(self, x)
2692 self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
2693 self.l_tanh = torch.nn.Tanh()
-> 2695 def forward(self, x):
2696 """
2697 Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
2698 F0_sampled (batchsize, length, 1)
2699 Sine_source (batchsize, length, 1)
2700 noise_source (batchsize, length 1)
2701 """
2702 # source for harmonic branch
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:2632, in SineGen.forward(self, f0)
2629 sines = torch.cos(i_phase * 2 * np.pi)
2630 return sines
-> 2632 def forward(self, f0):
2633 """ sine_tensor, uv = forward(f0)
2634 input F0: tensor(batchsize=1, length, dim=1)
2635 f0 for unvoiced steps should be 0
2636 output sine_tensor: tensor(batchsize=1, length, dim)
2637 output uv: tensor(batchsize=1, length, 1)
2638 """
2639 f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim,
2640 device=f0.device)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:2557, in SineGen._f02sine(self, f0_values)
2554 uv = (f0 > self.voiced_threshold).type(torch.float32)
2555 return uv
-> 2557 def _f02sine(self, f0_values):
2558 """ f0_values: (batchsize, length, dim)
2559 where dim indicates fundamental tone and overtones
2560 """
2561 # convert to F0 in rad. The interger part n can be ignored
2562 # because 2 * np.pi * n doesn't affect phase
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:2585, in torch_dynamo_resume_in__f02sine_at_2585(___stack0, self, rad_values)
2571 # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad)
2572 if not self.flag_for_pulse:
2573 # # for normal case
2574
(...)
2583
2584 # phase = torch.cumsum(rad_values, dim=1) * 2 * np.pi
-> 2585 print(type(self.upsample_scale), self.upsample_scale, "self.upsample_scale")
2586 print(1/self.upsample_scale, type(1/self.upsample_scale), "1/self.upsample_scale")
2587 rad_values = torch.nn.functional.interpolate(rad_values.transpose(1, 2),
2588 scale_factor=1/self.upsample_scale,
2589 mode="linear").transpose(1, 2)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/StyleTTS2/Models/StyleTTS2Catalan/models.py:2586, in torch_dynamo_resume_in__f02sine_at_2586(___stack0, self, rad_values)
2572 if not self.flag_for_pulse:
2573 # # for normal case
2574
(...)
2583
2584 # phase = torch.cumsum(rad_values, dim=1) * 2 * np.pi
2585 print(type(self.upsample_scale), self.upsample_scale, "self.upsample_scale")
-> 2586 print(1/self.upsample_scale, type(1/self.upsample_scale), "1/self.upsample_scale")
2587 rad_values = torch.nn.functional.interpolate(rad_values.transpose(1, 2),
2588 scale_factor=1/self.upsample_scale,
2589 mode="linear").transpose(1, 2)
2590 print(rad_values.shape)
File ~/Desktop/dev/aiml/audio/StyleTTS2-SageMaker/.venv/lib/python3.12/site-packages/torch/nn/functional.py:4559, in interpolate(input, size, scale_factor, mode, align_corners, recompute_scale_factor, antialias)
4557 if input.dim() == 3 and mode == "linear":
4558 assert align_corners is not None
-> 4559 return torch._C._nn.upsample_linear1d(
4560 input, output_size, align_corners, scale_factors
4561 )
4562 if input.dim() == 4 and mode == "bilinear":
4563 assert align_corners is not None
TypeError: upsample_linear1d() received an invalid combination of arguments - got (Tensor, NoneType, bool, list), but expected one of:
* (Tensor input, tuple of ints output_size, bool align_corners, tuple of floats scale_factors)
didn't match because some of the arguments have invalid types: (Tensor, !NoneType!, bool, !list of [numpy.ndarray]!)
* (Tensor input, tuple of ints output_size, bool align_corners, float scales = None, *, Tensor out = None)
```
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: 18.1.3 (1ubuntu1)
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050 Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 6800H with Radeon Graphics
CPU family: 25
Model: 68
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 53%
CPU max MHz: 4785.0000
CPU min MHz: 400.0000
BogoMIPS: 6388.18
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch-model-archiver==0.12.0
[pip3] torchaudio==2.5.1
[pip3] torchserve==0.12.0
[pip3] triton==3.1.0
[conda] Could not collect
cc @mruberry @rgommers @chauhang @penguinwu
|
triaged,module: numpy,oncall: pt2
|
low
|
Critical
|
2,683,098,350 |
bitcoin
|
Add support for creating v3 raw transactions in `createrawtransaction` RPC
|
### Please describe the feature you'd like to see added.
Currently, the `createrawtransaction` RPC creates only v2 raw transactions, i.e. the first byte of the serialised transaction hex is `02`. It would be helpful for the RPC to conditionally create v3 raw transactions when that intent is passed in the arguments of the RPC call.
### Is your feature related to a problem, if so please describe it.
_No response_
### Describe the solution you'd like
- A new argument could be added to the RPC call that allows the user to create v3 raw transactions conditionally. It could be either a `bool isV3` or an `int version` (with sanity checks), whichever is most compatible with the current RPC nomenclature (a hypothetical call sketch follows this list).
- Default transaction version need not change as part of this solution.
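To make the proposal concrete, here is a purely hypothetical sketch of what a client-side call could look like if an optional `version` argument were added. The `version` field does not exist in the current RPC, the placeholder values are illustrative, and the snippet assumes the `serde_json` crate:
```rust
// Hypothetical request body for the proposed RPC change; `version` is the
// new optional argument (defaulting to 2 when omitted), not an existing
// createrawtransaction parameter.
use serde_json::json;

fn main() {
    let request = json!({
        "jsonrpc": "2.0",
        "id": "create-v3-tx",
        "method": "createrawtransaction",
        "params": {
            "inputs": [{ "txid": "<txid>", "vout": 0 }],
            "outputs": [{ "<address>": 0.001 }],
            "version": 3
        }
    });
    // Print the JSON-RPC payload that would be POSTed to bitcoind.
    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}
```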
### Describe any alternatives you've considered
_No response_
### Please leave any additional context
Need to double-check that v3 raw transactions conform with other RPCs, though I didn't notice any issue signing and broadcasting v3 transactions after manually updating the hex while testing TRUC transactions for the [V28 Testing Guide](https://github.com/bitcoin-core/bitcoin-devwiki/wiki/28.0-Release-Candidate-Testing-Guide#2-v3-transactions-truc).
|
Feature
|
low
|
Minor
|
2,683,150,077 |
PowerToys
|
Fancyzones layout - Snap zone to existing window on the desktop.
|
### Description of the new feature / enhancement
A method of editing a zone to exactly match the edges of a window on the desktop.
Maybe a key combo while dragging a zone could highlight the window under the cursor, and dropping the zone would then snap it to that window's dimensions.
Maybe an option to snap the edge of a zone to the window edges of desktop windows.
### Scenario when this would be used?
I have a specific app that I have sized to the exact size that I need.
I cannot get a zone to exactly match that window size.
The dimensions reported by FancyZones do not match the dimensions on the desktop, possibly due to desktop scaling.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,683,152,024 |
rust
|
`AtomicUsize::from_mut` has incorrect alignment requirements in docs
|
### Location
[`core::sync::atomic::AtomicUsize::from_mut`](https://doc.rust-lang.org/nightly/core/sync/atomic/struct.AtomicUsize.html#method.from_mut)
[`core::sync::atomic::AtomicIsize::from_mut`](https://doc.rust-lang.org/nightly/core/sync/atomic/struct.AtomicIsize.html#method.from_mut)
### Summary
The documentation for [`AtomicUsize::from_mut`](https://doc.rust-lang.org/nightly/core/sync/atomic/struct.AtomicUsize.html#method.from_mut) claims that it's only available when `usize` has alignment 8, but in reality it's available when `usize` has alignment equal to its size (AFAICT), which is different on non-64-bit platforms. For example, this code compiles on 32-bit Linux:
```rust
#![feature(atomic_from_mut)]
use std::sync::atomic::AtomicUsize;
pub fn main(){
let mut x: usize = 0;
assert_eq!(std::mem::size_of_val(&x), 4);
let _y: &mut AtomicUsize = AtomicUsize::from_mut(&mut x);
}
```
[Godbolt](https://godbolt.org/z/jaEev1Wos)
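For what it's worth, a quick way to see this (my own sketch, not taken from the docs): as far as I can tell, `from_mut` only needs `usize` and `AtomicUsize` to have the same alignment, and on 32-bit targets both are 4 rather than 8.
```rust
use std::mem::align_of;
use std::sync::atomic::AtomicUsize;

fn main() {
    // On 32-bit x86 both lines print 4; on x86_64 both print 8.
    // AFAICT from_mut is gated on these two matching, not on the value 8.
    println!("align_of::<usize>()       = {}", align_of::<usize>());
    println!("align_of::<AtomicUsize>() = {}", align_of::<AtomicUsize>());
}
```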
|
E-easy,E-help-wanted,A-docs,T-libs
|
low
|
Minor
|
2,683,231,383 |
deno
|
Vitest Coverage v8 & Istanbul Providers Broken
|
Relates to #23882.
Version: Deno 2.1.1
The v8 provider returns 0 for everything.
The istanbul provider is just wrong:
<img width="977" alt="Screenshot 2024-11-22 at 7 25 54 AM" src="https://github.com/user-attachments/assets/7cf53583-32ba-47ca-a10f-095c1f32ad32">
<img width="492" alt="Screenshot 2024-11-22 at 7 26 30 AM" src="https://github.com/user-attachments/assets/e532eec3-ce1c-4bee-ac13-b590d5ad82fc">
<img width="552" alt="Screenshot 2024-11-22 at 7 28 21 AM" src="https://github.com/user-attachments/assets/71ab738e-c354-402f-b681-2d2291a33edd">
I think istanbul is mostly just mapping the line numbers wrong, but it generally looks accurate-ish.
|
debugger,testing,node compat
|
low
|
Critical
|
2,683,249,373 |
electron
|
Exiting GPU process due to errors during initialization
|
### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.0
### What operating system(s) are you using?
Ubuntu
### Operating System Version
20.04.6
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
No error being outputted in the console
### Actual Behavior
When running `electron .` in the project folder, this is the output:
```
> [email protected] start
> electron .
[8642:1122/134210.699969:ERROR:viz_main_impl.cc(181)] Exiting GPU process due to errors during initialization
```
### Testcase Gist URL
_No response_
### Additional Information
The system is a freshly created Ubuntu VM connecting using remote desktop.
The app is based on this repo: https://github.com/serialport/electron-serialport/
The application had originally been built (4 years ago) on a Windows PC that has since been formatted, and I have been asked to try to create a new build environment. I started from scratch using the repo mentioned above and imported the app's logic to avoid issues caused by obsolete versions of the libraries used.
The weird thing is that the window opens and loads the content as expected (it just loads a URL). I am not sure what this error is even referring to, as the message doesn't give more details.
Can someone at least explain what this error is about so I can further investigate?
|
platform/linux,bug :beetle:
|
low
|
Critical
|
2,683,275,841 |
rust
|
Confusing error message when using re-exported struct [E0423]
|
### Code
```Rust
pub use my_mod::MyStruct; // this pub use is causing the problem
mod my_mod {
#[derive(Debug)]
pub struct MyStruct(u32);
mod my_sub_mod {
use crate::MyStruct; // import the rexported struct
fn my_func() {
let s = MyStruct(42);
println!("MyStruct: {:?}", s);
}
}
}
```
### Current output
```Shell
error[E0423]: expected function, tuple struct or tuple variant, found struct `MyStruct`
--> src/lib.rs:11:21
|
11 | let s = MyStruct(42);
| ^^^^^^^^
For more information about this error, try `rustc --explain E0423`.
```
### Desired output
```Shell
Either no error at all, or an error that says the import is the problem
```
### Rust Version
```Shell
rustc 1.81.0
```
### Anything else?
This is a really confusing message that does not point at the actual problem (the `pub use` re-export).
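For reference, a minimal workaround sketch (my addition, based on my reading that the crate-root `pub use` can only re-export the type, because the tuple-struct constructor's visibility is limited by the private `u32` field): importing through the defining module instead of the re-export makes the constructor usable again.
```rust
pub use my_mod::MyStruct;

mod my_mod {
    #[derive(Debug)]
    pub struct MyStruct(u32);

    mod my_sub_mod {
        // Import from the defining module (`super`) rather than the
        // crate-root re-export: this brings in the constructor, which is
        // only visible inside `my_mod` because the field is private.
        use super::MyStruct;

        fn my_func() {
            let s = MyStruct(42);
            println!("MyStruct: {:?}", s);
        }
    }
}
```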
|
A-diagnostics,T-compiler
|
low
|
Critical
|
2,683,276,128 |
next.js
|
Relay multi-project configuration doesn't work
|
### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/crimson-frog-go8s7s
### To Reproduce
Provide a Relay configuration that makes use of the multi-project format:
```js
module.exports = {
root: ".",
// For simplicity I'm defining just one project but it doesn't make any difference
sources: {
".": "repro",
},
excludes: ["**/node_modules/**", "**/__mocks__/**", "**/__generated__/**"],
projects: {
"repro": {
output: "__generated__",
language: "typescript",
schema: "schema.graphql",
},
},
};
```
Start the app (either in dev mode or as a build) and observe that the Relay integration doesn't work.
### Current vs. Expected behavior
```
⚠ Invalid next.config.mjs options detected:
⚠ "compiler.relay.src" is missing, expected string
⚠ See more info here: https://nextjs.org/docs/messages/invalid-next-config
```
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.4-canary.23 // Latest available version is detected (15.0.4-canary.23).
eslint-config-next: N/A
react: 19.0.0-rc-380f5d67-20241113
react-dom: 19.0.0-rc-380f5d67-20241113
typescript: 5.3.3
Next.js Config:
output: N/A
Done in 3.69s.
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_
|
bug
|
low
|
Minor
|