id | repo | title | body | labels | priority | severity |
---|---|---|---|---|---|---|
2,756,468,103 | rust | Tracking issue for release notes of #83163: Tracking Issue for const_swap |
This issue tracks the release notes text for #83163.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Const stabilized APIs
- [`std::mem::swap`](https://doc.rust-lang.org/stable/std/mem/fn.swap.html)
- [`std::ptr::swap`](https://doc.rust-lang.org/stable/std/ptr/fn.swap.html)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @usbalbin -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,relnotes-tracking-issue | low | Minor |
2,756,472,568 | transformers | AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels' still has no clear guide around | ### System Info
Python 3.11.10, transformers 4.47.0
### Who can help?
@stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Trying to train by using
`from transformers import AutoFeatureExtractor
feature_extractor = AutoFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")`
as the feature extractor, and I keep getting `AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'`.
I found [this issue](https://github.com/huggingface/transformers/issues/25801), which said the docs should be repaired, but I still haven't found a solution by reading the linked docs. Is this still a supported feature, or should I move to another feature extractor?
### Expected behavior
The solution for `AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'` should be
`feature_extractor = AutoFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512", do_reduce_labels=True)`
according to [this issue](https://github.com/huggingface/transformers/issues/25801), but the problem persists.
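The attribute was renamed from `reduce_labels` to `do_reduce_labels` on the newer image processors, so a `compute_metrics` that hardcodes the old name breaks. A minimal defensive sketch (the stand-in classes below are hypothetical, only there to illustrate the lookup; in real code the argument would be the loaded feature extractor/image processor):

```python
def get_reduce_labels(extractor):
    # Prefer the new attribute name, fall back to the old one, default to False.
    return getattr(extractor, "do_reduce_labels",
                   getattr(extractor, "reduce_labels", False))

class NewStyleProcessor:  # hypothetical stand-in for a current image processor
    do_reduce_labels = True

class OldStyleProcessor:  # hypothetical stand-in for the deprecated extractor
    reduce_labels = True

print(get_reduce_labels(NewStyleProcessor()))  # True
print(get_reduce_labels(OldStyleProcessor()))  # True
```

With that helper, the failing `reduce_labels=feature_extractor.reduce_labels` line in `compute_metrics` becomes `reduce_labels=get_reduce_labels(feature_extractor)`, which works with both old and new versions.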
Edit 2:
Here is the complete error message; I ran the code again to capture it in full.
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[158], line 1
----> 1 trainer.train()
2 trainer.push_to_hub()
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:2155, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
2152 try:
2153 # Disable progress bars when uploading models during checkpoints to avoid polluting stdout
2154 hf_hub_utils.disable_progress_bars()
-> 2155 return inner_training_loop(
2156 args=args,
2157 resume_from_checkpoint=resume_from_checkpoint,
2158 trial=trial,
2159 ignore_keys_for_eval=ignore_keys_for_eval,
2160 )
2161 finally:
2162 hf_hub_utils.enable_progress_bars()
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:2589, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2587 self.state.epoch = epoch + (step + 1 + steps_skipped) / steps_in_epoch
2588 self.control = self.callback_handler.on_step_end(args, self.state, self.control)
-> 2589 self._maybe_log_save_evaluate(
2590 tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time
2591 )
2592 else:
2593 self.control = self.callback_handler.on_substep_end(args, self.state, self.control)
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:3047, in Trainer._maybe_log_save_evaluate(self, tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time)
3045 metrics = None
3046 if self.control.should_evaluate:
-> 3047 metrics = self._evaluate(trial, ignore_keys_for_eval)
3048 is_new_best_metric = self._determine_best_metric(metrics=metrics, trial=trial)
3050 if self.args.save_strategy == SaveStrategy.BEST:
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:3001, in Trainer._evaluate(self, trial, ignore_keys_for_eval, skip_scheduler)
3000 def _evaluate(self, trial, ignore_keys_for_eval, skip_scheduler=False):
-> 3001 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
3002 self._report_to_hp_search(trial, self.state.global_step, metrics)
3004 # Run delayed LR scheduler now that metrics are populated
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:4051, in Trainer.evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)
4048 start_time = time.time()
4050 eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop
-> 4051 output = eval_loop(
4052 eval_dataloader,
4053 description="Evaluation",
4054 # No point gathering the predictions if there are no metrics, otherwise we defer to
4055 # self.args.prediction_loss_only
4056 prediction_loss_only=True if self.compute_metrics is None else None,
4057 ignore_keys=ignore_keys,
4058 metric_key_prefix=metric_key_prefix,
4059 )
4061 total_batch_size = self.args.eval_batch_size * self.args.world_size
4062 if f"{metric_key_prefix}_jit_compilation_time" in output.metrics:
File c:\Users\Lenovo\miniconda3\envs\pretrain-huggingface\Lib\site-packages\transformers\trainer.py:4340, in Trainer.evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
4338 eval_set_kwargs["losses"] = all_losses if "loss" in args.include_for_metrics else None
4339 eval_set_kwargs["inputs"] = all_inputs if "inputs" in args.include_for_metrics else None
-> 4340 metrics = self.compute_metrics(
4341 EvalPrediction(predictions=all_preds, label_ids=all_labels, **eval_set_kwargs)
4342 )
4343 elif metrics is None:
4344 metrics = {}
Cell In[156], line 27, in compute_metrics(eval_pred)
19 pred_labels = logits_tensor.detach().cpu().numpy()
20 # currently using _compute instead of compute
21 # see this issue for more info: https://github.com/huggingface/evaluate/pull/328#issuecomment-1286866576
22 metrics = metric._compute(
23 predictions=pred_labels,
24 references=labels,
25 num_labels=num_labels,
26 ignore_index=0,
---> 27 reduce_labels=feature_extractor.reduce_labels,
28 )
30 # add per category metrics as individual key-value pairs
31 per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
AttributeError: 'SegformerFeatureExtractor' object has no attribute 'reduce_labels'
``` | bug,Vision | low | Critical |
2,756,488,170 | kubernetes | File name too long | ### What happened?
Hi guys,
I need your help with something really weird. I have JupyterHub in an Openshift production environment and one of my users created a notebook with a really long name. Everything was fine, until the next day when he failed to initialize his pod due to this error:

So if you can help me with this, I would really appreciate it.
### What did you expect to happen?
Start as usual
### How can we reproduce it (as minimally and precisely as possible)?
Create a file with a very long name
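For context, "File name too long" (`ENAMETOOLONG`) is a filesystem limit rather than something Kubernetes itself imposes; on most Linux filesystems a single path component is capped at 255 bytes, and volume mount paths built from the notebook name can hit it. A quick way to observe the limit (assuming a Linux shell):

```shell
# NAME_MAX is the per-component filename limit (typically 255 on Linux).
getconf NAME_MAX /

# A 300-character name exceeds it and fails with ENAMETOOLONG:
touch "$(printf 'a%.0s' $(seq 300))" 2>&1 | head -n 1
```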
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
v4.5.7
```
</details>
### Cloud provider
ARO ~ Azure Red Hat OpenShift
### OS version
```console
# On Linux:
$ cat /etc/os-release
# "Ubuntu 22.04.1 LTS"
$ uname -a
# Linux UBUNTU-MNGR 6.8.0-1018-azure #21~22.04.1-Ubuntu SMP Fri Nov 8 00:21:25 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
I'm using:
JupyterHub: 4.0.2
Image: datascience-notebook:python-3.11.7
Openshift: 4.15.27
Kubernetes: v1.28.11+add48d0
Helm chart: https://hub.jupyter.org/helm-chart version 3.2.1
``` | kind/support,needs-sig,needs-triage | low | Critical |
2,756,489,130 | next.js | Error importing pure TS packages (with Bun, Turbo, or neither) | ### Link to the code that reproduces this issue
https://github.com/ctjlewis/ts-package-example
### To Reproduce
Repro here:
https://github.com/ctjlewis/nextjs-ts-package
It attempts to load this simple mock package:
https://github.com/ctjlewis/ts-package-example
---
Without Bun, Turbo throws:
```
⨯ ./node_modules/ts-package-example/index.ts
Module parse failed: Unexpected token (1:7)
> export type GoodNextjsSupport = false;
| export type PossibleSupport = boolean;
|
Import trace for requested module:
./node_modules/ts-package-example/index.ts
./app/page.tsx
```
When Bun is used *with* Turbo (`bun --bun next dev --turbo`), we get a segfault from Bun:
```
Bun v1.1.41 ([`b8f28ed`](<https://github.com/oven-sh/bun/tree/b8f28ed8afd1c2b60568b2b0158d39a674178027>)) on macos aarch64 [AutoCommand]
Segmentation fault at address 0x4000000002B
- *10 unknown/js code*
<!-- from bun.report: etViGWlalFX_NMfe5SXL -->
<sub>Sentry Issue: <strong><a href="https://bun-p9.sentry.io/issues/5298323857/">BUN-85</a></strong></sub>
```
### Current vs. Expected behavior
It's important that the module resolves and loads correctly. It is effectively the simplest possible pure-TS package.
Is it maybe the `module` export? I will try with the `exports` field after this, but either way the `module` entrypoint is the simplest config.
I doubt it is, since it seems to throw on the `type` keyword; is it actually expecting JS from all `node_modules`?
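For the webpack error specifically, a commonly suggested workaround (not verified against this exact repro) is to tell Next.js to transpile the TS-only package via `transpilePackages`:

```javascript
// next.config.js (sketch; package name taken from the repro above)
/** @type {import('next').NextConfig} */
const nextConfig = {
  transpilePackages: ['ts-package-example'],
};

module.exports = nextConfig;
```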
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:05:14 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 21.0.0
npm: 10.2.0
Yarn: 1.22.19
pnpm: 9.7.1
Relevant Packages:
next: 15.1.1-canary.17 // Latest available version is detected (15.1.1-canary.17).
eslint-config-next: 15.1.2
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Module Resolution, Runtime, Turbopack, TypeScript, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local), Vercel (Deployed), Other (Deployed)
### Additional context
cc @/Jarred-Sumner for Bun crash, cc @/jaredpalmer for Turbopack, cc @/leerob for coordination.
(Config issue: Tags removed.) | Webpack,TypeScript,Runtime,Turbopack,Module Resolution | low | Critical |
2,756,509,988 | godot | RichTextLabel (RTL): add_text(), append_text(), add_image() (and probably others) do not work on orphan RTL | ### Tested versions
Reproducible in: v4.4.dev7.official [46c8f8c5c], v4.3.stable.official [77dcf97d8]
### System information
Godot v4.4.dev7 - Windows 11 (build 22631) - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce MX450 (NVIDIA; 32.0.15.5613) - 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (8 threads)
### Issue description
add_text(), append_text(), add_image() and probably other similar functions do not work on orphan RichTextLabel. Setting the text property directly works.
In the Minimal reproduction project, I only used append_text(), but in my current project in v4.3.stable.official add_image() doesn't work either.
I think it's a bug because I didn't find anything that explains why it behaves like that.
It's pretty simple to bypass by calling add_child() before the add_*() calls. Deferring the add_*() calls works too. However, I don't think this is the expected behavior.
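For reference, the workaround described above looks like this (sketch; `rtl_scene` is an assumed preload of the RichTextLabel scene):

```gdscript
var rtl := rtl_scene.instantiate()
# Workaround: add the node to the tree *before* calling add_text()/add_image().
add_child(rtl)
rtl.append_text("hello")

# Or, on an orphan node, defer the call until after it enters the tree:
# rtl.call_deferred("append_text", "hello")
```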
<img width="682" alt="Image" src="https://github.com/user-attachments/assets/c626a384-0b45-4619-839b-14d329a00e01" />
### Steps to reproduce
- Create a RichTextLabel by code (instantiating a scene or creating a new RTL with enough size)
- Add some text using add_text() or append_text()
- Add the RTL to the tree with add_child()
Can be done on a brand new project. I added a Minimal reproduction project anyway.
### Minimal reproduction project (MRP)
[orphan_RTL_test.zip](https://github.com/user-attachments/files/18232032/orphan_RTL_test.zip)
| bug,needs testing,topic:gui | low | Critical |
2,756,536,640 | flutter | [web] --optimization-level 4 breaks web apps on Safari | When building my Flutter web application with `--optimization-level=4` (the default), Safari only displays a gray box, whereas Chrome and Firefox show the application correctly. Using `--optimization-level=2` resolves the issue for Safari, suggesting that the optimization level might be causing the bug.
### Steps to Reproduce
1. Visit Non-Working Build (https://bug-report-non-workung.pages.dev/)
- Compiled with: `flutter build web --release --no-web-resources-cdn --optimization-level=4`(default)
- Chrome/Firefox: The app runs successfully.
- Safari: The app shows only a gray box (error).
2. Visit Working Build (https://bug-report-working.pages.dev/)
- Compiled with: `flutter build web --release --no-web-resources-cdn --optimization-level=2`
- Chrome/Firefox: The app runs successfully.
- Safari: The app also runs successfully.
> Note: I was unable to isolate a minimal reproducible example, but I'm happy to attempt it if given further guidance.
### Expected results
Using `--optimization-level=4` (the default) should produce a web build that works on Safari, just like it does in Chrome and Firefox.
### Actual results
With `--optimization-level=4`, the web app fails to render on Safari (displays a gray box), even though it works correctly on Chrome and Firefox.
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="1310" alt="Screenshot 2024-12-23 at 19 23 25" src="https://github.com/user-attachments/assets/6a6cfdb9-3486-445f-9058-b1db334215a8" />
<img width="760" alt="Screenshot 2024-12-23 at 19 23 05" src="https://github.com/user-attachments/assets/0eccc17b-f747-4ecd-b085-1b53cc0435c6" />
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.27.1, on macOS 14.6.1 23G93 darwin-arm64, locale en-DE)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.3)
[✓] Android Studio (version 2021.3)
[✓] Android Studio (version 2022.2)
[✓] IntelliJ IDEA Community Edition (version 2024.2.3)
[✓] VS Code (version 1.95.3)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| in triage | low | Critical |
2,756,543,036 | PowerToys | For the color picker, display the names of at least the X11 color names | ### Description of the new feature / enhancement
Red
### Scenario when this would be used?
Crimson
Brown
Coral
...
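A nearest-name lookup over the X11 table is straightforward; a minimal sketch (only a few of the ~140 names shown, values from the linked Wikipedia table):

```python
import math

# Illustrative subset of the X11 color table (name -> sRGB triple).
X11_COLORS = {
    "red": (255, 0, 0),
    "crimson": (220, 20, 60),
    "brown": (165, 42, 42),
    "coral": (255, 127, 80),
}

def nearest_x11_name(rgb):
    # Euclidean distance in RGB space; good enough for a display hint.
    return min(X11_COLORS, key=lambda name: math.dist(X11_COLORS[name], rgb))

print(nearest_x11_name((250, 5, 5)))    # red
print(nearest_x11_name((210, 25, 65)))  # crimson
```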
### Supporting information
https://en.wikipedia.org/wiki/X11_color_names | Needs-Triage | low | Minor |
2,756,569,298 | godot | Weird (choppy?) rendering of linear movement | ### Tested versions
- Ocurred in v4.4.dev7.official [46c8f8c5c]
### System information
Godot v4.4.dev7 - Linux Mint 22 (Wilma) on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Forward+) - integrated Intel(R) Graphics (ADL GT2) - 12th Gen Intel(R) Core(TM) i7-12700H (20 threads)
### Issue description
On the sample scene (see MRP) there are a few meshes that moves to the left with constant speed and a static (immovable) camera that observes given meshes.
The movement itself is dead simple:
```gdscript
func _process(delta: float) -> void:
for child in get_children():
child.position.x -= move_speed * delta
```
The problem is that the movement appears choppy/ragged despite a stable frame rate. This can be observed with any VSync mode and persists across different monitors, refresh rates, and PCs. A friend of mine also observed it on Win10 when launching the MRP with the same build (4.4 dev7 official).
But after I came back from work and launched the MRP again, it had somehow fixed itself.
### Steps to reproduce
Just launch MRP and see trees / checkerboard moves in uncanny way. Or not.
Recording 1: MRP w/ ragged movement (compressed by reddit):
https://github.com/user-attachments/assets/d561302b-5187-4c9c-895f-5c506c172405
On this recording monitor supports refresh rate of 75Hz hence the 75 FPS. Without VSync it would be about 500 FPS so I think low/unstable frame rate is not the problem here.
Recoring 2: MRP I recorded just now:
- top-left: MRP running so smooth;
- bottom-right: MRP running choppy (this is the only recording I had on this PC so it is a bit different from actual MRP but shows exactly this problem);
- on this recording monitor supports refresh rate of 60Hz hence the 60 FPS.
- video: https://youtu.be/ddn8dyUDDMA
### Minimal reproduction project (MRP)
[2024_12_train.zip](https://github.com/user-attachments/files/18232384/2024_12_train.zip)
| needs testing,topic:3d | low | Minor |
2,756,573,663 | go | go/types: missing error on an edge-case with continue/break with labels | Once again, because i am poking around with `go/types` internals, i have spotted a bug in the type-checker.
The code below type-checks without any error:
```go
package test
func test() {
outer:
for range "aa" {
break outer
}
for range "aa" {
break outer
}
}
```
This happens because `lstmt`:
https://github.com/golang/go/blob/b9955f0ad952a22388eead15e3d15610a29e03a0/src/go/types/labels.go#L169
is assigned and never cleared, thus:
https://github.com/golang/go/blob/b9955f0ad952a22388eead15e3d15610a29e03a0/src/go/types/labels.go#L234-L236
is executed with a wrong label (for the second for statement).
CC @adonovan @griesemer | NeedsFix | low | Critical |
2,756,578,352 | rust | Is StdoutLock guranteed to be reentrant? | ### Location
https://doc.rust-lang.org/std/io/struct.Stdout.html#method.lock
https://doc.rust-lang.org/std/io/struct.Stderr.html#method.lock
https://doc.rust-lang.org/std/io/struct.Stdin.html#method.lock
### Summary
Currently code like this is fine and a convenient way to lock stdout to prevent other threads from interrupting while printing is happening (this is especially useful when the printing happens deep inside some library/component, where `StdoutLock` can't be easily passed as an argument):
```rust
let stdout = io::stdout().lock();
println!("a");
println!("b");
println!("c");
drop(stdout);
```
As far as I understand, this works because `Stdout::lock()` is reentrant. However, the documentation does not mention that `Stdout::lock()` is reentrant.
Is `Stdout::lock()` guaranteed to be reentrant in all future Rust versions, or is this an implementation detail that should not be relied on?
Either way it would be nice if the documentation of `Stdout::lock()` made it clear what the expected behavior is and how much of it is an implementation detail.
The same question also applies to `Stderr::lock()` and `Stdin::lock()`. | T-libs-api,A-docs | low | Minor |
2,756,583,306 | go | x/tools/gopls: report if accepting a completion item will result in auto-import | ### gopls version
0.17.1
### go env
```shell
..
```
### What did you do?
"os" is not yet imported:
<img width="888" alt="image" src="https://github.com/user-attachments/assets/447f0444-ccd4-48ef-b999-2eaa53dd7a70" />
however, strings are already imported:
<img width="917" alt="image" src="https://github.com/user-attachments/assets/8b1a86f9-7311-4470-b525-8740a6a06415" />
### What did you see happen?
They looks the same.
### What did you expect to see?
Show a hint for whether a package is going to be auto-imported. Some package names look similar; I want to use an already-imported one, but I often find myself accepting the wrong name and therefore need to undo the auto-import.
In clangd, there is a little circle indicating this:
<img width="915" alt="image" src="https://github.com/user-attachments/assets/985723dd-3e34-4646-ac44-6658c9f72b2a" />
after accepting `printf`, the circle disappears, because `stdio.h` is auto-imported
<img width="874" alt="image" src="https://github.com/user-attachments/assets/8e0ce01c-fa24-45ac-94c4-a72cd4b2a3be" />
### Editor and settings
_No response_
### Logs
_No response_ | FeatureRequest,gopls,Tools | low | Minor |
2,756,584,063 | PowerToys | Workspaces launches Tor Browser instead of Firefox if both are installed. | ### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Workspaces, even in version 0.87.1, cannot tell the difference between the Tor Browser and Firefox if both are installed. It used to be able to do so before somewhere around version 0.84.
If both are installed, when it tries to launch Firefox, it launches Tor instead. The latest versions of both browsers are installed.
### ✔️ Expected Behavior
When you pick Firefox, it should launch Firefox and not the Tor browser.
### ❌ Actual Behavior
It launches the Tor browser instead of the Firefox browser.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response,Product-Workspaces | low | Major |
2,756,589,573 | flutter | Re-enable network access for AVDs | The `video_player` driver test `asset videos live stream duration != 0` is failing (see failure: https://ci.chromium.org/ui/p/flutter/builders/prod/Linux_android%20android_platform_tests_shard_6%20master/1962/overview) because https://flutter-review.googlesource.com/c/recipes/+/61440 removed the flag that enables network access. In the long run, this should be re-enabled on a per-test basis (see https://github.com/flutter/flutter/issues/160691), but https://flutter-review.googlesource.com/c/recipes/+/61660 should be landed for now to un-skip the `asset videos live stream duration != 0` test. | platform-android,P1,team-android,triaged-android | medium | Critical |
2,756,610,954 | flutter | [ios]Make engine scenario tests depend on actual frameworks | ### Use case
The engine scenario tests rely on fake commands sent to the engine, because we couldn't use the framework, which was in a separate repo. This setup doesn't give us enough confidence since it's not testing the whole pipeline. Furthermore, it gives us "false confidence", which IMO is even worse than no confidence at all.
### Proposal
Since the two repos are merged, we should have no problem using the actual framework in those scenario tests. | a: tests,platform-ios,P2,team-ios,triaged-ios | low | Minor |
2,756,622,393 | flutter | `flutter build apk --config-only` prints `Config complete` right before the command exits, which doesn't match other commands | ```
$ flutter build apk --config-only
Downloading android-arm-profile/darwin-x64 tools... 287ms
Downloading android-arm-release/darwin-x64 tools... 134ms
Downloading android-arm64-profile/darwin-x64 tools... 142ms
Downloading android-arm64-release/darwin-x64 tools... 132ms
Downloading android-x64-profile/darwin-x64 tools... 125ms
Downloading android-x64-release/darwin-x64 tools... 123ms
Config complete.
```
The "Config complete." part is a but odd, since the tool doesn't announce other commands are "complete", including `flutter build ios --config-only` or `flutter build macos --config-only`
This is a really tiny nit, and this command isn't used by most flutter devs.
Suggest removing this log line:
https://github.com/flutter/flutter/blob/5c7d9d01bae7b279b7d95a37577f782c51847d5b/packages/flutter_tools/lib/src/android/gradle.dart#L455
Will have to detect in another way that the `config-only` logic stopped there:
https://github.com/flutter/flutter/blob/5c7d9d01bae7b279b7d95a37577f782c51847d5b/packages/flutter_tools/test/integration.shard/flutter_build_config_only_test.dart#L54 | tool,P3,team-android,triaged-android | low | Minor |
2,756,648,561 | neovim | defaults: `q` should close floating windows | ### Problem
Within the plugin ecosystem, `q` (in normal mode) is a common mapping to close floating windows. It's also used within core, to close the LSP Hover window.
Usually, floating windows aren't modifiable, so having `q` mapped to recording a macro isn't all that useful.
### Expected behavior
`q` should be mapped to `:quit` for floats. | enhancement,defaults,needs:discussion | low | Major |
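A userspace sketch of the proposed default (assuming the usual check that a window whose config has a non-empty `relative` field is floating):

```lua
vim.api.nvim_create_autocmd("WinEnter", {
  callback = function()
    -- A window is floating iff its config has a non-empty `relative`.
    if vim.api.nvim_win_get_config(0).relative ~= "" then
      vim.keymap.set("n", "q", "<Cmd>quit<CR>", { buffer = true, nowait = true })
    end
  end,
})
```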
2,756,668,817 | ui | [bug]: Add button component with yarn fails | ### Describe the bug
npx shadcn@latest add button
โ Checking registry.
โ Installing dependencies.
โ Updating files.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
The component at https://ui.shadcn.com/r/colors/brand-primary.json was not found.
It may not exist at the registry. Please make sure it is a valid component.
### Affected component/components
Button
### How to reproduce
npx shadcn@latest add button
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
MacOS: 14.7.1
Chrome: 130.0.6723.117
Node: 20.10.0
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,756,682,971 | PowerToys | Remapping ALT+<key> to DEL doesn't fully work : CTRL+ALT+<key> triggers CTRL+ALT+DEL instead of CTRL+DEL | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
- Remap ALT+X to DEL
- Type CTRL+ALT+X
It should trigger CTRL+DEL (delete the entire next word), but it triggers CTRL+ALT+DEL which opens the menu to switch users, log out, etc.
The bug is not specific to ALT+X (you can reproduce it by replacing X with any other _non-modifier_ key).
### โ๏ธ Expected Behavior
Typing CTRL+ALT+X with ALT+X remapped to DEL should trigger CTRL+DEL to delete the entire next word (or do any other action associated with the CTRL+DEL shortcut)
### โ Actual Behavior
Typing CTRL+ALT+X with ALT+X remaped to DEL opens the menu to log out, etc.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,756,710,810 | pytorch | [inductor][cpu] Accuracy failure on bmm max_autotune for offset input weights | ### ๐ Describe the bug
An accuracy error occurs in the BMM max_autotune code when the input weights have an offset. The issue is not reproducible on main due to #143102, but shows up after #143141 lands. Found while testing the torchbench `sam` model with `--amp`.
Here is a sample test to reproduce (could be added to `test/inductor/test_cpu_select_algorithm.py`):
```python
@patches
@torch.no_grad
@unittest.skipIf(not TEST_MKL, "Test requires MKL")
@dtypes(torch.bfloat16)
def test_bmm_5d(self, dtype):
    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()

        def forward(self, x, w):
            return x @ w[2]

    counters.clear()
    x = torch.randn(400, 196, 196).to(dtype=dtype)
    w = torch.randn(3, 400, 196, 80).to(dtype=dtype)
    mod = M().to(dtype=dtype).eval()
    with verify(dtype) as (atol, rtol):
        self.common(mod, (x, w), atol=atol, rtol=rtol)
    self.assertEqual(counters["inductor"]["select_algorithm_autotune"], 1)
```
The error seems to be related to taking the `as_strided` tensor in `normalize_shapes` in `cpp_gemm_template.py`, but more investigation is needed.
### Versions
Seen in main after cherry-picking from #143141
cc @chauhang @penguinwu | oncall: pt2,oncall: cpu inductor | low | Critical |
2,756,724,761 | ollama | Change ToolFunction->Parameters to json.RawMessage like in the Format property | ### What is the issue?
I'm trying to use tools in the `ChatRequest`, but the `Parameters` property in `ToolFunction` does not allow me to put my full JSON schema in it, while the `Format` property does.
I would suggest changing the type of `Parameters` to `json.RawMessage`, just like `Format`.
I'm currently using `Format` property as a workaround.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4 | bug | low | Minor |
2,756,735,724 | go | sync: unaligned atomic write => SIGBUS (darwin/amd64) | ```
#!watchflakes
default <- pkg == "golang.org/x/tools/go/buildutil" && test == ""
```
Issue created automatically to collect these failures.
```go
func (o *Once) doSlow(f func()) {
o.m.Lock()
defer o.m.Unlock()
if o.done.Load() == 0 {
defer o.done.Store(1) // SIGBUS
f()
}
}
```
Example ([log](https://ci.chromium.org/b/8727729778436950241)):
unexpected fault address 0x428b487f
fatal error: fault
[signal SIGBUS: bus error code=0x2 addr=0x428b487f pc=0xf5ee324]
goroutine 1 gp=0xc000004380 m=0 mp=0xf960400 [running, locked to thread]:
runtime.throw({0xf742163?, 0xf5f4ed1?})
/Users/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:1099 +0x48 fp=0xc00006cbe0 sp=0xc00006cbb0 pc=0xf5dbc68
runtime.sigpanic()
/Users/swarming/.swarming/w/ir/x/w/goroot/src/runtime/signal_unix.go:922 +0x18a fp=0xc00006cc40 sp=0xc00006cbe0 pc=0xf5dd5ea
sync.(*Once).doSlow.deferwrap2()
...
goroutine 5 gp=0xc000005500 m=nil [finalizer wait]:
runtime.gopark(0xf980760?, 0x490013?, 0x78?, 0x26?, 0xf57e41e?)
/Users/swarming/.swarming/w/ir/x/w/goroot/src/runtime/proc.go:435 +0xce fp=0xc000042630 sp=0xc000042610 pc=0xf5dbd8e
runtime.runfinq()
/Users/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mfinal.go:196 +0x107 fp=0xc0000427e0 sp=0xc000042630 pc=0xf584ec7
runtime.goexit({})
/Users/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc0000427e8 sp=0xc0000427e0 pc=0xf5e2f41
created by runtime.createfing in goroutine 1
/Users/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mfinal.go:166 +0x3d
FAIL golang.org/x/tools/go/buildutil 0.031s
โ [watchflakes](https://go.dev/wiki/Watchflakes)
| WaitingForInfo,NeedsInvestigation,Tools,compiler/runtime | low | Critical |
2,756,738,225 | go | go/printer: Comment-LineBreak-LineBreak-SelectorExpr-Comment AST modification issue | ### Go version
go version go1.23.4 windows/amd64
### Output of `go env` in your module/workspace:
```shell
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\user\AppData\Local\go-build
set GOENV=C:\Users\user\AppData\Roaming\go\env
set GOEXE=.exe
set GOEXPERIMENT=
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOINSECURE=
set GOMODCACHE=C:\Users\user\go\pkg\mod
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\user\go
set GOPRIVATE=
set GOPROXY=https://proxy.golang.org,direct
set GOROOT=C:\Program Files\Go
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLCHAIN=auto
set GOTOOLDIR=C:\Program Files\Go\pkg\tool\windows_amd64
set GOVCS=
set GOVERSION=go1.23.4
set GODEBUG=
set GOTELEMETRY=local
set GOTELEMETRYDIR=C:\Users\user\AppData\Roaming\go\telemetry
set GCCGO=gccgo
set GOAMD64=v1
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=0
set GOMOD=NUL
set GOWORK=
set CGO_CFLAGS=-O2 -g
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-O2 -g
set CGO_FFLAGS=-O2 -g
set CGO_LDFLAGS=-O2 -g
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=C:\Users\user\AppData\Local\Temp\go-build3559419192=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
While refactoring a codebase of ours programmatically to rename a poorly named package we ran into this issue. The bug is triggered if you modify the `X` component of a `SelectorExpr` to be longer than where a following comment starts, **and** the selector expression has a comment + double newline preceding it.
```go
package main

import (
	"go/ast"
	"go/parser"
	"go/printer"
	"go/token"
	"log"
	"bytes"
)

func init() {
	log.SetFlags(log.LstdFlags | log.Lshortfile)
}

var errCase string = `
package main

import (
	"log"
)

func main() {
	// Comment with line break after it

	log.Println() // Comment
}
`

func main() {
	fset := token.NewFileSet()
	root, err := parser.ParseFile(fset, "error_case.go", errCase,
		parser.SkipObjectResolution|parser.ParseComments)
	if err != nil {
		log.Fatal(err)
	}
	ast.Inspect(root, func(n ast.Node) bool {
		switch n.(type) {
		case *ast.SelectorExpr:
			se := n.(*ast.SelectorExpr)
			switch se.X.(type) {
			case *ast.Ident:
				ident := se.X.(*ast.Ident)
				ident.Name = "123456789012345"
			}
		}
		return true
	})
	ast.Print(fset, root)
	buf := &bytes.Buffer{}
	err = printer.Fprint(buf, fset, root)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(string(buf.Bytes()))
}
```
### What did you see happen?
If there is a selector expression with a comment + double newline preceding it (the second newline is required), and a comment following it, and you modify the `selectorExpr.X` to be longer than where the following comment starts, `go/printer` will intersperse the following comment in the middle of the selector expression.
This occurs due to this block in `go/printer` - https://github.com/golang/go/blob/master/src/go/printer/printer.go#L1018
```go
package main

import (
	"log"
)

func main() {
	// Comment with line break after it

	123456789012345 // Comment
	.Println()
}
```
There is a variable `p.impliedSemi` (`p` = printer) indicating that a newline implies a semicolon. In this function (`print`) there is also a local variable `impliedSemi` that `p.impliedSemi` is set to at the end of the function.
Before a token is printed, we backtrack and `flush` is called; `flush` calls `intersperseComments` to print any comments preceding that token (those whose `commentOffset < next.Offset`) when `p.impliedSemi` is `false` (ignoring other parts of the conditional for the purpose of this bug report).
When the `IDENT` (`se.X`) token is encountered we set the local variable `impliedSemi` to `true`. The function then calls `flush` where we print the preceding comment. Now, the linked block will print any newlines (up to two) after that comment, before our token. I am not sure why, but this block now overrides `impliedSemi` and sets it to `false`. The function finishes and `p.impliedSemi` is set to `false`. Now the conditions for our bug are set.
*Aside: I am not 100% on the intention of why this block sets `impliedSemi` to `false`. It has printed newlines, but those newlines come before the token (`IDENT`) that affected `impliedSemi` previously, and we have not yet updated `p.impliedSemi`.*
So, on the next token (the `.`) we enter `flush` once again, which enters `intersperseComments`. Now, the comment following the selector expression meets the conditional that its start comes before the next token (`.`) and `p.impliedSemi` is `false`. So we print the comment before printing the rest of the selector expression.
### What did you expect to see?
```go
package main

import (
	"log"
)

func main() {
	// Comment with line break after it

	123456789012345.Println() // Comment
}
```
If there is not the very specific case of a comment + double newline preceding the selector expression, this works as intended. If there is a statement, or no newline after the comment, the selector expression does not get split up.
Depending on the intention behind setting `impliedSemi` in the linked block, some solutions may be -
* Do not set `impliedSemi` in this block
* Do not set `impliedSemi` in this block if the newlines are the result of backtracking for comments
Thank you! | NeedsInvestigation,FixPending | low | Critical |
2,756,739,615 | flutter | Prevent `flaux` and legacy `engine` repository from rolling .ci.yaml | The `flaux` and `engine` repositories have outdated .ci.yaml files, but they still have CI steps which attempt to roll their .ci.yaml file into the luci config in the infra repository. This ends up stomping the valid luci configuration coming from the rolls in the post-monorepo-merge `flutter` repository. See this commit in the infra repo as an example: https://flutter.googlesource.com/infra/+/0aea90dbf750756b49a8aa776969dd5b5e834dbb
Both the `flaux` and `engine` repos are not likely to have many more commits added to them since they are archived, but we may want to consider just removing the ci_yaml roller step from each of them so that this doesn't happen again. | team-infra,P1,c: tech-debt,monorepo | medium | Minor |
2,756,740,146 | flutter | Flutter 3.27.1 webview rendering on Samsung Tab Active3 SM-T575 | ### Steps to reproduce
I can't reproduce this issue on my devices and don't have a reproducible test.
The app is built with Flutter 3.27.1, and the issue is reported by users hitting it on a Samsung Tab Active3 SM-T575 tablet.
The screen contains the latest version of the `webview_flutter` plugin and shows some basic HTML loaded using `WebViewController.loadHtmlString(...)`.
```
webview_flutter: 4.10.0
webview_flutter_android: 4.2.0
```
### Expected results
Expecting to see properly rendered HTML.
### Actual results
The content of webview is not shown:

### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 13.6.6 22G630 darwin-arm64, locale en-CA)
    • Flutter version 3.27.1 on channel stable at ****
    • Upstream repository https://github.com/flutter/flutter.git
    • Framework revision 17025dd882 (7 days ago), 2024-12-17 03:23:09 +0900
    • Engine revision cb4b5fff73
    • Dart version 3.6.0
    • DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
    • Android SDK at ****
    • Platform android-35, build-tools 34.0.0
    • Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
    • Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
    • All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.2)
    • Xcode at /Applications/Xcode_15.2.app/Contents/Developer
    • Build 15C500b
    • CocoaPods version 1.15.2
[✓] Chrome - develop for the web
    • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] IntelliJ IDEA Ultimate Edition (version 2024.3.1.1)
    • IntelliJ at /Applications/IntelliJ IDEA.app
    • Flutter plugin version 83.0.4
    • Dart plugin version 243.23177
[✓] Connected device (5 available)
    • macOS (desktop) • macos • darwin-arm64 • macOS 13.6.6 22G630 darwin-arm64
    • Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 13.6.6 22G630 darwin-arm64
    • Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.205
[✓] Network resources
    • All expected network resources are available.

• No issues found!
```
</details>
| P1,e: impeller,team-engine,triaged-engine | medium | Major |
2,756,762,298 | pytorch | CUDA error when compiling loss function | ### ๐ Describe the bug
In torchtitan, we recently turned on `torch.compile` for the loss function. It ran well until a recent PyTorch nightly. Since it broke CI, we had to turn it off in https://github.com/pytorch/torchtitan/pull/755. Please help resolve this so that we can re-enable it.
### Error logs
There are various errors when running in different environments, CI vs. local, H100 vs. A100.
Here's the CI failure:
https://github.com/pytorch/torchtitan/actions/runs/12403557255/job/34627247992
### Versions
CI failure starts from Dec 12th or 13th pytorch nightly.
cc @chauhang @penguinwu | triaged,oncall: pt2,activation-checkpointing | low | Critical |
2,756,769,055 | rust | `on_unimplemented` label and notes are not displayed for transitive bounds | ### Code
```Rust
struct S;

#[diagnostic::on_unimplemented(
    message = "unable to generate binding for function",
    label = "Some label",
    note = "note 1",
    note = "note 2",
    note = "note 3",
)]
trait Failure7<'a> {}

impl<'a> Clone for S where S: Failure7<'a> {
    fn clone(&self) -> Self {
        unreachable!()
    }
}

fn main() {
    fn take_clone(_: impl Clone) {}
    take_clone(S);
}
```
### Current output
```Shell
error[E0277]: the trait bound `S: Clone` is not satisfied
--> src/main.rs:20:16
|
20 | take_clone(S);
| ---------- ^ the trait `Clone` is not implemented for `S`
| |
| required by a bound introduced by this call
|
note: required for `S` to implement `Clone`
--> src/main.rs:12:10
|
12 | impl<'a> Clone for S where S: Failure7<'a> {
| ^^^^^ ^ ------------ unsatisfied trait bound introduced here
note: required by a bound in `take_clone`
--> src/main.rs:19:27
|
19 | fn take_clone(_: impl Clone) {}
| ^^^^^ required by this bound in `take_clone`
help: consider borrowing here
|
20 | take_clone(&S);
| +
For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
### Rationale and extra context
The output should include the provided `label` and `note` values, as it does when depending on the trait bound directly.
### Rust Version
```Shell
1.83.0
``` | A-diagnostics,T-compiler | low | Critical |
2,756,810,351 | rust | Missed optimization: bounds checking if index is both subtracted and divided | Take this code:
```rust
#[no_mangle]
pub fn example(arr: &[u32]) -> u32 {
    let mut result = 0;
    for i in 1..arr.len() {
        result = arr[(i - 1) / 2];
    }
    result
}
```
If there was only a subtraction, the bounds check would be removed. Same if there was only a division (or a right shift).
But if there are both, it isn't:
```asm
example_1:
        xor eax, eax
        cmp rsi, 2
        jb .LBB0_5
        lea rcx, [rsi - 1]
        xor edx, edx
.LBB0_2:
        mov rax, rdx
        shr rax
        cmp rax, rsi
        jae .LBB0_6
        inc rdx
        cmp rcx, rdx
        jne .LBB0_2
        mov eax, dword ptr [rdi + 4*rax]
.LBB0_5:
        ret
.LBB0_6:
        push rax
        lea rdx, [rip + .L__unnamed_1]
        mov rdi, rax
        call qword ptr [rip + core::panicking::panic_bounds_check::he3703b517476def5@GOTPCREL]
```
Adding `unsafe {assert_unchecked((i - 1) / 2 < i)};` fixes the issue.
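For concreteness, here is a runnable sketch of both points. The function names are ours; the claims about elided bounds checks in the comments are the report's (verifiable on Godbolt), not something the snippet itself checks:

```rust
use std::hint::assert_unchecked;

// Subtraction only: per the report, the bounds check is elided.
pub fn sub_only(arr: &[u32]) -> u32 {
    let mut result = 0;
    for i in 1..arr.len() {
        result = arr[i - 1];
    }
    result
}

// Division only: per the report, also elided.
pub fn div_only(arr: &[u32]) -> u32 {
    let mut result = 0;
    for i in 0..arr.len() {
        result = arr[i / 2];
    }
    result
}

// Both operations, with the hint that restores the optimization.
pub fn example_hinted(arr: &[u32]) -> u32 {
    let mut result = 0;
    for i in 1..arr.len() {
        // SAFETY: i >= 1, so (i - 1) / 2 <= i - 1 < i < arr.len().
        unsafe { assert_unchecked((i - 1) / 2 < i) };
        result = arr[(i - 1) / 2];
    }
    result
}

fn main() {
    assert_eq!(sub_only(&[10, 20, 30]), 20);
    assert_eq!(div_only(&[10, 20, 30, 40]), 20);
    assert_eq!(example_hinted(&[10, 20, 30, 40]), 20);
}
```

The hinted variant computes the same values as the original `example`; the single-operation variants only illustrate the index patterns.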
[Godbolt](https://godbolt.org/#g:!((g:!((g:!((h:codeEditor,i:(filename:'1',fontScale:14,fontUsePx:'0',j:1,lang:rust,selection:(endColumn:1,endLineNumber:21,positionColumn:1,positionLineNumber:12,selectionStartColumn:1,selectionStartLineNumber:21,startColumn:1,startLineNumber:12),source:'use+std::hint::assert_unchecked%3B%0A%0A%23%5Bno_mangle%5D%0Apub+fn+example_1(arr:+%26%5Bu32%5D)+-%3E+u32+%7B%0A++++let+mut+result+%3D+0%3B%0A++++for+i+in+1..arr.len()+%7B%0A++++++++result+%3D+arr%5B(i+-+1)+/+2%5D%3B%0A++++%7D%0A++++result%0A%7D%0A%0A%23%5Bno_mangle%5D%0Apub+fn+example_2(arr:+%26%5Bu32%5D)+-%3E+u32+%7B%0A++++let+mut+result+%3D+0%3B%0A++++for+i+in+1..arr.len()+%7B%0A++++++++unsafe+%7Bassert_unchecked((i+-+1)+/+2+%3C+i)%7D%3B%0A++++++++result+%3D+arr%5B(i+-+1)+%3E%3E+1%5D%3B%0A++++%7D%0A++++result%0A%7D%0A'),l:'5',n:'0',o:'Rust+source+%231',t:'0')),k:42.60642270351008,l:'4',n:'0',o:'',s:0,t:'0'),(g:!((h:compiler,i:(compiler:nightly,filters:(b:'0',binary:'1',binaryObject:'1',commentOnly:'0',debugCalls:'1',demangle:'0',directives:'0',execute:'1',intel:'0',libraryCode:'0',trim:'1',verboseDemangling:'0'),flagsViewOpen:'1',fontScale:14,fontUsePx:'0',j:1,lang:rust,libs:!(),options:'-C+opt-level%3D3',overrides:!((name:edition,value:'2021')),selection:(endColumn:1,endLineNumber:1,positionColumn:1,positionLineNumber:1,selectionStartColumn:1,selectionStartLineNumber:1,startColumn:1,startLineNumber:1),source:1),l:'5',n:'0',o:'+rustc+nightly+(Editor+%231)',t:'0')),k:57.393577296489916,l:'4',n:'0',o:'',s:0,t:'0')),l:'2',n:'0',o:'',t:'0')),version:4) | T-compiler,C-optimization | low | Critical |
2,756,819,291 | vscode | automatically run command after applying code in editor |
When I apply code from GitHub Copilot in the editor, it expands all the code blocks. Could there be a setting to automatically run commands (such as fold level 1 or formatting the code) after applying, to turn the file back to the way you want it? | feature-request | low | Minor |
2,756,839,586 | rust | regression: error[E0275]: overflow evaluating the requirement (related to `pin-project`) | These look similar, but feel free to split the issue if they have distinct causes.
1. https://crater-reports.s3.amazonaws.com/beta-1.84.0-4-retry2/beta-2024-12-08/gh/ethereum-mousse.mousse/log.txt
```
[INFO] [stdout] error[E0275]: overflow evaluating the requirement `(PhantomData<warp::filters::any::AnyFut>, PhantomData<()>, PhantomData<Exact<...>>): Sized`
[INFO] [stdout] |
[INFO] [stdout] = help: consider increasing the recursion limit by adding a `#![recursion_limit = "256"]` attribute to your crate (`http_api`)
[INFO] [stdout] = note: required for `AlwaysUnpin<'_, (PhantomData<AnyFut>, PhantomData<()>, ...)>` to implement `Unpin`
[INFO] [stdout] note: required because it appears within the type `__State<'_, AnyFut, (), Exact<Opaque<__StaticPath>>>`
[INFO] [stdout] --> /opt/rustwide/cargo-home/registry/src/index.crates.io-6f17d22bba15001f/warp-0.3.0/src/filter/and.rs:46:6
[INFO] [stdout] |
[INFO] [stdout] 46 | enum State<T, TE, U: Filter> {
[INFO] [stdout] | ^^^^^
[INFO] [stdout] = note: required for `State<AnyFut, (), Exact<Opaque<__StaticPath>>>` to implement `Unpin`
[INFO] [stdout] note: required because it appears within the type `__AndFuture<'_, Any, Exact<Opaque<__StaticPath>>>`
[INFO] [stdout] --> /opt/rustwide/cargo-home/registry/src/index.crates.io-6f17d22bba15001f/warp-0.3.0/src/filter/and.rs:40:12
[INFO] [stdout] |
[INFO] [stdout] 40 | pub struct AndFuture<T: Filter, U: Filter> {
[INFO] [stdout] | ^^^^^^^^^
[INFO] [stdout] = note: required for `AndFuture<Any, Exact<Opaque<__StaticPath>>>` to implement `Unpin`
[INFO] [stdout] note: required because it appears within the type `__State<'_, AndFuture<Any, Exact<Opaque<__StaticPath>>>, (), ...>`
[INFO] [stdout] --> /opt/rustwide/cargo-home/registry/src/index.crates.io-6f17d22bba15001f/warp-0.3.0/src/filter/and.rs:46:6
[INFO] [stdout] |
[INFO] [stdout] 46 | enum State<T, TE, U: Filter> {
[INFO] [stdout] | ^^^^^
[INFO] [stdout] = note: required for `State<AndFuture<Any, Exact<Opaque<__StaticPath>>>, (), Exact<...>>` to implement `Unpin`
[INFO] [stdout] note: required because it appears within the type `__AndFuture<'_, And<Any, Exact<Opaque<__StaticPath>>>, Exact<...>>`
[INFO] [stdout] --> /opt/rustwide/cargo-home/registry/src/index.crates.io-6f17d22bba15001f/warp-0.3.0/src/filter/and.rs:40:12
[INFO] [stdout] |
[INFO] [stdout] 40 | pub struct AndFuture<T: Filter, U: Filter> {
[INFO] [stdout] | ^^^^^^^^^
[INFO] [stdout] = note: required for `AndFuture<And<Any, Exact<Opaque<__StaticPath>>>, Exact<Opaque<...>>>` to implement `Unpin`
[...]
```
2. https://crater-reports.s3.amazonaws.com/beta-1.84.0-4-retry2/beta-2024-12-08/gh/vagicc.diesel-demo/log.txt
```
[INFO] [stdout] error[E0275]: overflow evaluating the requirement `pin_project::__private::AlwaysUnpin<'_, (PhantomData<warp::filters::any::Any>, PhantomData<Exact<warp::path::internal::Opaque<...>>>)>: Unpin`
[INFO] [stdout] |
[INFO] [stdout] = help: consider increasing the recursion limit by adding a `#![recursion_limit = "256"]` attribute to your crate (`diesel_demo`)
[INFO] [stdout] note: required because it appears within the type `__AndFuture<'_, Any, Exact<Opaque<__StaticPath>>>`
[INFO] [stdout] --> /opt/rustwide/cargo-home/registry/src/index.crates.io-6f17d22bba15001f/warp-0.3.1/src/filter/and.rs:40:12
[INFO] [stdout] |
[INFO] [stdout] 40 | pub struct AndFuture<T: Filter, U: Filter> {
[INFO] [stdout] | ^^^^^^^^^
[INFO] [stdout] = note: required for `AndFuture<Any, Exact<Opaque<__StaticPath>>>` to implement `Unpin`
[INFO] [stdout] note: required because it appears within the type `__State<'_, AndFuture<Any, Exact<Opaque<__StaticPath>>>, (), ...>`
[INFO] [stdout] --> /opt/rustwide/cargo-home/registry/src/index.crates.io-6f17d22bba15001f/warp-0.3.1/src/filter/and.rs:46:6
[INFO] [stdout] |
[INFO] [stdout] 46 | enum State<T, TE, U: Filter> {
[INFO] [stdout] | ^^^^^
[INFO] [stdout] = note: required for `State<AndFuture<Any, Exact<Opaque<__StaticPath>>>, (), Exact<...>>` to implement `Unpin`
[INFO] [stdout] note: required because it appears within the type `__AndFuture<'_, And<Any, Exact<Opaque<__StaticPath>>>, Exact<...>>`
[INFO] [stdout] --> /opt/rustwide/cargo-home/registry/src/index.crates.io-6f17d22bba15001f/warp-0.3.1/src/filter/and.rs:40:12
[INFO] [stdout] |
[INFO] [stdout] 40 | pub struct AndFuture<T: Filter, U: Filter> {
[INFO] [stdout] | ^^^^^^^^^
[INFO] [stdout] = note: required for `AndFuture<And<Any, Exact<Opaque<__StaticPath>>>, Exact<Opaque<...>>>` to implement `Unpin`
[INFO] [stdout] note: required because it appears within the type `__State<'_, AndFuture<And<Any, Exact<Opaque<...>>>, ...>, ..., ...>`
[INFO] [stdout] --> /opt/rustwide/cargo-home/registry/src/index.crates.io-6f17d22bba15001f/warp-0.3.1/src/filter/and.rs:46:6
[INFO] [stdout] |
[INFO] [stdout] 46 | enum State<T, TE, U: Filter> {
[INFO] [stdout] | ^^^^^
[INFO] [stdout] = note: required for `State<AndFuture<And<Any, Exact<Opaque<__StaticPath>>>, ...>, ..., ...>` to implement `Unpin`
[...]
```
### Version it worked on
It most recently worked on: 1.83.0
### Version with regression
Using rustc 1.84.0-beta.4 in crater https://github.com/rust-lang/rust/issues/134138.
@rustbot modify labels: +regression-from-stable-to-beta -regression-untriaged | T-compiler,regression-from-stable-to-beta,C-bug,I-prioritize,E-needs-investigation | low | Critical |
2,756,862,617 | rust | `std::ptr::swap_nonoverlapping` is not always untyped |
### Code
I tried this code:
```rust
#![allow(unused)]
use std::mem::{size_of, align_of};

#[repr(C)]
struct Foo(usize, u8);

fn main() {
    let buf1: [usize; 2] = [1000, 2000];
    let buf2: [usize; 2] = [3000, 4000];

    // Foo and [usize; 2] have the same size and alignment,
    // so swap_nonoverlapping should treat them the same
    assert_eq!(size_of::<Foo>(), size_of::<[usize; 2]>());
    assert_eq!(align_of::<Foo>(), align_of::<[usize; 2]>());

    let mut b1 = buf1;
    let mut b2 = buf2;

    // Safety: b1 and b2 are distinct local variables,
    // with the same size and alignment as Foo.
    unsafe {
        std::ptr::swap_nonoverlapping(
            b1.as_mut_ptr().cast::<Foo>(),
            b2.as_mut_ptr().cast::<Foo>(),
            1,
        );
    }

    assert_eq!(b1, buf2); // Fails: [3000, 160] != [3000, 4000] or [3000, 1952] != [3000, 4000]
    assert_eq!(b2, buf1); // Fails: [1000, 208] != [1000, 2000] or [1000, 4048] != [1000, 2000]
}
```
[godbolt link](https://rust.godbolt.org/z/s391rbG1j)
I expected to see this happen: The two `assert_eq!`s should succeed; `b1` and `b2` should be completely swapped, since [`std::ptr::swap_nonoverlapping::<T>`](https://doc.rust-lang.org/nightly/std/ptr/fn.swap_nonoverlapping.html) claims to swap bytes, not `T`s.
Instead, this happened: They are not entirely swapped. `swap_nonoverlapping` appears to be doing a typed swap at `Foo`, skipping/zeroing/not-correctly-swapping the 7 padding bytes at the end of `Foo`.
I think this only happens with types where `size_of::<T>() > size_of::<usize>()` from looking at the implementation, but I'm not sure.
In debug mode:
Rust 1.61-1.70 appear to consistently give `[3000, 160]`.
Rust 1.71-nightly appear to consistently give `[3000, 1952]`.
4000 is `0x0fa0`
160 is `0x00a0`
1952 is `0x07a0`
2000 is `0x07d0`
so it looks like Rust 1.61-1.70 are zeroing the padding bytes, and Rust 1.71-nightly are ignoring them.
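As a workaround sketch (ours, not from the report): performing the same swap explicitly at `u8`, with `count = size_of::<Foo>()`, makes the operation genuinely untyped, so the padding bytes travel along:

```rust
use std::mem::size_of;

#[repr(C)]
struct Foo(usize, u8);

// Swap two Foo-sized buffers byte-by-byte; padding bytes are included.
fn byte_swap(mut a: [usize; 2], mut b: [usize; 2]) -> ([usize; 2], [usize; 2]) {
    // Safety: a and b are distinct locals, and size_of::<Foo>() equals
    // size_of::<[usize; 2]>(), so both regions are fully in bounds.
    unsafe {
        std::ptr::swap_nonoverlapping(
            a.as_mut_ptr().cast::<u8>(),
            b.as_mut_ptr().cast::<u8>(),
            size_of::<Foo>(),
        );
    }
    (a, b)
}

fn main() {
    let (b1, b2) = byte_swap([1000, 2000], [3000, 4000]);
    assert_eq!(b1, [3000, 4000]); // fully swapped, padding bytes included
    assert_eq!(b2, [1000, 2000]);
}
```

Because the swap is done at `u8`, every byte (including padding) is exchanged, so the full-swap assertions hold.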
### Version it worked on
It most recently worked on: Rust 1.60
### Version with regression
`rustc --version --verbose`:
```
rustc 1.61.0 (fe5b13d68 2022-05-18)
binary: rustc
commit-hash: fe5b13d681f25ee6474be29d748c65adcd91f69e
commit-date: 2022-05-18
host: x86_64-unknown-linux-gnu
release: 1.61.0
LLVM version: 14.0.0
```
@rustbot modify labels: +regression-from-stable-to-stable -regression-untriaged
| P-high,T-compiler,regression-from-stable-to-stable,C-bug,T-libs | low | Critical |
2,756,862,755 | langchain | o1-mini and o1 are not supported yet? | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
api-1 | Error during streaming: BadRequestError: 400 Unsupported value: 'messages[0].role' does not support 'developer' with this model.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
api-1 | Error during streaming: BadRequestError: 400 Unsupported value: 'messages[0].role' does not support 'developer' with this model.
### System Info
nodejs | ๐ค:bug | low | Critical |
2,756,883,950 | langchain | openapi toolkit does not replace variable values from API specification parameters | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import yaml
from langchain_community.agent_toolkits.openapi import planner
from langchain_openai import ChatOpenAI
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain_community.utilities.requests import RequestsWrapper
from dotenv import load_dotenv

# Load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Load Swagger Specification
with open("waapi.yaml", "r", encoding='utf8') as f:
    raw_waapi_api_spec = yaml.load(f, Loader=yaml.Loader)

# Add the host from environment variable
waapi_host = os.getenv("WAAPI_HOST")
if waapi_host:
    raw_waapi_api_spec["servers"] = [{"url": waapi_host}]

# Reduce the OpenAPI spec
waapi_api_spec = reduce_openapi_spec(raw_waapi_api_spec)

# Define the LangChain Agent
llm = ChatOpenAI(model_name="gpt-4", temperature=0.0, api_key=OPENAI_API_KEY)

# Set headers for the requests
headers = {"X-API-KEY": os.getenv("WAAPI_API_KEY")}
requests_wrapper = RequestsWrapper(headers=headers)

# NOTE: set allow_dangerous_requests manually for security concern https://python.langchain.com/docs/security
waapi_agent = planner.create_openapi_agent(
    waapi_api_spec,
    requests_wrapper,
    llm,
    allow_dangerous_requests=False,
)

# Interactive loop for user input
print("Interactive WAAPI Agent. Enter your query below. Press Enter on an empty line to exit.")
while True:
    user_query = input("Your query: ").strip()
    if not user_query:
        print("Exiting the program. Goodbye!")
        break
    try:
        response = waapi_agent.invoke(user_query)
        print(f"Response: {response}")
    except Exception as e:
        print(f"An error occurred: {e}")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want to use LangChain to read an API specification and figure out the sequence of API calls that would be required to create a new user account, based on the API spec given below:
https://waha.devlike.pro/swagger/openapi.json
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.26100
> Python Version: 3.13.1 (tags/v3.13.1:0671451, Dec 3 2024, 19:06:28) [MSC v.1942 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.28
> langchain: 0.3.13
> langchain_community: 0.3.13
> langsmith: 0.2.4
> langchain_mistralai: 0.2.4
> langchain_openai: 0.2.14
> langchain_text_splitters: 0.3.4
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 2.2.1
> openai: 1.58.1
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.4
> pydantic-settings: 2.7.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> tokenizers: 0.21.0
> typing-extensions: 4.12.2 | ๐ค:bug | low | Critical |
2,756,963,740 | rust | regression: ICE: thread 'rustc' panicked at compiler/rustc_metadata/src/rmeta/decoder.rs:1501:75 | A few rustdoc results in crater have the same backtrace:
1. https://crater-reports.s3.amazonaws.com/beta-rustdoc-1.84.0-4-retry/beta-2024-12-08/reg/conflagrate-0.1.0/log.txt
2. https://crater-reports.s3.amazonaws.com/beta-rustdoc-1.84.0-4-retry/beta-2024-12-08/reg/ink-5.1.1/log.txt
3. https://crater-reports.s3.amazonaws.com/beta-rustdoc-1.84.0-4-retry/beta-2024-12-08/reg/perigee-0.7.0/log.txt
### Meta
Using rustc 1.84.0-beta.4 in crater https://github.com/rust-lang/rust/issues/134138.
### Error output
```
[INFO] [stderr] thread 'rustc' panicked at compiler/rustc_metadata/src/rmeta/decoder.rs:1501:75:
[INFO] [stderr] called `Option::unwrap()` on a `None` value
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
[INFO] [stderr] stack backtrace:
[INFO] [stderr] 0: 0x78dc8929152a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h98c26ac25ffe89bb
[INFO] [stderr] 1: 0x78dc89a248fc - core::fmt::write::h0dd5f6e2238c7982
[INFO] [stderr] 2: 0x78dc8a94ffd1 - std::io::Write::write_fmt::hc00bff0a88e08857
[INFO] [stderr] 3: 0x78dc89291382 - std::sys::backtrace::BacktraceLock::print::h5c19c1c038ee186d
[INFO] [stderr] 4: 0x78dc8929385a - std::panicking::default_hook::{{closure}}::h77172f079a1fcb13
[INFO] [stderr] 5: 0x78dc892936c0 - std::panicking::default_hook::he5f8e3b203ccddba
[INFO] [stderr] 6: 0x78dc883108b5 - std[b9e7ca495922dc28]::panicking::update_hook::<alloc[a74230b4a4ddab6d]::boxed::Box<rustc_driver_impl[a84ad889800babb7]::install_ice_hook::{closure#0}>>::{closure#0}
[INFO] [stderr] 7: 0x78dc89293f38 - std::panicking::rust_panic_with_hook::h53863d4e9018df39
[INFO] [stderr] 8: 0x78dc89293cd6 - std::panicking::begin_panic_handler::{{closure}}::h67280e6fa0757873
[INFO] [stderr] 9: 0x78dc892919d9 - std::sys::backtrace::__rust_end_short_backtrace::h1dac3f54d6fbbeb2
[INFO] [stderr] 10: 0x78dc892939cc - rust_begin_unwind
[INFO] [stderr] 11: 0x78dc85cee560 - core::panicking::panic_fmt::h6792bd1b2bf01041
[INFO] [stderr] 12: 0x78dc85fa057c - core::panicking::panic::hbb5c236a846c507c
[INFO] [stderr] 13: 0x78dc86d490a9 - core::option::unwrap_failed::hc5b7b0e3b50bafe9
[INFO] [stderr] 14: 0x78dc8a772a10 - <rustc_metadata[ab9baa4e98ff82a5]::creader::CrateMetadataRef>::def_key
[INFO] [stderr] 15: 0x78dc8a772342 - <rustc_metadata[ab9baa4e98ff82a5]::creader::CStore as rustc_session[b2ed5a1fdb1e5dc3]::cstore::CrateStore>::def_path
[INFO] [stderr] 16: 0x78dc8a7721f8 - <rustc_middle[bd74a45dc8a6aeec]::ty::context::TyCtxt>::def_path
[INFO] [stderr] 17: 0x78dc889b6199 - <rustc_middle[bd74a45dc8a6aeec]::ty::context::TyCtxt>::def_path_debug_str
[INFO] [stderr] 18: 0x78dc887f7e8d - rustc_interface[164bca52f843a547]::callbacks::def_id_debug
[INFO] [stderr] 19: 0x78dc89a248fc - core::fmt::write::h0dd5f6e2238c7982
[INFO] [stderr] 20: 0x78dc89a248fc - core::fmt::write::h0dd5f6e2238c7982
[INFO] [stderr] 21: 0x78dc89a24760 - alloc::fmt::format::format_inner::h09cb60e85cf9218c
[INFO] [stderr] 22: 0x78dc889ce33b - rustc_middle[bd74a45dc8a6aeec]::util::bug::opt_span_bug_fmt::<rustc_span[84c8ca3d0b382817]::span_encoding::Span>::{closure#0}
[INFO] [stderr] 23: 0x78dc889b58fa - rustc_middle[bd74a45dc8a6aeec]::ty::context::tls::with_opt::<rustc_middle[bd74a45dc8a6aeec]::util::bug::opt_span_bug_fmt<rustc_span[84c8ca3d0b382817]::span_encoding::Span>::{closure#0}, !>::{closure#0}
[INFO] [stderr] 24: 0x78dc889b578b - rustc_middle[bd74a45dc8a6aeec]::ty::context::tls::with_context_opt::<rustc_middle[bd74a45dc8a6aeec]::ty::context::tls::with_opt<rustc_middle[bd74a45dc8a6aeec]::util::bug::opt_span_bug_fmt<rustc_span[84c8ca3d0b382817]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
[INFO] [stderr] 25: 0x78dc86ae9f60 - rustc_middle[bd74a45dc8a6aeec]::util::bug::bug_fmt
[INFO] [stderr] 26: 0x78dc8891e4af - <rustc_metadata[ab9baa4e98ff82a5]::creader::CrateMetadataRef>::missing
[INFO] [stderr] 27: 0x78dc89c13763 - rustc_query_impl[8969a7b8ce30dc91]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[8969a7b8ce30dc91]::query_impl::def_kind::dynamic_query::{closure#2}::{closure#0}, rustc_middle[bd74a45dc8a6aeec]::query::erase::Erased<[u8; 3usize]>>
[INFO] [stderr] 28: 0x78dc89c12833 - rustc_query_system[bc50a602aa0ddde9]::query::plumbing::try_execute_query::<rustc_query_impl[8969a7b8ce30dc91]::DynamicConfig<rustc_query_system[bc50a602aa0ddde9]::query::caches::DefIdCache<rustc_middle[bd74a45dc8a6aeec]::query::erase::Erased<[u8; 3usize]>>, false, false, false>, rustc_query_impl[8969a7b8ce30dc91]::plumbing::QueryCtxt, false>
[INFO] [stderr] 29: 0x78dc89c1254f - rustc_query_impl[8969a7b8ce30dc91]::query_impl::def_kind::get_query_non_incr::__rust_end_short_backtrace
[INFO] [stderr] 30: 0x61e7f64f6b4e - rustc_middle[bd74a45dc8a6aeec]::query::plumbing::query_get_at::<rustc_query_system[bc50a602aa0ddde9]::query::caches::DefIdCache<rustc_middle[bd74a45dc8a6aeec]::query::erase::Erased<[u8; 3usize]>>>
[INFO] [stderr] 31: 0x61e7f6570412 - <alloc[a74230b4a4ddab6d]::vec::Vec<(rustdoc[bef8b17de71f3e2]::passes::collect_intra_doc_links::Res, core[d10bf40c8679dc2f]::option::Option<rustdoc[bef8b17de71f3e2]::passes::collect_intra_doc_links::UrlFragment>)>>::retain::<<rustdoc[bef8b17de71f3e2]::passes::collect_intra_doc_links::LinkCollector>::resolve_ambiguities::{closure#0}>::{closure#0}
[INFO] [stderr] 32: 0x61e7f66d2d20 - rustdoc[bef8b17de71f3e2]::core::run_global_ctxt
[INFO] [stderr] 33: 0x61e7f67fe47d - rustdoc[bef8b17de71f3e2]::main_args::{closure#2}::{closure#0}::{closure#0}
[INFO] [stderr] 34: 0x61e7f6576f34 - rustc_interface[164bca52f843a547]::interface::run_compiler::<core[d10bf40c8679dc2f]::result::Result<(), rustc_span[84c8ca3d0b382817]::ErrorGuaranteed>, rustdoc[bef8b17de71f3e2]::main_args::{closure#2}>::{closure#1}
[INFO] [stderr] 35: 0x61e7f64f3765 - std[b9e7ca495922dc28]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[164bca52f843a547]::util::run_in_thread_with_globals<rustc_interface[164bca52f843a547]::util::run_in_thread_pool_with_globals<rustc_interface[164bca52f843a547]::interface::run_compiler<core[d10bf40c8679dc2f]::result::Result<(), rustc_span[84c8ca3d0b382817]::ErrorGuaranteed>, rustdoc[bef8b17de71f3e2]::main_args::{closure#2}>::{closure#1}, core[d10bf40c8679dc2f]::result::Result<(), rustc_span[84c8ca3d0b382817]::ErrorGuaranteed>>::{closure#0}, core[d10bf40c8679dc2f]::result::Result<(), rustc_span[84c8ca3d0b382817]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[d10bf40c8679dc2f]::result::Result<(), rustc_span[84c8ca3d0b382817]::ErrorGuaranteed>>
[INFO] [stderr] 36: 0x61e7f6585aec - <<std[b9e7ca495922dc28]::thread::Builder>::spawn_unchecked_<rustc_interface[164bca52f843a547]::util::run_in_thread_with_globals<rustc_interface[164bca52f843a547]::util::run_in_thread_pool_with_globals<rustc_interface[164bca52f843a547]::interface::run_compiler<core[d10bf40c8679dc2f]::result::Result<(), rustc_span[84c8ca3d0b382817]::ErrorGuaranteed>, rustdoc[bef8b17de71f3e2]::main_args::{closure#2}>::{closure#1}, core[d10bf40c8679dc2f]::result::Result<(), rustc_span[84c8ca3d0b382817]::ErrorGuaranteed>>::{closure#0}, core[d10bf40c8679dc2f]::result::Result<(), rustc_span[84c8ca3d0b382817]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[d10bf40c8679dc2f]::result::Result<(), rustc_span[84c8ca3d0b382817]::ErrorGuaranteed>>::{closure#1} as core[d10bf40c8679dc2f]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
[INFO] [stderr] 37: 0x78dc8a9328f9 - std::sys::pal::unix::thread::Thread::new::thread_start::h48cf765408f6f5f2
[INFO] [stderr] 38: 0x78dc84a6bac3 - <unknown>
[INFO] [stderr] 39: 0x78dc84afca04 - clone
[INFO] [stderr] 40: 0x0 - <unknown>
[INFO] [stderr]
[INFO] [stderr] error: the compiler unexpectedly panicked. this is a bug.
[INFO] [stderr]
[INFO] [stderr] note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-rustdoc&template=ice.md
[INFO] [stderr]
[INFO] [stderr] note: rustc 1.84.0-beta.4 (202008a1b 2024-12-07) running on x86_64-unknown-linux-gnu
[INFO] [stderr]
[INFO] [stderr] note: compiler flags: --crate-type lib
[INFO] [stderr]
[INFO] [stderr] note: some of the compiler flags provided by cargo are hidden
[INFO] [stderr]
[INFO] [stderr] query stack during panic:
[INFO] [stderr] panicked at compiler/rustc_metadata/src/rmeta/decoder.rs:1499:14:
[INFO] [stderr] lock was already held
[INFO] [stderr] thread panicked while processing panic. aborting.
```
</p>
</details>
| T-rustdoc,I-ICE,T-compiler,regression-from-stable-to-stable,C-bug,I-prioritize | low | Critical |
2,756,996,608 | react | Bug: `eslint-plugin-react-hooks` false positive with `for` loop in function body | <!--
Please provide a clear and concise description of what the bug is. Include
screenshots if needed. Please test using the latest version of the relevant
React packages to make sure your issue has not already been fixed.
-->
React version: N/A
Eslint-plugin-react-hooks version: 5.1.0
## Steps To Reproduce
<!--
Your bug will get fixed much faster if we can run your code and it doesn't
have dependencies other than React. Issues without reproduction steps or
code examples may be immediately closed as not actionable.
-->
Link to code example:
```tsx
const Dots = () => {
const count = 9;
const [highlightIndex, updateHighlightIndex] = React.useState(0);
React.useEffect(() => {
const updateHighlightIndexIntervalID = setInterval(() => {
updateHighlightIndex((i) => (i + 1) % count);
}, 200);
return () => {
clearInterval(updateHighlightIndexIntervalID);
};
}, []);
const dots: JSX.Element[] = [];
for (let i = 0; i < count; i++) {
dots.push(<span key={i} style={{opacity:i === highlightIndex ? 1 : 0.5}}>{i}</span>);
}
return <div>{dots}</div>;
};
```
## The current behavior
The linter reports the following error:
```
ESLint: React Hook "React.useState" may be executed more than once. Possibly because it is called in a loop. React Hooks must be called in the exact same order in every component render. (react-hooks/rules-of-hooks)
```
This is incorrect. The for loop is correctly reading a reactive variable. No hooks are called conditionally or inside a loop. The code can be rewritten to satisfy the linter but there is nothing wrong with the original code.
## The expected behavior
No error is reported. A `for` loop that merely reads a reactive variable should not trigger this rule.
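For reference, the loop can be rewritten in a form the rule accepts, e.g. with `Array.from` — a sketch using plain strings in place of the JSX spans so it stands alone (`highlightIndex` is a hypothetical fixed value here):

```typescript
// Same indexed construction as the original, without a `for` statement.
// Strings stand in for the <span> elements of the component.
const count = 9;
const highlightIndex = 3; // assumed value for illustration

const dots: string[] = Array.from({ length: count }, (_, i) =>
  `dot ${i} opacity=${i === highlightIndex ? 1 : 0.5}`
);
```

This is only a workaround; the original `for` loop is still valid code.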
| Status: Unconfirmed | low | Critical |
2,757,014,985 | pytorch | [inductor] [dtype] `ReplicationPad` raises a dtype error on eager but passes the check on inductor | ### 🐛 Describe the bug
**symptom**: when using a normal input to this model, `signbit` outputs a `bool` value. `replication_pad` rejects bool on eager but passes the check on inductor. I'm not sure which behavior should be taken as correct.
**device**: both on cpu and cuda
**exposed area**: ReplicationPad1d, ReplicationPad2d, ReplicationPad3d
**relation**: similar logic to #143752
**code**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
torch.manual_seed(0)
torch.set_grad_enabled(False)
from torch._inductor import config
config.fallback_random = True
class Model(nn.Module):
def __init__(self, pad_operator):
super(Model, self).__init__()
self.pad = pad_operator
self.signbit = torch.signbit
def forward(self, x):
x = self.signbit(x)
x = self.pad(x)
return x
def run_test(dim, device, backend):
r_pad = eval(f"nn.ReplicationPad{dim}d(padding=1)")
model = Model(r_pad).to(device)
x = torch.randn([1] * (dim + 2)).to(device)
if backend == "inductor":
model = torch.compile(model)
try:
y = model(x)
print(f"succeed on {device} with {backend}: {y.dtype}")
except Exception as e:
print(f"fail on {device} with {backend}: {e}")
run_test(1, "cpu", "eager") # fail on cpu with eager: "replication_pad1d" not implemented for 'Bool'
run_test(1, "cpu", "inductor") # succeed on cpu with inductor: torch.bool
run_test(1, "cuda", "eager") # fail on cuda with eager: "replication_pad1d_cuda" not implemented for 'Bool'
run_test(1, "cuda", "inductor") # succeed on cuda with inductor: torch.bool
run_test(2, "cpu", "eager") # fail on cpu with eager: "replication_pad2d" not implemented for 'Bool'
run_test(2, "cpu", "inductor") # succeed on cpu with inductor: torch.bool
run_test(2, "cuda", "eager") # fail on cuda with eager: "replication_pad2d_cuda" not implemented for 'Bool'
run_test(2, "cuda", "inductor") # succeed on cuda with inductor: torch.bool
run_test(3, "cpu", "eager") # fail on cpu with eager: "replication_pad3d" not implemented for 'Bool'
run_test(3, "cpu", "inductor") # succeed on cpu with inductor: torch.bool
run_test(3, "cuda", "eager") # fail on cuda with eager: "replication_pad3d_cuda" not implemented for 'Bool'
run_test(3, "cuda", "inductor") # succeed on cuda with inductor: torch.bool
```
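For what it's worth, replication ("edge") padding of a boolean tensor is semantically well-defined — NumPy's equivalent accepts bool directly, and a cast through `uint8` would give the same result on eager. A torch-free sketch of the semantics (not a claim about which backend behavior is intended):

```python
import numpy as np

# Edge ("replication") padding of a boolean mask keeps dtype bool:
mask = np.array([True, False, True])
padded = np.pad(mask, 1, mode="edge")
print(padded.dtype, padded.tolist())  # bool [True, True, False, True, True]

# The analogous eager-side workaround in torch would be
# x.to(torch.uint8) -> pad -> .bool() (hypothetical, untested here).
```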
### Error logs
```
fail on cpu with eager: "replication_pad1d" not implemented for 'Bool'
succeed on cpu with inductor: torch.bool
fail on cuda with eager: "replication_pad1d_cuda" not implemented for 'Bool'
succeed on cuda with inductor: torch.bool
fail on cpu with eager: "replication_pad2d" not implemented for 'Bool'
succeed on cpu with inductor: torch.bool
fail on cuda with eager: "replication_pad2d_cuda" not implemented for 'Bool'
succeed on cuda with inductor: torch.bool
fail on cpu with eager: "replication_pad3d" not implemented for 'Bool'
succeed on cpu with inductor: torch.bool
fail on cuda with eager: "replication_pad3d_cuda" not implemented for 'Bool'
succeed on cuda with inductor: torch.bool
```
### Versions
the same as #143752
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,757,024,078 | pytorch | Looking for valid compile options for an extension based on torch-2.1.0+cpu.cxx11.abi | ### 🐛 Describe the bug
Trying to compile an extension based on [torch-2.1.0+cpu.cxx11.abi](https://download.pytorch.org/whl/cpu-cxx11-abi/torch-2.1.0%2Bcpu.cxx11.abi-cp39-cp39-linux_x86_64.whl#sha256=f100b87d0e307dcac6321dd8f4895f14f6fa6974a921e9e7369bd9c7be4f0d5d) with `-D_GLIBCXX_USE_CXX11_ABI=1` set.
env info:
```
Arch: x86_64
GCC version: (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
CMake version: version 3.18.4
Libc version: glibc-2.28
```
A segmentation fault occurs during pybind11 initialization when importing the extension, which inherits from `torch._C._distributed_c10d.Backend`. Tried the following options, but none of them solved the problem:
1. set(CXX_STANDARD_REQUIRED ON)
2. string(APPEND CMAKE_CXX_FLAGS " -fabi-version=11")
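Spelled out as a CMake fragment, the combination tried looks roughly like this (a sketch; the C++17 requirement is an assumption based on torch 2.1's build requirements, the rest echoes the options above):

```cmake
# The ABI must match the prebuilt cxx11-abi wheel, otherwise pybind11
# type registrations resolve to different internals at import time.
add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=1)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
string(APPEND CMAKE_CXX_FLAGS " -fabi-version=11")
```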
Only using a self-compiled torch package in the same environment fixes the problem. Tracing _**internals_pp**_ in torch/include/pybind11/detail/internals.h suggests that some _**static_strings**_ are missing in [torch-2.1.0+cpu.cxx11.abi](https://download.pytorch.org/whl/cpu-cxx11-abi/torch-2.1.0%2Bcpu.cxx11.abi-cp39-cp39-linux_x86_64.whl#sha256=f100b87d0e307dcac6321dd8f4895f14f6fa6974a921e9e7369bd9c7be4f0d5d).
```
inline internals **&get_internals_pp() {
static internals **internals_pp = nullptr;
return internals_pp;
}
```
missing static_strings
```
...
[38] = "torch._C._distributed_c10d._ProcessGroupWrapper",
[39] = "torch._C._distributed_c10d._Options",
[40] = "torch._C._distributed_c10d.Device",
[41] = "torch._C._distributed_c10d.ProcessGroupGloo",
[42] = "torch._C._distributed_c10d.Backend",
[43] = "torch._C._distributed_c10d.Options",
[44] = "torch._C._distributed_c10d.BackendType",
[45] = "torch._C._distributed_c10d.ProcessGroup",
...
```
**Are any pybind11 requirements missing?**
### Versions
PyTorch version: 2.1.0+cpu-cxx11-abi
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: AlmaLinux 8.10 (Cerulean Leopard) (x86_64)
GCC version: (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.28
Python version: 3.9.21 (main, Dec 17 2024, 07:34:47) [GCC 14.2.1 20240801 (Red Hat 14.2.1-1)] (64-bit runtime)
Python platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6266C CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 30976K
NUMA node0 CPU(s): 0-31
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.21.3
[pip3] torch==2.1.0+cpu.cxx11.abi
[conda] numpy 1.24.4 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @malfet @seemethere @xmfan | high priority,needs reproduction,module: crash,module: build,module: cpp-extensions,triaged | low | Critical |
2,757,051,073 | vscode | JSON rendering: No way to get pretty printed colored output | ### Applies To
- [x] Notebooks (.ipynb files)
- [ ] Interactive Window and\/or Cell Scripts (.py files with \#%% markers)
### What happened?
<img width="3120" alt="Image" src="https://github.com/user-attachments/assets/b1d4c5b8-0b48-43ed-9c5a-5feb1ea54636" />
expecting output like
<img width="132" alt="Image" src="https://github.com/user-attachments/assets/839e631a-f54e-4b02-84cd-b69ef741abac" />
### VS Code Version
1.96.0
### Jupyter Extension Version
2024.11.0
### Jupyter logs
```shell
Visual Studio Code (1.96.0, undefined, desktop)
Jupyter Extension Version: 2024.11.0.
Python Extension Version: 2024.22.0.
Pylance Extension Version: 2024.12.1.
Platform: darwin (arm64).
Temp Storage folder ~/Library/Application Support/Code/User/globalStorage/ms-toolsai.jupyter/version-2024.11.0
Workspace folder ~/work/vscode-jupyter-json, Home = /Users/vesa.vilhonen
12:19:00.403 [info] Starting Kernel (Python Path: ~/.venv/bin/python, Venv, 3.13.0) for '~/work/vscode-jupyter-json/problem.ipynb' (disableUI=true)
12:19:00.475 [info] Process Execution: ~/.venv/bin/python -m pip list
12:19:00.476 [info] Process Execution: ~/.venv/bin/python -c "import ipykernel; print(ipykernel.__version__); print("5dc3a68c-e34e-4080-9c3e-2a532b2ccb4d"); print(ipykernel.__file__)"
12:19:00.477 [info] Process Execution: ~/.venv/bin/python -m ipykernel_launcher --f=/Users/~/Library/Jupyter/runtime/kernel-v376d4969e47eafaac7a1a841ee24f71589557f6bc.json
> cwd: //Users/~/work/vscode-jupyter-json
12:19:00.932 [info] Kernel successfully started
12:19:00.934 [info] Process Execution: ~/.venv/bin/python /Users/~/.vscode/extensions/ms-toolsai.jupyter-2024.11.0-darwin-arm64/pythonFiles/printJupyterDataDir.py
```
### Coding Language and Runtime Version
Python 3.13.0 but happens with every version
### Language Extension Version (if applicable)
_No response_
### Anaconda Version (if applicable)
_No response_
### Running Jupyter locally or remotely?
Local | bug,info-needed | low | Minor |
2,757,052,949 | opencv | Error Compiling for CUDA: cuDSS | ### System Information
OS/Platform : Ubuntu 22.04 (Docker Image) (x86_64)
NVIDIA RTX 3060Ti
Opencv version: Latest
Compiler version: GNU 11.4.0
Python version: 3.10.12
### Detailed description
Hello, I am trying to compile OpenCV for use with an NVIDIA GPU inside a Docker container. I have been building with the commands listed in steps to reproduce since July with no issues; however, it is now complaining about cuDSS. After running the build command, the following log is produced.
```
0.136 -- The CXX compiler identification is GNU 11.4.0
0.154 -- The C compiler identification is GNU 11.4.0
0.157 -- Detecting CXX compiler ABI info
0.197 -- Detecting CXX compiler ABI info - done
0.200 -- Check for working CXX compiler: /usr/bin/c++ - skipped
0.200 -- Detecting CXX compile features
0.201 -- Detecting CXX compile features - done
0.202 -- Detecting C compiler ABI info
0.237 -- Detecting C compiler ABI info - done
0.240 -- Check for working C compiler: /usr/bin/cc - skipped
0.240 -- Detecting C compile features
0.240 -- Detecting C compile features - done
0.593 -- Detected processor: x86_64
0.612 -- Found PythonInterp: /usr/bin/python3 (found suitable version "3.10.12", minimum required is "3.2")
0.621 -- Found PythonLibs: /usr/lib/x86_64-linux-gnu/libpython3.10.so (found suitable exact version "3.10.12")
0.749 -- Looking for ccache - not found
0.749 -- Performing Test HAVE_CXX_FSIGNED_CHAR
0.791 -- Performing Test HAVE_CXX_FSIGNED_CHAR - Success
0.791 -- Performing Test HAVE_C_FSIGNED_CHAR
0.826 -- Performing Test HAVE_C_FSIGNED_CHAR - Success
0.827 -- Performing Test HAVE_CXX_FFAST_MATH
0.866 -- Performing Test HAVE_CXX_FFAST_MATH - Success
0.866 -- Performing Test HAVE_C_FFAST_MATH
0.901 -- Performing Test HAVE_C_FFAST_MATH - Success
0.901 -- Performing Test HAVE_CXX_FNO_FINITE_MATH_ONLY
0.941 -- Performing Test HAVE_CXX_FNO_FINITE_MATH_ONLY - Success
0.941 -- Performing Test HAVE_C_FNO_FINITE_MATH_ONLY
0.976 -- Performing Test HAVE_C_FNO_FINITE_MATH_ONLY - Success
0.976 -- Performing Test HAVE_CXX_W
1.016 -- Performing Test HAVE_CXX_W - Success
1.016 -- Performing Test HAVE_C_W
1.050 -- Performing Test HAVE_C_W - Success
1.050 -- Performing Test HAVE_CXX_WALL
1.089 -- Performing Test HAVE_CXX_WALL - Success
1.089 -- Performing Test HAVE_C_WALL
1.125 -- Performing Test HAVE_C_WALL - Success
1.125 -- Performing Test HAVE_CXX_WRETURN_TYPE
1.165 -- Performing Test HAVE_CXX_WRETURN_TYPE - Success
1.165 -- Performing Test HAVE_C_WRETURN_TYPE
1.202 -- Performing Test HAVE_C_WRETURN_TYPE - Success
1.202 -- Performing Test HAVE_CXX_WNON_VIRTUAL_DTOR
1.241 -- Performing Test HAVE_CXX_WNON_VIRTUAL_DTOR - Success
1.242 -- Performing Test HAVE_C_WNON_VIRTUAL_DTOR
1.279 -- Performing Test HAVE_C_WNON_VIRTUAL_DTOR - Failed
1.279 -- Performing Test HAVE_CXX_WADDRESS
1.320 -- Performing Test HAVE_CXX_WADDRESS - Success
1.320 -- Performing Test HAVE_C_WADDRESS
1.354 -- Performing Test HAVE_C_WADDRESS - Success
1.354 -- Performing Test HAVE_CXX_WSEQUENCE_POINT
1.394 -- Performing Test HAVE_CXX_WSEQUENCE_POINT - Success
1.395 -- Performing Test HAVE_C_WSEQUENCE_POINT
1.431 -- Performing Test HAVE_C_WSEQUENCE_POINT - Success
1.431 -- Performing Test HAVE_CXX_WFORMAT
1.470 -- Performing Test HAVE_CXX_WFORMAT - Success
1.470 -- Performing Test HAVE_C_WFORMAT
1.505 -- Performing Test HAVE_C_WFORMAT - Success
1.505 -- Performing Test HAVE_CXX_WFORMAT_SECURITY
1.546 -- Performing Test HAVE_CXX_WFORMAT_SECURITY - Success
1.546 -- Performing Test HAVE_C_WFORMAT_SECURITY
1.582 -- Performing Test HAVE_C_WFORMAT_SECURITY - Success
1.583 -- Performing Test HAVE_CXX_WMISSING_DECLARATIONS
1.624 -- Performing Test HAVE_CXX_WMISSING_DECLARATIONS - Success
1.624 -- Performing Test HAVE_C_WMISSING_DECLARATIONS
1.660 -- Performing Test HAVE_C_WMISSING_DECLARATIONS - Success
1.660 -- Performing Test HAVE_CXX_WMISSING_PROTOTYPES
1.701 -- Performing Test HAVE_CXX_WMISSING_PROTOTYPES - Failed
1.701 -- Performing Test HAVE_C_WMISSING_PROTOTYPES
1.737 -- Performing Test HAVE_C_WMISSING_PROTOTYPES - Success
1.738 -- Performing Test HAVE_CXX_WSTRICT_PROTOTYPES
1.777 -- Performing Test HAVE_CXX_WSTRICT_PROTOTYPES - Failed
1.777 -- Performing Test HAVE_C_WSTRICT_PROTOTYPES
1.814 -- Performing Test HAVE_C_WSTRICT_PROTOTYPES - Success
1.814 -- Performing Test HAVE_CXX_WUNDEF
1.855 -- Performing Test HAVE_CXX_WUNDEF - Success
1.855 -- Performing Test HAVE_C_WUNDEF
1.891 -- Performing Test HAVE_C_WUNDEF - Success
1.892 -- Performing Test HAVE_CXX_WINIT_SELF
1.930 -- Performing Test HAVE_CXX_WINIT_SELF - Success
1.930 -- Performing Test HAVE_C_WINIT_SELF
1.968 -- Performing Test HAVE_C_WINIT_SELF - Success
1.968 -- Performing Test HAVE_CXX_WPOINTER_ARITH
2.009 -- Performing Test HAVE_CXX_WPOINTER_ARITH - Success
2.009 -- Performing Test HAVE_C_WPOINTER_ARITH
2.046 -- Performing Test HAVE_C_WPOINTER_ARITH - Success
2.046 -- Performing Test HAVE_CXX_WSHADOW
2.087 -- Performing Test HAVE_CXX_WSHADOW - Success
2.087 -- Performing Test HAVE_C_WSHADOW
2.123 -- Performing Test HAVE_C_WSHADOW - Success
2.124 -- Performing Test HAVE_CXX_WSIGN_PROMO
2.165 -- Performing Test HAVE_CXX_WSIGN_PROMO - Success
2.165 -- Performing Test HAVE_C_WSIGN_PROMO
2.201 -- Performing Test HAVE_C_WSIGN_PROMO - Failed
2.201 -- Performing Test HAVE_CXX_WUNINITIALIZED
2.242 -- Performing Test HAVE_CXX_WUNINITIALIZED - Success
2.242 -- Performing Test HAVE_C_WUNINITIALIZED
2.278 -- Performing Test HAVE_C_WUNINITIALIZED - Success
2.278 -- Performing Test HAVE_CXX_WSUGGEST_OVERRIDE
2.318 -- Performing Test HAVE_CXX_WSUGGEST_OVERRIDE - Success
2.318 -- Performing Test HAVE_C_WSUGGEST_OVERRIDE
2.354 -- Performing Test HAVE_C_WSUGGEST_OVERRIDE - Failed
2.355 -- Performing Test HAVE_CXX_WNO_DELETE_NON_VIRTUAL_DTOR
2.398 -- Performing Test HAVE_CXX_WNO_DELETE_NON_VIRTUAL_DTOR - Success
2.398 -- Performing Test HAVE_C_WNO_DELETE_NON_VIRTUAL_DTOR
2.434 -- Performing Test HAVE_C_WNO_DELETE_NON_VIRTUAL_DTOR - Failed
2.434 -- Performing Test HAVE_CXX_WNO_UNNAMED_TYPE_TEMPLATE_ARGS
2.473 -- Performing Test HAVE_CXX_WNO_UNNAMED_TYPE_TEMPLATE_ARGS - Failed
2.474 -- Performing Test HAVE_C_WNO_UNNAMED_TYPE_TEMPLATE_ARGS
2.508 -- Performing Test HAVE_C_WNO_UNNAMED_TYPE_TEMPLATE_ARGS - Failed
2.508 -- Performing Test HAVE_CXX_WNO_COMMENT
2.547 -- Performing Test HAVE_CXX_WNO_COMMENT - Success
2.548 -- Performing Test HAVE_C_WNO_COMMENT
2.583 -- Performing Test HAVE_C_WNO_COMMENT - Success
2.583 -- Performing Test HAVE_CXX_WIMPLICIT_FALLTHROUGH_3
2.625 -- Performing Test HAVE_CXX_WIMPLICIT_FALLTHROUGH_3 - Success
2.625 -- Performing Test HAVE_C_WIMPLICIT_FALLTHROUGH_3
2.660 -- Performing Test HAVE_C_WIMPLICIT_FALLTHROUGH_3 - Success
2.661 -- Performing Test HAVE_CXX_WNO_STRICT_OVERFLOW
2.701 -- Performing Test HAVE_CXX_WNO_STRICT_OVERFLOW - Success
2.701 -- Performing Test HAVE_C_WNO_STRICT_OVERFLOW
2.737 -- Performing Test HAVE_C_WNO_STRICT_OVERFLOW - Success
2.737 -- Performing Test HAVE_CXX_FDIAGNOSTICS_SHOW_OPTION
2.777 -- Performing Test HAVE_CXX_FDIAGNOSTICS_SHOW_OPTION - Success
2.777 -- Performing Test HAVE_C_FDIAGNOSTICS_SHOW_OPTION
2.813 -- Performing Test HAVE_C_FDIAGNOSTICS_SHOW_OPTION - Success
2.813 -- Performing Test HAVE_CXX_WNO_LONG_LONG
2.854 -- Performing Test HAVE_CXX_WNO_LONG_LONG - Success
2.854 -- Performing Test HAVE_C_WNO_LONG_LONG
2.889 -- Performing Test HAVE_C_WNO_LONG_LONG - Success
2.890 -- Performing Test HAVE_CXX_PTHREAD
2.929 -- Performing Test HAVE_CXX_PTHREAD - Success
2.929 -- Performing Test HAVE_C_PTHREAD
2.966 -- Performing Test HAVE_C_PTHREAD - Success
2.966 -- Performing Test HAVE_CXX_FOMIT_FRAME_POINTER
3.005 -- Performing Test HAVE_CXX_FOMIT_FRAME_POINTER - Success
3.005 -- Performing Test HAVE_C_FOMIT_FRAME_POINTER
3.041 -- Performing Test HAVE_C_FOMIT_FRAME_POINTER - Success
3.041 -- Performing Test HAVE_CXX_FFUNCTION_SECTIONS
3.080 -- Performing Test HAVE_CXX_FFUNCTION_SECTIONS - Success
3.080 -- Performing Test HAVE_C_FFUNCTION_SECTIONS
3.117 -- Performing Test HAVE_C_FFUNCTION_SECTIONS - Success
3.118 -- Performing Test HAVE_CXX_FDATA_SECTIONS
3.159 -- Performing Test HAVE_CXX_FDATA_SECTIONS - Success
3.159 -- Performing Test HAVE_C_FDATA_SECTIONS
3.194 -- Performing Test HAVE_C_FDATA_SECTIONS - Success
3.196 -- Performing Test HAVE_CPU_SSE_SUPPORT (check file: cmake/checks/cpu_sse.cpp)
3.245 -- Performing Test HAVE_CPU_SSE_SUPPORT - Success
3.246 -- Performing Test HAVE_CPU_SSE2_SUPPORT (check file: cmake/checks/cpu_sse2.cpp)
3.296 -- Performing Test HAVE_CPU_SSE2_SUPPORT - Success
3.296 -- Performing Test HAVE_CPU_SSE3_SUPPORT (check file: cmake/checks/cpu_sse3.cpp)
3.322 -- Performing Test HAVE_CPU_SSE3_SUPPORT - Failed
3.323 -- Performing Test HAVE_CXX_MSSE3 (check file: cmake/checks/cpu_sse3.cpp)
3.374 -- Performing Test HAVE_CXX_MSSE3 - Success
3.374 -- Performing Test HAVE_CXX_MSSSE3 (check file: cmake/checks/cpu_ssse3.cpp)
3.426 -- Performing Test HAVE_CXX_MSSSE3 - Success
3.426 -- Performing Test HAVE_CXX_MSSE4_1 (check file: cmake/checks/cpu_sse41.cpp)
3.476 -- Performing Test HAVE_CXX_MSSE4_1 - Success
3.476 -- Performing Test HAVE_CXX_MPOPCNT (check file: cmake/checks/cpu_popcnt.cpp)
3.517 -- Performing Test HAVE_CXX_MPOPCNT - Success
3.517 -- Performing Test HAVE_CXX_MSSE4_2 (check file: cmake/checks/cpu_sse42.cpp)
3.569 -- Performing Test HAVE_CXX_MSSE4_2 - Success
3.569 -- Performing Test HAVE_CXX_MAVX (check file: cmake/checks/cpu_avx.cpp)
3.736 -- Performing Test HAVE_CXX_MAVX - Success
3.736 -- Performing Test HAVE_CXX_MF16C (check file: cmake/checks/cpu_fp16.cpp)
3.908 -- Performing Test HAVE_CXX_MF16C - Success
3.908 -- Performing Test HAVE_CXX_MAVX2 (check file: cmake/checks/cpu_avx2.cpp)
4.073 -- Performing Test HAVE_CXX_MAVX2 - Success
4.074 -- Performing Test HAVE_CXX_MFMA
4.116 -- Performing Test HAVE_CXX_MFMA - Success
4.116 -- Performing Test HAVE_CXX_MAVX512F (check file: cmake/checks/cpu_avx512.cpp)
4.272 -- Performing Test HAVE_CXX_MAVX512F - Success
4.272 -- Performing Test HAVE_CXX_MAVX512F_MAVX512CD (check file: cmake/checks/cpu_avx512common.cpp)
4.432 -- Performing Test HAVE_CXX_MAVX512F_MAVX512CD - Success
4.433 -- Performing Test HAVE_CXX_MAVX512F_MAVX512CD_MAVX512VL_MAVX512BW_MAVX512DQ (check file: cmake/checks/cpu_avx512skx.cpp)
4.572 -- Performing Test HAVE_CXX_MAVX512F_MAVX512CD_MAVX512VL_MAVX512BW_MAVX512DQ - Success
4.573 -- Performing Test HAVE_CPU_BASELINE_FLAGS
4.615 -- Performing Test HAVE_CPU_BASELINE_FLAGS - Success
4.616 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_1
4.658 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_1 - Success
4.659 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_2
4.700 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_2 - Success
4.701 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX
4.741 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX - Success
4.742 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_FP16
4.783 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_FP16 - Success
4.784 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX2
4.825 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX2 - Success
4.826 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX512_SKX
4.868 -- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX512_SKX - Success
4.868 -- Performing Test HAVE_CXX_FVISIBILITY_HIDDEN
4.908 -- Performing Test HAVE_CXX_FVISIBILITY_HIDDEN - Success
4.908 -- Performing Test HAVE_C_FVISIBILITY_HIDDEN
4.944 -- Performing Test HAVE_C_FVISIBILITY_HIDDEN - Success
4.944 -- Performing Test HAVE_CXX_FVISIBILITY_INLINES_HIDDEN
4.984 -- Performing Test HAVE_CXX_FVISIBILITY_INLINES_HIDDEN - Success
4.984 -- Performing Test HAVE_C_FVISIBILITY_INLINES_HIDDEN
5.020 -- Performing Test HAVE_C_FVISIBILITY_INLINES_HIDDEN - Failed
5.021 -- Performing Test HAVE_LINK_AS_NEEDED
5.060 -- Performing Test HAVE_LINK_AS_NEEDED - Success
5.060 -- Performing Test HAVE_LINK_NO_UNDEFINED
5.101 -- Performing Test HAVE_LINK_NO_UNDEFINED - Success
5.103 -- Looking for pthread.h
5.141 -- Looking for pthread.h - found
5.142 -- Looking for posix_memalign
5.180 -- Looking for posix_memalign - found
5.180 -- Looking for malloc.h
5.219 -- Looking for malloc.h - found
5.219 -- Looking for memalign
5.258 -- Looking for memalign - found
5.385 -- Found OpenMP_C: -fopenmp (found version "4.5")
5.432 -- Found OpenMP_CXX: -fopenmp (found version "4.5")
5.432 -- Found OpenMP: TRUE (found version "4.5")
5.437 -- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found suitable version "1.2.11", minimum required is "1.2.3")
5.445 -- Could NOT find AVIF (missing: AVIF_LIBRARY AVIF_INCLUDE_DIR)
5.453 -- Found JPEG: /usr/lib/x86_64-linux-gnu/libjpeg.so (found version "80")
5.455 -- Looking for assert.h
5.494 -- Looking for assert.h - found
5.494 -- Looking for dlfcn.h
5.531 -- Looking for dlfcn.h - found
5.531 -- Looking for fcntl.h
5.568 -- Looking for fcntl.h - found
5.568 -- Looking for inttypes.h
5.606 -- Looking for inttypes.h - found
5.606 -- Looking for io.h
5.621 -- Looking for io.h - not found
5.621 -- Looking for limits.h
5.658 -- Looking for limits.h - found
5.658 -- Looking for memory.h
5.695 -- Looking for memory.h - found
5.695 -- Looking for search.h
5.733 -- Looking for search.h - found
5.733 -- Looking for stdint.h
5.769 -- Looking for stdint.h - found
5.770 -- Looking for string.h
5.809 -- Looking for string.h - found
5.809 -- Looking for strings.h
5.846 -- Looking for strings.h - found
5.846 -- Looking for sys/time.h
5.885 -- Looking for sys/time.h - found
5.885 -- Looking for sys/types.h
5.923 -- Looking for sys/types.h - found
5.923 -- Looking for unistd.h
5.964 -- Looking for unistd.h - found
5.964 -- Performing Test C_HAS_inline
6.001 -- Performing Test C_HAS_inline - Success
6.001 -- Looking for stddef.h
6.040 -- Looking for stddef.h - found
6.040 -- Check size of signed short
6.079 -- Check size of signed short - done
6.080 -- Check size of unsigned short
6.120 -- Check size of unsigned short - done
6.120 -- Check size of signed int
6.158 -- Check size of signed int - done
6.158 -- Check size of unsigned int
6.199 -- Check size of unsigned int - done
6.200 -- Check size of signed long
6.239 -- Check size of signed long - done
6.239 -- Check size of unsigned long
6.277 -- Check size of unsigned long - done
6.277 -- Check size of signed long long
6.316 -- Check size of signed long long - done
6.316 -- Check size of unsigned long long
6.356 -- Check size of unsigned long long - done
6.356 -- Check size of unsigned char *
6.395 -- Check size of unsigned char * - done
6.395 -- Check size of size_t
6.433 -- Check size of size_t - done
6.433 -- Check size of ptrdiff_t
6.472 -- Check size of ptrdiff_t - done
6.472 -- Check size of INT8
6.488 -- Check size of INT8 - failed
6.488 -- Check size of INT16
6.504 -- Check size of INT16 - failed
6.504 -- Check size of INT32
6.520 -- Check size of INT32 - failed
6.520 -- Looking for floor
6.561 -- Looking for floor - found
6.561 -- Looking for pow
6.600 -- Looking for pow - found
6.600 -- Looking for sqrt
6.642 -- Looking for sqrt - found
6.642 -- Looking for isascii
6.678 -- Looking for isascii - found
6.678 -- Looking for memset
6.717 -- Looking for memset - found
6.717 -- Looking for mmap
6.755 -- Looking for mmap - found
6.755 -- Looking for getopt
6.791 -- Looking for getopt - found
6.792 -- Looking for memmove
6.831 -- Looking for memmove - found
6.831 -- Looking for setmode
6.868 -- Looking for setmode - not found
6.868 -- Looking for strcasecmp
6.909 -- Looking for strcasecmp - found
6.909 -- Looking for strchr
6.947 -- Looking for strchr - found
6.947 -- Looking for strrchr
6.984 -- Looking for strrchr - found
6.984 -- Looking for strstr
7.023 -- Looking for strstr - found
7.023 -- Looking for strtol
7.060 -- Looking for strtol - found
7.060 -- Looking for strtol
7.097 -- Looking for strtol - found
7.097 -- Looking for strtoull
7.134 -- Looking for strtoull - found
7.134 -- Looking for lfind
7.172 -- Looking for lfind - found
7.173 -- Performing Test HAVE_C_WNO_UNUSED_BUT_SET_VARIABLE
7.208 -- Performing Test HAVE_C_WNO_UNUSED_BUT_SET_VARIABLE - Success
7.209 -- Performing Test HAVE_C_WNO_MISSING_PROTOTYPES
7.246 -- Performing Test HAVE_C_WNO_MISSING_PROTOTYPES - Success
7.246 -- Performing Test HAVE_C_WNO_MISSING_DECLARATIONS
7.285 -- Performing Test HAVE_C_WNO_MISSING_DECLARATIONS - Success
7.285 -- Performing Test HAVE_C_WNO_UNDEF
7.322 -- Performing Test HAVE_C_WNO_UNDEF - Success
7.322 -- Performing Test HAVE_C_WNO_UNUSED
7.359 -- Performing Test HAVE_C_WNO_UNUSED - Success
7.359 -- Performing Test HAVE_C_WNO_SIGN_COMPARE
7.395 -- Performing Test HAVE_C_WNO_SIGN_COMPARE - Success
7.395 -- Performing Test HAVE_C_WNO_CAST_ALIGN
7.431 -- Performing Test HAVE_C_WNO_CAST_ALIGN - Success
7.431 -- Performing Test HAVE_C_WNO_SHADOW
7.468 -- Performing Test HAVE_C_WNO_SHADOW - Success
7.468 -- Performing Test HAVE_C_WNO_MAYBE_UNINITIALIZED
7.506 -- Performing Test HAVE_C_WNO_MAYBE_UNINITIALIZED - Success
7.506 -- Performing Test HAVE_C_WNO_POINTER_TO_INT_CAST
7.543 -- Performing Test HAVE_C_WNO_POINTER_TO_INT_CAST - Success
7.543 -- Performing Test HAVE_C_WNO_INT_TO_POINTER_CAST
7.579 -- Performing Test HAVE_C_WNO_INT_TO_POINTER_CAST - Success
7.580 -- Performing Test HAVE_C_WNO_MISLEADING_INDENTATION
7.617 -- Performing Test HAVE_C_WNO_MISLEADING_INDENTATION - Success
7.617 -- Performing Test HAVE_C_WNO_IMPLICIT_FALLTHROUGH
7.654 -- Performing Test HAVE_C_WNO_IMPLICIT_FALLTHROUGH - Success
7.654 -- Performing Test HAVE_C_WNO_UNUSED_PARAMETER
7.690 -- Performing Test HAVE_C_WNO_UNUSED_PARAMETER - Success
7.691 -- Performing Test HAVE_C_WNO_ARRAY_PARAMETER
7.727 -- Performing Test HAVE_C_WNO_ARRAY_PARAMETER - Success
7.727 -- Performing Test HAVE_C_WNO_STRICT_PROTOTYPES
7.763 -- Performing Test HAVE_C_WNO_STRICT_PROTOTYPES - Success
7.763 -- Performing Test HAVE_CXX_WNO_MISSING_DECLARATIONS
7.806 -- Performing Test HAVE_CXX_WNO_MISSING_DECLARATIONS - Success
7.806 -- Performing Test HAVE_CXX_WNO_UNUSED_PARAMETER
7.847 -- Performing Test HAVE_CXX_WNO_UNUSED_PARAMETER - Success
7.847 -- Performing Test HAVE_CXX_WNO_MISSING_PROTOTYPES
7.890 -- Performing Test HAVE_CXX_WNO_MISSING_PROTOTYPES - Failed
7.890 -- Performing Test HAVE_CXX_WNO_UNDEF
7.931 -- Performing Test HAVE_CXX_WNO_UNDEF - Success
7.933 -- Found WebP: /usr/lib/x86_64-linux-gnu/libwebp.so
7.937 -- The imported target "openjpip" references the file
7.937 "/usr/lib/x86_64-linux-gnu/libopenjpip.so.2.4.0"
7.937 but this file does not exist. Possible reasons include:
7.937 * The file was deleted, renamed, or moved to another location.
7.937 * An install or uninstall procedure did not complete successfully.
7.937 * The installation package was faulty and contained
7.937 "/usr/lib/x86_64-linux-gnu/openjpeg-2.4/OpenJPEGTargets.cmake"
7.937 but not all the files it references.
7.937
7.937 -- The imported target "opj_decompress" references the file
7.937 "/usr/bin/opj_decompress"
7.937 but this file does not exist. Possible reasons include:
7.937 * The file was deleted, renamed, or moved to another location.
7.937 * An install or uninstall procedure did not complete successfully.
7.937 * The installation package was faulty and contained
7.937 "/usr/lib/x86_64-linux-gnu/openjpeg-2.4/OpenJPEGTargets.cmake"
7.937 but not all the files it references.
7.937
7.937 -- The imported target "opj_compress" references the file
7.937 "/usr/bin/opj_compress"
7.937 but this file does not exist. Possible reasons include:
7.937 * The file was deleted, renamed, or moved to another location.
7.937 * An install or uninstall procedure did not complete successfully.
7.937 * The installation package was faulty and contained
7.937 "/usr/lib/x86_64-linux-gnu/openjpeg-2.4/OpenJPEGTargets.cmake"
7.937 but not all the files it references.
7.937
7.937 -- The imported target "opj_dump" references the file
7.937 "/usr/bin/opj_dump"
7.937 but this file does not exist. Possible reasons include:
7.937 * The file was deleted, renamed, or moved to another location.
7.937 * An install or uninstall procedure did not complete successfully.
7.937 * The installation package was faulty and contained
7.937 "/usr/lib/x86_64-linux-gnu/openjpeg-2.4/OpenJPEGTargets.cmake"
7.937 but not all the files it references.
7.937
7.937 -- The imported target "opj_jpip_addxml" references the file
7.937 "/usr/bin/opj_jpip_addxml"
7.937 but this file does not exist. Possible reasons include:
7.937 * The file was deleted, renamed, or moved to another location.
7.937 * An install or uninstall procedure did not complete successfully.
7.937 * The installation package was faulty and contained
7.937 "/usr/lib/x86_64-linux-gnu/openjpeg-2.4/OpenJPEGTargets.cmake"
7.937 but not all the files it references.
7.937
7.937 -- The imported target "opj_server" references the file
7.937 "/usr/bin/opj_server"
7.937 but this file does not exist. Possible reasons include:
7.937 * The file was deleted, renamed, or moved to another location.
7.937 * An install or uninstall procedure did not complete successfully.
7.937 * The installation package was faulty and contained
7.937 "/usr/lib/x86_64-linux-gnu/openjpeg-2.4/OpenJPEGTargets.cmake"
7.937 but not all the files it references.
7.937
7.937 -- The imported target "opj_dec_server" references the file
7.937 "/usr/bin/opj_dec_server"
7.937 but this file does not exist. Possible reasons include:
7.937 * The file was deleted, renamed, or moved to another location.
7.937 * An install or uninstall procedure did not complete successfully.
7.937 * The installation package was faulty and contained
7.937 "/usr/lib/x86_64-linux-gnu/openjpeg-2.4/OpenJPEGTargets.cmake"
7.937 but not all the files it references.
7.937
7.937 -- The imported target "opj_jpip_transcode" references the file
7.937 "/usr/bin/opj_jpip_transcode"
7.937 but this file does not exist. Possible reasons include:
7.937 * The file was deleted, renamed, or moved to another location.
7.937 * An install or uninstall procedure did not complete successfully.
7.937 * The installation package was faulty and contained
7.937 "/usr/lib/x86_64-linux-gnu/openjpeg-2.4/OpenJPEGTargets.cmake"
7.937 but not all the files it references.
7.937
7.937 -- The imported target "opj_jpip_test" references the file
7.937 "/usr/bin/opj_jpip_test"
7.937 but this file does not exist. Possible reasons include:
7.937 * The file was deleted, renamed, or moved to another location.
7.937 * An install or uninstall procedure did not complete successfully.
7.937 * The installation package was faulty and contained
7.937 "/usr/lib/x86_64-linux-gnu/openjpeg-2.4/OpenJPEGTargets.cmake"
7.937 but not all the files it references.
7.937
7.937 -- Found system OpenJPEG: openjp2 (found version "2.4.0")
7.953 -- Found OpenEXR: /usr/lib/x86_64-linux-gnu/libIlmImf-2_5.so
7.965 -- libva: missing va.h header (VA_INCLUDE_DIR)
7.967 -- Found TBB (cmake): _lib-NOTFOUND
7.967 -- IPPICV: Downloading ippicv_2021.12.0_lnx_intel64_20240425_general.tgz from https://raw.githubusercontent.com/opencv/opencv_3rdparty/7f55c0c26be418d494615afca15218566775c725/ippicv/ippicv_2021.12.0_lnx_intel64_20240425_general.tgz
9.866 -- found Intel IPP (ICV version): 2021.12.0 [2021.12.0]
9.866 -- at: /opencv/build/3rdparty/ippicv/ippicv_lnx/icv
9.867 -- found Intel IPP Integration Wrappers sources: 2021.12.0
9.867 -- at: /opencv/build/3rdparty/ippicv/ippicv_lnx/iw
9.880 -- Looking for pthread.h
9.920 -- Looking for pthread.h - found
9.920 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
9.959 -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
9.959 -- Found Threads: TRUE
9.964 -- Found CUDNN: /usr/lib/x86_64-linux-gnu/libcudnn.so (found suitable version "9.3.0", minimum required is "7.5")
9.964 -- NVCUVID: Header not found, WITH_NVCUVID requires Nvidia decoding library header /usr/local/cuda;/usr/local/cuda/include/nvcuvid.h
9.964 -- NVCUVENC: Header not found, WITH_NVCUVENC requires Nvidia encoding library header /usr/local/cuda;/usr/local/cuda/include/nvEncodeAPI.h
9.964 -- CUDA detected: 12.6
9.965 -- CUDA: Using CUDA_ARCH_BIN=8.6
9.965 -- CUDA: NVCC target flags -gencode;arch=compute_86,code=sm_86;-D_FORCE_INLINES
9.970 -- Could not find OpenBLAS include. Turning OpenBLAS_FOUND off
9.970 -- Could not find OpenBLAS lib. Turning OpenBLAS_FOUND off
9.973 -- Found Atlas: /usr/include/x86_64-linux-gnu
9.973 -- Found Atlas (include: /usr/include/x86_64-linux-gnu, library: /usr/lib/x86_64-linux-gnu/libatlas.so)
9.973 -- LAPACK(Atlas): LAPACK_LIBRARIES: /usr/lib/x86_64-linux-gnu/liblapack.so;/usr/lib/x86_64-linux-gnu/libcblas.so;/usr/lib/x86_64-linux-gnu/libatlas.so
10.16 -- LAPACK(Atlas): Support is enabled.
10.16 -- Performing Test HAVE_CXX_WNO_DEPRECATED
10.21 -- Performing Test HAVE_CXX_WNO_DEPRECATED - Success
10.21 -- Performing Test HAVE_CXX_WNO_SHADOW
10.25 -- Performing Test HAVE_CXX_WNO_SHADOW - Success
10.25 -- Performing Test HAVE_CXX_WNO_UNUSED_LOCAL_TYPEDEFS
10.30 -- Performing Test HAVE_CXX_WNO_UNUSED_LOCAL_TYPEDEFS - Success
10.30 -- Performing Test HAVE_CXX_WNO_SIGN_COMPARE
10.34 -- Performing Test HAVE_CXX_WNO_SIGN_COMPARE - Success
10.34 -- Performing Test HAVE_CXX_WNO_SIGN_PROMO
10.38 -- Performing Test HAVE_CXX_WNO_SIGN_PROMO - Success
10.38 -- Performing Test HAVE_CXX_WNO_TAUTOLOGICAL_UNDEFINED_COMPARE
10.43 -- Performing Test HAVE_CXX_WNO_TAUTOLOGICAL_UNDEFINED_COMPARE - Failed
10.43 -- Performing Test HAVE_CXX_WNO_IGNORED_QUALIFIERS
10.47 -- Performing Test HAVE_CXX_WNO_IGNORED_QUALIFIERS - Success
10.47 -- Performing Test HAVE_CXX_WNO_EXTRA
10.51 -- Performing Test HAVE_CXX_WNO_EXTRA - Success
10.51 -- Performing Test HAVE_CXX_WNO_UNUSED_FUNCTION
10.56 -- Performing Test HAVE_CXX_WNO_UNUSED_FUNCTION - Success
10.56 -- Performing Test HAVE_CXX_WNO_UNUSED_CONST_VARIABLE
10.60 -- Performing Test HAVE_CXX_WNO_UNUSED_CONST_VARIABLE - Success
10.60 -- Performing Test HAVE_CXX_WNO_SHORTEN_64_TO_32
10.64 -- Performing Test HAVE_CXX_WNO_SHORTEN_64_TO_32 - Failed
10.64 -- Performing Test HAVE_CXX_WNO_INVALID_OFFSETOF
10.69 -- Performing Test HAVE_CXX_WNO_INVALID_OFFSETOF - Success
10.69 -- Performing Test HAVE_CXX_WNO_ENUM_COMPARE_SWITCH
10.73 -- Performing Test HAVE_CXX_WNO_ENUM_COMPARE_SWITCH - Failed
10.73 -- Performing Test HAVE_CXX_WNO_SUGGEST_OVERRIDE
10.77 -- Performing Test HAVE_CXX_WNO_SUGGEST_OVERRIDE - Success
10.77 -- Performing Test HAVE_CXX_WNO_INCONSISTENT_MISSING_OVERRIDE
10.82 -- Performing Test HAVE_CXX_WNO_INCONSISTENT_MISSING_OVERRIDE - Failed
10.82 -- Performing Test HAVE_CXX_WNO_IMPLICIT_FALLTHROUGH
10.86 -- Performing Test HAVE_CXX_WNO_IMPLICIT_FALLTHROUGH - Success
10.86 -- Performing Test HAVE_CXX_WNO_ARRAY_BOUNDS
10.90 -- Performing Test HAVE_CXX_WNO_ARRAY_BOUNDS - Success
10.90 -- Performing Test HAVE_CXX_WNO_STRINGOP_OVERFLOW
10.94 -- Performing Test HAVE_CXX_WNO_STRINGOP_OVERFLOW - Success
10.94 -- Performing Test HAVE_CXX_WNO_STRINGOP_OVERREAD
10.99 -- Performing Test HAVE_CXX_WNO_STRINGOP_OVERREAD - Success
10.99 -- Performing Test HAVE_CXX_WNO_EXTRA_SEMI
11.03 -- Performing Test HAVE_CXX_WNO_EXTRA_SEMI - Success
11.03 -- Performing Test HAVE_CXX_WNO_COMMA
11.07 -- Performing Test HAVE_CXX_WNO_COMMA - Failed
11.07 -- Performing Test HAVE_CXX_WNO_CLASS_MEMACCESS
11.12 -- Performing Test HAVE_CXX_WNO_CLASS_MEMACCESS - Success
11.13 -- Found Java: /usr/bin/java (found version "11.0.25")
11.14 -- Found JNI: /usr/lib/jvm/default-java/lib/libjawt.so
12.76 -- Found VTK 9.1.0
12.76 -- Looking for dlerror in dl
12.81 -- Looking for dlerror in dl - found
12.81 -- ADE: Downloading v0.1.2e.zip from https://github.com/opencv/ade/archive/v0.1.2e.zip
13.55 -- Checking for module 'gtk+-3.0'
13.56 -- Found gtk+-3.0, version 3.24.33
13.59 -- Checking for module 'gtk+-2.0'
13.60 -- Found gtk+-2.0, version 2.24.33
13.64 -- Performing Test HAVE_CXX_WNO_STRICT_ALIASING
13.68 -- Performing Test HAVE_CXX_WNO_STRICT_ALIASING - Success
13.68 -- Checking for modules 'libavcodec;libavformat;libavutil;libswscale'
13.70 -- Found libavcodec, version 58.134.100
13.70 -- Found libavformat, version 58.76.100
13.71 -- Found libavutil, version 56.70.100
13.71 -- Found libswscale, version 5.9.100
13.73 -- Checking for module 'libavresample'
13.73 -- No package 'libavresample' found
13.86 -- Checking for module 'gstreamer-base-1.0'
13.87 -- Found gstreamer-base-1.0, version 1.20.3
13.89 -- Checking for module 'gstreamer-app-1.0'
13.90 -- Found gstreamer-app-1.0, version 1.20.1
13.92 -- Checking for module 'gstreamer-riff-1.0'
13.93 -- Found gstreamer-riff-1.0, version 1.20.1
13.95 -- Checking for module 'gstreamer-pbutils-1.0'
13.95 -- Found gstreamer-pbutils-1.0, version 1.20.1
13.97 -- Checking for module 'gstreamer-video-1.0'
13.98 -- Found gstreamer-video-1.0, version 1.20.1
14.00 -- Checking for module 'gstreamer-audio-1.0'
14.00 -- Found gstreamer-audio-1.0, version 1.20.1
14.13 -- Performing Test HAVE_CXX_WNO_ENUM_COMPARE
14.17 -- Performing Test HAVE_CXX_WNO_ENUM_COMPARE - Success
14.18 -- Performing Test HAVE_CXX_WNO_UNINITIALIZED
14.22 -- Performing Test HAVE_CXX_WNO_UNINITIALIZED - Success
14.22 -- Performing Test HAVE_CXX_WNO_DEPRECATED_DECLARATIONS
14.26 -- Performing Test HAVE_CXX_WNO_DEPRECATED_DECLARATIONS - Success
14.26 -- Performing Test HAVE_CXX_WNO_TAUTOLOGICAL_COMPARE
14.31 -- Performing Test HAVE_CXX_WNO_TAUTOLOGICAL_COMPARE - Success
14.31 -- Performing Test HAVE_CXX_WNO_UNUSED_VARIABLE
14.36 -- Performing Test HAVE_CXX_WNO_UNUSED_VARIABLE - Success
14.36 -- Checking for module 'freetype2'
14.37 -- Found freetype2, version 24.1.18
14.39 -- Checking for module 'harfbuzz'
14.40 -- Found harfbuzz, version 2.7.4
14.42 -- freetype2: YES (ver 24.1.18)
14.42 -- harfbuzz: YES (ver 2.7.4)
14.53 -- Found HDF5: /usr/lib/x86_64-linux-gnu/hdf5/serial/libhdf5.so;/usr/lib/x86_64-linux-gnu/libcrypto.so;/usr/lib/x86_64-linux-gnu/libcurl.so;/usr/lib/x86_64-linux-gnu/libpthread.a;/usr/lib/x86_64-linux-gnu/libsz.so;/usr/lib/x86_64-linux-gnu/libz.so;/usr/lib/x86_64-linux-gnu/libdl.a;/usr/lib/x86_64-linux-gnu/libm.so (found version "1.10.7")
14.54 -- Julia not found. Not compiling Julia Bindings.
14.55 -- Module opencv_ovis disabled because OGRE3D was not found
14.68 -- Found AMD headers in: /usr/include/suitesparse
14.68 -- Found AMD library: /usr/lib/x86_64-linux-gnu/libamd.so
14.68 -- Found CAMD headers in: /usr/include/suitesparse
14.68 -- Found CAMD library: /usr/lib/x86_64-linux-gnu/libcamd.so
14.68 -- Found CCOLAMD headers in: /usr/include/suitesparse
14.68 -- Found CCOLAMD library: /usr/lib/x86_64-linux-gnu/libccolamd.so
14.68 -- Found CHOLMOD headers in: /usr/include/suitesparse
14.68 -- Found CHOLMOD library: /usr/lib/x86_64-linux-gnu/libcholmod.so
14.68 -- Found COLAMD headers in: /usr/include/suitesparse
14.68 -- Found COLAMD library: /usr/lib/x86_64-linux-gnu/libcolamd.so
14.68 -- Found SPQR headers in: /usr/include/suitesparse
14.68 -- Found SPQR library: /usr/lib/x86_64-linux-gnu/libspqr.so
14.68 -- Found Config headers in: /usr/include/suitesparse
14.68 -- Found Config library: /usr/lib/x86_64-linux-gnu/libsuitesparseconfig.so
14.68 -- Found Intel Thread Building Blocks (TBB) library (2021.5 / 12050). Assuming SuiteSparseQR was compiled with TBB.
14.72 -- Adding librt to SuiteSparse_config libraries (required on Linux & Unix [not OSX] if SuiteSparse is compiled with timing).
14.73 -- Found METIS: /usr/include (found version "5.1.0")
14.73 -- Looking for cholmod_metis
14.80 -- Looking for cholmod_metis - found
14.83 -- Found cudss: (Version:0.3.0
14.83 CMakePackageDir:/usr/local/libcudss-linux-0.3.0.9_cuda12-archive/lib/cmake/cudss
14.83 IncludeDir:/usr/local/libcudss-linux-0.3.0.9_cuda12-archive/include
14.83 LibraryDir:/usr/local/libcudss-linux-0.3.0.9_cuda12-archive/lib/.
14.83 ComponentsFound:[cudss])
14.98 -- Checking SFM glog/gflags deps... TRUE
15.00 -- Checking for module 'tesseract'
15.01 -- Found tesseract, version 4.1.1
15.03 -- Tesseract: YES (ver 4.1.1)
15.30 -- Allocator metrics storage type: 'long long'
15.30 -- Performing Test HAVE_CXX_WNO_UNUSED_BUT_SET_VARIABLE
15.35 -- Performing Test HAVE_CXX_WNO_UNUSED_BUT_SET_VARIABLE - Success
15.51 -- Excluding from source files list: modules/imgproc/src/imgwarp.lasx.cpp
15.51 -- Excluding from source files list: modules/imgproc/src/resize.lasx.cpp
15.57 -- Registering hook 'INIT_MODULE_SOURCES_opencv_dnn': /opencv/modules/dnn/cmake/hooks/INIT_MODULE_SOURCES_opencv_dnn.cmake
15.58 -- opencv_dnn: filter out ocl4dnn source code
15.58 -- Excluding from source files list: <BUILD>/modules/dnn/layers/layers_common.rvv.cpp
15.58 -- Excluding from source files list: <BUILD>/modules/dnn/layers/layers_common.lasx.cpp
15.58 -- Excluding from source files list: <BUILD>/modules/dnn/int8layers/layers_common.rvv.cpp
15.58 -- Excluding from source files list: <BUILD>/modules/dnn/int8layers/layers_common.lasx.cpp
15.59 -- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/conv_block.neon.cpp
15.59 -- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/conv_block.neon_fp16.cpp
15.59 -- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/conv_depthwise.rvv.cpp
15.59 -- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/conv_depthwise.lasx.cpp
15.59 -- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/fast_gemm_kernels.neon.cpp
15.59 -- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/fast_gemm_kernels.lasx.cpp
15.64 -- Performing Test HAVE_CXX_WNO_OVERLOADED_VIRTUAL
15.68 -- Performing Test HAVE_CXX_WNO_OVERLOADED_VIRTUAL - Success
15.70 CMake Warning at /opencv_contrib/modules/cudacodec/CMakeLists.txt:26 (message):
15.70 cudacodec::VideoReader requires Nvidia Video Codec SDK. Please resolve
15.70 dependency or disable WITH_NVCUVID=OFF
15.70
15.70
15.70 CMake Warning at /opencv_contrib/modules/cudacodec/CMakeLists.txt:30 (message):
15.70 cudacodec::VideoWriter requires Nvidia Video Codec SDK. Please resolve
15.70 dependency or disable WITH_NVCUVENC=OFF
15.70
15.70
15.73 -- highgui: using builtin backend: QT5
15.77 -- Performing Test Iconv_IS_BUILT_IN
15.81 -- Performing Test Iconv_IS_BUILT_IN - Success
15.81 -- wechat_qrcode: Downloading detect.caffemodel from https://raw.githubusercontent.com/WeChatCV/opencv_3rdparty/a8b69ccc738421293254aec5ddb38bd523503252/detect.caffemodel
16.28 -- wechat_qrcode: Downloading detect.prototxt from https://raw.githubusercontent.com/WeChatCV/opencv_3rdparty/a8b69ccc738421293254aec5ddb38bd523503252/detect.prototxt
16.54 -- wechat_qrcode: Downloading sr.caffemodel from https://raw.githubusercontent.com/WeChatCV/opencv_3rdparty/a8b69ccc738421293254aec5ddb38bd523503252/sr.caffemodel
16.77 -- wechat_qrcode: Downloading sr.prototxt from https://raw.githubusercontent.com/WeChatCV/opencv_3rdparty/a8b69ccc738421293254aec5ddb38bd523503252/sr.prototxt
16.98 -- xfeatures2d/boostdesc: Downloading boostdesc_bgm.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm.i
17.15 -- xfeatures2d/boostdesc: Downloading boostdesc_bgm_bi.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm_bi.i
17.31 -- xfeatures2d/boostdesc: Downloading boostdesc_bgm_hd.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm_hd.i
17.46 -- xfeatures2d/boostdesc: Downloading boostdesc_binboost_064.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_064.i
17.78 -- xfeatures2d/boostdesc: Downloading boostdesc_binboost_128.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_128.i
18.06 -- xfeatures2d/boostdesc: Downloading boostdesc_binboost_256.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_256.i
18.46 -- xfeatures2d/boostdesc: Downloading boostdesc_lbgm.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_lbgm.i
18.87 -- xfeatures2d/vgg: Downloading vgg_generated_48.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_48.i
19.59 -- xfeatures2d/vgg: Downloading vgg_generated_64.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_64.i
19.97 -- xfeatures2d/vgg: Downloading vgg_generated_80.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_80.i
20.41 -- xfeatures2d/vgg: Downloading vgg_generated_120.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_120.i
20.86 -- data: Downloading face_landmark_model.dat from https://raw.githubusercontent.com/opencv/opencv_3rdparty/8afa57abc8229d611c4937165d20e2a2d9fc5a12/face_landmark_model.dat
22.72 -- Use autogenerated whitelist /opencv/build/modules/js_bindings_generator/whitelist.json
22.75 -- Found AMD headers in: /usr/include/suitesparse
22.75 -- Found AMD library: /usr/lib/x86_64-linux-gnu/libamd.so
22.75 -- Found CAMD headers in: /usr/include/suitesparse
22.75 -- Found CAMD library: /usr/lib/x86_64-linux-gnu/libcamd.so
22.75 -- Found CCOLAMD headers in: /usr/include/suitesparse
22.75 -- Found CCOLAMD library: /usr/lib/x86_64-linux-gnu/libccolamd.so
22.75 -- Found CHOLMOD headers in: /usr/include/suitesparse
22.75 -- Found CHOLMOD library: /usr/lib/x86_64-linux-gnu/libcholmod.so
22.75 -- Found COLAMD headers in: /usr/include/suitesparse
22.75 -- Found COLAMD library: /usr/lib/x86_64-linux-gnu/libcolamd.so
22.75 -- Found SPQR headers in: /usr/include/suitesparse
22.75 -- Found SPQR library: /usr/lib/x86_64-linux-gnu/libspqr.so
22.75 -- Found Config headers in: /usr/include/suitesparse
22.75 -- Found Config library: /usr/lib/x86_64-linux-gnu/libsuitesparseconfig.so
22.75 -- Found Intel Thread Building Blocks (TBB) library (2021.5 / 12050). Assuming SuiteSparseQR was compiled with TBB.
22.75 -- Adding librt to SuiteSparse_config libraries (required on Linux & Unix [not OSX] if SuiteSparse is compiled with timing).
22.78 CMake Error at /usr/local/libcudss-linux-0.3.0.9_cuda12-archive/lib/cmake/cudss/cudss-targets.cmake:42 (message):
22.78 Some (but not all) targets in this export set were already defined.
22.78
22.78 Targets Defined: cudss, cudss_static
22.78
22.78 Targets not yet defined: cudss_commlayer_openmpi, cudss_commlayer_nccl
22.78
22.78 Call Stack (most recent call first):
22.78 /usr/local/libcudss-linux-0.3.0.9_cuda12-archive/lib/cmake/cudss/cudss-config.cmake:134 (include)
22.78 /usr/share/cmake-3.22/Modules/CMakeFindDependencyMacro.cmake:47 (find_package)
22.78 /usr/local/lib/cmake/Ceres/CeresConfig.cmake:183 (find_dependency)
22.78 /opencv_contrib/modules/sfm/CMakeLists.txt:7 (find_package)
22.78
22.78
22.78 -- Configuring incomplete, errors occurred!
22.78 See also "/opencv/build/CMakeFiles/CMakeOutput.log".
22.78 See also "/opencv/build/CMakeFiles/CMakeError.log".
```
### Steps to reproduce
Clone the Isaac ROS common repository, put the following steps into a Dockerfile, and compile with build_image_layers.sh; the Dockerfile created should be substituted in place of TOP_LEVEL:
./Build/Dependencies/isaac_ros_common/scripts/build_image_layers.sh --context_dir ${PWD} --image_key base.ros2_humble.TOP_LEVEL --build_arg USERNAME=user --image_name test:${ARCH}
```dockerfile
# Build OpenCV from source for NVIDIA acceleration
RUN apt update && apt install -y cmake libjpeg-dev libjpeg8-dev libjpeg-turbo8-dev \
    libpng-dev libtiff-dev libglew-dev libavcodec-dev libavformat-dev libswscale-dev \
    libgtk2.0-dev libgtk-3-dev libcanberra-gtk* python3-pip libxvidcore-dev libx264-dev \
    libtbb-dev libdc1394-dev libxine2-dev libv4l-dev v4l-utils qv4l2 libtesseract-dev libpostproc-dev \
    libswresample-dev libvorbis-dev libfaac-dev libmp3lame-dev libtheora-dev libopencore-amrnb-dev \
    libopencore-amrwb-dev libopenblas-dev libatlas-base-dev libblas-dev liblapack-dev liblapacke-dev \
    libeigen3-dev gfortran libhdf5-dev libprotobuf-dev protobuf-compiler libgoogle-glog-dev libgflags-dev \
    libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev \
    gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 \
    gstreamer1.0-qt5 gstreamer1.0-pulseaudio libmetis-dev libopenblas-dev cudss
WORKDIR /
# Implement version control later
RUN git clone -b 4.x --depth=1 https://github.com/opencv/opencv.git && \
    git clone -b 4.x --depth=1 https://github.com/opencv/opencv_contrib.git && \
    cd opencv && mkdir build && cd build && \
    cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr \
        -D OPENCV_EXTRA_MODULES_PATH=/opencv_contrib/modules -D EIGEN_INCLUDE_PATH=/usr/include/eigen3 \
        -D WITH_OPENCL=OFF -D CUDA_ARCH_BIN=8.6 -D CUDA_ARCH_PTX="" -D WITH_CUDA=ON \
        -D WITH_CUDNN=ON -D WITH_CUBLAS=ON -D ENABLE_FAST_MATH=ON -D CUDA_FAST_MATH=ON -D OPENCV_DNN_CUDA=ON \
        -D WITH_QT=ON -D WITH_OPENMP=ON -D BUILD_TIFF=ON -D WITH_FFMPEG=ON -D WITH_GSTREAMER=ON -D WITH_TBB=ON -D BUILD_TBB=ON \
        -D BUILD_TESTS=OFF -D WITH_EIGEN=ON -D WITH_V4L=ON -D WITH_LIBV4L=ON -D WITH_PROTOBUF=ON -D OPENCV_ENABLE_NONFREE=ON \
        -D INSTALL_C_EXAMPLES=OFF -D INSTALL_PYTHON_EXAMPLES=OFF -D PYTHON3_PACKAGES_PATH=/usr/lib/python3/dist-packages \
        -D OPENCV_GENERATE_PKGCONFIG=ON -D BUILD_EXAMPLES=OFF .. && \
    make -j6 && make install
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug | low | Critical |
2,757,087,920 | godot | Bottom Panel FileSystem default shortcut key (ALT+F) conflicts with other shortcuts | ### Tested versions
v4.4.dev7.official [46c8f8c5c]
### System information
Operating System: Kubuntu 24.04
KDE Plasma Version: 5.27.11
KDE Frameworks Version: 5.115.0
Qt Version: 5.15.13
Kernel Version: 6.8.0-49-generic (64-bit)
### Issue description
The shortcut key (ALT+F) for toggling the FileSystem tab when it is docked in the Bottom Panel conflicts with several other default shortcuts, most notably the code folding shortcut (ALT+F) in the Script Editor.

I believe a good choice would be ALT+T since it is not used, but it could be anything else (e.g. ALT+L).
Please see images below.
### Steps to reproduce
- Change the FileSystem dock to Bottom Panels
- Focus caret inside an open Script.
- Press ALT+F and notice that it does not close the FileSystem tab but folds code instead.
- All the other Bottom Panel keys work regardless of where the caret focus is.
### Minimal reproduction project (MRP)


| bug,topic:editor,usability,topic:input | low | Minor |
2,757,105,070 | flutter | [packages][tool] `make-deps-path-based` should sort dependency overrides | The `camera/camera/example` app had the following dependency override:
```
dependency_overrides:
camera_web:
path: ../../camera_web
```
When running `dart flutter_plugin_tools.dart make-deps-path-based --current-package --target-dependencies=camera_android_camerax` on the branch of [this PR](https://github.com/flutter/packages/pull/8331), the script produced the following changes:
```diff
diff --git a/packages/camera/camera/example/pubspec.yaml b/packages/camera/camera/example/pubspec.yaml
index 9f3749514..66087e5b4 100644
--- a/packages/camera/camera/example/pubspec.yaml
+++ b/packages/camera/camera/example/pubspec.yaml
@@ -28,9 +28,15 @@ dev_dependencies:
integration_test:
sdk: flutter
+
+# FOR TESTING AND INITIAL REVIEW ONLY. DO NOT MERGE.
+# See https://github.com/flutter/flutter/blob/master/docs/ecosystem/contributing/README.md#changing-federated-plugins
dependency_overrides:
+
camera_web:
path: ../../camera_web
+ camera_android_camerax:
+ path: ../../../../packages/camera/camera_android_camerax
flutter:
uses-material-design: true
```
That change is invalid because it breaks our `sort_pub_dependencies` lint:
```
Finding changed packages relative to "cebedf87bf381c605850b7c5e04c580396e3666e"...
Rewriting references to: camera_android_camerax...
Modified packages/camera/camera/pubspec.yaml
Running for all packages that have uncommitted changes
Changed packages: camera/camera
[0:00] Running for packages/camera/camera...
Running command: "flutter pub get" in /b/s/w/ir/x/w/packages/packages/camera/camera
Resolving dependencies...
Downloading packages...
+ ... list of deps
Changed 56 dependencies!
12 packages have newer versions incompatible with dependency constraints.
Try `flutter pub outdated` for more information.
Resolving dependencies in `./example`...
Downloading packages...
Got dependencies in `./example`.
Running command: "dart analyze --fatal-infos" in /b/s/w/ir/x/w/packages/packages/camera/camera
Analyzing camera...
info - example/pubspec.yaml:38:3 - Dependencies not sorted alphabetically. Try sorting the dependencies alphabetically (A to Z). - sort_pub_dependencies
1 issue found.
The following packages had errors:
packages/camera/camera
```
Should the tool sort the pubspec after modifying it?
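Presumably yes. The lint only wants the map keys in A-to-Z order, so the likely fix is to sort the `dependency_overrides` entries before writing the file back out. The tool itself is Dart; the sketch below only illustrates the idea in Python, and the `sort_overrides` helper is hypothetical, not part of the tool:

```python
def sort_overrides(overrides: dict) -> dict:
    """Return the dependency_overrides map with its keys sorted A to Z,
    which is the order the sort_pub_dependencies lint expects."""
    return {name: overrides[name] for name in sorted(overrides)}


# The case from this issue: camera_android_camerax was appended after the
# pre-existing camera_web override, tripping the lint.
unsorted = {
    "camera_web": {"path": "../../camera_web"},
    "camera_android_camerax": {
        "path": "../../../../packages/camera/camera_android_camerax"
    },
}
print(list(sort_overrides(unsorted)))  # → ['camera_android_camerax', 'camera_web']
```

The DO-NOT-MERGE banner comment would simply stay attached above the section, since only the key order changes.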
| p: tooling,package,team-ecosystem,P2,triaged-ecosystem | low | Critical |
2,757,128,592 | next.js | Bump webpack dependency to 5.97.1 to resolve compilation errors on .wasm files with Reference Types enabled. | ### Link to the code that reproduces this issue
https://github.com/njbrown/next-ferrostar
### To Reproduce
1. Clone repo
2. run `yarn` to install deps
3. run `yarn dev`
### Current vs. Expected behavior
Current Behavior:
Webpack is unable to parse the .wasm file embedded in the `@stadiamaps/ferrostar` package.

Expected Behavior:
A map should be displayed on the page.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #115-Ubuntu SMP Mon Apr 15 09:52:04 UTC 2024
Available memory (MB): 40069
Available CPU cores: 8
Binaries:
Node: 18.19.1
npm: 10.2.4
Yarn: 1.22.22
pnpm: 9.0.6
Relevant Packages:
next: 15.1.2 // Latest available version is detected (15.1.2).
eslint-config-next: 15.1.2
react: 19.1.0-canary-ef979d47-20241218
react-dom: 19.1.0-canary-ef979d47-20241218
typescript: 5.7.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
This happens because of a [bug](https://github.com/webpack/webpack/issues/15566#issuecomment-2461005789) in a dependency (`@webassemblyjs/wasm-parser:1.12.1`) of the current webpack version (`webpack:5.96.1`). It has since been fixed in `webpack:5.97.1`, which updates that dependency to `@webassemblyjs/wasm-parser:1.14.1`.
references:
https://github.com/webpack/webpack/issues/15566#issue-1174661774
https://github.com/webpack/webpack/issues/15566#issuecomment-2461005789 | Webpack | low | Critical |
2,757,135,595 | tauri | [bug] why devtools setting not working? | ### Describe the bug

The dock location setting does not work, and the dark theme setting is not saved!
### Reproduction
_No response_
### Expected behavior
The devtools settings should take effect and persist.
### Full `tauri info` output
```text
~~~
$ pnpm run tauri info
> [email protected] tauri D:\db\dbnexus-app
> tauri "info"
WARNING: Only one package manager should be used, but found npm and pnpm and yarn.
Please remove unused package manager lock files, will use npm for now!
[โ] Environment
- OS: Windows 10.0.26100 x86_64 (X64)
โ WebView2: 131.0.2903.112
โ MSVC: Visual Studio Community 2022
โ rustc: 1.85.0-nightly (bdc6b3de4 2024-12-23)
โ cargo: 1.85.0-nightly (652623b77 2024-12-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: nightly-x86_64-pc-windows-msvc (default)
- node: 18.12.0
- pnpm: 8.15.3
- npm: 8.19.2
[-] Packages
- tauri ๐ฆ: 2.1.1
- tauri-build ๐ฆ: 2.0.3
- wry ๐ฆ: 0.47.2
- tao ๐ฆ: 0.30.8
- @tauri-apps/api ๎: 2.0.0-beta.13 (outdated, latest: 2.1.1)
- @tauri-apps/cli ๎: 2.1.0
[-] Plugins
- tauri-plugin-fs ๐ฆ: 2.2.0
- @tauri-apps/plugin-fs ๎: not installed!
- tauri-plugin-shell ๐ฆ: 2.2.0
- @tauri-apps/plugin-shell ๎: 2.2.0
- tauri-plugin-dialog ๐ฆ: 2.2.0
- @tauri-apps/plugin-dialog ๎: 2.2.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
~~~
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: upstream,platform: Windows | low | Critical |
2,757,161,174 | pytorch | @custom_op extensions could not be export.export()ed via AOT and run from C++ | ### ๐ Describe the bug
Here is the repro. I am adding a @custom_op to a working example that saves an ExportedProgram via AOT and runs it from C++. When I add the custom operation, it stops working:
Error: Could not find schema for mylib::custom_add.
```
import torch
def custom_add_direct(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
return a + b
@torch.library.custom_op("mylib::custom_add", mutates_args=(),
device_types="cuda",
)
def _(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
return custom_add_direct(a,b)
@torch.library.register_fake("mylib::custom_add")
def _(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
return torch.empty_like(a)
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
self.fc1 = torch.nn.Linear(10, 16)
self.relu = torch.nn.ReLU()
self.fc2 = torch.nn.Linear(16, 1)
self.sigmoid = torch.nn.Sigmoid()
def forward(self, x):
x = self.fc1(x)
y = self.relu(x)
x = self.fc2(torch.ops.mylib.custom_add(x, y))
x = self.sigmoid(x)
return x
with torch.no_grad():
device = "cuda" if torch.cuda.is_available() else "cpu"
model = Model().to(device=device)
example_inputs = (torch.randn(8, 10, device=device),)
# Export the model
exported = torch.export.export(model, example_inputs)
# Compile the model
output_path = torch._inductor.aoti_compile_and_package(
exported,
example_inputs,
package_path="model.pt2",
)
```
Here's the C++ code; it runs model.pt2 perfectly if I replace `torch.ops.mylib.custom_add(x, y)` above with `x+y`:
```
#include <iostream>
#include <vector>

#include <torch/torch.h>
#include <torch/csrc/inductor/aoti_package/model_package_loader.h>

int main() {
    c10::InferenceMode mode;

    // Load the compiled model
    torch::inductor::AOTIModelPackageLoader loader("model.pt2");

    // Prepare input tensor
    std::vector<torch::Tensor> inputs = {torch::randn({8, 10}, at::kCUDA)};

    // Run inference
    std::vector<torch::Tensor> outputs = loader.run(inputs);

    // Print the result
    std::cout << "Inference result:" << std::endl;
    std::cout << outputs[0] << std::endl;

    return 0;
}
```
### Versions
Pytorch nightly
cc @svekars @brycebortree @sekyondaMeta @AlannaBurke @malfet @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @zou3519 @bdhirsh @yf225 | module: docs,module: error checking,triaged,module: custom-operators,oncall: pt2,oncall: export,module: pt2-dispatcher | medium | Critical |
2,757,215,058 | rust | Outdated comment about std::hash::DefaultHasher being inaccessible | ### Location
https://github.com/rust-lang/rust/blob/f3343420c813a3dad6746e274137ba51bf97f063/library/std/src/hash/random.rs#L5-L6
### Summary
This comment in library/std/src/hash/random.rs says
```Although its items are public and contain stability attributes, they can't actually be accessed outside this crate.```
However, std::hash::DefaultHasher and RandomState are accessible since version 1.76 (feature = "std_hash_exports"). The comment predates that, so I suppose it's outdated and should be reworded or removed to avoid confusion.
Somewhat related: https://doc.rust-lang.org/std/index.html?search=DefaultHasher only shows `std::collections::hash_map::DefaultHasher`, while presumably it should also show `std::hash::DefaultHasher`. Please let me know if there's a better way to report that one, as I'm not sure what the best venue would be. | A-docs,T-libs | low | Minor |
2,757,233,516 | vscode | Super super slow to do any task |
Type: <b>Performance Issue</b>
Hello,
Recently I discovered that my VS Code has become super slow at completing any task. A simple save or source-control change can take half a minute to complete. I have also disabled all extensions, but no luck. My system resources are nowhere near exhausted. Please help!!!
VS Code version: Code 1.96.2 (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Windows_NT x64 10.0.26100
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 6800H with Radeon Graphics (16 x 3194)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.19GB (20.83GB free)|
|Process Argv|--crash-reporter-id 6c6d7ed8-2ab2-4ba3-a81b-95c302b3749a|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
1 162 34808 code main
0 32 3152 crashpad-handler
0 52 16568 utility-network-service
0 170 16920 shared-process
2 119 34180 gpu-process
0 131 48572 ptyHost
0 9 2524 conpty-agent
0 9 6620 conpty-agent
0 97 25208 C:\Users\haozhe\AppData\Local\Microsoft\WindowsApps\Microsoft.PowerShell_8wekyb3d8bbwe\pwsh.exe -noexit -command "try { . \"e:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1\" } catch {}"
0 9 25612 conpty-agent
0 96 27096 C:\Users\haozhe\AppData\Local\Microsoft\WindowsApps\Microsoft.PowerShell_8wekyb3d8bbwe\pwsh.exe -noexit -command "try { . \"e:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1\" } catch {}"
0 96 44792 C:\Users\haozhe\AppData\Local\Microsoft\WindowsApps\Microsoft.PowerShell_8wekyb3d8bbwe\pwsh.exe -noexit -command "try { . \"e:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\contrib\terminal\common\scripts\shellIntegration.ps1\" } catch {}"
2 404 49468 window [1] (index.tsx - interface - Visual Studio Code)
14 706 50468 extensionHost [1]
2 204 9180 electron-nodejs (tsserver.js )
0 109 15448 electron-nodejs (typingsInstaller.js typesMap.js )
0 8 13688 c:\Users\haozhe\.vscode\extensions\ms-python.python-2024.22.0-win32-x64\python-env-tools\bin\pet.exe server
0 7 48700 C:\WINDOWS\system32\conhost.exe 0x4
2 177 21248 "E:\Program Files (x86)\Microsoft VS Code\Code.exe" c:\Users\haozhe\.vscode\extensions\streetsidesoftware.code-spell-checker-4.0.21\packages\_server\dist\main.cjs --node-ipc --clientProcessId=50468
0 22 21892 "C:\Program Files\Docker\Docker\resources\bin\docker.exe" context ls --format "{{json .}}"
0 7 40284 C:\WINDOWS\system32\conhost.exe 0x4
0 149 26176 electron-nodejs (tsserver.js )
0 120 35420 electron-nodejs (languageServer.js )
0 204 41268 electron-nodejs (server.js )
1 96 46556 electron-nodejs (eslintServer.js )
0 6 24812 C:\WINDOWS\system32\cmd.exe /d /s /c "npm.cmd config get prefix"
0 55 25432 "C:\Program Files\nodejs\\node.exe" "C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js" config get prefix
0 7 43108 C:\WINDOWS\system32\conhost.exe 0x4
0 114 49428 electron-nodejs (server-node.js )
0 96 49448 electron-nodejs (serverMain.js )
0 121 50836 electron-nodejs (tailwindServer.js )
0 99 51860 "E:\Program Files (x86)\Microsoft VS Code\Code.exe" "e:\Program Files (x86)\Microsoft VS Code\resources\app\extensions\json-language-features\server\dist\node\jsonServerMain" --node-ipc --clientProcessId=50468
0 129 51568 fileWatcher [1]
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (index.tsx - interface - Visual Studio Code)
| Folder (interface): 1320 files
| File types: tsx(243) svg(192) js(172) png(139) ts(97) pack(80) json(48)
| html(25) xml(9) old(6)
| Conf files: github-actions(3) package.json(2) project.json(1)
| launch.json(1) settings.json(1) tsconfig.json(1)
| Launch Configs: node-terminal(2) pwa-chrome;
```
</details>
<details><summary>Extensions (99)</summary>
Extension|Author (truncated)|Version
---|---|---
three-js-snippets|aer|0.6.3
project-manager|ale|12.8.0
vscode-tailwindcss|bra|0.12.17
browserstack-vscode|bro|1.2.4
vscode-pytest|Cam|0.1.1
thrift|cdu|0.0.1
path-intellisense|chr|2.10.0
gitignore|cod|0.9.0
python-snippets|cst|0.1.2
vscode-mysql-client2|cwe|8.0.7
AndroidLauncher|Dan|0.0.7
vscode-eslint|dba|3.0.10
python-extensions-pack|dem|1.0.3
git-extension-pack|don|0.1.3
githistory|don|0.6.20
es7-react-js-snippets|dsz|4.4.3
vscode-wasm|dts|1.4.1
gitlens|eam|16.1.1
threejs|En1|0.0.1
prettier-vscode|esb|11.0.0
auto-close-tag|for|0.5.15
auto-rename-tag|for|0.1.10
vscode-mysql|for|0.5.0
copilot|Git|1.254.0
copilot-chat|Git|0.23.2
vscode-github-actions|git|0.27.0
go|gol|0.44.0
gc-excelviewer|Gra|4.2.62
vscode-test-explorer|hbe|2.22.1
vscode-power-mode|hoo|3.0.2
discord-vscode|icr|5.8.0
latex-workshop|Jam|10.7.0
cmake-language-support-vscode|jos|0.0.9
python-resource-monitor|kai|0.3.0
vscode-sshfs|Kel|1.26.1
vsc-python-indent|Kev|1.18.0
discord|Kua|0.0.6
node-module-intellisense|lei|1.5.0
vscode-python-test-adapter|lit|0.8.2
document|min|2.2.2
mongodb-vscode|mon|1.11.0
vscode-docker|ms-|1.29.3
vscode-language-pack-zh-hans|MS-|1.96.2024121109
vscode-dotnet-runtime|ms-|2.2.3
vscode-kubernetes-tools|ms-|1.3.18
playwright|ms-|1.1.12
black-formatter|ms-|2024.4.0
debugpy|ms-|2024.14.0
isort|ms-|2023.10.1
python|ms-|2024.22.1
vscode-pylance|ms-|2024.12.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.394.0
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
makefile-tools|ms-|0.11.13
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
test-adapter-converter|ms-|0.2.1
vscode-react-native|msj|1.13.0
go-doc|msy|1.0.1
autodocstring|njp|0.6.1
vscode-python-typehint|njq|1.5.1
hardhat-solidity|Nom|0.8.7
indent-rainbow|ode|8.3.1
vscode-numpy-viewer|Per|0.1.8
platformio-ide|pla|3.3.3
java|red|1.38.0
vscode-yaml|red|1.15.0
LiveServer|rit|5.7.9
vs-code-prettier-eslint|rve|6.0.0
markdown-preview-enhanced|shd|0.8.15
autoimport|ste|1.5.4
code-spell-checker|str|4.0.21
cmake|twx|0.0.17
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
livecode|xir|1.3.10
markdown-pdf|yza|1.5.0
markdown-all-in-one|yzh|3.6.2
json|Zai|2.0.2
html-css-class-completion|Zig|1.20.0
vscode-open-in-github|ziy|1.3.6
go-snippets|zsh|0.0.4
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,757,242,458 | ollama | ollama.com model quantization levels are not displayed correctly | ### What is the issue?
This issue is a continuation of #7816
The issue with incorrect local ollama quantization levels in #7816 has been resolved, but the same problem appears in the model cards of models uploaded to ollama.com.
example:
https://ollama.com/CBYellowstone/sakura-v1.0

it should be:

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.4 | bug,ollama.com | low | Minor |
2,757,249,041 | pytorch | [inductor] [cuda] `frexp` outputs a different result when the input is `inf` | ### 🐛 Describe the bug
**symptom**: When the input tensor is `inf`, the second tensor (the exponent) returned by `frexp` is `-2147483648`, while the eager output is zero (the CPU inductor output is also zero)
**device**: only CUDA
**exposed area**: only when the input tensor is `inf` (`nan` doesn't trigger the inconsistency)
**code**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
torch.manual_seed(0)
torch.set_grad_enabled(False)
from torch._inductor import config
config.fallback_random = True
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()

    def forward(self, x):
        a, b = torch.frexp(x)
        return b
model = Model().cuda()
x = torch.Tensor([float("inf")]).cuda()
inputs = [x]
output = model(*inputs)
c_model = torch.compile(model)
c_output = c_model(*inputs)
print(output)
print(c_output)
```
### Error logs
```
tensor([0], device='cuda:0', dtype=torch.int32)
tensor([-2147483648], device='cuda:0', dtype=torch.int32)
```
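For reference, the C standard leaves `frexp`'s stored exponent unspecified for non-finite inputs, and CPython's `math.frexp` special-cases them to return an exponent of 0 — the same convention as the eager/CPU-inductor result above. A quick stdlib-only check (context only, not a reproduction of the CUDA bug):

```python
import math

# CPython returns the mantissa unchanged and an exponent of 0
# for zero, infinities and NaN, sidestepping platform differences.
for x in (float("inf"), float("-inf"), float("nan"), 0.0):
    m, e = math.frexp(x)
    print(x, "->", m, e)  # exponent is 0 in every case
```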
### Versions
PyTorch version: 2.6.0.dev20241218+cu126
OS: Ubuntu 20.04.6 LTS (x86_64)
CPU: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
GPU: V100
<details>
<summary>click for detailed env</summary>
```
PyTorch version: 2.6.0.dev20241218+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31
Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-202-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.996
BogoMIPS: 4999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241205
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.2.0+gitf9cdf582
[pip3] torch==2.6.0.dev20241218+cu126
[pip3] torchaudio==2.6.0.dev20241218+cu126
[pip3] torchvision==0.22.0.dev20241218+cu126
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.6.4.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.6.80 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.5.1.17 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.3.0.4 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.7.77 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.7.1.2 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.5.4.2 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.3 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.85 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.6.77 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.2.0+gitf9cdf582 pypi_0 pypi
[conda] torch 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20241218+cu126 pypi_0 pypi
[conda] torchvision 0.22.0.dev20241218+cu126 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
</details>
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @bertmaher @int3 @davidberard98 @nmacchioni @embg @peterbell10 | triaged,oncall: pt2,module: inductor,upstream triton | low | Critical |
2,757,257,719 | godot | Wrong gradient colors when game or editor starts | ### Tested versions
- Reproducible in 4.3.stable and 4.4.dev7
### System information
Linux Mint 22 x86_64 Cinnamon 6.2.9 X11 (Nvidia 1050 Mobile), exported game for Windows 10 (Nvidia 1060); both Vulkan rendering
### Issue description

On the picture are color palette representations:
1. As lossless texture with nearest filtering.
2. As set of color rectangles.
3. As gradient texture without interpolation (constant values).
When I start the editor or the game, some colors in the gradient are displayed incorrectly. The last two colors in each column change from `#393939` and `#202020` to `#383838` and `#1f1f1f` respectively. The visual representation of the color in the inspector is also wrong, but the shown value is correct. If I update the color value, the visual representation updates to the correct color.
It's not a problem with Control nodes; I have the same issue in spatial shaders.
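One plausible mechanism for an off-by-one like `#39` → `#38` (my speculation, not a confirmed diagnosis of Godot's code) is a byte → float → byte round-trip that converts back with truncation instead of rounding, so any tiny downward precision error drops the channel by one step:

```python
def byte_to_float(c):
    return c / 255.0

def float_to_byte_trunc(f):
    return int(f * 255.0)        # truncation: a tiny downward error loses a step

def float_to_byte_round(f):
    return int(f * 255.0 + 0.5)  # rounding: tolerant of tiny errors

f = byte_to_float(0x39) - 1e-7   # simulate a small precision loss
print(hex(float_to_byte_trunc(f)))  # 0x38 -- the reported wrong value
print(hex(float_to_byte_round(f)))  # 0x39 -- the expected value
```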
### Steps to reproduce
Create a gradient with constant values like `#202020` or `#393939` and compare them with the ones you get after restarting the editor.
### Minimal reproduction project (MRP)
[wrong-color-4.4-dev7.zip](https://github.com/user-attachments/files/18236540/wrong-color-4.4-dev7.zip)
It's the project from the screenshot, for 4.4.dev7. | discussion,topic:rendering,documentation,topic:2d | low | Minor |
2,757,293,256 | pytorch | PyTorch source code build fails on some Windows 11 environments, caused by the C++ protocol buffer compiler | ### 🐛 Describe the bug
The PyTorch source code build crashes on Windows 11, caused by the **C++ protocol buffer compiler** step:
```
>python setup.py bdist_wheel
Building wheel torch-2.6.0a0+git0189052
-- Building version 2.6.0a0+git0189052
cmake --build . --target install --config Release
[1/2444] Running C++ protocol buffer compiler on C:/User...rch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
FAILED: third_party/onnx/onnx/onnx_onnx_torch-ml.pb.cc third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.cc C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h
C:\WINDOWS\system32\cmd.exe /C "cd /D C:\Users\arc\chuanqiw\pytorch\build\third_party\onnx && C:\Users\arc\chuanqiw\pytorch\build\bin\protoc.exe C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto -I C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx --cpp_out dllexport_decl=ONNX_API:C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx && C:\Users\arc\miniforge3\envs\chuanqiw_build\Lib\site-packages\cmake\data\bin\cmake.exe -DFILENAME=C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.h -DNAMESPACES=onnx_torch -P C:/Users/arc/chuanqiw/pytorch/cmake/ProtoBufPatch.cmake && C:\Users\arc\miniforge3\envs\chuanqiw_build\Lib\site-packages\cmake\data\bin\cmake.exe -DFILENAME=C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.pb.cc -DNAMESPACES=onnx_torch -P C:/Users/arc/chuanqiw/pytorch/cmake/ProtoBufPatch.cmake"
[26/2444] Building CXX object third_party\ideep\mkl-dnn\...ommon\CMakeFiles\dnnl_common.dir\memory_zero_pad.cpp.obj
ninja: build stopped: subcommand failed.
```
If I download the pre-built [protobuf 3.13](https://github.com/protocolbuffers/protobuf/releases/tag/v3.13.0) `protoc.exe` binary and place it at `C:\Users\arc\chuanqiw\pytorch\build\bin\protoc.exe`, the build can be worked around.
Full configuration.
```
>python setup.py bdist_wheel
Building wheel torch-2.6.0a0+git0189052
-- Building version 2.6.0a0+git0189052
cmake -GNinja -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=C:\Users\arc\chuanqiw\pytorch\torch -DCMAKE_PREFIX_PATH=C:\Users\arc\miniforge3\envs\chuanqiw_build\Lib\site-packages -DPython_EXECUTABLE=C:\Users\arc\miniforge3\envs\chuanqiw_build\python.exe -DTORCH_BUILD_VERSION=2.6.0a0+git0189052 -DUSE_NUMPY=True C:\Users\arc\chuanqiw\pytorch
-- The CXX compiler identification is MSVC 19.41.34123.0
-- The C compiler identification is MSVC 19.41.34123.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
CMake Warning at CMakeLists.txt:422 (message):
TensorPipe cannot be used on Windows. Set it to OFF
CMake Warning at CMakeLists.txt:424 (message):
KleidiAI cannot be used on Windows. Set it to OFF
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Success
-- Performing Test C_HAS_AVX512_1
-- Performing Test C_HAS_AVX512_1 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Success
-- Performing Test CXX_HAS_AVX512_1
-- Performing Test CXX_HAS_AVX512_1 - Success
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Failed
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Failed
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Compiler does not support SVE extension. Will not build perfkernels.
-- Performing Test HAS/UTF_8
-- Performing Test HAS/UTF_8 - Success
CUDA_TOOLKIT_ROOT_DIR not found or specified
-- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)
CMake Warning at cmake/public/cuda.cmake:31 (message):
PyTorch: CUDA cannot be found. Depending on whether you are building
PyTorch or a PyTorch dependent library, the next warning / error will give
you more info.
Call Stack (most recent call first):
cmake/Dependencies.cmake:44 (include)
CMakeLists.txt:865 (include)
CMake Warning at cmake/Dependencies.cmake:76 (message):
Not compiling with CUDA. Suppress this warning with -DUSE_CUDA=OFF.
Call Stack (most recent call first):
CMakeLists.txt:865 (include)
CMake Warning at cmake/Dependencies.cmake:95 (message):
Not compiling with XPU. Could NOT find SYCL.Suppress this warning with
-DUSE_XPU=OFF.
Call Stack (most recent call first):
CMakeLists.txt:865 (include)
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
CMake Deprecation Warning at third_party/protobuf/cmake/CMakeLists.txt:2 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
--
-- 3.13.0.0
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:C:/Users/arc/chuanqiw/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- MKL_THREADING = OMP
CMake Warning at cmake/Dependencies.cmake:208 (message):
MKL could not be found. Defaulting to Eigen
Call Stack (most recent call first):
CMakeLists.txt:865 (include)
CMake Warning at cmake/Dependencies.cmake:256 (message):
Preferred BLAS (MKL) cannot be found, now searching for a general BLAS
library
Call Stack (most recent call first):
CMakeLists.txt:865 (include)
-- MKL_THREADING = OMP
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_core - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [blis]
-- Library blis: BLAS_blis_LIBRARY-NOTFOUND
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [flexiblas]
-- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND
-- Checking for [openblas]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread - m]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread - m - gomp]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [libopenblas]
-- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [blis]
-- Library blis: BLAS_blis_LIBRARY-NOTFOUND
-- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY)
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for []
-- Looking for sgemm_
-- Looking for sgemm_ - not found
-- Cannot find a library with BLAS API. Not using BLAS.
-- Using pocketfft in directory: C:/Users/arc/chuanqiw/pytorch/third_party/pocketfft/
-- The ASM compiler identification is MSVC
-- Found assembler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe
-- Building for XNNPACK_TARGET_PROCESSOR: x86_64
-- Generating microkernels.cmake
No microkernel found in src\reference\binary-elementwise.cc
No microkernel found in src\reference\packing.cc
No microkernel found in src\reference\unary-elementwise.cc
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.41.0.windows.2")
-- git version: v1.6.1 normalized to 1.6.1
-- Version: 1.6.1
-- Looking for shm_open in rt
-- Looking for shm_open in rt - not found
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK -- success
CMake Warning (dev) at third_party/fbgemm/CMakeLists.txt:93 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found PythonInterp: C:/Users/arc/miniforge3/envs/chuanqiw_build/python.exe (found version "3.10.15")
-- Performing Test COMPILER_SUPPORTS_AVX512
-- Performing Test COMPILER_SUPPORTS_AVX512 - Success
-- MKL_THREADING = OMP
-- Check OMP with lib C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/lib/x64/libomp.lib and flags -openmp:experimental
-- MKL_THREADING = OMP
-- Check OMP with lib C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/lib/x64/libomp.lib and flags -openmp:experimental
CMake Warning (dev) at C:/Users/arc/miniforge3/envs/chuanqiw_build/Lib/site-packages/cmake/data/share/cmake-3.30/Modules/FindPackageHandleStandardArgs.cmake:441 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:136 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -openmp:experimental
CMake Warning (dev) at C:/Users/arc/miniforge3/envs/chuanqiw_build/Lib/site-packages/cmake/data/share/cmake-3.30/Modules/FindPackageHandleStandardArgs.cmake:441 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/fbgemm/CMakeLists.txt:136 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_CXX: -openmp:experimental
-- Found OpenMP: TRUE
CMake Warning at third_party/fbgemm/CMakeLists.txt:138 (message):
OpenMP found! OpenMP_C_INCLUDE_DIRS =
CMake Warning at third_party/fbgemm/CMakeLists.txt:232 (message):
==========
CMake Warning at third_party/fbgemm/CMakeLists.txt:233 (message):
CMAKE_BUILD_TYPE = Release
CMake Warning at third_party/fbgemm/CMakeLists.txt:234 (message):
CMAKE_CXX_FLAGS_DEBUG is /Z7 /Ob0 /Od /RTC1 /bigobj
CMake Warning at third_party/fbgemm/CMakeLists.txt:235 (message):
CMAKE_CXX_FLAGS_RELEASE is /O2 /Ob2 /DNDEBUG /bigobj
CMake Warning at third_party/fbgemm/CMakeLists.txt:236 (message):
==========
** AsmJit Summary **
ASMJIT_DIR=C:/Users/arc/chuanqiw/pytorch/third_party/fbgemm/third_party/asmjit
ASMJIT_TEST=FALSE
ASMJIT_TARGET_TYPE=SHARED
ASMJIT_DEPS=
ASMJIT_LIBS=asmjit
ASMJIT_CFLAGS=
ASMJIT_PRIVATE_CFLAGS=-MP;-GF;-Zc:__cplusplus;-Zc:inline;-Zc:strictStrings;-Zc:threadSafeInit-;-W4
ASMJIT_PRIVATE_CFLAGS_DBG=-GS
ASMJIT_PRIVATE_CFLAGS_REL=-GS-;-O2;-Oi
CMake Deprecation Warning at third_party/ittapi/CMakeLists.txt:7 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
CMake Deprecation Warning at third_party/FP16/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
CMake Deprecation Warning at third_party/psimd/CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
-- Using third party subdirectory Eigen.
-- Found Python: C:\Users\arc\miniforge3\envs\chuanqiw_build\python.exe (found version "3.10.15") found components: Interpreter Development.Module NumPy
-- Using third_party/pybind11.
-- pybind11 include dirs: C:/Users/arc/chuanqiw/pytorch/cmake/../third_party/pybind11/include
-- Could NOT find OpenTelemetryApi (missing: OpenTelemetryApi_INCLUDE_DIRS)
-- Using third_party/opentelemetry-cpp.
-- opentelemetry api include dirs: C:/Users/arc/chuanqiw/pytorch/cmake/../third_party/opentelemetry-cpp/api/include
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
CMake Warning at cmake/Dependencies.cmake:939 (message):
Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF
Call Stack (most recent call first):
CMakeLists.txt:865 (include)
-- Adding OpenMP CXX_FLAGS: -openmp:experimental
-- Will link against OpenMP libraries: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/lib/x64/libomp.lib
CMake Deprecation Warning at third_party/gloo/CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'BUILD_BENCHMARK'.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:35 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'USE_NCCL'.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:36 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'USE_RCCL'.
This warning is for project developers. Use -Wno-dev to suppress it.
-- MSVC detected
-- Set USE_REDIS OFF
-- Set USE_IBVERBS OFF
-- Set USE_NCCL OFF
-- Set USE_RCCL OFF
-- Set USE_LIBUV ON
-- Only USE_LIBUV is supported on Windows
-- Gloo build as SHARED library
CMake Warning (dev) at third_party/onnx/CMakeLists.txt:106 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
Generated: C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Generated: C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Generated: C:/Users/arc/chuanqiw/pytorch/build/third_party/onnx/onnx/onnx-data_onnx_torch.proto
--
-- ******** Summary ********
-- CMake version : 3.30.5
-- CMake command : C:/Users/arc/miniforge3/envs/chuanqiw_build/Lib/site-packages/cmake/data/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe
-- C++ compiler version : 19.41.34123.0
-- CXX flags : /DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL /EHsc /wd26812
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;__STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH : C:\Users\arc\miniforge3\envs\chuanqiw_build\Lib\site-packages
-- CMAKE_INSTALL_PREFIX : C:/Users/arc/chuanqiw/pytorch/torch
-- CMAKE_MODULE_PATH : C:/Users/arc/chuanqiw/pytorch/cmake/Modules;C:/Users/arc/chuanqiw/pytorch/cmake/public/../Modules_CUDA_fix
--
-- ONNX version : 1.17.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_DISABLE_STATIC_REGISTRATION : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_SHARED_LIBS :
-- BUILD_SHARED_LIBS : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Found CUDA with FP16 support, compiling with torch.cuda.HalfTensor
-- Adding -DNDEBUG to compile flags
CMake Warning at cmake/Dependencies.cmake:1408 (message):
Not compiling with MAGMA. Suppress this warning with -DUSE_MAGMA=OFF.
Call Stack (most recent call first):
CMakeLists.txt:865 (include)
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- MKL_THREADING = OMP
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_core - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [blis]
-- Library blis: BLAS_blis_LIBRARY-NOTFOUND
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [flexiblas]
-- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND
-- Checking for [openblas]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread - m]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread - m - gomp]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [libopenblas]
-- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [blis]
-- Library blis: BLAS_blis_LIBRARY-NOTFOUND
-- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY)
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for []
-- Cannot find a library with BLAS API. Not using BLAS.
-- LAPACK requires BLAS
-- Cannot find a library with LAPACK API. Not using LAPACK.
disabling CUDA because NOT USE_CUDA is set
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
-- Will build oneDNN UKERNEL
-- MKL_THREADING = OMP
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - libiomp5md]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core - libiomp5md]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_intel_thread - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_sequential - mkl_core]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_core - libiomp5md - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_core - libiomp5md - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl_intel_lp64 - mkl_core - pthread]
-- Library mkl_intel_lp64: not found
-- Checking for [mkl_intel - mkl_core - pthread]
-- Library mkl_intel: not found
-- Checking for [mkl - guide - pthread - m]
-- Library mkl: not found
-- MKL library not found
-- Checking for [blis]
-- Library blis: BLAS_blis_LIBRARY-NOTFOUND
-- Checking for [Accelerate]
-- Library Accelerate: BLAS_Accelerate_LIBRARY-NOTFOUND
-- Checking for [vecLib]
-- Library vecLib: BLAS_vecLib_LIBRARY-NOTFOUND
-- Checking for [flexiblas]
-- Library flexiblas: BLAS_flexiblas_LIBRARY-NOTFOUND
-- Checking for [openblas]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread - m]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [openblas - pthread - m - gomp]
-- Library openblas: BLAS_openblas_LIBRARY-NOTFOUND
-- Checking for [libopenblas]
-- Library libopenblas: BLAS_libopenblas_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [goto2 - gfortran - pthread]
-- Library goto2: BLAS_goto2_LIBRARY-NOTFOUND
-- Checking for [acml - gfortran]
-- Library acml: BLAS_acml_LIBRARY-NOTFOUND
-- Checking for [blis]
-- Library blis: BLAS_blis_LIBRARY-NOTFOUND
-- Could NOT find Atlas (missing: Atlas_CBLAS_INCLUDE_DIR Atlas_CLAPACK_INCLUDE_DIR Atlas_CBLAS_LIBRARY Atlas_BLAS_LIBRARY Atlas_LAPACK_LIBRARY)
-- Checking for [ptf77blas - atlas - gfortran]
-- Library ptf77blas: BLAS_ptf77blas_LIBRARY-NOTFOUND
-- Checking for []
-- Cannot find a library with BLAS API. Not using BLAS.
-- MKLDNN_CPU_RUNTIME = OMP
CMake Deprecation Warning at third_party/ideep/mkl-dnn/CMakeLists.txt:17 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
-- DNNL_TARGET_ARCH: X64
-- DNNL_LIBRARY_NAME: dnnl
CMake Warning (dev) at C:/Users/arc/miniforge3/envs/chuanqiw_build/Lib/site-packages/cmake/data/share/cmake-3.30/Modules/FindPackageHandleStandardArgs.cmake:441 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_C)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:55 (find_package)
third_party/ideep/mkl-dnn/CMakeLists.txt:119 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_C: -openmp:experimental
CMake Warning (dev) at C:/Users/arc/miniforge3/envs/chuanqiw_build/Lib/site-packages/cmake/data/share/cmake-3.30/Modules/FindPackageHandleStandardArgs.cmake:441 (message):
The package name passed to `find_package_handle_standard_args` (OpenMP_CXX)
does not match the name of the calling package (OpenMP). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/Modules/FindOpenMP.cmake:590 (find_package_handle_standard_args)
third_party/ideep/mkl-dnn/cmake/OpenMP.cmake:55 (find_package)
third_party/ideep/mkl-dnn/CMakeLists.txt:119 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found OpenMP_CXX: -openmp:experimental
-- Enabled testing coverage: CI
-- Enabled workload: TRAINING
-- Enabled primitives: ALL
-- Enabled primitive CPU ISA: ALL
-- Enabled primitive GPU ISA: ALL
-- Enabled GeMM kernels ISA: ALL
-- Primitive cache is enabled
-- Experimental functionality for ukernels is enabled
-- The ASM_MASM compiler identification is MSVC
-- Found assembler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/ml64.exe
-- Graph component is enabled
-- Graph compiler backend is disabled.
-- Found MKL-DNN: TRUE
-- {fmt} version: 11.0.2
-- Build type: Release
-- Using CPU-only version of Kineto
-- Configuring Kineto dependency:
-- KINETO_SOURCE_DIR = C:/Users/arc/chuanqiw/pytorch/third_party/kineto/libkineto
-- KINETO_BUILD_TESTS = OFF
-- KINETO_LIBRARY_TYPE = static
CMake Warning (dev) at third_party/kineto/libkineto/CMakeLists.txt:15 (find_package):
Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
are removed. Run "cmake --help-policy CMP0148" for policy details. Use
the cmake_policy command to set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
INFO CUDA_SOURCE_DIR =
INFO ROCM_SOURCE_DIR =
INFO CUPTI unavailable or disabled - not building GPU profilers
-- Kineto: FMT_SOURCE_DIR = C:/Users/arc/chuanqiw/pytorch/third_party/fmt
-- Kineto: FMT_INCLUDE_DIR = C:/Users/arc/chuanqiw/pytorch/third_party/fmt/include
INFO CUPTI_INCLUDE_DIR = /extras/CUPTI/include
INFO ROCTRACER_INCLUDE_DIR = /include/roctracer
INFO DYNOLOG_INCLUDE_DIR = C:/Users/arc/chuanqiw/pytorch/third_party/kineto/libkineto/third_party/dynolog/
INFO IPCFABRIC_INCLUDE_DIR = C:/Users/arc/chuanqiw/pytorch/third_party/kineto/libkineto/third_party/dynolog//dynolog/src/ipcfabric/
-- Configured Kineto (CPU)
-- Performing Test HAS/WD4624
-- Performing Test HAS/WD4624 - Success
-- Performing Test HAS/WD4068
-- Performing Test HAS/WD4068 - Success
-- Performing Test HAS/WD4067
-- Performing Test HAS/WD4067 - Success
-- Performing Test HAS/WD4267
-- Performing Test HAS/WD4267 - Success
-- Performing Test HAS/WD4661
-- Performing Test HAS/WD4661 - Success
-- Performing Test HAS/WD4717
-- Performing Test HAS/WD4717 - Success
-- Performing Test HAS/WD4244
-- Performing Test HAS/WD4244 - Success
-- Performing Test HAS/WD4804
-- Performing Test HAS/WD4804 - Success
-- Performing Test HAS/WD4273
-- Performing Test HAS/WD4273 - Success
-- Performing Test HAS_WNO_STRINGOP_OVERFLOW
-- Performing Test HAS_WNO_STRINGOP_OVERFLOW - Failed
--
-- Use the C++ compiler to compile (MI_USE_CXX=ON)
--
-- Library base name: mimalloc
-- Version : 1.8
-- Build type : release
-- C++ Compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe
-- Compiler flags : /Zc:__cplusplus
-- Compiler defines :
-- Link libraries : psapi;shell32;user32;advapi32;bcrypt
-- Build targets : static
--
-- Performing Test HAS_WDEPRECATED
-- Performing Test HAS_WDEPRECATED - Failed
-- don't use NUMA
-- Looking for backtrace
-- Looking for backtrace - not found
-- Could NOT find Backtrace (missing: Backtrace_LIBRARY Backtrace_INCLUDE_DIR)
-- headers outputs:
-- sources outputs:
-- declarations_yaml outputs:
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
-- Using ATen parallel backend: OMP
disabling CUDA because USE_CUDA is set false
-- Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY OPENSSL_INCLUDE_DIR)
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Success
-- Found OpenMP_C: -openmp:experimental (found version "2.0")
-- Found OpenMP_CXX: -openmp:experimental (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Success
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD
-- Performing Test COMPILER_SUPPORTS_OMP_SIMD - Failed
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM
-- Performing Test COMPILER_SUPPORTS_SYS_GETRANDOM - Failed
-- Configuring build for SLEEF-v3.6.0
Target system: Windows-10.0.22631
Target processor: AMD64
Host system: Windows-10.0.22631
Host processor: AMD64
Detected C compiler: MSVC @ C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe
CMake: 3.30.5
Make program: C:/Users/arc/miniforge3/envs/chuanqiw_build/Scripts/ninja.exe
-- Using option `/D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE ` to compile libsleef
-- Building shared libs : OFF
-- Building static test bins: OFF
-- MPFR : LIB_MPFR-NOTFOUND
-- GMP : LIBGMP-NOTFOUND
-- RT :
-- FFTW3 : LIBFFTW3-NOTFOUND
-- OPENSSL :
-- SDE : SDE_COMMAND-NOTFOUND
-- COMPILER_SUPPORTS_OPENMP : FALSE
AT_INSTALL_INCLUDE_DIR include/ATen/core
core header install: C:/Users/arc/chuanqiw/pytorch/build/aten/src/ATen/core/TensorBody.h
core header install: C:/Users/arc/chuanqiw/pytorch/build/aten/src/ATen/core/aten_interned_strings.h
core header install: C:/Users/arc/chuanqiw/pytorch/build/aten/src/ATen/core/enum_tag.h
CMake Deprecation Warning at test/edge/CMakeLists.txt:1 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
-- Performing Test HAS_WNO_UNUSED_PRIVATE_FIELD
-- Performing Test HAS_WNO_UNUSED_PRIVATE_FIELD - Failed
-- Generating sources for unboxing kernels C:\Users\arc\miniforge3\envs\chuanqiw_build\python.exe;-m;torchgen.gen_executorch;--source-path=C:/Users/arc/chuanqiw/pytorch/test/edge/../../test/edge;--install-dir=C:/Users/arc/chuanqiw/pytorch/build/out;--tags-path=C:/Users/arc/chuanqiw/pytorch/test/edge/../../aten/src/ATen/native/tags.yaml;--aten-yaml-path=C:/Users/arc/chuanqiw/pytorch/test/edge/../../aten/src/ATen/native/native_functions.yaml;--use-aten-lib;--op-selection-yaml-path=C:/Users/arc/chuanqiw/pytorch/test/edge/../../test/edge/selected_operators.yaml;--custom-ops-yaml-path=C:/Users/arc/chuanqiw/pytorch/test/edge/../../test/edge/custom_ops.yaml
CMake Warning at CMakeLists.txt:1275 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.
--
-- ******** Summary ********
-- General:
-- CMake version : 3.30.5
-- CMake command : C:/Users/arc/miniforge3/envs/chuanqiw_build/Lib/site-packages/cmake/data/bin/cmake.exe
-- System : Windows
-- C++ compiler : C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe
-- C++ compiler id : MSVC
-- C++ compiler version : 19.41.34123.0
-- Using ccache if found : OFF
-- CXX flags : /DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273
-- Shared LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Static LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Module LD flags : /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099
-- Build type : Release
-- Compile definitions : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;ONNX_NAMESPACE=onnx_torch;_CRT_SECURE_NO_DEPRECATE=1;USE_EXTERNAL_MZCRC;MINIZ_DISABLE_ZIP_READER_CRC32_CHECKS;FLASHATTENTION_DISABLE_ALIBI;WIN32_LEAN_AND_MEAN;_UCRT_LEGACY_INFINITY;NOMINMAX;USE_MIMALLOC
-- CMAKE_PREFIX_PATH : C:\Users\arc\miniforge3\envs\chuanqiw_build\Lib\site-packages
-- CMAKE_INSTALL_PREFIX : C:/Users/arc/chuanqiw/pytorch/torch
-- USE_GOLD_LINKER : OFF
--
-- TORCH_VERSION : 2.6.0
-- BUILD_STATIC_RUNTIME_BENCHMARK: OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_PYTHON : True
-- Python version : 3.10.15
-- Python executable : C:\Users\arc\miniforge3\envs\chuanqiw_build\python.exe
-- Python library : C:/Users/arc/miniforge3/envs/chuanqiw_build/libs/python310.lib
-- Python includes : C:/Users/arc/miniforge3/envs/chuanqiw_build/include
-- Python site-package : C:\Users\arc\miniforge3\envs\chuanqiw_build\Lib\site-packages
-- BUILD_SHARED_LIBS : ON
-- CAFFE2_USE_MSVC_STATIC_RUNTIME : OFF
-- BUILD_TEST : True
-- BUILD_JNI : OFF
-- BUILD_MOBILE_AUTOGRAD : OFF
-- BUILD_LITE_INTERPRETER: OFF
-- INTERN_BUILD_MOBILE :
-- TRACING_BASED : OFF
-- USE_BLAS : 0
-- USE_LAPACK : 0
-- USE_ASAN : OFF
-- USE_TSAN : OFF
-- USE_CPP_CODE_COVERAGE : OFF
-- USE_CUDA : OFF
-- USE_XPU : OFF
-- USE_ROCM : OFF
-- BUILD_NVFUSER :
-- USE_EIGEN_FOR_BLAS : ON
-- USE_FBGEMM : ON
-- USE_FAKELOWP : OFF
-- USE_KINETO : ON
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LITE_PROTO : OFF
-- USE_PYTORCH_METAL : OFF
-- USE_PYTORCH_METAL_EXPORT : OFF
-- USE_MPS : OFF
-- CAN_COMPILE_METAL :
-- USE_MKL : OFF
-- USE_MKLDNN : ON
-- USE_MKLDNN_ACL : OFF
-- USE_MKLDNN_CBLAS : OFF
-- USE_UCC : OFF
-- USE_ITT : ON
-- USE_NCCL : OFF
-- USE_NNPACK : OFF
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENMP : ON
-- USE_MIMALLOC : ON
-- USE_MIMALLOC_ON_MKL : OFF
-- USE_VULKAN : OFF
-- USE_PROF : OFF
-- USE_PYTORCH_QNNPACK : OFF
-- USE_XNNPACK : ON
-- USE_DISTRIBUTED : ON
-- USE_MPI : OFF
-- USE_GLOO : ON
-- USE_GLOO_WITH_OPENSSL : OFF
-- USE_TENSORPIPE : OFF
-- Public Dependencies :
-- Private Dependencies : Threads::Threads;pthreadpool;cpuinfo;XNNPACK;microkernels-prod;fbgemm;ittnotify;fp16;caffe2::openmp;gloo;fmt::fmt-header-only;kineto
-- Public CUDA Deps. :
-- Private CUDA Deps. :
-- USE_COREML_DELEGATE : OFF
-- BUILD_LAZY_TS_BACKEND : ON
-- USE_ROCM_KERNEL_ASSERT : OFF
-- Performing Test HAS_WMISSING_PROTOTYPES
-- Performing Test HAS_WMISSING_PROTOTYPES - Failed
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES
-- Performing Test HAS_WERROR_MISSING_PROTOTYPES - Failed
-- Configuring done (76.9s)
-- Generating done (2.8s)
-- Build files have been written to: C:/Users/arc/chuanqiw/pytorch/build
cmake --build . --target install --config Release
```
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise (10.0.22631 64-bit)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: N/A
Python version: 3.10.15 | packaged by conda-forge | (main, Oct 16 2024, 01:15:49) [MSC v.1941 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Name: 12th Gen Intel(R) Core(TM) i9-12900
Manufacturer: GenuineIntel
Family: 207
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 2400
MaxClockSpeed: 2400
L2CacheSize: 14336
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] optree==0.13.0
[conda] numpy 2.1.2 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
```
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | module: build,module: windows,triaged | low | Critical |
2,757,296,334 | deno | `Deno.serve` terminates due to unexpected error if `--unstable-otel` is specified | Version: Deno 2.1.4
# Reproduction
This reproduction code panics in v2.1.2 and 2.1.3.
It works in 2.1.1.
```ts
// main.ts
Deno.serve((_req) => new Response("Hello, world!"));
```
```sh
$ deno run -A --unstable-otel main.ts
```
Then, on a request to `http://localhost:8000`, the following error occurs:
```sh
Terminating Deno.serve loop due to unexpected error Error: instrumentation scope not available
at submitSpan (ext:deno_telemetry/telemetry.ts:56:3)
at endSpan (ext:deno_telemetry/telemetry.ts:170:7)
at ext:deno_http/00_serve.ts:449:71
at <anonymous>
at eventLoopTick (ext:core/01_core.js:175:7)
```
No env vars are specified.
This doesn't happen if `OTEL_EXPORTER_OTLP_PROTOCOL` env var is specified (e.g. `http/json`) but I don't want to enable otel export in local development.
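For completeness, the invocation implied by that workaround would look like the following sketch (the `http/json` value is taken from this report; whether other protocol values also avoid the error is untested, and this does enable OTLP export, which the report notes is undesirable for local development):

```shell
# Workaround sketch: explicitly selecting an OTLP protocol avoids the
# "instrumentation scope not available" error described above.
OTEL_EXPORTER_OTLP_PROTOCOL=http/json deno run -A --unstable-otel main.ts
```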
| bug,otel | low | Critical |
2,757,316,384 | excalidraw | Painting board and brush issues | Using the paintbrush on the Ronor Pad7 tablet feels like the pen has no ink, and drawing is also very difficult. In short, there is a problem with the brush strokes; the same problem can be reproduced on the official website in the browser.
Here is some information about the tablet:
model: AGM3-W09HN
Version number: 4.0.0.246 (C00E206R4P11)
Magic UI version: 4.0
Android version: 10
Processor: MediaTek Helio G80
Browser: Huawei Browser
Here is a recording of the operation: https://github.com/user-attachments/assets/07c8e9a4-d00c-4f99-8c57-5606a0a30b37
| bug,freedraw,huawei | low | Minor |
2,757,317,238 | PowerToys | New+ Shortcut keys | ### Description of the new feature / enhancement
Could you add a shortcut key for New+, just like the New (W) accelerator in Windows 10?
### Scenario when this would be used?
In Windows 10 we can already create a folder directly with the shortcut sequence W, F after opening the right-click menu. With New+, standardizing template names would let us create other commonly used files the same way: for example, "wt" to create a txt file when we need to record information temporarily, and "wx" to choose among Excel files for different purposes.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,757,429,594 | rust | Infinite recursion in type resolution for unused type | Example:
```rust
use std::collections::HashMap;
use std::marker::PhantomData;
trait Resource {}
struct Res<'w, T: ?Sized + Resource>(&'w T);
impl<'w, 'a, T: Resource> IntoIterator for &'a Res<'w, T>
where
&'a T: IntoIterator,
{
type Item = <&'a T as IntoIterator>::Item;
type IntoIter = <&'a T as IntoIterator>::IntoIter;
fn into_iter(self) -> Self::IntoIter {
self.0.into_iter()
}
}
trait Matcher<T> {}
impl<'a, A, B, U: Matcher<&'a A>, V: Matcher<&'a B>> Matcher<(&'a A, &'a B)> for (U, V) {}
struct EqMatcher<T>(PhantomData<T>);
impl<T> Matcher<T> for EqMatcher<T> {}
struct ContainerMatcher<T>(Box<dyn Matcher<T>>);
impl<T, ContainerT> Matcher<ContainerT> for ContainerMatcher<T> where
ContainerT: IntoIterator<Item = T>
{
}
#[test]
fn it_works() {
// todo just to get the types to work.
let matcher: Box<(EqMatcher<&u32>, ContainerMatcher<(&u32, &u32)>)> = todo!();
// This is converting the Box into a Box<dyn Matcher>
let bad_matcher = ContainerMatcher(matcher);
let blah = HashMap::<u32, HashMap<u32, u32>>::default();
fn check<T>(val: T, matcher: impl Matcher<T>) {}
check(&blah, bad_matcher);
}
```
As a short summary: I am implementing a trait for one struct to operate on anything that impls `IntoIterator`. In addition, this trait "distributes" to the elements of a pair. I then try to merge these two properties to construct `bad_matcher`. For context, this arises from combining the `googletest` and `bevy_ecs` crates.
I expected to see this happen: The test to compile.
Instead, this happened: Failed to compile due to infinite recursion of the `Res` type (which is completely unused).
```
error[E0275]: overflow evaluating the requirement `&_: IntoIterator`
--> src\lib.rs:38:40
|
38 | let bad_matcher = ContainerMatcher(matcher);
| ^^^^^^^
|
= help: consider increasing the recursion limit by adding a `#![recursion_limit = "256"]` attribute to your crate (`bevy_googletest`)
note: required for `&Res<'_, _>` to implement `IntoIterator`
--> src\lib.rs:8:27
|
8 | impl<'w, 'a, T: Resource> IntoIterator for &'a Res<'w, T>
| ^^^^^^^^^^^^ ^^^^^^^^^^^^^^
9 | where
10 | &'a T: IntoIterator,
| ------------ unsatisfied trait bound introduced here
= note: 123 redundant requirements hidden
= note: required for `&Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, ...>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>` to implem
ent `IntoIterator`
note: required for `ContainerMatcher<(&u32, &u32)>` to implement `Matcher<&Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<
'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res
<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Re
s<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, _>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>`
--> src\lib.rs:30:21
|
30 | impl<T, ContainerT> Matcher<ContainerT> for ContainerMatcher<T> where
| ^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
31 | ContainerT: IntoIterator<Item = T>
| -------- unsatisfied trait bound introduced here
= note: 1 redundant requirement hidden
= note: required for `(EqMatcher, ContainerMatcher<(&u32, &u32)>)` to implement `Matcher<(&u32, &Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res
<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Re
s<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, R
es<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, _>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>)>`
= note: required for the cast from `Box<(EqMatcher, ContainerMatcher<(&u32, &u32)>)>` to `Box<dyn Matcher<(&u32, &Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Res<'_, Re
s<'_, Res<'_, Res<'_, ...>>>>>>>>>>>>>>>>>>>>>>>>>>>>)>>`
= note: the full name for the type has been written to 'E:\Content\bevy_googletest\target\debug\deps\bevy_googletest-3fef145a913b892a.long-type-9581655154092012888.txt'
= note: consider using `--verbose` to print the full type name to the console
= note: the full name for the type has been written to 'E:\Content\bevy_googletest\target\debug\deps\bevy_googletest-3fef145a913b892a.long-type-1543903567385044461.txt'
= note: consider using `--verbose` to print the full type name to the console
```
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0 (90b35a623 2024-11-26)
binary: rustc
commit-hash: 90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf
commit-date: 2024-11-26
host: x86_64-pc-windows-msvc
release: 1.83.0
LLVM version: 19.1.1
```
This seems to be fixed with `-Znext-solver`.
</p>
</details>
| C-bug,I-hang,E-needs-mcve,T-types,fixed-by-next-solver | low | Critical |
2,757,484,276 | transformers | Training issues latest version | ### System Info
- `transformers` version: 4.48.0.dev0
- Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.11.11
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3060 Laptop GPU
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Clone ModernBert repo
2. Install latest transformers version (4.48.0-dev0)
3. Run examples/train_st.py to finetune modernbert.
### Expected behavior
I would expect no errors.
However, when building transformers from an earlier commit, it works:
> `pip install git+https://github.com/huggingface/transformers.git@f42084e6411c39b74309af4a7d6ed640c01a4c9e`
So I think something broke in the latest commits. | trainer,bug | low | Critical |
2,757,499,193 | flutter | [camera_android_camerax] Can't run an android app with camera plugin dependency | ### What package does this bug report belong to?
camera
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
camera:
dependency: "direct main"
description:
name: camera
sha256: "26ff41045772153f222ffffecba711a206f670f5834d40ebf5eed3811692f167"
url: "https://pub.dev"
source: hosted
version: "0.11.0+2"
camera_android_camerax:
dependency: transitive
description:
name: camera_android_camerax
sha256: abcfa1ac32bd03116b4cfda7e8223ab391f01966e65823c064afe388550d1b3d
url: "https://pub.dev"
source: hosted
version: "0.6.10+3"
camera_avfoundation:
dependency: transitive
description:
name: camera_avfoundation
sha256: "2e4c568f70e406ccb87376bc06b53d2f5bebaab71e2fbcc1a950e31449381bcf"
url: "https://pub.dev"
source: hosted
version: "0.9.17+5"
camera_platform_interface:
dependency: transitive
description:
name: camera_platform_interface
sha256: b3ede1f171532e0d83111fe0980b46d17f1aa9788a07a2fbed07366bbdbb9061
url: "https://pub.dev"
source: hosted
version: "2.8.0"
camera_web:
dependency: transitive
description:
name: camera_web
sha256: "595f28c89d1fb62d77c73c633193755b781c6d2e0ebcd8dc25b763b514e6ba8f"
url: "https://pub.dev"
source: hosted
version: "0.3.5"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
cross_file:
dependency: transitive
description:
name: cross_file
sha256: "7caf6a750a0c04effbb52a676dce9a4a592e10ad35c34d6d2d0e4811160d5670"
url: "https://pub.dev"
source: hosted
version: "0.3.4+2"
cupertino_icons:
dependency: "direct main"
description:
name: cupertino_icons
sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
url: "https://pub.dev"
source: hosted
version: "1.0.8"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_plugin_android_lifecycle:
dependency: transitive
description:
name: flutter_plugin_android_lifecycle
sha256: "615a505aef59b151b46bbeef55b36ce2b6ed299d160c51d84281946f0aa0ce0e"
url: "https://pub.dev"
source: hosted
version: "2.0.24"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
plugin_platform_interface:
dependency: transitive
description:
name: plugin_platform_interface
sha256: "4820fbfdb9478b1ebae27888254d445073732dae3d6ea81f0b7e06d5dedc3f02"
url: "https://pub.dev"
source: hosted
version: "2.1.8"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
stream_transform:
dependency: transitive
description:
name: stream_transform
sha256: ad47125e588cfd37a9a7f86c7d6356dde8dfe89d071d293f80ca9e9273a33871
url: "https://pub.dev"
source: hosted
version: "2.1.1"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
sdks:
dart: ">=3.5.1 <4.0.0"
flutter: ">=3.24.0"
```
</details>
### Steps to reproduce
1. `flutter create flutter_camera_test_app`
2. `cd flutter_camera_test_app`
3. `flutter pub add camera`
4. `flutter run ./lib/main.dart`
### Expected results
Build completed successfully
### Actual results
Build failed
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
_counter++;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(widget.title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
),
);
}
}
```
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching ./lib/main.dart on sdk gphone64 x86 64 in debug mode...
ERROR:/Users/leshanative/.gradle/caches/transforms-3/eb027cf7fb4ebd29dd7b00512d1f206b/transformed/jetified-error_prone_annotations-2.36.0.jar: D8: java.lang.NullPointerException: Cannot invoke "String.length()" because "<parameter1>" is null
ERROR:/Users/leshanative/.gradle/caches/transforms-3/a3dec4472059256c933611cf9ed01408/transformed/jetified-camera-video-1.4.1-runtime.jar: D8: java.lang.NullPointerException: Cannot invoke "String.length()" because "<parameter1>" is null
ERROR:/Users/leshanative/.gradle/caches/transforms-3/8d9d12f0d15f7e2ea5eecc920b2ffe22/transformed/jetified-camera-camera2-1.4.1-runtime.jar: D8: java.lang.NullPointerException: Cannot invoke "String.length()" because "<parameter1>" is null
ERROR:/Users/leshanative/.gradle/caches/transforms-3/a584d7e457f6978aeca4bafa5967ceeb/transformed/jetified-camera-core-1.4.1-runtime.jar: D8: java.lang.NullPointerException: Cannot invoke "String.length()" because "<parameter1>" is null
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:mergeExtDexDebug'.
> Could not resolve all files for configuration ':app:debugRuntimeClasspath'.
> Failed to transform camera-video-1.4.1.aar (androidx.camera:camera-video:1.4.1) to match attributes {artifactType=android-dex, asm-transformed-variant=NONE, dexing-enable-desugaring=true, dexing-enable-jacoco-instrumentation=false, dexing-is-debuggable=true, dexing-min-sdk=21, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.libraryelements=aar, org.gradle.status=release, org.gradle.usage=java-runtime}.
> Execution failed for DexingWithClasspathTransform: /Users/leshanative/.gradle/caches/transforms-3/a3dec4472059256c933611cf9ed01408/transformed/jetified-camera-video-1.4.1-runtime.jar.
> Error while dexing.
> Failed to transform camera-camera2-1.4.1.aar (androidx.camera:camera-camera2:1.4.1) to match attributes {artifactType=android-dex, asm-transformed-variant=NONE, dexing-enable-desugaring=true, dexing-enable-jacoco-instrumentation=false, dexing-is-debuggable=true, dexing-min-sdk=21, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.libraryelements=aar, org.gradle.status=release, org.gradle.usage=java-runtime}.
> Execution failed for DexingWithClasspathTransform: /Users/leshanative/.gradle/caches/transforms-3/8d9d12f0d15f7e2ea5eecc920b2ffe22/transformed/jetified-camera-camera2-1.4.1-runtime.jar.
> Error while dexing.
> Failed to transform camera-core-1.4.1.aar (androidx.camera:camera-core:1.4.1) to match attributes {artifactType=android-dex, asm-transformed-variant=NONE, dexing-enable-desugaring=true, dexing-enable-jacoco-instrumentation=false, dexing-is-debuggable=true, dexing-min-sdk=21, org.gradle.category=library, org.gradle.dependency.bundling=external, org.gradle.libraryelements=aar, org.gradle.status=release, org.gradle.usage=java-runtime}.
> Execution failed for DexingWithClasspathTransform: /Users/leshanative/.gradle/caches/transforms-3/a584d7e457f6978aeca4bafa5967ceeb/transformed/jetified-camera-core-1.4.1-runtime.jar.
> Error while dexing.
> Failed to transform error_prone_annotations-2.36.0.jar (com.google.errorprone:error_prone_annotations:2.36.0) to match attributes {artifactType=android-dex, asm-transformed-variant=NONE, dexing-enable-desugaring=true, dexing-enable-jacoco-instrumentation=false, dexing-is-debuggable=true, dexing-min-sdk=21, org.gradle.category=library, org.gradle.libraryelements=jar, org.gradle.status=release, org.gradle.usage=java-runtime}.
> Execution failed for DexingWithClasspathTransform: /Users/leshanative/.gradle/caches/transforms-3/eb027cf7fb4ebd29dd7b00512d1f206b/transformed/jetified-error_prone_annotations-2.36.0.jar.
> Error while dexing.
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 14s
Running Gradle task 'assembleDebug'... 14.7s
Error: Gradle task assembleDebug failed with exit code 1
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.1, on macOS 15.0.1 24A348 darwin-x64, locale en-BY)
[✓] Android toolchain - develop for Android devices (Android SDK version 31.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.2)
[✓] IntelliJ IDEA Community Edition (version 2023.1.2)
[✓] VS Code (version 1.96.0)
[✓] Connected device (5 available)
[✓] Network resources
• No issues found!
```
</details>
| waiting for customer response,in triage | medium | Critical |
2,757,576,705 | neovim | Cannot use `sudoedit` with the AppImage build | ### Problem
The AppImage build of Neovim, at least at 0.10.3, cannot be used as the editor for sudoedit. Other builds work as expected.
### Steps to reproduce
```
$ EDITOR=path/to/neovim.appimage sudoedit /hello
[sudo] password for federico:
create mount dir error: Permission denied
sudoedit: /hello unchanged
```
### Expected behavior
```
$ EDITOR=path/to/neovim.appimage sudoedit /hello
[sudo] password for federico:
sudoedit: /hello unchanged
```
(no error about mount dir)
### Nvim version (nvim -v)
0.10.3
### Vim (not Nvim) behaves the same?
vim doesn't have an appimage
### Operating system/version
Debian 12, Linux 6.5.0-0.deb12.1-amd64
### Terminal name/version
wezterm 20240203-110809-5046fc22
### $TERM environment variable
xterm-256color
### Installation
AppImage | bug,needs:response,permissions | low | Critical |
2,757,585,332 | godot | TextEdit caret appears at the top-left of the screen instead of inside the TextEdit field on iPhone (HTML export) | ### Tested versions
Reproducible in Godot v4.3.stable.official [77dcf97d8], Web Export, Iphone (12, 14, 16) - I tested on this device
### System information
Ubuntu 22.04.5 LTS - Godot v4.3.stable - Mesa Intel® UHD Graphics 620 (KBL GT2) - Intel® Core™ i7-8650U CPU @ 1.90GHz × 8
### Issue description
When using the HTML export on an iPhone, the caret for a TextEdit field is incorrectly positioned at the top-left corner of the screen instead of appearing inside the TextEdit field while typing. An image is attached below.

### Steps to reproduce
- Open the exported game on an iPhone any browser.
- Navigate to a TextEdit field.
- Start typing.
- Observe the caret's position.
### Minimal reproduction project (MRP)
[test.zip](https://github.com/user-attachments/files/18238716/test.zip)
[demo](https://chimtest1.netlify.app/) | bug,platform:web,topic:gui | low | Minor |
2,757,639,902 | ant-design | Ant Design v6 tasks | - Discussion from #51919
- Preparation
- [x] Create next branch
- React support version bump to 18
- [x] CI 16, 17, 18 Test Case change to 18, 19 @zombieJ
- #52130
- [ ] `rc-util` remove all the `.js` suffix file @li-jia-nan
- https://github.com/react-component/util/pull/610
- https://github.com/react-component/father-plugin/pull/10
- https://github.com/umijs/umi/issues/12865
- https://github.com/umijs/umi/pull/12874
- [x] `rc-util/ref` support React 19 https://github.com/react-component/util/pull/594
- [x] `rc-util` `ReactRender` remove React 17 support https://github.com/react-component/util/pull/617
- [ ] Latest nextjs version example update @li-jia-nan
- [ ] Pre compile with react compiler
- [ ] Use ES classes, remove these babel plugins: [ant-design/antd-tools@3b3447e/lib/getBabelCommonConfig.js#L20-L22](https://github.com/ant-design/antd-tools/blob/3b3447eb106a80ebc64e634ca60ce1cd55e8a351/lib/getBabelCommonConfig.js#L20-L22) @Wxh16144
- Remove findDOMNode & Update Ref
- Remove `findDOMNode`:
- [ ] Remove `findDOMNode` from `rc-util/React/findDOMNode` @Zyf665 https://github.com/react-component/util/pull/611
- [x] Remove `findDOMNode` from `rc-resize-observer` @Zyf665 https://github.com/react-component/resize-observer/pull/216
- [ ] Remove `findDOMNode` from `rc-trigger` @GDobby
- [ ] Remove `findDOMNode` from `rc-virtual-list` @CoderSerio
- [x] Remove `findDOMNode` from `rc-mutation-observer` https://github.com/react-component/mutate-observer/pull/14
- [x] Remove `findDOMNode` from `rc-css-motion` https://github.com/react-component/motion/pull/59
- Update `ref`
- Keep `forwardRef` as fallback to compatible with React 18. This may do nothing currently
- [x] Tag remove margin @li-jia-nan
- #52123
- css variables only
- [ ] @ant-design/cssinjs provide pure css variable mode
- [ ] Change theme should not re-render sub component
- [ ] Remove style compatible code & docs
- [ ] Remove React โค 18 version compatible code
- [ ] Remove dynamic style code and provide css variables code only
- [ ] Update `@ant-design/cssinjs-util`
- [ ] Remove antd ConfigProvider `cssvar` config. @Roxannej
- Retire v4 deprecated API
- Check `dist/antd.js` for listing the `warning` method listing.
- [x] Remove Icon file #52241
- [x] Remove BackTop file #52206
- [ ] rc component warning fixing & update antd code @aojunhao123
- [ ] `@ant-design/compatible` for v6
- [x] antd deprecated API @kiner-tang
> - antd warning fixing
> - Remove doc from antd site
> - Remove test case from antd code
Components:
- [x] auto-complete #52198
- [x] breadcrumb #52202
- [x] cascader #52203
- [x] collapse #52204
- [x] config-provider
- [x] date-picker #52232
- [x] drawer #52233
- [x] dropdown #52234
- [x] form #52235
- [x] modal #52237
- [x] popconfirm #52457 @thinkasany
- [x] progress #52244
- [x] select #52368
- [x] slider #52369
- [x] table #52460
- [x] tag #52229 @aojunhao123
- [x] time-picker
- [x] tooltip #52457 @thinkasany
- [x] tree
- [x] tree-select #52471
- [x] typography #52472
- [x] upload #52476
- Mobile UX improvement
- [ ] Refactor `rc-trigger` to support mobile style switch
- [ ] Listing component with popup for update
- [ ] `rc-picker` add mobile vision
- Completely drop the dependency on `@ctrl/tinycolor`
- [x] Replace `@ctrl/tinycolor` to `@ant-design/fast-color` @aojunhao123
- #52107
- #52157
- Features
- [ ] Masonry @OysterD3
- #40910
- #51705
- #52162
- [ ] Grid support media query @985563349
- #51390
- [ ] Replace Result default image to reduce bundle size
- [ ] Carousel dot show duration #46876
- Semantic DOM & Design #51885
- [ ] All `dropdown` rename to `popup` @li-jia-nan
- [ ] Semantic DOM Update @thinkasany
- [ ] Affix @thinkasany
- [ ] Alert @thinkasany
- [ ] Anchor @thinkasany
- [x] App @thinkasany
- [ ] AutoComplete @thinkasany
- [ ] Avatar @thinkasany
~~BackTop~~
- [x] Badge @thinkasany #52303
- [ ] Breadcrumb @thinkasany
- [ ] Button @thinkasany @liangchaofei
- [ ] Calendar @thinkasany
- [x] Card @thinkasany #52214
- [ ] Carousel @thinkasany
- [ ] Cascader @thinkasany
- [ ] Checkbox @thinkasany
- [x] Col @thinkasany
- [x] Collapse @thinkasany #52258
- [ ] ColorPicker @thinkasany
- [ ] DatePicker @thinkasany
- [x] Descriptions @thinkasany #52120
- [x] Divider @thinkasany
- [x] Drawer @thinkasany #52247
- [ ] Dropdown @thinkasany
- [x] Empty @thinkasany #52208
- [x] Flex @thinkasany
- [ ] FloatButton @thinkasany
- [ ] Form @thinkasany
- [ ] Image @thinkasany
- [ ] Input @thinkasany
- [ ] InputNumber @thinkasany
- [ ] Layout @thinkasany
- [ ] List @thinkasany
- [ ] Mentions @thinkasany
- [ ] Menu @thinkasany
- [x] Modal @thinkasany #52340
- [ ] Pagination @thinkasany
- [x] Popconfirm @thinkasany #52126
- [x] Popover @thinkasany #52110
- [ ] Progress @thinkasany
- [ ] QRCode @thinkasany #52172
- [ ] Radio @thinkasany
- [x] Rate @thinkasany
- [x] Result @thinkasany #52171
- [x] Row @thinkasany
- [x] Segmented @thinkasany #52376
- [ ] Select @thinkasany
- [ ] Skeleton @coding-ice #52470
- [x] Slider @thinkasany #52185
- [x] Space @thinkasany #52248
- [ ] Spin @thinkasany
- [ ] Splitter @thinkasany
- [ ] Splitter custom dragger icon @wanpan11
- #52039
- #52216
- [x] Statistic @thinkasany #52141
- [ ] Steps @thinkasany
- [ ] Switch @thinkasany
- [ ] Table @thinkasany
- [ ] Tabs @thinkasany
- [ ] Tag @thinkasany
- [ ] TimePicker @thinkasany
- [ ] Timeline @thinkasany
- [x] Tooltip @thinkasany #51872
- [ ] Tour @thinkasany #52250
- [ ] Transfer @thinkasany
- [ ] Tree @thinkasany
- [ ] TreeSelect @thinkasany
- [ ] Typography @thinkasany
- [ ] Upload @thinkasany
- [ ] Watermark @thinkasany
- [ ] message @thinkasany
- [ ] notification @thinkasany
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 💡 Feature Request,6.x | medium | Minor |
2,757,662,496 | rust | `std::thread::yield_now()` example is lacking. | ### Location
`rust/library/std/src/thread/mod.rs:755`
```rs
/// # Examples
///
/// ```
/// use std::thread;
///
/// thread::yield_now();
/// ```
```
### Summary
The current example is not very helpful; ideally it should show a realistic use case.
It should also hint that `yield_now` should not be used directly to create spin loops without first considering `std::hint::spin_loop`
2,757,694,262 | pytorch | The tensor-based computation of exponentiation and logarithmic operations is much slower than using NumPy | ### ๐ Describe the bug
Hi there, hope this message finds you well.
I have encountered a significant performance issue when using PyTorch tensors for exponentiation (torch.exp()) and logarithmic operations (torch.log()) compared to NumPy. Specifically, these tensor operations are much slower than their NumPy counterparts. This issue is likely real. When I tested the following code, I didn't use a GPU.
The issue lies in the` loss_5()` function. On my machine, when implementing` loss_5` with **NumPy** in the example below, it took **23** seconds, but when using PyTorch, it took **781** seconds.
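For context (an editor's note, not part of the original report): the slowdown largely comes from the per-element `torch.exp`/`torch.log` calls inside nested Python loops, where each scalar call pays PyTorch's operator-dispatch overhead. The same contrastive loss can be computed with a handful of whole-tensor operations. A minimal sketch, using a hypothetical standalone function name `loss_5_vectorized` (the arguments correspond to the report's `self.affinity_graph` and `w2`):

```python
import torch

def loss_5_vectorized(affinity: torch.Tensor, w2: torch.Tensor) -> torch.Tensor:
    # Per-row denominator: sum_j exp(A[i, j]) - exp(A[i, i])
    exp_a = torch.exp(affinity)
    denom = exp_a.sum(dim=1) - exp_a.diagonal()
    # Same index set the original inner loop iterates over: row indices of
    # the nonzero entries of w2, duplicates included.
    idx = torch.where(w2 != 0)[0]
    # -log(exp(A[i, j]) / denom[i]) simplifies to -(A[i, j] - log(denom[i]))
    return -(affinity[:, idx] - torch.log(denom).unsqueeze(1)).sum()
```

On the shapes in the report this replaces roughly `n_spot * nnz` scalar kernel launches with a few vectorized ones, and the result matches the loop version up to floating-point rounding.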
```Python
# -*-coding:utf-8 -*-
import numpy as np
import tqdm
from sklearn.decomposition import NMF
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.pipeline import Pipeline
import torch
import torch.nn as nn
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity
def init_graph(low_dim_x):
n_spot = low_dim_x.shape[0]
n_neighbor = 15
init_W = cosine_similarity(low_dim_x)
"""cos_init = np.zeros((n_spot, n_spot))
for i in range(n_spot):
vec = init_W[i, :]
distance = vec.argsort()[:: -1]
for t in range(n_neighbor + 1):
y = distance[t]
cos_init[i, y] = init_W[i, y]"""
return init_W
def spectral_clustering(x: np.array, n_cluster: int) -> np.array:
"""
Args:
x (np.array): feature matrix $x /in R^{N times D}$
n_cluster (int): cluster number
Returns:
np.array: clustering labels
"""
model = SpectralClustering(n_clusters=n_cluster,
assign_labels='discretize',
random_state=0).fit(x)
labels = model.labels_
partition = [[] for i in range(n_cluster)]
for i in range(x.shape[0]):
partition[labels[i]].append(i + 1)
"""grids = np.zeros((x.shape[0],x.shape[0]))
for i in range(x.shape[0]):
for j in range(x.shape[0]):
if model.labels_[i] == model.labels_[j]:
grids[i,j] = 1"""
return partition
def get_laplace_matrix(x):
#x = x + np.eye(x.shape[0])
degree_matrix = np.zeros((x.shape[0], x.shape[0]))
for i in range(x.shape[0]):
degree_matrix[i, i] = sum(x[i, :])
lap = degree_matrix - x
#lap = lap + 0.01*np.eye(lap.shape[0])
return lap
def nmf_ini(x: np.array, rank: np.array) -> np.array:
"""do NMF(non-negative matrix factorization) with a given matrix x and expected dimension.
Args:
x (np.array): non-negative matrix X to be factorized
dimension (np.array): dimension
Returns:
np.array: (W, H) whose product approximates the non-negative matrix X
"""
"""model = NMF(n_components=dimension, init='random', random_state=0, max_iter=500)
w = model.fit_transform(x)
h = model.components_"""
u, s, v = np.linalg.svd(x, full_matrices=False)
w_ini = u[:,:rank]
h_ini = np.diag(s[:rank])@v[:rank,:]
return w_ini, h_ini
class MVFC(nn.Module):
def __init__(self, parameters):
super(MVFC, self).__init__()
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.gene_number = nn.Parameter(
torch.tensor(parameters['gene_number']), requires_grad=False)
self.spot_number = nn.Parameter(
torch.tensor(parameters['spot_number']), requires_grad=False)
self.feature_dimension = nn.Parameter(
torch.tensor(parameters['feature_dimension']), requires_grad=False)
self.alpha = nn.Parameter(
torch.tensor(parameters['alpha']), requires_grad=False)
self.beta = nn.Parameter(
torch.tensor(parameters['beta']), requires_grad=False)
self.gamma = nn.Parameter(
torch.tensor(parameters['gamma']), requires_grad=False)
self.eta = nn.Parameter(
torch.tensor(parameters['eta']), requires_grad=False)
self.epochs = nn.Parameter(
torch.tensor(parameters['epochs']), requires_grad=False)
self.base_spot = nn.Parameter(torch.rand((self.spot_number, self.feature_dimension),
dtype=torch.float32)
)
self.base_spot_g = nn.Parameter(torch.rand((self.gene_number, self.feature_dimension),
dtype=torch.float32))
self.feature_fusion = nn.Parameter(torch.rand((self.feature_dimension,
self.spot_number),
dtype = torch.float32 ) )
self.affinity_graph = nn.Parameter(torch.rand((self.spot_number,
self.spot_number),
dtype=torch.float32))
def objective_function(self,
w1,
w2,
lap_w2,
lap_w1):
"""
Args:
input:
Returns:
"""
loss_component = self.compute_loss(w1 = w1,
w2 = w2,
lap_w2 = lap_w2,lap_w1=lap_w1)
return loss_component
def initialize(self, w1,w2):
print("model initializing...")
with torch.no_grad():
n_components = int(self.feature_dimension.detach())
w, h = nmf_ini(w1.to("cpu").detach().numpy(),n_components)
w = torch.from_numpy(w).float().to(self.device)
h = torch.from_numpy(h).float().to(self.device)
self.base_spot_g.data, self.feature_fusion.data = w, h
w, h = nmf_ini(w2.to("cpu").detach().numpy(), n_components)
w = torch.from_numpy(w).float().to(self.device)
h = torch.from_numpy(h).float().to(self.device)
self.base_spot.data, self.feature_fusion.data = w, h
w1.to(self.device)
w2.to(self.device)
print("model initialized...")
def compute_loss(self,w1,w2,lap_w2,lap_w1):
# TODO
loss = torch.zeros(6,dtype=torch.float32)
# ST NMF
loss[0] = self.loss_0(w1=w1)
# spatial NMF
loss[1] = self.loss_1(w2=w2)
# penalty
#loss[2] = self.loss_2()
# lpp
loss[3] = self.loss_3(lap_w2=lap_w2, lap_w1=lap_w1)
# affinity graph
loss[4] = self.loss_4()
# contrastive loss
loss[5] = self.loss_5(w2)
return loss
def loss_0(self,w1):
return torch.norm(w1 - self.base_spot_g @ self.feature_fusion )
# self representation
"""def loss_0(self, w1):
return torch.norm(w1 - w1 @ (self.feature_fusion + self.sr_gene))"""
def loss_1(self,w2):
return self.alpha * torch.norm(w2 - self.base_spot @ self.feature_fusion)
def loss_2(self):
return self.beta*torch.norm(self.affinity_graph,p=1)
def loss_3(self, lap_w2, lap_w1):
return self.gamma * torch.trace(self.feature_fusion @ lap_w2 @ self.feature_fusion.T)
def loss_4(self):
return self.eta * torch.norm(self.feature_fusion - self.feature_fusion @ self.affinity_graph)
def loss_5(self,w2):
contrastive_loss = 0
for i in range(self.affinity_graph.shape[0]):
denominator = torch.sum(
torch.exp(self.affinity_graph[i,:])) - torch.exp(self.affinity_graph[i,i])
for j in torch.where(w2 != 0)[0]:
numerator = torch.exp(self.affinity_graph[i,j])
contrastive_loss += -torch.log(numerator / denominator)
return contrastive_loss
def loss_5_numpy(self, w2):
contrastive_loss = 0
for i in range(self.affinity_graph.shape[0]):
affinity = self.affinity_graph.to("cpu").detach().numpy()
denominator = (np.sum(
np.exp(affinity[i, :])) - np.exp(affinity[i, i]))
for j in torch.where(w2 != 0)[0]:
numerator = np.exp(affinity[i,j])
contrastive_loss += -np.log(numerator / denominator)
self.affinity_graph.to(self.device)
return torch.tensor(contrastive_loss.astype(np.float32))
def forward(self,w1,w2, lap_w2,lap_w1):
self.feature_fusion.data = torch.nn.functional.relu(self.feature_fusion.data)
self.base_spot_g.data = torch.nn.functional.relu(self.base_spot_g.data)
self.base_spot.data = torch.nn.functional.relu(self.base_spot.data)
self.affinity_graph.data = torch.nn.functional.relu(self.affinity_graph.data)
self.affinity_graph.data =(self.affinity_graph.data + self.affinity_graph.data.T)/2
return self.objective_function(w1,w2,lap_w2,lap_w1)
# test
def test(w1, w2, parameters):
w1_cos = init_graph(w1.T)
lap_w2 = get_laplace_matrix(w2).astype(np.float32)
lap_w1 = get_laplace_matrix(w1_cos).astype(np.float32)
model = MVFC(parameters=parameters)
model.affinity_graph.data = torch.from_numpy(w1_cos.astype(np.float32))
model = model.to(model.device)
w1 = torch.from_numpy(w1)
w2 = torch.from_numpy(w2)
lap_w2 = torch.from_numpy(lap_w2)
lap_w1 = torch.from_numpy(lap_w1)
w1 = w1.to(model.device)
w2 = w2.to(model.device)
lap_w2 = lap_w2.to(model.device)
lap_w1 = lap_w1.to(model.device)
model.initialize(w1, w2)
print("the model is built!")
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_history = np.zeros((model.epochs, 6))
for k in range(model.epochs):
optimizer.zero_grad()
loss = model.forward(w1,w2,lap_w2,lap_w1)
loss_history[k,:] = loss.detach().cpu().numpy()[:]  # .cpu() so this also works when a CUDA device is selected
loss = torch.sum(loss)
print(f"\rEpoch {k + 1}'s loss is:{loss}",end=" ")
#model.affinity_graph = nn.Parameter(torch.clamp(model.affinity_graph,min=0))
"""model.feature_fusion = nn.Parameter(torch.clamp(model.feature_fusion, min=0))
model.sr_gene = nn.Parameter(torch.clamp(model.sr_gene, min=0))
model.sr_spatial = nn.Parameter(torch.clamp(model.sr_spatial, min=0))"""
loss.backward()
optimizer.step()
print("optimization finished!")
# clustering
#partition = spectral_clustering(model.feature_fusion.detach().numpy(), 11)
return (model.affinity_graph.to("cpu").detach().numpy(),
model.feature_fusion.to("cpu").detach().numpy(),
loss_history,
model.base_spot_g.to("cpu").detach().numpy(),
model.base_spot.to("cpu").detach().numpy())
w1 = np.random.normal(loc=1,scale=0.1,size=(20,100))
w2 = np.random.normal(loc=1,scale=0.1,size=(100,100))
parameters = {
"device": "cuda:0" if torch.cuda.is_available() else "cpu",
"gene_number": w1.shape[0],
"feature_dimension": 10,
"alpha": 0.8,
"beta": 0.8,
"gamma": 0.8,
"eta": 0.8,
"spot_number": w1.shape[1],
"epochs": 10,
"n_cluster":10
}
import time
start = time.time()
test(w1, w2, parameters)
end = time.time()
print(end - start)
```
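To make the cost of `loss_5` concrete: per row `i` it forms a denominator `sum_k exp(A[i, k]) - exp(A[i, i])` and then, for every selected column `j`, adds `-log(exp(A[i, j]) / denominator)`. A minimal pure-Python sketch of that same quantity on a hypothetical 2×2 affinity matrix (the `cols` argument stands in for `torch.where(w2 != 0)[0]`):

```python
import math

# Toy affinity matrix (hypothetical values, standing in for self.affinity_graph)
A = [[0.0, 1.0],
     [1.0, 0.0]]

def contrastive_loss(A, cols):
    """Same per-pair quantity as loss_5: -log(exp(A[i][j]) / denom),
    where denom excludes the diagonal term exp(A[i][i])."""
    total = 0.0
    for i in range(len(A)):
        denom = sum(math.exp(a) for a in A[i]) - math.exp(A[i][i])
        for j in cols:
            total += -math.log(math.exp(A[i][j]) / denom)
    return total

loss = contrastive_loss(A, cols=[0, 1])  # -> 2.0 on this toy matrix
```

The row × column double loop is quadratic in the spot count, which is what dominates the runtime measured above.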
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i7-13700
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 1
BogoMIPS: 4223.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht sy
scall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4
_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibr
s_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 576 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 24 MiB (12 instances)
L3 cache: 30 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Vulnerable: No microcode
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] numpy-groupies==0.11.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @msaroufim @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | needs reproduction,module: performance,module: cpu,triaged | low | Critical |
2,757,703,709 | vscode | Cmd+\ does not register |
Type: <b>Bug</b>
1. Press <kbd>Cmd</kbd>+<kbd>\\</kbd>.
2. Observe that it does not trigger the default Split Editor action.
This still occurs with all extensions disabled and only seems to be an issue on macOS (the equivalent Windows shortcut works fine).
I can see that there is a previous issue that reported this same bug but was closed due to lack of information.
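In the meantime, a hedged workaround is to restate the default binding explicitly in keybindings.json (Command Palette → "Preferences: Open Keyboard Shortcuts (JSON)"); `workbench.action.splitEditor` is the built-in command this shortcut maps to by default:

```json
[
  {
    "key": "cmd+\\",
    "command": "workbench.action.splitEditor"
  }
]
```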
VS Code version: Code 1.96.2 (Universal) (fabdb6a30b49f79a7aba0f2ad9df9b399473380f, 2024-12-19T10:22:47.216Z)
OS version: Darwin arm64 24.2.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 Pro (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|3, 4, 9|
|Memory (System)|16.00GB (0.38GB free)|
|Process Argv|--crash-reporter-id fbfb57fd-178e-4b32-82cf-9b50cab2a96d|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (19)</summary>
Extension|Author (truncated)|Version
---|---|---
ruff|cha|2024.56.0
path-intellisense|chr|2.10.0
copilot|Git|1.254.0
copilot-chat|Git|0.23.2
vsc-python-indent|Kev|1.18.0
vscode-clangd|llv|0.1.33
debugpy|ms-|2024.14.0
python|ms-|2024.22.1
vscode-pylance|ms-|2024.12.1
datawrangler|ms-|1.15.1
jupyter|ms-|2024.11.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-speech|ms-|0.12.1
resourcemonitor|mut|1.0.7
material-icon-theme|PKi|5.16.0
cmantic|tde|0.9.0
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
(2 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
pythonvspyt551cf:31179979
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl1:31139838
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
```
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,757,766,318 | rust | Tracking issue for release notes of #67521: Tracking issue for const `alloc::Layout` |
This issue tracks the release notes text for #67521.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Const Stabilized APIs
- [`Layout::for_value`](https://doc.rust-lang.org/stable/std/alloc/struct.Layout.html#method.for_value)
- [`Layout::align_to`](https://doc.rust-lang.org/stable/std/alloc/struct.Layout.html#method.align_to)
- [`Layout::pad_to_align`](https://doc.rust-lang.org/stable/std/alloc/struct.Layout.html#method.pad_to_align)
- [`Layout::extend`](https://doc.rust-lang.org/stable/std/alloc/struct.Layout.html#method.extend)
- [`Layout::array`](https://doc.rust-lang.org/stable/std/alloc/struct.Layout.html#method.array)
````
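A short sketch of the APIs at work; the struct being laid out is hypothetical, and the calls are exercised at runtime here — what this stabilization adds is that the same calls are now also legal inside a `const fn`:

```rust
use std::alloc::Layout;

fn main() {
    // Hand-computed layout of a hypothetical struct { a: u32, b: u8 }.
    let a = Layout::new::<u32>();
    let (with_b, offset_of_b) = a.extend(Layout::new::<u8>()).unwrap();
    let padded = with_b.pad_to_align();
    assert_eq!(offset_of_b, 4); // b lands right after the u32
    assert_eq!(padded.size(), 8); // 5 bytes rounded up to 4-byte alignment
    assert_eq!(padded.align(), 4);
    println!("ok");
}
```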
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @lukaslueg -- origin issue/PR authors and assignees for starting to draft text
| A-allocators,T-libs-api,relnotes,A-const-eval,needs-triage,relnotes-tracking-issue | low | Minor |
2,757,769,770 | bitcoin | b-msghand invoked oom-killer: Master (v28.99) crashing during IBD | Running master (specifically 318b2a2f90dfd8ae20beca58e55e6d240a7f3d27 which is HEAD of #31415) on Ubuntu 24.10 on a digital ocean droplet with 8GB RAM. Crashing regularly during IBD. Logs always end like this:
```
2024-12-23T20:39:07Z [validation] Enqueuing BlockConnected: block hash=000000000000000000008364a7396ba08c5ab141d10136e3ad0d9fa37899d7b0 block height=814565
2024-12-23T20:39:07Z [validation] Enqueuing UpdatedBlockTip: new block hash=000000000000000000008364a7396ba08c5ab141d10136e3ad0d9fa37899d7b0 fork block hash=000000000000000000023c4f2b1f2457d2f46cd7b6cb27f87ab4788cd54adde1 (in IBD=true)
2024-12-23T20:39:07Z [validation] ActiveTipChange: new block hash=000000000000000000008364a7396ba08c5ab141d10136e3ad0d9fa37899d7b0 block height=814565
2024-12-23T20:39:07Z [bench] - Load block from disk: 27.96ms
2024-12-23T20:39:07Z [bench] - Sanity checks: 3.86ms [20.30s (5.18ms/blk)]
2024-12-23T20:39:07Z [bench] - Fork checks: 0.13ms [1.19s (0.30ms/blk)]
```
Configuration is just `-txindex=1`. I have also added `-dbcache=100` and 4 GB of swap, which did not fix the issue.
Logs only report `Killed`, but there are [OOM details in dmesg](https://gist.github.com/pinheadmz/00eaef782442bfc956fc8c82d5c8c47e)
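For anyone reproducing this, the same settings as a bitcoin.conf fragment (flag-to-config mapping only; the swap addition is an OS-level change and is not represented here):

```
txindex=1
dbcache=100
```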
Seems to be the same issue as https://github.com/bitcoin/bitcoin/issues/30001 | Linux/Unix,Resource usage | low | Critical |
2,757,773,273 | rust | Tracking issue for release notes of #132431: From iterator for more tuples |
This issue tracks the release notes text for #132431.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Stabilized APIs
- [FromIterator<(A, ...)> for (A, ...)](https://doc.rust-lang.org/stable/std/iter/trait.FromIterator.html#impl-FromIterator%3C(EA,)%3E-for-(A,)) (FromIterator for N-tuple for N in 1..=12)
````
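For a feel of what this enables — collecting an iterator of tuples straight into a tuple of collections in one pass (shown here for a pair; this PR extends the impl up to 12-tuples):

```rust
fn main() {
    // One pass over the iterator fills both collections.
    let (doubled, squared): (Vec<i32>, Vec<i32>) =
        (1..=4).map(|n| (n * 2, n * n)).collect();
    assert_eq!(doubled, vec![2, 4, 6, 8]);
    assert_eq!(squared, vec![1, 4, 9, 16]);
    println!("ok");
}
```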
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @shahn, @Amanieu -- origin issue/PR authors and assignees for starting to draft text
| T-libs-api,relnotes,relnotes-tracking-issue | low | Minor |
2,757,779,118 | PowerToys | quick accent not showing overlay to choose characters | ### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Quick Accent
### Steps to reproduce
I am pressing a letter, e.g. A, and, while holding down the A key, I press the space bar (my chosen activation key).
### ✔️ Expected Behavior
I continue to hold and expect an overlay to pop up so I can choose the accented character by pressing the space key repeatedly.
### ❌ Actual Behavior
Nothing Happens.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Minor |
2,757,816,180 | rust | Tracking issue for release notes of #95892: Tracking Issue for `sub_ptr` (feature `ptr_sub_ptr`) |
This issue tracks the release notes text for #95892.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Tracking Issue for `sub_ptr` (feature `ptr_sub_ptr`)](https://github.com/rust-lang/rust/issues/95892)
````
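For context, `sub_ptr` is the unsigned counterpart of `offset_from`: it returns the element distance between two pointers as a `usize`, with the invariant that the first pointer is not below the second. A sketch using the long-stable signed `offset_from` shows the computation involved:

```rust
fn main() {
    let arr = [10u8, 20, 30, 40];
    let first = arr.as_ptr();
    let fourth = arr[3..].as_ptr();
    // sub_ptr would return 3usize here; offset_from returns the signed 3isize.
    let distance = unsafe { fourth.offset_from(first) };
    assert_eq!(distance, 3);
    println!("ok");
}
```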
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @scottmcm -- origin issue/PR authors and assignees for starting to draft text
| T-lang,T-libs-api,relnotes,needs-triage,relnotes-tracking-issue | low | Minor |
2,757,817,249 | excalidraw | FR: Change no-arrowhead buttons to a greyed out arrowhead (in place of a regular line) | A small UX improvement request on my side: it is currently unclear that the two buttons control the arrowhead settings. I literally had to google it, and only after arriving at https://github.com/excalidraw/excalidraw/issues/616 did I realize, toggling back and forth between GitHub and the whiteboard, that the little straight line is indeed a setting for the other arrowhead.
One example of how this could be solved would be to display a greyed-out arrowhead in the other icon, in place of a black straight line. This would achieve two objectives:
1. Inform that clicking this button leads to something related to arrowheads
2. Indicate that it's currently inactive (grey)
Currently, the buttons focus on how the whole line looks (a straight line) rather than on the end points (pointy arrows). | UX/UI | low | Minor |
2,757,861,172 | material-ui | [Slider] Thumb is lagging behind | ### Steps to reproduce
Steps:
1. Drag the Slider component around fast enough
### Current behavior
The knob lags behind the mouse cursor. In slow motion it's clearly visible, but I think it's also noticeable at normal speed, especially in person.

### Expected behavior
No lag, or try reducing the lag.
### Context
_No response_
### Your environment
I tested it just here: https://mui.com/material-ui/react-slider/
So whatever the environment is there, I guess. It says MUI 6.3.0 at the top, so that's probably what it is.
I'm using Firefox.
**Search keywords**: slider lag/lagging | performance,component: slider,package: material-ui,ready to take | low | Major |
2,757,862,872 | vscode | Random pop-up command prompt appearing when opening VS Code | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- Search existing issues to avoid creating duplicates. -->
<!-- Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
Yes
<!-- If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.0
- OS Version: Windows 11 24H2
Steps to Reproduce:
1. Open VS Code
2. Create a C++ / Python file
3. A command prompt window randomly pops up
4. Exit VS Code
5. The command prompt stops appearing
| info-needed | low | Critical |
2,757,866,653 | react | Bug: uwu by @something in footer is not responsive from around 1500px to 1670px | On the React documentation website, after clicking uwu, the text "Logo by @sawaratsuki1004" wraps badly and does not look good.
Make it inline for this range of screen widths (1500px to 1670px); it works fine outside this range.

Fix:
```css
white-space: nowrap;
```
Add this CSS property to the tag. | Status: Unconfirmed | low | Critical |
2,757,886,384 | next.js | Returning `notFound` from `getStaticProps` with dynamic path and middleware shows error page instead of 404 page | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/dreamy-villani-mh2lrd?workspaceId=ws_PaH83X1dr2DdF1ajuypJkh
### To Reproduce
1. Start the application in development mode (next dev)
2. Navigate to the dynamic path, for instance: https://mh2lrd-3001.csb.app/products/3/hello
3. The following error is shown: `Error: Failed to load static props`
### Current vs. Expected behavior
**Expected:**
When navigating to a route with `getStaticProps` that returns `notFound: true`, I expect to see the custom 404 page.
**Actual:**
Instead, the default error page is shown.
If the middleware is removed, the bug does not reproduce - the 404 page is shown.
From the network calls, it appears that a fetch to 404.json with query params representing the dynamic path fails (with a 404 status code).
For instance - a call to: `_next/data/cYcaBxmaNOQIDHRyOqEZf/404.json?id=3&slug=hello` has a 404 response.
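For reference, a stripped-down sketch of the kind of page involved (the file name and lookup are hypothetical; only the `notFound: true` return path matters to the bug):

```javascript
// Hypothetical pages/products/[id]/[slug].js — minimal reproduction shape.
async function getStaticProps({ params }) {
  const product = null; // stand-in for a lookup that finds nothing for params.id
  if (!product) {
    // Expected to render the custom 404 page, not the generic error page.
    return { notFound: true };
  }
  return { props: { product } };
}

getStaticProps({ params: { id: "3", slug: "hello" } }).then((result) => {
  console.log(JSON.stringify(result)); // {"notFound":true}
});
```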
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.1.1-canary.18 // Latest available version is detected (15.1.1-canary.18).
eslint-config-next: N/A
react: 19.0.0-beta-04b058868c-20240508
react-dom: 19.0.0-beta-04b058868c-20240508
typescript: 5.1.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware, Pages Router
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Other (Deployed)
### Additional context
The bug does not reproduce in [email protected].
Any version from [email protected] onwards reproduces the bug.
It reproduces in dev, running locally, and in our production environment (self hosted). | Middleware,Pages Router,linear: next | low | Critical |
2,757,936,575 | yt-dlp | Instagram Archived Story with cookies from Firefox - Unable to extract user id | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Germany
### Provide a description that is worded well enough to be understood
Hello, I am trying to download an archived story. I can download reels, but somehow I get the error message "Unable to extract user id; please report this issue on..." when I run the same command on an archived story.
This is on Windows and I have updated and verified the version.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp.exe -vU "https://www.instagram.com/stories/archive/17912540664048453/?size=l" --cookies-from-browser "firefox"
[debug] Command-line config: ['-vU', 'https://www.instagram.com/stories/archive/17912540664048453/?size=l', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [65cf46cdd] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg N-116344-g5c8523cef1-20240721 (setts), ffprobe N-116344-g5c8523cef1-20240721
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\Stefan\AppData\Roaming\Mozilla\Firefox\Profiles\lmab0g57.default\cookies.sqlite"
Extracted 668 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[instagram:story] Extracting URL: https://www.instagram.com/stories/archive/17912540664048453/?size=l
[instagram:story] 17912540664048453: Downloading webpage
ERROR: [instagram:story] 17912540664048453: Unable to extract user id; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\instagram.py", line 714, in _real_extract
```
| account-needed,site-enhancement,triage | low | Critical |
2,758,102,162 | PowerToys | Unable to remove connection from Mouse Without Borders | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Unable to remove the old connection and add a new one.
### ✔️ Expected Behavior
Able to remove any old connection and add a new connection.
### ❌ Actual Behavior
Unable to remove the old connection and add a new one.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,757,936,575 | rust | [AVR] Large const lookup table is placed in data section | When compiling for AVR, a large const lookup table is placed in the data section, wasting precious RAM.
Code in question:
```rust
const CURRENT_CALIBRATION_TABLE: [u16; 800] = [ /* ... */ ];
pub const fn get_current_by_adc(adc: u16) -> Fixed {
let i = if (adc as usize) < CURRENT_CALIBRATION_TABLE.len() {
adc as usize
} else {
CURRENT_CALIBRATION_TABLE.len() - 1
};
Fixed::from_bits(CURRENT_CALIBRATION_TABLE[i] as u32)
}
```
Target spec:
```
{
"arch": "avr",
"atomic-cas": false,
"cpu": "atmega328p",
"crt-objects-fallback": "false",
"data-layout": "e-P1-p:16:8-i8:8-i16:8-i32:8-i64:8-f32:8-f64:8-n8-a:8",
"eh-frame-header": false,
"exe-suffix": ".elf",
"late-link-args": {
"gnu-cc": [
"-lgcc"
],
"gnu-lld-cc": [
"-lgcc"
]
},
"linker": "avr-gcc",
"linker-flavor": "gnu-cc",
"llvm-target": "avr-unknown-unknown",
"max-atomic-width": 8,
"metadata": {
"description": null,
"host_tools": null,
"std": null,
"tier": null
},
"no-default-libraries": false,
"pre-link-args": {
"gnu-cc": [
"-mmcu=atmega328p",
"-Wl,--as-needed,--print-memory-usage"
],
"gnu-lld-cc": [
"-mmcu=atmega328p",
"-Wl,--as-needed,--print-memory-usage"
]
},
"relocation-model": "static",
"target-c-int-width": "16",
"target-pointer-width": "16"
}
```
I expected to see this happen: Table is placed into Flash ROM, flash read instructions are used to access data.
Instead, this happened: Table is placed into RAM.
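A hedged workaround sketch until the codegen side is sorted: pin the table to a flash section explicitly. The section name below follows avr-gcc's progmem convention and is an assumption; on AVR, reads then need flash-aware accessors (e.g. the `avr-progmem` crate) rather than plain indexing, which would read the wrong address space. On a host target the attribute is inert, so the sketch still runs normally:

```rust
// Assumed progmem-style section name; on AVR this keeps the table out of .data.
#[link_section = ".progmem.data"]
static CALIBRATION: [u16; 4] = [100, 200, 300, 400];

fn main() {
    // Host targets read this like any static; AVR needs LPM-based accessors.
    let v = unsafe { core::ptr::read_volatile(&CALIBRATION[2]) };
    assert_eq!(v, 300);
    println!("ok");
}
```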
### Meta
`rustc --version --verbose`:
```
rustc 1.85.0-nightly (426d17342 2024-12-21)
binary: rustc
commit-hash: 426d1734238e3c5f52e935ba4f617f3a9f43b59d
commit-date: 2024-12-21
host: aarch64-apple-darwin
release: 1.85.0-nightly
LLVM version: 19.1.6
```
| T-compiler,C-feature-request,O-AVR | low | Major |
2,758,117,439 | pytorch | XPU ConvTranspose2d Causes DataLoader Memory Leak | ### 🐛 Describe the bug
I ran the following notebook on XPU (device_type = "xpu") and it failed with a "Too many open files" error. It seems the DataLoader does not close the files. Memory increases slowly from 2 GiB to 8 GiB within 3 epochs. Running on CPU (device_type = "cpu") is fine.
[Convolutional Autoencoder Notebook](https://github.com/ekaakurniawan/DLND/blob/a770-part3/assignments/P3-CNN/L5-autoencoder/Convolutional_Autoencoder_Exercise.ipynb)
I suspect the issue is caused by the ConvTranspose2d layer, because the following notebook, which does not use that layer, works fine on XPU.
[Simple Autoencoder Notebook](https://github.com/ekaakurniawan/DLND/blob/a770-part3/assignments/P3-CNN/L5-autoencoder/Simple_Autoencoder_Exercise.ipynb)
Please find the [setup steps](https://github.com/ekaakurniawan/DLND/tree/a770-part3?tab=readme-ov-file#intel-gpu) as well as the entire error message below.
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[8], line 11
      6 train_loss = 0.0
      8 ###################
      9 # train the model #
     10 ###################
---> 11 for data in train_loader:
     12     # _ stands in for labels, here
     13     # no need to flatten images
     14     images, _ = data
     15     images = images.to(device)

File ~/Workspace/pytorch_arc/pytorch_arc_env/lib/python3.12/site-packages/torch/utils/data/dataloader.py:708, in _BaseDataLoaderIter.__next__(self)
    705 if self._sampler_iter is None:
    706     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    707     self._reset()  # type: ignore[call-arg]
--> 708 data = self._next_data()
    709 self._num_yielded += 1
    710 if (
    711     self._dataset_kind == _DatasetKind.Iterable
    712     and self._IterableDataset_len_called is not None
    713     and self._num_yielded > self._IterableDataset_len_called
    714 ):

File ~/Workspace/pytorch_arc/pytorch_arc_env/lib/python3.12/site-packages/torch/utils/data/dataloader.py:1458, in _MultiProcessingDataLoaderIter._next_data(self)
   1455     return self._process_data(data)
   1457 assert not self._shutdown and self._tasks_outstanding > 0
-> 1458 idx, data = self._get_data()
   1459 self._tasks_outstanding -= 1
   1460 if self._dataset_kind == _DatasetKind.Iterable:
   1461     # Check for _IterableDatasetStopIteration

File ~/Workspace/pytorch_arc/pytorch_arc_env/lib/python3.12/site-packages/torch/utils/data/dataloader.py:1420, in _MultiProcessingDataLoaderIter._get_data(self)
   1416 # In this case, `self._data_queue` is a `queue.Queue`,. But we don't
   1417 # need to call `.task_done()` because we don't use `.join()`.
   1418 else:
   1419     while True:
-> 1420         success, data = self._try_get_data()
   1421         if success:
   1422             return data

File ~/Workspace/pytorch_arc/pytorch_arc_env/lib/python3.12/site-packages/torch/utils/data/dataloader.py:1282, in _MultiProcessingDataLoaderIter._try_get_data(self, timeout)
   1280 except OSError as e:
   1281     if e.errno == errno.EMFILE:
-> 1282         raise RuntimeError(
   1283             "Too many open files. Communication with the"
   1284             " workers is no longer possible. Please increase the"
   1285             " limit using `ulimit -n` in the shell or change the"
   1286             " sharing strategy by calling"
   1287             " `torch.multiprocessing.set_sharing_strategy('file_system')`"
   1288             " at the beginning of your code"
   1289         ) from None
   1290     raise

RuntimeError: Too many open files. Communication with the workers is no longer possible. Please increase the limit using `ulimit -n` in the shell or change the sharing strategy by calling `torch.multiprocessing.set_sharing_strategy('file_system')` at the beginning of your code
```
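Until the leak itself is fixed, the mitigations the error message suggests can be applied from inside the script. A hedged sketch (stdlib only; the `torch.multiprocessing.set_sharing_strategy('file_system')` alternative requires torch and is left as a comment):

```python
import resource

# Raise the soft open-file limit to the hard limit for this process only.
# (Unprivileged processes may raise soft up to hard, not beyond.)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

# Alternative from the error message (requires torch, so only shown here):
# import torch.multiprocessing
# torch.multiprocessing.set_sharing_strategy("file_system")

new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
print(new_soft == hard)
```

This only papers over the symptom — the descriptors leaked by the workers are still leaked, just against a higher ceiling.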
### Versions
```
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+xpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.39
Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) Ultra 9 285K
CPU family: 6
Model: 198
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 24
Stepping: 2
CPU(s) scaling MHz: 30%
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 7372.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni lam wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid bus_lock_detect movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 768 KiB (20 instances)
L1i cache: 1.3 MiB (20 instances)
L2 cache: 40 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] pytorch-triton-xpu==3.2.0
[pip3] torch==2.6.0+xpu
[pip3] torchaudio==2.6.0+xpu
[pip3] torchvision==0.21.0+xpu
[pip3] triton==3.2.0
[conda] Could not collect
```
cc @gujinghui @EikanWang @fengyuan14 @guangyey | triaged,module: xpu | low | Critical |
2,758,142,032 | ui | [bug]: Invalid CSS variable name in chart component | ### Describe the bug
The chart component appears to [set a CSS variable dynamically](https://github.com/shadcn-ui/ui/blob/1081536246b44b6664f4c99bc3f1b3614e632841/apps/www/registry/default/ui/chart.tsx#L91) as `--color-${key}`. However, the value of `key` is allowed to contain characters that are invalid as part of a CSS variable name. For example, the `.` character is not valid in CSS identifiers as per the [CSS specification](https://www.w3.org/TR/CSS22/syndata.html#value-def-identifier).
**Actual result**
<img width="676" alt="image" src="https://github.com/user-attachments/assets/e03c1583-54d0-40c1-813a-438ca66119f2" />
**Expected result**
<img width="676" alt="image" src="https://github.com/user-attachments/assets/0cb6f77e-1557-4561-8985-77cd5d9661c4" />
### Affected component/components
Chart
### How to reproduce
(see linked StackBlitz example, based off-of [this](https://github.com/shadcn-ui/ui/blob/1081536246b44b6664f4c99bc3f1b3614e632841/apps/www/registry/default/charts/chart-area-interactive.tsx))
1. Use the chart component with data/chart config which contains a key that would be invalid in a CSS variable name (e.g. `example.com`). As a workaround, you can use a legal CSS variable name as the key and customize the display name using the label (however, it was not immediately obvious where the issue lay).
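Until this is fixed upstream, one workaround sketch is to sanitize keys before they become variable names (hypothetical helper, not part of shadcn; CSS identifiers allow ASCII letters, digits, hyphens, and underscores without escaping):

```javascript
// Hypothetical helper: map an arbitrary data key to a valid CSS custom
// property suffix by replacing every disallowed character with a hyphen.
function sanitizeKey(key) {
  return key.replace(/[^a-zA-Z0-9_-]/g, "-");
}

console.log(sanitizeKey("example.com")); // -> "example-com"
console.log(`--color-${sanitizeKey("example.com")}`); // -> "--color-example-com"
```

The same sanitized key would have to be used everywhere the chart reads it (both the chart config keys and the data keys), otherwise the generated variable and its consumers go out of sync.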
### Codesandbox/StackBlitz link
https://stackblitz.com/edit/shadcn-chart-cssvar-issue
### Logs
_No response_
### System Info
```bash
Chrome, macOS
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,758,144,725 | flutter | Flicker in Flutter app on Android and sometimes on iOS | ### Steps to reproduce
1. I have a page that flickers when I scroll or move through it quickly
### Expected results
The scroll does not working correctly
### Actual results
I have flicker
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:austro_digital/app/login/_children/register_austrotoken_login/presenter/blocs/camera_access_bloc/camera_access_bloc.dart';
import 'package:austro_digital/app/login/_children/register_austrotoken_login/presenter/screens/screens.dart';
import 'package:austro_digital/apps/ley_proteccion/models/args_for_law_terms_model.dart';
import 'package:austro_digital/core/widgets/inputs/terms_checkbox.dart';
import 'package:austro_digital/global_var.dart';
import 'package:ba_app_widgets/ba_components.dart';
import 'package:ba_mobile_core/ba_mobile_core.dart';
import 'package:ba_mobile_design_system/ba_mobile_design_system.dart';
import 'package:ba_mobile_utils/image_utils.dart';
import 'package:firebase_analytics_tracker/firebase_analytics_tracker.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:shimmer/shimmer.dart';
import 'package:sizer/sizer.dart';
List<Map<String, String>> items = [
{
'imagePath': ImageUtils.focoImage,
'text': 'Tener buena iluminaciรณn.',
},
{
'imagePath': ImageUtils.eyeglasses,
'text': 'No uses lentes o accesorios.',
},
{
'imagePath': ImageUtils.focusFace,
'text': 'Enfoca tu cรฉdula y rostro completo. Evita que estรฉ borroso.',
},
];
class CameraAccessPage extends StatefulWidget {
static const String routeName = '/camera-access';
const CameraAccessPage({super.key});
@override
State<CameraAccessPage> createState() => _CameraAccessPageState();
}
class _CameraAccessPageState extends State<CameraAccessPage>
with WidgetsBindingObserver {
@override
void initState() {
super.initState();
GlobalVars().timeLastSession = DateTime.now();
}
@override
void didChangeAppLifecycleState(AppLifecycleState state) {
super.didChangeAppLifecycleState(state);
switch (state) {
case AppLifecycleState.resumed:
BlocProvider.of<CameraAccessBloc>(context).add(
RequestCameraAccessEvent(),
);
break;
case AppLifecycleState.paused:
break;
case AppLifecycleState.inactive:
break;
case AppLifecycleState.detached:
default:
break;
}
}
@override
Widget build(BuildContext context) {
return BlocProvider.value(
value: BlocProvider.of<CameraAccessBloc>(context),
child: Scaffold(
appBar: AppBar(
title: const Text('Comprobar identidad'),
centerTitle: false,
bottom: const PreferredSize(
preferredSize: Size.fromHeight(2.5),
child: LinearProgressIndicator(
color: BaColor.secondary,
backgroundColor: Colors.transparent,
value: 2 / 5,
),
),
),
backgroundColor: BaColor.skyMist,
body: BlocListener<CameraAccessBloc, CameraAccessState>(
listener: (context, state) async {
if (state is CameraAccessGrantedState) {
// await Navigator.of(context).pushNamed(
// InfoScreen.routeName,
// arguments: InfoScreenArgs(
// appBarText: 'Comprobar identidad',
// title: 'Validaciรณn lista',
// messages: ['Hemos comprobado tu identidad exitosamente'],
// buttonText: 'Continuar',
// onPressed: () {
// Navigator.of(context).pushNamedAndRemoveUntil(
// CreateAustroTokenPinScreen.routeName,
// (route) => false,
// );
// },
// imagePath: imgCheckGlobal.path,
// imageColor: const Color(0xFF319E48),
// ),
// );
if (RemoteConfigService.instance
.getBool(RemoteConfigEnum.skipLifeTest)) {
Navigator.of(context).pushNamed(
CreateAustroTokenPinScreen.routeName,
);
} else {
Navigator.of(context).pushNamed(
//CreateAustroTokenPinScreen.routeName,
//AustroTokenActivatedScreen.routeName,
ProofOfLifeScreen.routeName,
);
}
}
if (state is CameraAccessDeniedState) {
Navigator.pushNamed(
context,
InfoScreen.routeName,
arguments: InfoScreenArgs(
appBarText: 'Comprobar identidad',
title: 'Acceso pendiente',
messages: [
'Es necesario los permisos de la cรกmara del dispositivo para continuar con el proceso de creaciรณn de cuenta.',
],
buttonText: 'Permitir',
onPressed: () {
BlocProvider.of<CameraAccessBloc>(context).add(
RequestCameraAccessEvent(),
);
Navigator.of(context).pop();
},
imagePath: ImageUtils.notCamera,
),
);
}
if (state is CameraAccessPermanentlyDeniedState) {
Navigator.pushNamed(
context,
InfoScreen.routeName,
arguments: InfoScreenArgs(
appBarText: 'Comprobar identidad',
title: 'Acceso pendiente',
messages: [
'Es necesario los permisos de la cรกmara del dispositivo para continuar con el proceso de creaciรณn de cuenta.',
'Puedes dar este permiso desde la configuraciรณn del sistema.',
],
buttonText: 'Configuraciรณn',
onPressed: () {
context.read<CameraAccessBloc>().logEvent(
AnalyticsEvent.loginCameraAccessOnOpenConfiguration,
);
openAppSettings();
Navigator.of(context).pop();
},
imagePath: ImageUtils.notCamera,
),
);
}
},
child: const _Body(),
),
),
);
// return BlocProvider.value(
// value: BlocProvider.of<CameraAccessBloc>(context),
// child: BaScreenBase(
// hasGoBack: true,
// hasPadding: false,
// customBackgroundColor: BaColor.skyMist,
// appBarText: 'Comprobar identidad',
// child: BlocListener<CameraAccessBloc, CameraAccessState>(
// listener: (context, state) {
// },
// child: const _Body(),
// ),
// ),
// );
}
}
class _Body extends StatefulWidget {
const _Body();
@override
State<_Body> createState() => _BodyState();
}
class _BodyState extends State<_Body> {
bool _isChecked = true;
bool _showTermsAndConditionsCheck = true;
bool _isLoading = false;
@override
void didChangeDependencies() {
super.didChangeDependencies();
_checkTermsAndConditions();
}
Future<void> _checkTermsAndConditions() async {
if (context.read<CameraAccessBloc>().title ==
'Recupera tu pin con identificaciรณn de tu rostro') {
return;
}
bool accepted = false;
if (!mounted) return;
setState(() {
_isLoading = true;
});
try {
final CameraAccessBloc bloc = context.read<CameraAccessBloc>();
accepted = await bloc.acceptedTermsAndConditions;
} finally {
if (mounted) {
setState(() {
_isLoading = false;
_showTermsAndConditionsCheck = !accepted;
});
}
}
}
@override
Widget build(BuildContext context) {
return Column(
children: [
Expanded(
child: ListView(
children: [
const SizedBox(height: 34.0),
Image.asset(
ImageUtils.cameraRounded,
height: 190.0,
),
const SizedBox(height: 50.0),
H3(
text: context.read<CameraAccessBloc>().title,
color: BaColor.secondary,
textAlign: TextAlign.center,
),
const SizedBox(height: 20.0),
Padding(
padding: const EdgeInsets.symmetric(horizontal: 16.0),
child: Column(
mainAxisSize: MainAxisSize.min,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
const H6(
text: 'Recuerda:',
color: BaColor.secondary,
textAlign: TextAlign.start,
fontWeight: FontWeight.w700,
),
...items
.map(
(item) => CustomRowWidget(
imagePath: item['imagePath']!,
text: item['text']!,
),
)
.toList(),
],
),
),
GeneralVerticalSpace(
customSpace: 2.h,
),
GeneralVerticalSpace(
customSpace: 12.h,
),
],
),
),
if (_isLoading &&
context.read<CameraAccessBloc>().title ==
'Pediremos permiso para acceder a tu cรกmara')
Shimmer.fromColors(
baseColor: Colors.grey[300]!,
highlightColor: Colors.grey[100]!,
child: Padding(
padding: const EdgeInsets.symmetric(horizontal: 16.0),
child: TermsCheckBox(
checked: _isChecked,
text: Text(
'He leรญdo y acepto la Polรญtica de Privacidad y Protecciรณn de Datos Personales.',
style: TextStyle(
fontSize: 11.sp,
decoration: TextDecoration.underline,
fontWeight: FontWeight.w400,
),
),
onChange: (value) {},
),
),
),
if (_showTermsAndConditionsCheck &&
!_isLoading &&
context.read<CameraAccessBloc>().title ==
'Pediremos permiso para acceder a tu cรกmara')
Padding(
padding: const EdgeInsets.symmetric(horizontal: 16.0),
child: TermsCheckBox(
checked: _isChecked,
text: Text(
'He leรญdo y acepto la Polรญtica de Privacidad y Protecciรณn de Datos Personales.',
style: TextStyle(
fontSize: 11.sp,
decoration: TextDecoration.underline,
fontWeight: FontWeight.w400,
),
),
onChange: (value) {
setState(() {
_isChecked = value ?? false;
});
},
openTerms: () async {
final bool? acepto = await Navigator.pushNamed<bool?>(
context,
TermsPdfPage.routeName,
arguments: ArgsForLawTermsModel(
pdfUrl: url,
),
);
//final CameraAccessBloc bloc = context.read<CameraAccessBloc>();
if (acepto != null && acepto) {
setState(() {
_isChecked = true;
});
}
//await _checkTermsAndConditions();
if (!mounted) return;
//setState(() => _isChecked = acepto ?? false);
},
),
),
const SizedBox(height: 16.0),
Padding(
padding: const EdgeInsets.symmetric(horizontal: 16.0),
child: BaButton(
text: 'Continuar',
onPressed: (_isLoading || !_isChecked) &&
context.read<CameraAccessBloc>().title !=
'Recupera tu pin con identificaciรณn de tu rostro'
? null
: () async {
if (context.read<CameraAccessBloc>().title ==
'Recupera tu pin con identificaciรณn de tu rostro') {
BlocProvider.of<CameraAccessBloc>(context).add(
RequestCameraAccessEvent(),
);
return;
}
setState(() {
_isLoading = true;
});
final CameraAccessBloc bloc =
context.read<CameraAccessBloc>();
await bloc.acceptDataProtection();
setState(() {
_isLoading = false;
});
BlocProvider.of<CameraAccessBloc>(context).add(
RequestCameraAccessEvent(),
);
},
backgroundColor: BaColor.primary,
),
),
const SizedBox(height: 25.0),
],
);
}
}
class CustomRowWidget extends StatelessWidget {
final String imagePath;
final String text;
const CustomRowWidget({
Key? key,
required this.imagePath,
required this.text,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return ListTile(
leading: Image.asset(
imagePath,
),
title: M3(
text,
color: BaColor.black,
fontWeight: FontWeight.w400,
),
);
}
}
const url = 'https://www.bancodelaustro.com/politica-de-proteccion-de-datos';
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
Redmi Note 8 Pro (One time on Iphone 13, but anymore)
https://github.com/user-attachments/assets/ab7a1e36-d4ff-480f-95d7-8d00603c662e
</details>
### Logs
_No response_
### Flutter Doctor output
[โ] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale en-EC)
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[โ] Xcode - develop for iOS and macOS (Xcode 16.1)
[โ] Chrome - develop for the web
[โ] Android Studio (version 2024.1)
[โ] IntelliJ IDEA Community Edition (version 2023.3.6)
[โ] VS Code (version 1.92.2)
[โ] Connected device (5 available)
! Error: Browsing on the local area network for iPhone. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
! Error: Browsing on the local area network for iPhone de William. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[โ] Network resources
โข No issues found! | waiting for customer response,in triage | medium | Critical |
2,758,145,598 | PowerToys | Bug Report: Enabling PowerToys Mouse Pointer Crosshair Activates Windows 11 Do Not Disturb Mode | ### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Utilities
### Steps to reproduce
When the Crosshair feature in PowerToys is enabled, the "Do Not Disturb" mode in Windows 11 gets activated automatically, even when it was previously disabled by the user. This behavior occurs consistently and interrupts normal notification functionality.
1. Ensure "Do Not Disturb" mode is off
- Navigate to Settings > System > Notifications in Windows 11.
- Make sure the Do Not Disturb toggle is set to Off.
2. Open PowerToys and go to the Mouse Utilities section.
3. Enable the Mouse Pointer Crosshair feature.
4. Check the "Do Not Disturb" status again:
- Return to Settings > System > Notifications.
- Observe whether the Do Not Disturb toggle has been automatically turned On.
### โ๏ธ Expected Behavior
Enabling the Crosshair feature in PowerToys should not interfere with the notification settings in Windows 11, including the "Do Not Disturb" mode.
### โ Actual Behavior
Activating the Crosshair feature in PowerToys automatically turns on the "Do Not Disturb" mode in Windows 11, which silences notifications without user intent.
### Other Software
Windows 11 Pro 23H2 (22631.4602) | Issue-Bug,Needs-Triage | low | Critical |
2,758,161,830 | ui | [feat]: Make date picker calendars move individually | ### Feature description
The current way the datepicker moves is weird and quite confusing for users. Whenever one of the calendars is moved, the other follows and so there's always a one month gap between the two. It would be better if the calendars could each have their own set of carets to navigate to different months.
Below is a video that shows the behaviour I'm talking about
[Screencast from 2024-12-24 18-35-42.webm](https://github.com/user-attachments/assets/114276f3-e78c-471b-b1c2-19224063652c)
### Affected component/components
Date Picker
### Additional Context
Give each calendar in the date picker its own set of carets to navigate between months
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,758,164,727 | flutter | [in_app_purchase_storekit] StoreKit2 buyNonConsumable options missing promotionalOffer and winBackOffer | ### Steps to reproduce
1. InAppPurchaseStoreKitPlatform.enableStoreKit2();
2. There is no option to pass a `promotionalOffer` when purchasing
Reference: https://developer.apple.com/documentation/storekit/product/purchaseoption
<img width="611" alt="image" src="https://github.com/user-attachments/assets/ad476c9f-a3b6-47fc-9497-9b1a0eb62f3b" />
### Expected results
`buyNonConsumable` should accept a purchase `options` parameter
### Actual results
There is no way to use `promotionalOffer` with StoreKit 2
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.5, on macOS 14.6.1 23G93 darwin-arm64, locale en-IN)
โข Flutter version 3.24.5 on channel stable at /Users/arunrajkumar/Documents/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision dec2ee5c1f (6 weeks ago), 2024-11-13 11:13:06 -0800
โข Engine revision a18df97ca5
โข Dart version 3.5.4
โข DevTools version 2.37.3
```
</details>
| platform-ios,platform-mac,p: in_app_purchase,package,c: proposal,P2,team-ios,triaged-ios | low | Minor |
2,758,178,036 | ollama | version aware linux upgrade | ### What is the issue?
The ollama install command, `curl -fsSL https://ollama.com/install.sh | sh`, removes and reinstalls the existing installation even when there is no version update.
The script should not remove the current version's downloads if there is no version update.
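A sketch of the kind of guard the script could add (hypothetical; `needs_update` and the version strings are illustrative, not from the real install.sh):

```shell
# Hypothetical guard: only proceed with remove + reinstall when the
# installed version differs from the latest published one.
needs_update() {
  current="$1"
  latest="$2"
  [ "$current" != "$latest" ]
}

if needs_update "0.5.4" "0.5.4"; then
  echo "versions differ - reinstalling"
else
  echo "already up to date - skipping reinstall"
fi
```

With equal versions this prints "already up to date - skipping reinstall", and the existing download would be left alone; the real script would substitute the output of `ollama --version` and the latest release tag.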
### OS
Linux
### GPU
Intel
### CPU
Intel
### Ollama version
0.5.4 | feature request,linux,install | low | Minor |
2,758,186,223 | terminal | Add ฮฑ to Highlight-Color | ### Description of the new feature
I have an MS-DOS terminal lookalike profile and am trying to make selection highlights invisible, so it most accurately mirrors the MS-DOS 3.2 terminal on a CRT monitor. I tried setting the highlight color to:
```JSON
"#00000000"
```
and it says that RGBA codes are not supported for highlight colors. If anyone has a workaround, please comment it.
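One possible workaround sketch until alpha is supported: instead of a transparent highlight, set the highlight color equal to the color scheme's background so selections become visually indistinguishable (untested; assumes the scheme's `selectionBackground` property and a black background):

```json
"selectionBackground": "#000000"
```

This only fakes invisibility — selected text still loses its foreground contrast against the matching background.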
### Proposed technical implementation details
_No response_ | Issue-Feature,Needs-Triage,Needs-Tag-Fix | low | Minor |
2,758,186,776 | flutter | [pigeon] Enhanced enums fields are coalesced into the next class | ### What package does this bug report belong to?
pigeon
### What target platforms are you seeing this bug on?
Windows
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
pigeon:
dependency: "direct dev"
description:
name: pigeon
sha256: "6c318e3ef13f52db30ab85ef45b4746ba33b837aeecc549eb8578f78a3dde19a"
url: "https://pub.dev"
source: hosted
version: "22.7.0"
```
</details>
### Steps to reproduce
1. Run `dart pub add --dev pigeon`
2. Ensure that you get Pigeon version `22.7.0`
3. Save the attached `Code Sample` file to `pigeon/temp.dart`
4. Run `dart run pigeon --input pigeon/temp.dart`
5. Observe analysis errors
### Expected results
Since the Pigeon file has an enhanced enum with fields, the Dart output should as well. In other words:
```dart
enum MyEnum {
a(1),
b(2);
const MyEnum(this.myEnumValue);
final int myEnumValue;
}
class MyMessage1 {
MyMessage1({
this.field1,
});
int? field1;
// ...
}
class MyMessage2 {
MyMessage2({
this.field2,
});
int? field2;
// ...
}
```
### Actual results
Pigeon merges the enhanced enum's field into the preceding class. This is doubly bad if two enums have a field with the same name, as that causes duplicate-field errors in the class.
```dart
enum MyEnum {
a,
b,
// missing myEnumValue
}
class MyMessage1 {
MyMessage1({
this.field1,
required this.myEnumValue,
});
int? field1;
int myEnumValue; // <-- This shouldn't be here!
// ...
}
class MyMessage2 {
MyMessage2({
this.field2,
});
int? field2;
// ...
}
```
### Code sample
<details open><summary>Code sample</summary>
```dart
// pigeon/temp.dart
@ConfigurePigeon(PigeonOptions(
dartOut: 'pigeon/temp.g.dart',
dartOptions: DartOptions(),
))
library;
import 'package:pigeon/pigeon.dart';
class MyMessage1 {
int? field1;
}
enum MyEnum {
a(1),
b(2);
final int myEnumValue;
const MyEnum(this.myEnumValue);
}
class MyMessage2 {
int? field2;
}
```
</details>
### Screenshots or Videos (None)
<details>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs (None)
<details><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.27.0, on Microsoft Windows [Version 10.0.26100.2605], locale en-US)
โข Flutter version 3.27.0 on channel stable at C:\Users\Levi\flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 8495dee1fd (2 weeks ago), 2024-12-10 14:23:39 -0800
โข Engine revision 83bacfc525
โข Dart version 3.6.0
โข DevTools version 2.40.2
[โ] Windows Version (Installed version of Windows is version 10 or higher)
[โ] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
โข Android SDK at C:\Users\Levi\AppData\Local\Android\sdk
โข Platform android-35, build-tools 35.0.0
โข Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
โข Java version OpenJDK Runtime Environment (build 17.0.10+0--11572160)
โข All Android licenses accepted.
[โ] Chrome - develop for the web
โข Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[โ] Visual Studio - develop Windows apps (Visual Studio Build Tools 2022 17.7.1)
โข Visual Studio at C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools
โข Visual Studio Build Tools 2022 version 17.7.34009.444
โข Windows 10 SDK version 10.0.22621.0
[โ] Android Studio (version 2024.2)
โข Android Studio at C:\Program Files\Android\Android Studio
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.10+0--11572160)
[โ] VS Code, 64-bit edition (version 1.95.3)
โข VS Code at C:\Program Files\Microsoft VS Code
โข Flutter extension version 3.102.0
[โ] Connected device (3 available)
โข Windows (desktop) โข windows โข windows-x64 โข Microsoft Windows [Version 10.0.26100.2605]
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome 131.0.6778.140
โข Edge (web) โข edge โข web-javascript โข Microsoft Edge 131.0.2903.112
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| package,team-ecosystem,has reproducible steps,p: pigeon,P2,triaged-ecosystem,found in release: 3.27,found in release: 3.28 | low | Critical |
2,758,193,537 | create-react-app | Unable start project: Cannot find module '@svgr/webpack' | I try create a new project using `npx create-react-app` [but is broken](https://github.com/facebook/create-react-app/issues/13721), i try reutilize a old project created 1 month ago and i delete the `node_modules` folder and `package-lock.json`, and execute `npm install`, its works fine but when try run the project says an error:
```bash
$ npm start
> [email protected] start
> react-scripts start
Cannot find module '@svgr/webpack'
Require stack:
- /home/.../webapp/node_modules/react-scripts/config/webpack.config.js
- /home/.../webapp/node_modules/react-scripts/scripts/start.js
```
I tried installing the package manually using `npm install @svgr/webpack`, but it does not work. The `package.json` file content is:
```json
{
"name": "web-app",
"version": "0.1.0-beta",
"description": "WEB App",
"homepage": "./",
"dependencies": {
"@svgr/webpack": "^8.1.0",
"@testing-library/jest-dom": "^5.17.0",
"@testing-library/react": "^13.4.0",
"@testing-library/user-event": "^13.5.0",
"bootstrap": "^5.3.3",
"bootstrap-icons": "^1.11.3",
"react": "^18.3.1",
"react-dom": "^18.3.1",
"react-markdown": "^9.0.1",
"react-scripts": "5.0.1",
"url": "^0.11.3",
"web-vitals": "^2.1.4"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": [
"react-app",
"react-app/jest"
]
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}
```
I can neither create a new project nor run an existing one.
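A quick sanity check I would sketch (hypothetical helper) to confirm whether an install actually produced the module on disk, before digging into react-scripts internals:

```shell
# Hypothetical check: does a given package actually exist under node_modules?
has_module() {
  [ -d "node_modules/$1" ]
}

if has_module "@svgr/webpack"; then
  echo "@svgr/webpack is present in node_modules"
else
  echo "@svgr/webpack is missing - try: rm -rf node_modules package-lock.json && npm install"
fi
```

If the directory is present but `react-scripts` still can't resolve it, the problem is more likely module resolution (e.g. hoisting or a mismatched Node version) than the install itself.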
My software version:
```bash
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble
$ node --version
v22.12.0
$ npm --version
10.9.0
```
I tried Node 18 and 22, but neither works. | needs triage,issue: bug report | low | Critical |
2,758,193,952 | rust | `unreachable_patterns` emitted on `_` that matches a local of uninhabited type (that is uninitialized) |
I tried this code:
```rust
enum Void {}
fn main() {
let void: Void;
match void {
_ => (),
}
}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2024&gist=31ce10867db7efd14950febee77b5cd1))
I expected to see this happen: the code compiles without warnings and runs successfully, doing nothing.
Instead, this happened: a warning is emitted (the code compiles and runs as expected):
```
warning: unreachable pattern
--> src/main.rs:6:9
|
6 | _ => (),
| ^-------
| |
| matches no values because `Void` is uninhabited
| help: remove the match arm
|
= note: to learn more about uninhabited types, see https://doc.rust-lang.org/nomicon/exotic-sizes.html#empty-types
= note: `#[warn(unreachable_patterns)]` on by default
```
### Meta
rustc version: `1.85.0-nightly (2024-12-23 bdc6b3de48646345549f)`
| A-lints,T-compiler,C-bug,L-unreachable_patterns | low | Critical |
2,758,196,740 | tauri | [bug] "decorations": false inconsistent on macos and windows | ### Describe the bug
When setting `"decorations": false` in the tauri.conf.json file, macOS and Windows exhibit inconsistent behaviour. On Windows, only the title bar is removed. On macOS, both the title bar and the window's rounded corners are removed.
I've managed to work around this by using tauri.[platform].conf.json. I have also worked around it by setting the window to transparent and then using a white background on `body` with a border radius, but I'm not sure that matches the native platform's style.
macos:
<img width="846" alt="macos" src="https://github.com/user-attachments/assets/126cf3eb-bd44-46ef-89bf-2d58cb0cb287" />
windows:

### Reproduction
tauri.conf.json
```json
"windows": [
{
"title": "main",
"width": 800,
"height": 600,
"decorations": false
}
],
```
### Expected behavior
For consistent behaviour to be exhibited on both platforms.
### Full `tauri info` output
```text
[โ] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
โ WebView2: 131.0.2903.112
โ MSVC: Visual Studio Community 2022
โ rustc: 1.83.0 (90b35a623 2024-11-26)
โ cargo: 1.83.0 (5ffbef321 2024-10-29)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-pc-windows-msvc (environment override by RUSTUP_TOOLCHAIN)
- bun: 1.1.20
[-] Packages
- tauri ๐ฆ: 2.1.1
- tauri-build ๐ฆ: 2.0.3
- wry ๐ฆ: 0.47.2
- tao ๐ฆ: 0.30.8
- tauri-cli ๐ฆ: 2.1.0
[-] Plugins
- tauri-plugin-shell ๐ฆ: 2.0.2
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ./ui/dist
- devUrl: http://localhost:1420/
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,758,199,182 | rust | Missing unreachable code/arm warning, when a match guard is diverging |
I tried this code:
```rust
fn main() {
match () {
_ if loop {} => (),
_ => (),
}
println!("nya :3");
}
```
([playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2024&gist=26c036b8ccf18d04209b39f2f623dafb))
I expected to see this happen: the compiler emits a warning that the first arm and the code after the `match` are unreachable (since `loop {}` never terminates and has type `!`).
Instead, this happened: the code compiles without warnings.
### Meta
rustc version: ` 1.84.0-beta.4 (2024-12-07 202008a1b8de96d2e5b6)`
| A-lints,T-compiler,C-bug,L-unreachable_code | low | Critical |
2,758,210,713 | flutter | [pigeon] Support const constructors | ### Use case
Classes with final fields and a const constructor in the Pigeon file are generated as non-final and non-const in the generated code, even though adding those back in manually doesn't result in any errors. If Pigeon kept those, these classes could better replace user-facing APIs.
### Proposal
For a Pigeon file like the following:
```dart
class User {
const User(this.name);
final String name;
}
```
Instead of generating
```dart
class User {
User({
required this.name,
});
String name;
Object encode() {
return <Object?>[
name,
];
}
static User decode(Object result) {
result as List<Object?>;
return User(
name: result[0]! as String,
);
}
}
```
Pigeon should generate
```dart
class User {
const User({
required this.name,
});
final String name;
// ...
}
``` | package,team-ecosystem,p: pigeon,P3,triaged-ecosystem | low | Critical |
2,758,211,292 | vscode | Consider finding a new XML syntax highlighting grammar |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96.2
- OS Version: Windows 11 Pro (10.0.22631)
Steps to Reproduce:
1. In any xml file with an xml comment (<!-- some stuff -->) add a "--" in the comment (<!-- some -- stuff -->)
2. The entire rest of the xml file is now the xml comment color
FWIW - I was documenting some command line parameters in a project.xml file.
So this:
--app-file SomeFile
broke the rest of the file.
but anyone putting "--help", or other such command line commands would have the same problem.

| feature-request,grammar | low | Critical |
2,758,214,202 | svelte | `$ref()` rune for using directives on components | ### Describe the problem
A commonly requested feature (#12229, #2870, #5236) is to have parity between native elements and svelte components for directives such as `class:`, `style:`, `use:` or `transition:`. This currently errors:
```svelte
<Component style:color="red" />
<!-- This type of directive is not valid on components -->
```
Why is this bad? It breaks intuition for new developers who expect components to behave the same as elements. It makes wrapper components inherently more limited than native elements.
Several points of implementation difficulty have been brought up in the past:
1. How do you deal with scoped CSS classes?
2. Which element inside the component do you apply the directives to?
3. What if multiple elements in a component need to receive the directive?
### Describe the proposed solution
The `$ref` rune would provide a way for components to determine which element receives the directives.
```svelte
<script>
const props = $props();
let ref = $ref();
</script>
<button bind:this={ref} {...props}></button>
```
Any component that does not explicitly use `$ref()` still throws an error when a directive is used on it.
I believe this answers the main issues on this topic.
1. How do you deal with scoped CSS classes?
Right now, if you pass a `class` prop to a component, it uses the scoped css hash from that component ([playground](https://svelte.dev/playground/hello-world?version=5.16.0#H4sIAAAAAAAAA41QwWrDMAz9FSEGbSEktMfUDYxedux93iFNFBJwLGOr7UrIv4943Uq3duwmPb0n6b0BbdkT5vhCxjCc2Jsa5lR3QvUCE2w6QwHz1wHl7CbeBGDypXp2Lg1HMjJh-zLQPbxiK2QlYI4qVL5zUmirpesde4Et944tWYHGcw-zNPtGLitma21VdlVa1S6hMmUIG40Ns8ZiV3qyorJ2WWirritvWJB9qoOcDcUX0oYZhqnSUrFhn4Onej0BY7wZmZig0LtgLv5AY_Igip9f3wZyZ_ogloptEHCeXYANPMVivvgdwepiboiMNDZjsW07U6usXf3D6t4c6NixIfnb8dv4ARVBRTgmAgAA)). `class:` would behave the same way, i.e. `<Comp class:foo />` would use the `.foo` defined in `Comp.svelte`.
2. Which element inside the component do you apply the directives to?
This proposal leaves the choice to the component creator. They could specify which of the root elements receives the directives, or a non-root element, or a child component like this:
```svelte
<!-- Parent.svelte -->
<Child use:action />
<!-- Child.svelte -->
<script>
const ref = $ref();
</script>
<Grandchild bind:this={ref} />
<!-- Grandchild.svelte -->
<script>
const ref = $ref();
</script>
<div bind:this={ref} />
<!-- this element receives the action -->
```
3. What if multiple elements in a component need to receive the directive?
In this proposal, it would be impossible. However, here are some alternatives:
### Alternatives
1. Use the rune only inside the property:
```svelte
<div bind:this={$ref()}></div>
```
This doesn't allow using `bind:this` for a local reference, though.
2. Don't use bind:this
```svelte
<script>
let canvas;
const ref = $ref();
</script>
<canvas bind:this={canvas} {ref}></canvas>
```
Could also be `svelte:ref={ref}` or something.
3. Allow multiple `$ref()`s per component
```svelte
<script>
let ref1 = $ref();
let ref2 = $ref();
</script>
<div bind:this={ref1}></div>
<div bind:this={ref2}></div>
```
In this case, the directives would be duplicated and both elements would receive them.
I'm interested in knowing if there's some other barrier that makes this impossible, but I haven't found one while researching this issue!
### Importance
would make my life easier | feature request,runes | low | Critical |
2,758,215,035 | flutter | [pigeon] Respect original constructors | ### Use case
Currently, Pigeon generates constructors with named parameters for all fields, even if a different constructor is provided. For example:
```dart
class AndroidColor {
AndroidColor.fromARGB(this.a, this.r, this.g, this.b);
final int a;
final int r;
final int g;
final int b;
}
```
generates
```dart
/// A color on Android.
class AndroidColor {
AndroidColor({
required this.a,
required this.r,
required this.g,
required this.b,
});
int a;
int r;
int g;
int b;
// ...
}
```
Even though it would not have been an error to include the original `.fromARGB` constructor, Pigeon opted not to do so. This makes porting existing code to Pigeon difficult, and if the class is part of the public API, it's impossible to feed it through Pigeon without forcing a breaking change.
### Proposal
When generating constructors, Pigeon should first check if the unnamed constructor already exists, and if so, copy it as-is instead of overwriting it. When other constructors exist, Pigeon should copy those, even if it decides to generate its own unnamed constructor. | package,team-ecosystem,p: pigeon,P3,triaged-ecosystem | low | Critical |
2,758,225,991 | ollama | Requests begin to all fail after several independent prompts | ### What is the issue?
I've been having an issue with Ollama where the output is either gibberish or just a series of @@@@@ characters. I don't recall it being this way some weeks ago, but I've found a solution. (The gibberish seems to happen mostly when stream: true, and @@@ mostly with stream: false. w/e.)
I started having this issue with Llama3.1 lorablated some weeks ago, but I'm having the issue with qwen2.5-coder:32b as well, now that I'm trying to use it.
An example of the issue, after several successful requests, one suddenly fails, and then everyone after starts to fail.


**The solution I discovered has been to just set the keep_alive to 0.** I suspect there's some sort of context caching going on and I'm hitting some memory limit. My Titan RTX 24GB just squeaks by with these models. Windows 11. 96GB RAM. Most recent Ollama.
My requests are pretty short; just a couple of sentences in most cases.
Works:
```
const response = await fetch(`${endpoint}/api/generate`, {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
model: model,
prompt: prompt,
stream: false,
keep_alive: 0 // <===== note the keep_alive: 0
})
});
```
Fails after a few prompts:
```
const response = await fetch(`${endpoint}/api/generate`, {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
model: model,
prompt: prompt,
stream: false
})
});
```
If this is a caching issue, it'd be pretty nice to have the option of whether to cache the conversation or not, and if I am caching it, perhaps a way to refer to that specific conversation. More fine-grained control via the API that way.

[server log - Not working](https://github.com/user-attachments/files/18241593/not_working_default.log)
[server log - working](https://github.com/user-attachments/files/18241545/working_timeout_0.log)


This might not be a bug, but just me being stupid. Still, keep_alive = 0 works, however crude and slow it is.
Merry Christmas, all. ๐
### OS
Windows
### GPU
Nvidia
### CPU
Intel, AMD
### Ollama version
0.5.4 | bug | low | Critical |
2,758,230,733 | pytorch | Inductor with dynamic shapes fails for randint with >INT_MAX maximum value | The generated annotation for max value (`ks1`) is `i32`
```
@triton_heuristics.pointwise(
size_hints={'x': 1048576},
filename=__file__,
triton_meta={'signature': {'in_ptr0': '*i64', 'out_ptr0': '*i64', 'load_seed_offset': 'i32', 'ks1': 'i32', 'xnumel': 'i32'}, 'device': DeviceProperties(type='cuda', index=0, multi_processor_count=132, cc=90, major=9, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor.from_dict({'arg_properties': {'tt.divisibility': (0, 1), 'tt.equal_to': ()}, 'cls': 'AttrsDescriptor'})]},
inductor_meta={'autotune_hints': set(), 'kernel_name': 'triton_poi_fused_randint_0', 'mutated_arg_names': [], 'optimize_mem': True, 'no_x_dim': False, 'num_load': 0, 'num_reduction': 0, 'backend_hash': '0DEDF01B8E4DD92A8B59F7523F798A141186FCC78AC75613AB9342C0CD404D81', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': False, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False, 'compile_id': '0/0', 'is_forward': True},
min_elem_per_thread=0
)
@triton.jit
def triton_poi_fused_randint_0(in_ptr0, out_ptr0, load_seed_offset, ks1, xnumel, XBLOCK : tl.constexpr):
xoffset = tl.program_id(0) * XBLOCK
```
and at runtime, if max value is > INT_MAX, there's a failure.
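The wrap-around itself can be demonstrated without Triton; here `ctypes.c_int32` stands in for the 32-bit slot that the `i32` annotation implies (a minimal sketch, not the actual codegen path):

```python
import ctypes

INT32_MAX = 2**31 - 1

# A randint upper bound just past INT_MAX, e.g. torch.randint(0, high, ...)
high = INT32_MAX + 1

# Passing it through a 32-bit signed slot -- what the 'i32' annotation
# for ks1 implies -- silently wraps to a negative value:
wrapped = ctypes.c_int32(high).value
assert wrapped != high
print(wrapped)  # -2147483648
```

So any maximum above `2**31 - 1` can no longer round-trip through the kernel argument, matching the runtime failure described above.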
To repro:
with #143787
```
python test/inductor/test_torchinductor_codegen_dynamic_shapes.py -v -k test_randint_distribution
```
#143787 doesn't make any inductor changes, it just adds a test to make sure inductor produces correct distribution.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | triaged,oncall: pt2,module: inductor | low | Critical |
2,758,319,223 | youtube-dl | overwrites my cookie file... and also tells me that I need to sign in? |
## Checklist
- [x] I'm reporting a broken site support issue
- [x] I've verified that I'm running youtube-dl version **2021.12.17**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [ ] I've searched the bugtracker for similar bug reports including closed ones
- [ ] I've read bugs section in FAQ
## Verbose log
```
ERROR: [youtube] a07qXFXjEDI: Sign in to confirm youโre not a bot. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies. Also see https://github.com/yt-dlp/yt-dlp/wiki/Extractors#exporting-youtube-cookies for tips on effectively exporting YouTube cookies
```
## Description
I know this may look like broken site support, but it also overwrites my cookie file, which seems like a bug. I start with a cookie file I take from my browser that has quite a lot in it, but after I run it, the cookie file is completely different and very small.
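Until the overwrite is understood, one way to protect the exported jar is to hand the downloader a disposable copy. This helper is my own sketch (the name `run_with_cookie_snapshot` is not part of any API):

```python
import os
import shutil
import tempfile

def run_with_cookie_snapshot(cookie_path, fn):
    """Copy the cookie jar to a temp file and pass the copy to `fn`,
    so the original export is never rewritten by the downloader."""
    fd, tmp_path = tempfile.mkstemp(suffix=".txt")
    os.close(fd)
    shutil.copyfile(cookie_path, tmp_path)
    try:
        # e.g. fn builds options with 'cookiefile': tmp_path and runs the download
        return fn(tmp_path)
    finally:
        os.remove(tmp_path)
```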
Here is my code
```
url = f"https://www.youtube.com/watch?v={video_id}"
options = {
'quiet': True,
'cookiefile': "cookies.txt"
}
try:
with YoutubeDL(options) as ydl:
info_dict = ydl.extract_info(url, download=False)
print("info_dict", info_dict)
except Exception as e:
print("Error", e)
return jsonify({"status": "error", "message": str(e)}), 500
``` | documentation,question | low | Critical |
2,758,349,233 | yt-dlp | [RFE] Support visir.is | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Germany
### Example URLs
- Single video: https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord
### Provide a description that is worded well enough to be understood
Just what it says on the tin. Visir.is is apparently not currently supported, and the generic fallback does not work. Would appreciate it if support was added. Thanks!
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp [65cf46cdd] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.26100-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg N-107213-gfed07efcde-20220622 (setts), ffprobe N-107213-gfed07efcde-20220622
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: [email protected] from yt-dlp/yt-dlp
yt-dlp is up to date ([email protected] from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord
[generic] briet-x-birnir-1000-ord: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] briet-x-birnir-1000-ord: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1634, in wrapper
File "yt_dlp\YoutubeDL.py", line 1769, in __extract_info
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\generic.py", line 2553, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord
```
| site-request | low | Critical |
2,758,415,510 | flutter | Slight delay in loading assets when quickly scrolling over large image grid | ### Steps to reproduce
1. clone the [repo](https://github.com/BeADre/flutter_reproduce.git).
2. flutter pub get && run.
### Expected results
Everything is fine.
### Actual results
Blank appears.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
home: Scaffold(
backgroundColor: Colors.white,
body: CupertinoScrollbar(
child: GridView.builder(
gridDelegate: const SliverGridDelegateWithFixedCrossAxisCount(
crossAxisCount: 3,
crossAxisSpacing: 2,
mainAxisSpacing: 2,
),
itemCount: 200,
itemBuilder: (context, index) {
return Image.asset(
'assets/${index + 2}.jpg',
fit: BoxFit.cover,
);
},
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/32430df9-faf4-48c1-ba38-3f2b0e8a0915
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.4, on macOS 15.1.1 24B91 darwin-arm64, locale zh-Hans-CN)
[โ] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
[โ] Xcode - develop for iOS and macOS (Xcode 16.1)
[โ] Chrome - develop for the web
[โ] Android Studio (version 2024.2)
[โ] VS Code (version 1.95.3)
[!] Proxy Configuration
! NO_PROXY is not set
[โ] Connected device (7 available)
! Error: Browsing on the local area network for Johnny iPhone. Ensure the device is unlocked and attached with a cable or associated with
the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
! Error: Browsing on the local area network for Hynnโs iPhone. Ensure the device is unlocked and attached with a cable or associated with
the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
! Error: Browsing on the local area network for BrankoTian. Ensure the device is unlocked and attached with a cable or associated with
the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[โ] Network resources
```
</details>
| framework,f: scrolling,a: assets,a: images,perf: speed,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.27,found in release: 3.28 | low | Critical |
2,758,425,805 | tauri | [bug] "copyright" parameter makes nsis build fail | ### Describe the bug
In a WSL environment on Windows, create a new application with `pnpm create tauri-app`. Running `pnpm tauri build --runner cargo-xwin --target x86_64-pc-windows-msvc` successfully builds tauri-app_0.1.0_x64-setup.exe. However, if `"copyright": "Copyright © 2020-present, xxx"` is added to tauri.conf.json, the build fails.
### Reproduction
Create a new Tauri 2 project on Linux, add `"copyright": "Copyright © 2020-present, xxx"` to tauri.conf.json, and build with `pnpm tauri build --runner cargo-xwin --target x86_64-pc-windows-msvc`:
```
{
"$schema": "https://schema.tauri.app/config/2",
"productName": "tauri-app",
"version": "0.1.0",
"identifier": "com.tauri-app.app",
"build": {
"beforeDevCommand": "pnpm dev",
"devUrl": "http://localhost:1420",
"beforeBuildCommand": "pnpm build",
"frontendDist": "../dist"
},
"app": {
"windows": [
{
"title": "tauri-app",
"width": 800,
"height": 600
}
],
"security": {
"csp": null
}
},
"bundle": {
"active": true,
"targets": "all",
"icon": [
"icons/32x32.png",
"icons/128x128.png",
"icons/[email protected]",
"icons/icon.icns",
"icons/icon.ico"
],
"copyright": "Copyright ยฉ 2020-present, xxx"
}
}
```
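One variable worth isolating (this is an assumption on my part, not a confirmed cause) is the non-ASCII `©` in the value; trying the same config with an ASCII-only string would tell whether the character encoding is what trips the cross-compiled resource build:

```json
"bundle": {
  "copyright": "Copyright (c) 2020-present, xxx"
}
```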
### Expected behavior
The build should succeed, as it does without the `copyright` field. Instead, it fails with `Error failed to build app: failed to build app`.
### Full `tauri info` output
```text
[โ] Environment
- OS: Ubuntu 24.4.0 x86_64 (X64)
โ webkit2gtk-4.1: 2.46.4
โ rsvg2: 2.58.0
โ rustc: 1.81.0 (eeb90cda1 2024-09-04)
โ cargo: 1.81.0 (2dbb1af80 2024-08-20)
โ rustup: 1.27.1 (54dd3d00f 2024-04-24)
โ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 22.12.0
- pnpm: 10.0.0-beta.2
- npm: 10.9.0
[-] Packages
- tauri ๐ฆ: 2.1.1
- tauri-build ๐ฆ: 2.0.3
- wry ๐ฆ: 0.47.2
- tao ๐ฆ: 0.30.8
- @tauri-apps/api ๎: 2.1.1
- @tauri-apps/cli ๎: 2.1.0
[-] Plugins
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
```text
โ built in 562ms
Compiling tauri-app v0.1.0 (/root/tauri-app/src-tauri)
error: failed to run custom build command for `tauri-app v0.1.0 (/root/tauri-app/src-tauri)`
Caused by:
process didn't exit successfully: `/root/tauri-app/src-tauri/target/release/build/tauri-app-d330390941525bce/build-script-build` (exit status: 101)
--- stdout
cargo:rerun-if-env-changed=TAURI_CONFIG
cargo:rerun-if-changed=tauri.conf.json
cargo:rustc-check-cfg=cfg(desktop)
cargo:rustc-cfg=desktop
cargo:rustc-check-cfg=cfg(mobile)
cargo:rustc-env=TAURI_ANDROID_PACKAGE_NAME_APP_NAME=app
cargo:rustc-env=TAURI_ANDROID_PACKAGE_NAME_PREFIX=com_tauri_1app
cargo:rustc-check-cfg=cfg(dev)
cargo:PERMISSION_FILES_PATH=/root/tauri-app/src-tauri/target/x86_64-pc-windows-msvc/release/build/tauri-app-3dccacfca14bddf9/out/app-manifest/__app__-permission-files
cargo:rerun-if-changed=capabilities
cargo:rustc-env=TAURI_ENV_TARGET_TRIPLE=x86_64-pc-windows-msvc
package.metadata does not exist
OPT_LEVEL = Some(3)
OUT_DIR = Some(/root/tauri-app/src-tauri/target/x86_64-pc-windows-msvc/release/build/tauri-app-3dccacfca14bddf9/out)
TARGET = Some(x86_64-pc-windows-msvc)
cargo:rerun-if-env-changed=VCINSTALLDIR
VCINSTALLDIR = None
HOST = Some(x86_64-unknown-linux-gnu)
cargo:rerun-if-env-changed=CC_x86_64-pc-windows-msvc
CC_x86_64-pc-windows-msvc = None
cargo:rerun-if-env-changed=CC_x86_64_pc_windows_msvc
CC_x86_64_pc_windows_msvc = Some(clang-cl)
cargo:rerun-if-env-changed=CC_KNOWN_WRAPPER_CUSTOM
CC_KNOWN_WRAPPER_CUSTOM = None
RUSTC_WRAPPER = None
cargo:rerun-if-env-changed=CC_ENABLE_DEBUG_OUTPUT
cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
CRATE_CC_NO_DEFAULTS = None
CARGO_CFG_TARGET_FEATURE = Some(cmpxchg16b,fxsr,sse,sse2,sse3)
DEBUG = Some(false)
cargo:rerun-if-env-changed=CFLAGS_x86_64-pc-windows-msvc
CFLAGS_x86_64-pc-windows-msvc = None
cargo:rerun-if-env-changed=CFLAGS_x86_64_pc_windows_msvc
CFLAGS_x86_64_pc_windows_msvc = Some(--target=x86_64-pc-windows-msvc -Wno-unused-command-line-argument -fuse-ld=lld-link /imsvc/root/.cache/cargo-xwin/xwin/crt/include /imsvc/root/.cache/cargo-xwin/xwin/sdk/include/ucrt /imsvc/root/.cache/cargo-xwin/xwin/sdk/include/um /imsvc/root/.cache/cargo-xwin/xwin/sdk/include/shared )
cargo:rerun-if-env-changed=CC_SHELL_ESCAPED_FLAGS
CC_SHELL_ESCAPED_FLAGS = None
CARGO_ENCODED_RUSTFLAGS = Some(-Clinker-flavor=lld-link-Lnative=/root/.cache/cargo-xwin/xwin/crt/lib/x86_64-Lnative=/root/.cache/cargo-xwin/xwin/sdk/lib/um/x86_64-Lnative=/root/.cache/cargo-xwin/xwin/sdk/lib/ucrt/x86_64)
failed to build app: failed to build app
Error failed to build app: failed to build app
```
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,758,442,708 | langchain | I encountered an issue when using LangChain Chroma | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``
### Error Message and Stack Trace (if applicable)
I searched online but couldn't find an answer, so I'm leaving a report here.
I use Streamlit to build a simple user program. The first run completes without any issues. When I run it again (with all parameters unchanged), the program gets stuck in the following code and exits without any error message.
The place where it gets stuck; on the second run, execution does not proceed past this point:


Terminal output from the second run:

### Description
I use Streamlit to build a simple user program. The first run completes without any issues. When I run it again (with all parameters unchanged), the program gets stuck and exits without any error message.
### System Info

| โฑญ: vector store | low | Critical |
2,758,472,702 | react | Bug: source maps are missing from react npm packages |
React version: 19.0.0, applicable to lower versions
## Steps To Reproduce
Install react npm package
## The current behavior
Source maps are missing.
## The expected behavior
Source maps should be generated and present | Status: Unconfirmed | low | Critical |
2,758,483,215 | vscode | config `http.proxy` in workspace settings |
The `http.proxy` setting can only be configured in user settings. Iโm wondering if itโs possible to set it in workspace settings instead.
When using a dev container, the `http.proxy` setting is passed to the container. However, the container cannot use the proxy on the local machine unless additional configuration is done.
If `http.proxy` could be configured directly in `.vscode/settings.json` or `.devcontainer/devcontainer.json`, it would make things much easier for users. Currently, we have to either modify the user settings each time or set up a more complex container network configuration.
| bug,proxy | low | Major |