| column | dtype | stats |
| --- | --- | --- |
| id | int64 | 393k – 2.82B |
| repo | stringclasses | 68 values |
| title | stringlengths | 1 – 936 |
| body | stringlengths | 0 – 256k |
| labels | stringlengths | 2 – 508 |
| priority | stringclasses | 3 values |
| severity | stringclasses | 3 values |
2,769,683,856
flutter
Missing `icudtl.dat`
### Steps to reproduce 1. Install Flutter on an arm64 Linux device (specifically using `apk` on postmarketOS) 2. `flutter run` (for my example I used https://github.com/henry-hiles/brook) ### Actual results An error occurs (see logs) ### Logs <details open> <summary>Logs</summary> ```console Launching lib/main.dart on Linux in debug mode... Building Linux application... ninja: entering directory 'build/linux/arm64/debug' CMake Error at cmake_install.cmake:88 (file): file INSTALL cannot find "/home/quadradical/Documents/Code/brook/linux/flutter/ephemeral/icudtl.dat": No such file or directory. ninja: job failed: cd /home/quadradical/Documents/Code/brook/build/linux/arm64/debug && /usr/bin/cmake -P cmake_install.cmake ninja: subcommand failed Error: Build process failed ``` </details> ### Flutter Doctor output <details open> <summary>Doctor output</summary> ```console Doctor summary (to see all details, run flutter doctor -v): [✓] Flutter (Channel stable, 3.27.0, on postmarketOS edge 6.13.0-rc2-sdm845, locale C) [✗] Android toolchain - develop for Android devices ✗ Unable to locate Android SDK. Install Android Studio from: https://developer.android.com/studio/index.html On first launch it will assist you in installing the Android SDK components. (or visit https://flutter.dev/to/linux-android-setup for detailed instructions). If the Android SDK has been installed to a custom location, please use `flutter config --android-sdk` to update to that location. [✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome) ! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable. [✓] Linux toolchain - develop for Linux desktop [!] Android Studio (not installed) [✓] Connected device (1 available) [✓] Network resources ! Doctor found issues in 3 categories. ``` </details>
waiting for customer response,in triage
low
Critical
2,769,692,150
godot
Cannot select first character of a wrapped line in RichTextLabel
### Tested versions - Reproducible in v4.3.stable.official.77dcf97d8 ### System information Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 Threads) ### Issue description https://github.com/user-attachments/assets/fcc75f09-d32a-46f6-bc3b-059766613cdc The first character of text on a wrapped line of a RichTextLabel cannot be selected unless text from the preceding line is selected. Starting in the middle of a wrapped line and selecting backwards, the selection stops just before the first character. Trying to select from the beginning of a wrapped line begins the selection from the second character. Unwrapped lines function as expected; the issue only occurs on wrapped lines. ### Steps to reproduce 1. Create a RichTextLabel with text selection enabled. 2. Enter enough text to cause lines to wrap. 3. Try to highlight the first character of a wrapped line. ### Minimal reproduction project (MRP) [rtldemo.zip](https://github.com/user-attachments/files/18312961/rtldemo.zip)
bug,topic:gui
low
Minor
2,769,696,414
godot
Viewport rotation gizmo can no longer have its current axis clicked to reverse it
- *Related to https://github.com/godotengine/godot-proposals/discussions/11497.* ### Tested versions - Reproducible in: 4.3.stable - Not reproducible in: 3.6.stable ### System information Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (NVIDIA; 32.0.15.6636) - 13th Gen Intel(R) Core(TM) i9-13900K (32 Threads) ### Issue description In 4.3, nothing happens when clicking the axis that is currently viewed frontally: https://github.com/user-attachments/assets/b5228504-b6d3-44d2-88fd-f5f6f38efe20 In 3.6, clicking it reverses it (e.g. you'll see X- instead of X+). As a workaround, you can click the axis next to it twice, but this is less convenient and doesn't match Blender behavior. ### Steps to reproduce - Click on the rotation gizmo to view an axis frontally. - Click it again, and notice that nothing happens (it should get reversed instead, i.e. you view the same axis from behind). ### Minimal reproduction project (MRP) N/A
bug,topic:editor,usability,regression,topic:3d
low
Minor
2,769,696,535
vscode
Export the VS Code glob implementation as a library
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> It's very useful to filter with include and exclude options when doing search/replace in VS Code. ![Image](https://github.com/user-attachments/assets/c7496956-bae4-45e8-88b1-4f31d0a687aa) I want to develop an extension, and I'd like it to apply the same kind of filtering to files, like below. ```typescript ... function filter(links: vscode.Location[], include: string, exclude: string): vscode.Location[] { return links.filter(item => { let includeMatch = true; if (include && include !== '') { includeMatch = glob.match(include, item.uri.toString()); } let excludeMatch = true; if (exclude && exclude !== '') { excludeMatch = !glob.match(exclude, item.uri.toString()); } return includeMatch && excludeMatch; }); } .... ``` I found the [vscode glob pattern document](https://code.visualstudio.com/docs/editor/glob-patterns); it seems that VS Code implements its own glob pattern matching, but it isn't exported in its [api](https://github.com/microsoft/vscode/blob/main/src/vscode-dts/vscode.d.ts) (or I couldn't find it). So, could VS Code export its glob API so that extensions can behave the same as VS Code itself?
feature-request
low
Minor
2,769,734,044
deno
deno check. RangeError: Maximum call stack size exceeded
Version: Deno 2.1.4 I'm trying to type-check a file with `deno check --config=web.jsonc ./src/index.tsx`. After a short delay the following error is thrown: ``` error: Uncaught RangeError: Maximum call stack size exceeded at resolveNameHelper (ext:deno_tsc/00_typescript.js:1:1) at resolveEntityName (ext:deno_tsc/00_typescript.js:1:1) at trackExistingEntityName (ext:deno_tsc/00_typescript.js:1:1) at tryVisitTypeQuery (ext:deno_tsc/00_typescript.js:1:1) at visitExistingNodeTreeSymbolsWorker (ext:deno_tsc/00_typescript.js:1:1) at visitExistingNodeTreeSymbols (ext:deno_tsc/00_typescript.js:1:1) at visitNode (ext:deno_tsc/00_typescript.js:1:1) at tryReuseExistingTypeNodeHelper (ext:deno_tsc/00_typescript.js:1:1) at tryReuseExistingNonParameterTypeNode (ext:deno_tsc/00_typescript.js:1:1) at createAnonymousTypeNode (ext:deno_tsc/00_typescript.js:1:1) ``` I'm working on a minimal reproducible example, but that may take a while. In the meantime, I suspect there's more information to be gathered than the above stack trace - which might be enough to find the issue - and I'm happy to provide it. Funnily enough, I can `deno check` all the immediate dependencies of `./src/index.tsx` and it works as expected each time. According to `deno info ./src/index.tsx`, there are 179 unique dependencies totalling 611.24KB, and all of the files are local. I'm using `https://deno.land/x/[email protected]/mod.js` to transpile to JavaScript, and the generated code runs fine in the browser.
needs info,tsc
low
Critical
2,769,747,901
three.js
SVGLoader - Transforms don't apply to styles of a path
### Description When a path is scaled down with a transform, the SVGLoader doesn't also scale the styles, such as stroke-width. ### Reproduction steps ![svg](https://github.com/user-attachments/assets/ce524dcd-a16e-451c-be96-d33f11894d5c) Take this SVG and load it into the webgl_loader_svg example; it should appear like ![Screenshot 2025-01-06 at 4 25 49 PM](https://github.com/user-attachments/assets/b7185bf3-1d61-4cf7-b0e7-996296b2f1be) But it actually appears like ![Screenshot 2025-01-06 at 4 25 39 PM](https://github.com/user-attachments/assets/6f1aa660-a226-42a1-b267-35c2661e1fcc) As you can see, the circle on the left is too thick. ### Code Save the first SVG from the repro steps into `models/svg/svg.svg` and add `'svg': 'models/svg/svg.svg'` to the `webgl_loader_svg` example file. ### Live example N/A ### Screenshots _No response_ ### Version 0.152.2 ### Device _No response_ ### Browser Chrome ### OS MacOS
Loaders
low
Minor
2,769,759,041
pytorch
grad_fn function disobeys broadcast rules
### grad_fn function disobeys broadcast rules In the following code, `z.grad_fn` is `MulBackward0`. It should be the inverse of multiplication. However, the shapes of `x` and `x_` differ. ``` import torch x = torch.randn(2, 1, requires_grad=True) y = torch.randn(2, 3, requires_grad=True) z = x * y x_, y_ = z.grad_fn(z) print(x_.shape, y_.shape) # Output: torch.Size([2, 3]) torch.Size([2, 3]) ``` ### Versions Collecting environment information... PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Rocky Linux release 8.10 (Green Obsidian) (x86_64) GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22) Clang version: Could not collect CMake version: version 3.26.5 Libc version: glibc-2.28 Python version: 3.11.10 (main, Nov 5 2024, 07:57:54) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)] (64-bit runtime) Python platform: Linux-4.18.0-553.27.1.el8_10.x86_64-x86_64-with-glibc2.28 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 128 On-line CPU(s) list: 0-127 Thread(s) per core: 1 Core(s) per socket: 64 Socket(s): 2 NUMA node(s): 8 Vendor ID: AuthenticAMD CPU family: 25 Model: 1 Model name: AMD EPYC 7763 64-Core Processor Stepping: 1 CPU MHz: 3243.397 BogoMIPS: 4890.99 Virtualization: AMD-V L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 32768K NUMA node0 CPU(s): 0-15 NUMA node1 CPU(s): 16-31 NUMA node2 CPU(s): 32-47 NUMA node3 CPU(s): 48-63 NUMA node4 CPU(s): 64-79 NUMA node5 CPU(s): 80-95 NUMA node6 CPU(s): 96-111 NUMA node7 CPU(s): 112-127 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc 
rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm Versions of relevant libraries: [pip3] numpy==2.1.3 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] onnx==1.17.0 [pip3] torch==2.5.1 [pip3] torchvision==0.20.1 [pip3] triton==3.1.0 [conda] No relevant packages cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan
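For comparison, a minimal sketch of the user-facing behavior (this only illustrates what the public autograd API does, not the internals of `MulBackward0`): when gradients flow through `backward()`, the broadcast-shaped gradient is reduced back to each leaf's original shape, which is what one might expect the raw `z.grad_fn` call to do as well.

```python
import torch

x = torch.randn(2, 1, requires_grad=True)
y = torch.randn(2, 3, requires_grad=True)
z = x * y

# Going through the public API instead of calling z.grad_fn directly:
z.backward(torch.ones_like(z))

# The broadcast (2, 3) gradient is sum-reduced back to x's (2, 1) shape.
print(x.grad.shape, y.grad.shape)  # torch.Size([2, 1]) torch.Size([2, 3])

# The same reduction can be reproduced manually with Tensor.sum_to_size:
manual = (torch.ones(2, 3) * y).sum_to_size(x.shape)
assert torch.allclose(x.grad, manual)
```

So the raw `grad_fn` output reported in the issue appears to be the pre-reduction gradient.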
module: autograd,triaged
low
Critical
2,769,784,360
angular
Unclear Refer to locales by ID and extract-i18n error message
### Describe the problem that you experienced On the page https://angular.dev/guide/i18n/locale-id there is a table with French | France | fr-FR. From the sentence "The [Angular repository](https://github.com/angular/angular/tree/main/packages/common/locales) includes common locales." it is not obvious where to look for the common locales. But according to .../node_modules/@angular/common/locales/global/*.js, "fr-FR", "de-DE", "en-US", ... are not common locales. The warning (during the build) "Locale data for 'fr-FR' cannot be found. Using locale data for 'fr'." may either be correct, in which case the documentation should use French | France | fr, or it is misleading - it is a (red) warning - in which case the documentation should mention this (and it is more likely an issue for the toolkit), or I'm missing something. On the page https://angular.dev/guide/i18n/locale-id, "By default, Angular uses en-US as the source locale of your project." differs from https://github.com/angular/angular/blob/main/packages/common/locales/generate-locales-tool/README.md: "A default locale used within @angular/core. This is the locale_en.ts file." So locale_fr-FR.ts, locale_en-US.ts, ... do not exist, because of the later-mentioned optimization. IMHO: the warning during the build is misleading and the documentation should explain this. ### Enter the URL of the topic with the problem https://angular.dev/guide/i18n/locale-id
area: i18n
low
Critical
2,769,798,593
pytorch
Issues with the onnx package for a PyTorch build on Windows 11
### 🐛 Describe the bug Pytorch 2.5.1 build with clang fails in Windows 11 with unknown type name errors for various onnx sub modules. ``` C:/Users/mcw/Documents/Jenet/pytorch/third_party/onnx/onnx/checker.h:148:27: error: unknown type name 'SequenceProto' 148 | void check_sequence(const SequenceProto& sequence, const CheckerContext&); | ^ C:/Users/mcw/Documents/Jenet/pytorch/third_party/onnx/onnx/checker.h:149:22: error: unknown type name 'MapProto' 149 | void check_map(const MapProto& map, const CheckerContext&); | ^ C:/Users/mcw/Documents/Jenet/pytorch/third_party/onnx/onnx/checker.h:150:27: error: unknown type name 'OptionalProto' 150 | void check_optional(const OptionalProto& opt, const CheckerContext&); | ^ ``` Versions: Pytorch : 2.5.1 Clang: 19.1.3 ONNX module: 1.16.2 ### Versions ``` Collecting environment information... PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Home (10.0.22631 64-bit) GCC version: (x86_64-win32-seh-rev0, Built by MinGW-Builds project) 14.2.0 Clang version: 19.1.3 (https://github.com/llvm/llvm-project.git ab51eccf88f5321e7c60591c5546b254b6afab99) CMake version: version 3.31.0-rc2 Libc version: N/A Python version: 3.11.7 | packaged by Anaconda, Inc. 
| (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22631-SP0 Is CUDA available: N/A CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A CPU: Name: AMD Ryzen 5 9600X 6-Core Processor Manufacturer: AuthenticAMD Family: 107 Architecture: 9 ProcessorType: 3 DeviceID: CPU0 CurrentClockSpeed: 3900 MaxClockSpeed: 3900 L2CacheSize: 6144 L2CacheSpeed: None Revision: 17408 Versions of relevant libraries: [pip3] flake8==6.0.0 [pip3] mypy==1.8.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.24.4 [pip3] numpydoc==1.5.0 [pip3] onnx==1.16.0 [pip3] optree==0.13.1 [pip3] torch==2.1.2 [conda] _anaconda_depends 2024.02 py311_mkl_1 [conda] blas 1.0 mkl [conda] mkl 2023.1.0 h6b88ed4_46358 [conda] mkl-include 2025.0.1 pypi_0 pypi [conda] mkl-service 2.4.0 py311h2bbff1b_1 [conda] mkl-static 2025.0.1 pypi_0 pypi [conda] mkl_fft 1.3.8 py311h2bbff1b_0 [conda] mkl_random 1.2.4 py311h59b6b97_0 [conda] numpy 1.24.4 pypi_0 pypi [conda] numpydoc 1.5.0 py311haa95532_0 [conda] optree 0.13.1 pypi_0 pypi [conda] torch 2.1.2 pypi_0 pypi ``` cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex
module: onnx,module: build,module: windows,triaged
low
Critical
2,769,856,759
transformers
Trainer: Use second last checkpoint if last checkpoint loading fails
### Feature request Currently, the checkpoint is saved in the [`_save_checkpoint()`](https://github.com/huggingface/transformers/blob/b2f2977533445c4f62bf58e10b1360e6856e78ce/src/transformers/trainer.py#L3197) method, which saves the model, the optimizer (optionally), and finally the Trainer state. The [`resume_from_checkpoint()`](https://github.com/huggingface/transformers/blob/b2f2977533445c4f62bf58e10b1360e6856e78ce/src/transformers/trainer.py#L2070) function gets the checkpoint directory from the `get_last_checkpoint` function. Then, the model etc. are loaded using the `self._load_from_checkpoint()` function and the trainer state is loaded using the `TrainerState.load_from_json` call. If the training program ended abruptly in the middle of checkpointing, the directory is created but some of the files are missing. For example, if the trainer state was not yet written, this throws an error during the `TrainerState.load_from_json` call and training is not able to resume at all. Proposal: If either loading the model or the trainer state fails (in `resume_from_checkpoint()`) because the last directory is incomplete, then use the second last checkpoint folder for resuming. ### Motivation Motivation: Currently, our job can be killed in the middle of checkpointing and is not able to resume since the last checkpoint is incomplete. We need to manually delete the folder to resume from the previous checkpoint. Some info (not sure if relevant): I use accelerate launch to launch the training. ### Your contribution I am willing to submit a PR for this if this feature seems acceptable. ~~We would need to wrap the 2 loads in a try/catch block and, if either of them fails, re-run using the second last directory.~~ It looks like there are a lot of places where `load_from_checkpoint` is done, with exceptions for FSDP etc., which would complicate things. Maybe we can have a list of expected files and check for their existence, else load the second last directory.
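The proposed fallback could be sketched as follows. This is a minimal sketch, not the Trainer's actual API: `load_fn` stands in for whatever combination of `_load_from_checkpoint` and `TrainerState.load_from_json` ends up being wrapped, and only the `checkpoint-<step>` directory-naming convention is taken from the Trainer.

```python
import os
import re

def sorted_checkpoints(output_dir):
    """Return checkpoint-<step> directories in output_dir, newest first."""
    pattern = re.compile(r"^checkpoint-(\d+)$")
    found = []
    for name in os.listdir(output_dir):
        m = pattern.match(name)
        if m and os.path.isdir(os.path.join(output_dir, name)):
            found.append((int(m.group(1)), name))
    return [os.path.join(output_dir, name) for _, name in sorted(found, reverse=True)]

def resume_with_fallback(output_dir, load_fn):
    """Try checkpoints from newest to oldest until one loads cleanly."""
    last_error = None
    for path in sorted_checkpoints(output_dir):
        try:
            return load_fn(path)  # hypothetical wrapper around the two load calls
        except Exception as err:  # incomplete checkpoint: fall back to an older one
            last_error = err
    raise RuntimeError("No usable checkpoint found") from last_error
```

A stricter variant, as suggested at the end of the issue, would check a list of expected files (e.g. `trainer_state.json`) before attempting the load at all.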
Feature request
low
Critical
2,769,857,331
ui
[bug]: Sidebar accessibility issue in mobile view; `<SheetTitle>` should be added at line 222
### Describe the bug intercept-console-error.ts:40 DialogContent requires a DialogTitle for the component to be accessible for screen reader users. If you want to hide the DialogTitle, you can wrap it with our VisuallyHidden component.} ### Affected component/components Sidebar ### How to reproduce 1. In sidebar.tsx file add the below line <SheetTitle className="sr-only">Mobile Sidebar</SheetTitle> ### Codesandbox/StackBlitz link _No response_ ### Logs ```bash hook.js:600 DialogContent requires a DialogTitle for the component to be accessible for screen reader users. If you want to hide the DialogTitle, you can wrap it with our VisuallyHidden component. For more information, see https://radix-ui.com/primitives/docs/components/dialog Error Component Stack at _c1 (sheet.tsx:57:6) at Sidebar (sidebar.tsx:178:13) at AppSidebar (AppSidebar.tsx:28:28) at div (<anonymous>) at div (<anonymous>) at SidebarProvider (sidebar.tsx:60:13) at div (<anonymous>) at DashboardLayout [Server] (<anonymous>) at ThemeProvider (theme-provider.tsx:7:5) at QueryClientContextProvider (query-client-provider.tsx:13:5) at ClerkProvider [Server] (<anonymous>) at body (<anonymous>) at html (<anonymous>) at RootLayout [Server] (<anonymous>) hook.js:608 Warning: Missing `Description` or `aria-describedby={undefined}` for {DialogContent}. 
Error Component Stack at DescriptionWarning (Dialog.tsx:531:66) at Dialog.tsx:383:13 at Dialog.tsx:258:58 at Presence (Presence.tsx:12:11) at DialogContent (Dialog.tsx:233:64) at SlotClone (Slot.tsx:61:11) at Slot (Slot.tsx:13:11) at Primitive.div (Primitive.tsx:38:13) at Portal (Portal.tsx:22:22) at Presence (Presence.tsx:12:11) at Provider (createContext.tsx:59:15) at DialogPortal (Dialog.tsx:145:11) at _c1 (sheet.tsx:57:6) at Provider (createContext.tsx:59:15) at Dialog (Dialog.tsx:52:5) at Sidebar (sidebar.tsx:178:13) at AppSidebar (AppSidebar.tsx:28:28) at div (<anonymous>) at div (<anonymous>) at Provider (createContext.tsx:59:15) at TooltipProvider (Tooltip.tsx:68:5) at SidebarProvider (sidebar.tsx:60:13) at div (<anonymous>) at Provider (createContext.tsx:59:15) at TooltipProvider (Tooltip.tsx:68:5) at DashboardLayout [Server] (<anonymous>) at InnerLayoutRouter (layout-router.tsx:319:3) at RedirectErrorBoundary (redirect-boundary.tsx:43:5) at RedirectBoundary (redirect-boundary.tsx:74:36) at HTTPAccessFallbackErrorBoundary (error-boundary.tsx:49:5) at HTTPAccessFallbackBoundary (error-boundary.tsx:154:3) at LoadingBoundary (layout-router.tsx:454:3) at ErrorBoundary (error-boundary.tsx:183:3) at InnerScrollAndFocusHandler (layout-router.tsx:177:1) at ScrollAndFocusHandler (layout-router.tsx:294:3) at RenderFromTemplateContext (render-from-template-context.tsx:7:30) at OuterLayoutRouter (layout-router.tsx:507:3) at _ (index.mjs:1:843) at J (index.mjs:1:724) at ThemeProvider (theme-provider.tsx:7:5) at QueryClientProvider (QueryClientProvider.tsx:30:3) at QueryClientContextProvider (query-client-provider.tsx:13:5) at SWRConfig (config-context-client-x_C9_NWC.mjs:533:13) at OrganizationProvider (contexts.tsx:42:3) at ClerkContextProvider (ClerkContextProvider.tsx:20:11) at ClerkProviderBase (ClerkProvider.tsx:11:11) at Hoc (useMaxAllowedInstancesGuard.tsx:28:5) at ClerkNextOptionsProvider (NextOptionsContext.tsx:18:11) at NextClientClerkProvider 
(ClerkProvider.tsx:50:11) at ClientClerkProvider (ClerkProvider.tsx:127:11) at ClerkProvider [Server] (<anonymous>) at body (<anonymous>) at html (<anonymous>) at RootLayout [Server] (<anonymous>) ``` ### System Info ```bash I didn't find any relevant information. ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,769,887,844
ant-design
Color picker presets support gradients
Currently, in `5.22.7`, the color picker's presets seem to support only single colors. Could preset support for gradient colors be considered? As shown: <img width="750" alt="image" src="https://github.com/user-attachments/assets/1e09d9ef-0bd4-4cae-8ae9-fa290128a003" /> <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
unconfirmed
low
Minor
2,769,931,512
next.js
fetch requests not cached between generateMetadata and page component as per docs
### Link to the code that reproduces this issue https://github.com/steveharrison/nextjs-fetch-caching ### To Reproduce 1. Run the demo server: `npm run server` and `npm run start`. It will start a server on port 9000. 2. Run the NextJS project: `cd nextjs-project` and `npm run dev`. 3. Load the NextJS project root page once (probably http://localhost:3000) 4. Refresh the page Side note: if you skip step 4, I only get one 200 GET request logged on the initial page load, although the Next.js logs report two cache misses; I'd expect one cache hit, given that only one GET request was received. ``` GET / 200 in 1021ms │ GET http://localhost:9000/index.html 200 in 90ms (cache skip) │ │ Cache skipped reason: (auto no cache) │ GET http://localhost:9000/index.html 200 in 56ms (cache skip) │ │ Cache skipped reason: (auto no cache) ``` ### Current vs. Expected behavior Current behaviour: 2 requests are made to the demo server. Expected behaviour: 1 request is made to the demo server, as the documentation at https://nextjs.org/docs/app/building-your-application/caching#data-cache says that fetch requests are cached between generateMetadata and page components. I noticed that `cache` is used manually in this example: https://github.com/vercel/next.js/issues/62162, but the docs suggest this is not necessary. I have also disabled strict mode, in case the duplicates were coming from that. ### Provide environment information ```bash Operating System: Platform: darwin Arch: arm64 Version: Darwin Kernel Version 24.2.0: Fri Nov 15 18:56:27 PST 2024; root:xnu-11215.61.2.501.1~1/RELEASE_ARM64_T8103 Available memory (MB): 16384 Available CPU cores: 8 Binaries: Node: 21.7.1 npm: 10.5.0 Yarn: 1.22.22 pnpm: 9.2.0 Relevant Packages: next: 14.2.10 eslint-config-next: N/A react: 18.3.1 react-dom: 18.3.1 typescript: 5.4.5 Next.js Config: output: N/A ``` ### Which area(s) are affected? (Select all that apply) Not sure, Documentation, Metadata ### Which stage(s) are affected?
(Select all that apply) next dev (local) ### Additional context _No response_
Metadata
low
Minor
2,769,992,356
electron
Wrong web browser opened when clicking links
### Preflight Checklist - [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project. - [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to. - [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success. ### Electron Version 33.3.0 ### What operating system(s) are you using? Ubuntu ### Operating System Version 24.04 ### What arch are you using? x64 ### Last Known Working Electron version _No response_ ### Expected Behavior Links open in the configured default browser ### Actual Behavior Links open in Chrome/Chromium instead, regardless of the configured default browser ### Testcase Gist URL https://gist.github.com/Viicos/d77af018be1b7b27294a74c378731992 ### Additional Information This is a re-open of https://github.com/electron/electron/issues/42611 (by @bittylicious) after @viicos was kind enough to add a repro gist (adding a gist was an absolute requirement by the Electron team on the original issue). The original issue already pointed out that this issue seems to be new with Ubuntu 24 and wasn't present on 22. Workarounds presented in the original issue: * Uninstalling all Chrome/Chromium browsers (then e.g. Firefox will be opened correctly) * Adding the following to `~/.local/share/applications/mimeapps.list` ``` [Default Applications] x-scheme-handler/http=firefox_firefox.desktop x-scheme-handler/https=firefox_firefox.desktop ``` There are some additional details available in the original issue.
platform/linux,bug :beetle:,has-repro-gist,33-x-y,34-x-y
low
Critical
2,770,024,104
go
runtime: async preemption (or alternative) for wasm
This is split from #65178, as the specific scheduler bug was fixed. Due to its single-threaded nature, js/wasm have no sysmon thread, and thus no asynchronous preemption. Thus the pitfalls of Go prior to asynchronous preemption apply: tight loops that fail to yield may delay scheduling indefinitely. This is a tracking issue for problems due to the lack of asynchronous preemption. As noted in #36365, due to the lack of threading, asynchronous preemption isn't possible today, but perhaps could be with wasm threads? Or we could have an alternative scheme for js/wasm (such as adding preemption points to loops). (Plan9 is also missing async preemption, but that simply needs an implementation)
NeedsDecision,arch-wasm,compiler/runtime
low
Critical
2,770,026,248
deno
ext/node: `assert.deepStrictEqual({}, Object.create(null))` should throw
The following script throws in Node, but not in Deno: ```js import assert from "node:assert" assert.deepStrictEqual({}, Object.create(null)) ```
bug,node compat
low
Minor
2,770,037,546
rust
"This generic parameter must be used with a generic lifetime parameter" on RPIT with precise capturing
<!-- Thank you for filing a bug report! 🐛 Please provide a short summary of the bug, along with any information you feel relevant to replicating the bug. --> I tried this code: ```rust struct Foo<'a> { children: Vec<&'a Foo<'a>>, } struct Bar; impl<'a> Foo<'a> { fn failing<'p>(&'a self, param: &'p Bar) -> impl Iterator<Item = String> + use<'a, 'p> { self.children.iter().flat_map(|child| child.failing(param)) } } ``` I expected to see this happen: the code compiles, since `param` gets used by all the `children` _Or_, if the code actually violates ownership rules, get a clearer error message. Instead, this happened: compiler complains with "This generic parameter must be used with a generic lifetime parameter" I've tried the following solutions: 1. use an anonymous lifetime instead of `'p`, like this: ```rust fn failing(&'a self, param: &Bar) -> impl Iterator<Item = String> + use<'a, '_> { /* ... */ } ``` 2. make the two lifetimes the same: ```rust fn failing(&'a self, param: &'a Bar) -> impl Iterator<Item = String> + use<'a> { /* ... */ } ``` And still got the same bizarre error. ### Meta ``` rustc 1.82.0 (f6e511eec 2024-10-15) (built from a source tarball) binary: rustc commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14 commit-date: 2024-10-15 host: x86_64-unknown-linux-gnu release: 1.82.0 LLVM version: 18.1.8 ``` also present on latest stable (1.83.0), beta (1.84.0-beta.6), and nightly (2025-01-05) <details><summary>Backtrace</summary> <p> ``` error[E0792]: expected generic lifetime parameter, found `'_` --> src/tree.rs:656:9 | 655 | fn failing<'p>(&'a self, param: &'p Bar) -> impl Iterator<Item = String> + use<'a, 'p> { | -- this generic parameter must be used with a generic lifetime parameter 656 | self.children.iter().flat_map(|child| child.failing(param)) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``` </p> </details>
T-compiler,A-impl-trait,C-bug,S-has-mcve,T-types
low
Critical
2,770,055,748
vscode
Feature Proposal for Enhanced Collaborative Programming in VSCode
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> 1. Live Pair Debugging Building on the current Live Share functionality, I propose adding a Live Pair Debugging feature. This would allow two or more collaborators to debug code together in real-time. Key functionalities could include: Synchronized breakpoints, so all participants can view and control them. Shared debugging sessions where participants can see variable states, stack traces, and logs simultaneously. Optional permissions for the session host to manage who can make changes during the debugging process. 2. Code Review Inline Chat This feature would enable inline discussions directly in the code editor. Collaborators could comment on specific lines or blocks of code, and these comments would appear as contextual notes within the editor. Key features could include: A real-time chat thread embedded in the editor for each commented code block. Notifications for new comments or replies during collaborative sessions. Integration with Git or other version control systems to track comments across commits. Benefits of These Features Improves the efficiency of pair programming and team debugging. Streamlines code review processes, making them faster and more interactive. Encourages better communication and knowledge sharing within teams. I believe these enhancements would significantly improve the collaborative programming experience in VSCode, particularly for remote and distributed teams. Thank you for considering this suggestion. I’d be happy to discuss or elaborate further if needed. Best regards, [janeblue]
feature-request
low
Critical
2,770,056,460
vscode
[themes] provide alternate color selections
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> I would really like it if there were a way to switch between color sets within a single theme. You could make multiple themes, of course, but if you just want to give the user more options for something small like bracket colors, then it seems a bit inefficient and confusing to upload the same theme with small changes. In short: allow the user to switch between color sets within a color theme. For example, the user could choose between bracket colorizations: 1. Blue brackets (default) 2. Red brackets 3. Green brackets 4. Red, blue, & green brackets
feature-request,themes
low
Minor
2,770,083,524
flutter
Photo Captured Using Flutter Camera Plugin (camera: ^0.11.0+2) Appears Tilted on some devices(Redmi Note 8 Pro, samsung A9, Pixel 8a)
![surajimagetiled](https://github.com/user-attachments/assets/473d3105-8590-42f5-b79c-9d3b122dab06)
e: device-specific,platform-android,p: camera,package,P1,team-android,triaged-android
medium
Critical
2,770,121,213
neovim
neovim prints key release event characters after exiting in kitten ssh session
### Problem

When exiting Neovim in a remote kitten ssh session, the characters "3;1:3u" sometimes show in the terminal. More information can be found at https://github.com/kovidgoyal/kitty/issues/8200

Not sure if https://github.com/neovim/neovim/pull/31727 is related?

### Steps to reproduce

1. ssh to a remote machine using kitten ssh
2. start nvim: `nvim -u NONE`
3. exit nvim
4. See the sequence "3;1:3u" (the key release event sequence)

### Expected behavior

No characters are printed.

### Nvim version (nvim -v)

NVIM v0.11.0-dev-1465+ga09c7a5d57

### Vim (not Nvim) behaves the same?

no, vim 8.2.2121

### Operating system/version

Ubuntu 22.04.3 LTS

### Terminal name/version

kitty 0.38.1 (d23adce11c) created by Kovid Goyal

### $TERM environment variable

xterm-kitty

### Installation

Nightly releases
bug,tui,input
low
Major
2,770,124,096
pytorch
Integrate ULFM and FTI for OpenMPI (torch.distributed)
### 🚀 The feature, motivation and pitch I am working on a distributed framework and I want to be able to manage node failures. Is it possible to integrate the use of the _User Level Failure Mitigation_ ([ULFM](https://github.com/ICLDisco/ulfm-testing)) and _Fault Tolerance Interface_ ([FTI](https://github.com/leobago/fti)) libraries, please? ### Alternatives Create a manual checkpoint with `torch.save` or with Berkeley Lab Checkpoint/Restart (BLCR). ### Additional context _No response_ cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
oncall: distributed,feature
low
Critical
2,770,155,354
ollama
COULDN'T run qwen2.5-7b-instuct-q4_k on cpu; error wsarecv: An existing connection was forcibly closed by the remote host.
### What is the issue? (cmd) ollama run qwen2.5-7b-instuct-q4_k Error: Post "http://127.0.0.1:11434/api/generate": read tcp 127.0.0.1:50665->127.0.0.1:11434: wsarecv: An existing connection was forcibly closed by the remote host. ---------- server.log 2025/01/06 16:28:24 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:D:\\workspace\\ollama\\ollama_model OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]" time=2025-01-06T16:28:24.613+08:00 level=INFO source=images.go:757 msg="total blobs: 5" time=2025-01-06T16:28:24.613+08:00 level=INFO source=images.go:764 msg="total unused blobs removed: 0" time=2025-01-06T16:28:24.614+08:00 level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)" time=2025-01-06T16:28:24.614+08:00 level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cuda_v12_avx rocm_avx cpu cpu_avx cpu_avx2 cuda_v11_avx]" time=2025-01-06T16:28:24.614+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs" time=2025-01-06T16:28:24.614+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1 time=2025-01-06T16:28:24.614+08:00 level=INFO source=gpu_windows.go:183 msg="efficiency cores detected" maxEfficiencyClass=1 
time=2025-01-06T16:28:24.614+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=12 efficiency=4 threads=20 time=2025-01-06T16:28:24.629+08:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered" time=2025-01-06T16:28:24.629+08:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="31.7 GiB" available="21.0 GiB" ---------- however I could run qwen2.5-0.5b-instruct-q8_0 ### OS Windows ### GPU Other ### CPU Intel ### Ollama version 0.5.4
bug
medium
Critical
2,770,196,506
svelte
Loudly error for invalid `:global` placement
### Describe the bug In https://discord.com/channels/457912077277855764/1325334674784129035 someone was confused by a certain syntax failing with an "unclosed comment" error. Turns out it's us not erroring on a wrong `:global` placement, which leads to subsequent wrong "comment this out" logic. The solution is to throw a validation error here. ### Reproduction [playground link](https://svelte.dev/playground/hello-world?version=5.16.2#H4sIAAAAAAAACm2OTW6EMAyFr2J5M4NEYZ9JkbqbnqHpgh_PNFJIosTQQYi7VwHasphIluLP9ntvRlv3hAKvZIyDbxdMB2fqNFOXYY43bSii-JiRJ5_2EsD89-rN-yKOZDixpo70jLfOMlmOKFDGNmjPYGp7f1XIUWGlLOxP994FhndrKcAtuB5ORbl2u9jpoqwsN41KWWVlpJa1swcR2enx0P5hX10p0CkCfxG4gSnAHkyW_tnBalzJLcDRoNwcZPlvnpLwZCh9Weh4TrWPodMjiLtxTW0yKGrvzfTS64e2GXiYN-HWGRcENGagSyILrAarJubI9GAUHAZaPpcfQY6NHrQBAAA=) ### Logs _No response_ ### System Info ```shell irrelevant ``` ### Severity annoyance
css
low
Critical
2,770,237,255
pytorch
Core dumped happens when avg_pool1d with torch.compile receives uint tensor with specific tensor shape
### 🐛 Describe the bug Program crashes when running the following code: ```python # toy.py import torch torch.random.manual_seed(0) input = torch.randn(1,8,1).to(torch.uint8) kernel_size = 4 stride = 46 padding = 2 f = torch.nn.functional.avg_pool1d cf = torch.compile(f) cf(input,kernel_size,stride,padding) ``` And the error message is non-deterministic (maybe it is related to my system status). ```shell >>> python toy.py free(): invalid next size (fast) Aborted (core dumped) >>> python toy.py corrupted size vs. prev_size Aborted (core dumped) >>> python toy.py free(): invalid pointer Aborted (core dumped) >>> python toy.py free(): invalid next size (fast) Aborted (core dumped) ``` - This issue only occurs on my CPU. The code will normally raise an Exception (`BackendCompilerFailed`) if I ran it on gpu. ### Versions Collecting environment information... PyTorch version: 2.6.0a0+gitace645a Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.4 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.31.2 Libc version: glibc-2.35 Python version: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:27:36) [GCC 13.3.0] (64-bit runtime) Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 12.4.131 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 GPU 1: NVIDIA GeForce RTX 3090 GPU 2: NVIDIA GeForce RTX 3090 GPU 3: NVIDIA GeForce RTX 3090 Nvidia driver version: 560.35.03 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Vendor ID: AuthenticAMD Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores CPU family: 23 
Model: 49 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU max MHz: 4368.1641 CPU min MHz: 2200.0000 BogoMIPS: 7000.73 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es Virtualization: AMD-V L1d cache: 1 MiB (32 instances) L1i cache: 1 MiB (32 instances) L2 cache: 16 MiB (32 instances) L3 cache: 128 MiB (8 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-63 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec rstack overflow: Mitigation; Safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected 
Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] mypy==1.13.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.2 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] onnx==1.17.0 [pip3] onnxscript==0.1.0.dev20240817 [pip3] optree==0.13.0 [pip3] torch==2.6.0a0+gitace645a [pip3] triton==3.1.0 [conda] numpy 1.26.2 pypi_0 pypi [conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi [conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi [conda] optree 0.13.0 pypi_0 pypi [conda] torch 2.6.0a0+gitf611e8c pypi_0 pypi [conda] triton 3.1.0 pypi_0 pypi cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @BoyuanFeng @gujinghui @fengyuan14 @guangyey
triaged,oncall: pt2,module: inductor
low
Critical
2,770,247,196
flutter
[rfw] Alternative to `source.optionalChild()` with when constructor requires specific Widget?
### Use case This is regarding rfw https://pub.dev/packages/rfw I have a custom widget that has a parameter `final Text? text`: ``` class MarqueeContinuous extends StatelessWidget { final Text? text; // Need this final Duration duration; final Duration delay; final double step; final double endPadding; final int scrollCount; const MarqueeContinuous({ Key? key, this.text, this.delay = const Duration(seconds: 2), this.duration = const Duration(seconds: 2), this.step = 50, this.endPadding = 4, this.scrollCount = -1, }) : super(key: key); ``` There are only two function calls that return a Widget: `source.child()` and `source.optionalChild()`. But the compiler is complaining because a `Widget` is not a `Text`. The screenshot below is my custom `LocalWidgetLibrary`: ![image](https://github.com/user-attachments/assets/a317f911-a3da-4c6d-9a66-ab5960130b5a) May I know if there is a solution to this? Thanks in advance. ### Proposal Let `sourceChild()` return a specific Widget.
waiting for customer response,in triage
low
Major
2,770,264,701
vscode
Add context menu on folders to add to chat
Today there is a Copilot menu for files: ![Image](https://github.com/user-attachments/assets/708f3ad7-fa34-4e41-9cbc-19767b01596d) Since we now support folders too, I would suggest having it for folders as well.
feature-request,file-explorer,chat
low
Minor
2,770,268,197
pytorch
Exporting the operator 'aten::_transformer_encoder_layer_fwd' to ONNX opset version 17 is not supported
### 🐛 Describe the bug I am using the following code to convert a torch model (torch.nn.Module) to ONNX, but getting `Exporting the operator 'aten::_transformer_encoder_layer_fwd' to ONNX opset version 17 is not supported` ``` input_names = ["input__0", "input__1", "input__2"] output_names = ["output__0"] with torch.no_grad(): torch.onnx.export(model_sigmoid, (entity_emb_1, entity_emb_2, raw_data), f="/tmp/rough.onnx", verbose=False, do_constant_folding=True, input_names=input_names, output_names=output_names, export_params=True, dynamic_axes={'input__0' : {0 : 'batch_size'}, 'input__1' : {0 : 'batch_size'}, 'input__2' : {0 : 'batch_size'}, 'output__0' : {0 : 'batch_size'}} ) ``` I have tried the dynamo export as well, by setting `dynamo=True` in the above code, but it did not help, and I get the following error with dynamo ``` ConstraintViolationError: Constraints violated (batch_size)! For more information, run with TORCH_LOGS="+dynamic". - Not all values of batch_size = L['entity_emb_1'].size()[0] in the specified range are valid because batch_size was inferred to be a constant (1). - Not all values of batch_size = L['entity_emb_2'].size()[0] in the specified range are valid because batch_size was inferred to be a constant (1). - Not all values of batch_size = L['data'].size()[0] in the specified range are valid because batch_size was inferred to be a constant (1). Suggested fixes: batch_size = 1 During handling of the above exception, another exception occurred: ``` ### Versions Collecting environment information... 
PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.28.1 Libc version: glibc-2.35 Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-6.1.100+-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB Nvidia driver version: 535.183.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 12 On-line CPU(s) list: 0-11 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) CPU @ 2.20GHz CPU family: 6 Model: 85 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 1 Stepping: 7 BogoMIPS: 4400.39 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt 
xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities Hypervisor vendor: KVM Virtualization type: full L1d cache: 192 KiB (6 instances) L1i cache: 192 KiB (6 instances) L2 cache: 6 MiB (6 instances) L3 cache: 38.5 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-11 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown Versions of relevant libraries: [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.23.5 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] onnx==1.17.0 [pip3] onnxscript==0.1.0.dev20250106 [pip3] torch==2.5.1 [pip3] torcheval==0.0.7 [pip3] torchlars==0.1.2 [pip3] torchmetrics==1.6.1 [pip3] torchvision==0.15.2+cu118 [pip3] triton==3.1.0 [conda] Could not collect
module: onnx,triaged
low
Critical
2,770,268,718
vscode
Explorer: Sticky folder cannot be dragged
In the explorer, folders appear sticky. They have a context menu but do not respond to drag and drop operations. ![Image](https://github.com/user-attachments/assets/8fa184ff-f11b-49f4-b404-8a27816c19a9)
feature-request,tree-sticky-scroll
low
Minor
2,770,271,299
vscode
Git blame: allow action to open commit on GH
I prefer to view a commit on GH.com; allow me to configure the click gesture to open the commit.
feature-request,git
low
Minor
2,770,306,316
flutter
TextFormField not focus again after recall or navigate to the other(or pushReplace to the same screen) screen
### Steps to reproduce 1. Create 2 TextFields, both of them with a focusNode 2. After TextField1 is submitted, request focus on TextField2 3. When TextField2 is submitted, call the API: + if the response is a success: clear all text fields and request focus on TextField1 + if it is an error: show an alert dialog ### Expected results After TextField2 is submitted and the API call succeeds, show a snackbar and focus TextField1 again ### Actual results The problem is that sometimes it works but sometimes it doesn't. On success, a few cases can happen: + the cursor shows at TextField1 and the keyboard shows up, then disappears immediately + the cursor and keyboard do not show + the cursor and keyboard show but typing does not work ### Code sample <details open><summary>Code sample</summary> ```dart class DaoChuyenHangXuatKhau extends StatefulWidget { const DaoChuyenHangXuatKhau({super.key}); @override State<DaoChuyenHangXuatKhau> createState() => _DaoChuyenHangXuatKhauState(); } class _DaoChuyenHangXuatKhauState extends State<DaoChuyenHangXuatKhau> { TextEditingController maPalletController = TextEditingController(); TextEditingController maDMVitriController = TextEditingController(); TextEditingController ghiChuController = TextEditingController(); var fcMaPallet = FocusNode(); var fcViTri = FocusNode(); @override void initState() { super.initState(); WidgetsBinding.instance.addPostFrameCallback((timeStamp) { fcMaPallet.requestFocus(); }); } @override void dispose() { maPalletController.dispose(); maDMVitriController.dispose(); ghiChuController.dispose(); fcMaPallet = FocusNode(); fcViTri = FocusNode(); super.dispose(); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: const Text('Đảo chuyển hàng xuất khẩu'), ), body: Padding( padding: const EdgeInsets.symmetric(horizontal: 15, vertical: 10), child: Column( children: [ TextField( focusNode: fcMaPallet, onSubmitted: (value) { fcViTri.requestFocus(); }, decoration: InputDecoration( label: const Text('Mã Pallet'), border: OutlineInputBorder( 
borderSide: const BorderSide(width: 10), borderRadius: BorderRadius.circular(10), ), ), controller: maPalletController, ), const SizedBox(height: 10), TextField( focusNode: fcViTri, decoration: InputDecoration( label: const Text('Mã vị trí kho'), border: OutlineInputBorder( borderSide: const BorderSide(width: 10), borderRadius: BorderRadius.circular(10), ), ), controller: maDMVitriController, onSubmitted: (value) async { if (maPalletController.text.isNotEmpty && value.isNotEmpty) { // fcViTri.unfocus(); // await DaoChuyen2(context); // fcMaPallet.requestFocus(); ProgressDialog(context); //Kiểm tra trùng vị trí kho await ctDaoChuyenHangXuatKhauListCheckViTriKho( ctDaoChuyenHangXuatKhauListCheckViTriKhoRequest( MaDanhMucViTriKho: maDMVitriController.text, )).then((response) async { Navigator.of(context).pop(); if (response.Status == '0') { if (response.Data == "1") { QuestionMessageDialog(context, "Vị trí kho đã có hàng, bạn có muốn đặt trùng vị trí kho hay không?", () async { await DaoChuyen(context); }); } else { await DaoChuyen(context); } } else { ErrorMessageDialog(context, response.ErrorMsg, () {}); } }); } else { ErrorMessageDialog( context, "Hãy quét đủ mã pallet và vị trí ", () {}); } }, ), const SizedBox( height: 20, ), TextFormField( decoration: InputDecoration( label: const Text('Ghi chú'), border: OutlineInputBorder( borderSide: const BorderSide(width: 10), borderRadius: BorderRadius.circular(10), ), ), controller: ghiChuController, ), const SizedBox( height: 10, ), ], ), ), ); } Future<void> DaoChuyen(BuildContext context) async { ProgressDialog(context); await ctDaoChuyenHangXuatKhauInsert(ctDaoChuyenHangXuatKhauInsertRequest( ID: '', IDDanhMucDonVi: IDDanhMucDonVi, IDDanhMucChungTu: '', IDDanhMucChungTuTrangThai: '', MaPallet: maPalletController.text, MaDanhMucViTriKho: maDMVitriController.text, IDDanhMucNguoiSuDungCreate: IDDanhMucNguoiSuDung, GhiChu: ghiChuController.text)) .then((value) { Navigator.of(context).pop(); // log('pop lần 1 trong đảo 
chuyển'); if (value.Status == '0') { setState(() { maPalletController.text = ""; maDMVitriController.text = ""; }); ScaffoldMessenger.of(context).showSnackBar(SnackBar( backgroundColor: Colors.green[300], duration: const Duration(milliseconds: 1000), content: const Text( "Cập nhật thành công", style: TextStyle(fontSize: 20), ))); Future.delayed(const Duration(microseconds: 300)) .then((value) => WidgetsBinding.instance.addPostFrameCallback((_) { fcMaPallet.requestFocus(); })); // Navigator.of(context).pushReplacement( // MaterialPageRoute(builder: (_) => const DaoChuyenHangXuatKhau())); } else { ErrorMessageDialog(context, value.ErrorMsg, () {}); } }); } ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> https://github.com/user-attachments/assets/563acd8e-cf69-4285-89c5-42cdef302e41 </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [√] Flutter (Channel stable, 3.16.2-0.0.pre.1, on Microsoft Windows [Version 10.0.19045.3570], locale en-US) [√] Windows Version (Installed version of Windows is version 10 or higher) [√] Android toolchain - develop for Android devices (Android SDK version 34.0.0) [√] Chrome - develop for the web [!] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.4) X Visual Studio is missing necessary components. Please re-run the Visual Studio installer for the "Desktop development with C++" workload, and include these components: MSVC v142 - VS 2019 C++ x64/x86 build tools - If there are multiple build tool versions available, install the latest C++ CMake tools for Windows Windows 10 SDK [√] Android Studio (version 2023.1) [√] IntelliJ IDEA Community Edition (version 2024.1) [√] VS Code (version 1.96.2) [√] Connected device (3 available) [√] Network resources ``` </details>
waiting for customer response,in triage
low
Critical
2,770,319,243
vscode
Missing screen reader output at opening / saving file input on remote
Type: <b>Bug</b> If there is a workaround, it may not necessarily be a bug. 1. Open a remote using the vscode-remote extension, e.g. WSL. 2. Try to open a folder, or save a new file. 3. Try to input a filename, or edit the current input with a screen reader. 4. Inputs and deletions are not announced by the screen reader at all. Moving the cursor doesn't announce the focused character. 5. Expected: they are announced. I think this box was originally a simple text field. In an older version (probably 6 months to 1 year ago?), it also started to act as a combo box, allowing me to autocomplete directories. After the change, I am unable to smoothly finalize the path to the destination I want. I thought I would be able to adapt when this was introduced, but now I feel like reporting it here and seeking some workarounds. It does not occur in local development, since that uses the native dialog, at least on Windows. It only happens when I am developing on a remote. I'm not really sure whether this custom dialog is provided by the vscode side or the extension side. 
Extension version: 0.88.5 VS Code version: Code 1.96.1 (42b266171e51a016313f47d0c48aca9295b9cbb2, 2024-12-17T17:50:05.206Z) OS version: Windows_NT x64 10.0.22631 Modes: Remote OS version: Linux x64 5.15.167.4-microsoft-standard-WSL2 <details> <summary>System Info</summary> |Item|Value| |---|---| |CPUs|12th Gen Intel(R) Core(TM) i7-1255U (12 x 2611)| |GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off| |Load (avg)|undefined| |Memory (System)|15.69GB (2.62GB free)| |Process Argv|--crash-reporter-id 16c020cb-6298-45d1-bf2f-54354940af72| |Screen Reader|yes| |VM|0%| |Item|Value| |---|---| |Remote|WSL: Ubuntu| |OS|Linux x64 5.15.167.4-microsoft-standard-WSL2| |CPUs|12th Gen Intel(R) Core(TM) i7-1255U (12 x 0)| |Memory (System)|7.60GB (4.88GB free)| |VM|0%| </details><details> <summary>A/B Experiments</summary> ``` vsliv368cf:30146710 vspor879:30202332 vspor708:30202333 vspor363:30204092 vswsl492:30256859 vscod805:30301674 binariesv615:30325510 vsaa593cf:30376535 py29gd2263:31024239 c4g48928:30535728 azure-dev_surveyone:30548225 962ge761:30959799 pythonnoceb:30805159 pythonmypyd1:30879173 h48ei257:31000450 pythontbext0:30879054 cppperfnew:31000557 dsvsc020:30976470 pythonait:31006305 dsvsc021:30996838 dvdeprecation:31068756 dwnewjupytercf:31046870 nativerepl2:31139839 pythonrstrctxt:31112756 nativeloc2:31192216 cf971741:31144450 iacca1:31171482 notype1cf:31157160 5fd0e150:31155592 dwcopilot:31170013 stablechunks:31184530 6074i472:31201624 ``` </details> <!-- generated by issue reporter -->
bug,accessibility,simple-file-dialog
low
Critical
2,770,321,417
transformers
Trainer: update `state.num_input_tokens_seen` to use `num_items_in_batch`
### Feature request Trainer has a state, and inside the state there is a field called `num_input_tokens_seen`, which could be relevant for callbacks and other information. The problem is that this field is updated using `numel()`, which means it counts the padding as well. https://github.com/huggingface/transformers/blob/241c04d36867259cdf11dbb4e9d9a60f9cb65ebc/src/transformers/trainer.py#L2492-L2496 In a recent update the class calculates `num_tokens_in_batch` so it can be used for the loss calculation; this ignores padding and can be used to give a more accurate picture for `num_input_tokens_seen`. ### Motivation `num_input_tokens_seen` is important information when you train a model. Having a more accurate picture of it will help people understand their training. It can also be used for callbacks (e.g. stop training after X tokens). This will also eliminate the double calculation for num_tokens_seen (we are already calculating it once), and eliminate the discrepancy between `num_input_tokens_seen` and `num_tokens_in_batch`. ### Your contribution I may be able to do a PR for this
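The discrepancy described in the request can be sketched in a few lines. This is a toy illustration, not Trainer's actual code: the pad id and the batch contents below are invented, and plain Python lists stand in for tensors.

```python
# Contrast a numel()-style count (every element of the padded batch) with a
# padding-aware count (only real tokens), as described in the feature request.
pad_token_id = 0  # hypothetical pad id for this sketch

# Two sequences of true lengths 3 and 5, right-padded to length 5.
batch = [
    [101, 202, 303, pad_token_id, pad_token_id],
    [101, 202, 303, 404, 505],
]

# numel()-style count: rows * cols, padding included.
numel_count = sum(len(row) for row in batch)

# Padding-aware count: only non-pad tokens are counted.
real_count = sum(tok != pad_token_id for row in batch for tok in row)

print(numel_count, real_count)  # 10 vs 8
```

The two counts diverge by exactly the number of pad tokens, which is why updating `state.num_input_tokens_seen` from the padding-aware quantity would give a more accurate token budget.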
Feature request
low
Minor
2,770,329,194
opencv
Test cases failed while building OpenCV 4.10.0 version
### System Information Opencv version: 4.10.0 Operating system: Ubuntu 24.04 Compiler version: GCC 13.2.0 ### Detailed description I have been building OpenCV from source with the following parameters: ``` cmake ../opencv -DBUILD_DOCS=OFF -DBUILD_EXAMPLE=OFF -DBUILD_TESTS=ON -DBUILD_PERF_TESTS=OFF -DBUILD_SHARED_LIBS=ON -DBUILD_WITH_DEBUG_INFO=ON -DCMAKE_BUILD_TYPE=release -DBUILD_JAVA=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON -DBUILD_opencv_dnn=ON -DBUILD_opencv_dnn_modern=OFF -DBUILD_opencv_cnn_3dobj=ON -DBUILD_PROTOBUF=OFF -DPROTOBUF_UPDATE_FILES=ON -DCMAKE_INSTALL_PREFIX=/path -DOPENCV_EXTRA_MODULES_PATH=/mnt/opencv_contrib/modules -DOPENCV_LICENSES_INSTALL_PATH=/path/share/doc/opencv -DOPENCV_GENERATE_PKGCONFIG=YES -DOPENCV_PYTHON3_VERSION=3.12 -DPYTHON3_PACKAGES_PATH=/path/lib/python3.12/dist-packages -DENABLE_CCACHE=OFF -DENABLE_FAST_MATH=ON -DWITH_VTK=ON -DBUILD_opencv_hdf=OFF -DWITH_OPENGL=ON -DWITH_GPHOTO2=OFF -DWITH_FREETYPE=ON -DFREETYPE_INCLUDE_DIRS=/usr/include/freetype2 -DTINYDNN_USE_OMP=ON -DINSTALL_C_EXAMPLES=OFF -DINSTALL_PYTHON_EXAMPLES=OFF -DOPENCV_DNN_CUDA=ON -DWITH_CUDA=ON -DWITH_CUBLAS=ON -DWITH_CUFFT=ON -DCUDA_NVCC_FLAGS=--expt-relaxed-constexpr -DCUDA_FAST_MATH=ON -DCUDA_ARCH_BIN=5.3,6.0,6.1,7.0,7.5,8.6,8.9 -DENABLE_PRECOMPILED_HEADERS=ON -DWITH_NVCUVID=ON -DWITH_OPENMP=ON -DOpenGL_GL_PREFERENCE=GLVND -DPYTHON_DEFAULT_EXECUTABLE=/PATH/python3.12 -DPYTHON3_EXECUTABLE=/PATH/python3.12 -DPYTHON3_INCLUDE_DIR=/miniconda/include -DPYTHON3_LIBRARY=/PATH/libpython3.12.so -DPYTHON3_NUMPY_INCLUDE_DIRS=/PATH/python3.12/site-packages/numpy/_core/include -DWITH_OPENVINO=ON -DWITH_TBB=ON -DMKL_WITH_OPENMP=ON -DMKL_WITH_TBB=ON /mnt/opencv_extra /mnt ``` Several tests failed: ``` [ FAILED ] Core_SVD.orthogonality [ FAILED ] OCL_Arithm/InRange.Mat/33, where GetParam() = (CV_16U, Channels(1), false, true) [ FAILED ] OCL_Arithm/InRange.Mat/35, where GetParam() = (CV_16U, Channels(1), 
true, true) [ FAILED ] OCL_Arithm/InRange.Mat/37, where GetParam() = (CV_16U, Channels(2), false, true) [ FAILED ] OCL_Arithm/InRange.Mat/39, where GetParam() = (CV_16U, Channels(2), true, true) [ FAILED ] OCL_Arithm/InRange.Mat/49, where GetParam() = (CV_16S, Channels(1), false, true) [ FAILED ] OCL_Arithm/InRange.Mat/51, where GetParam() = (CV_16S, Channels(1), true, true) [ FAILED ] OCL_Arithm/InRange.Mat/57, where GetParam() = (CV_16S, Channels(3), false, true) [ FAILED ] OCL_Arithm/InRange.Mat/59, where GetParam() = (CV_16S, Channels(3), true, true) [ FAILED ] OCL_Arithm/InRange.Mat/65, where GetParam() = (CV_32S, Channels(1), false, true) [ FAILED ] OCL_Arithm/InRange.Mat/96, where GetParam() = (CV_64F, Channels(1), false, false) [ FAILED ] OCL_Arithm/InRange.Mat/98, where GetParam() = (CV_64F, Channels(1), true, false) [ FAILED ] OCL_Arithm/InRange.Mat/105, where GetParam() = (CV_64F, Channels(3), false, true) [ FAILED ] OCL_Arithm/InRange.Mat/107, where GetParam() = (CV_64F, Channels(3), true, true) [ FAILED ] Core_InRangeS/ElemWiseTest.accuracy/0, where GetParam() = 16-byte object <20-75 24-18 D2-55 00-00 60-52 59-18 D2-55 00-00> [ FAILED ] Core_InRange/ElemWiseTest.accuracy/0, where GetParam() = 16-byte object <C0-4A 53-18 D2-55 00-00 A0-55 59-18 D2-55 00-00> [ FAILED ] Core/CountNonZeroND.ndim/0, where GetParam() = (2, 0) [ FAILED ] Core/CountNonZeroND.ndim/1, where GetParam() = (2, 1) [ FAILED ] Core/CountNonZeroND.ndim/2, where GetParam() = (2, 5) [ FAILED ] Core/CountNonZeroND.ndim/3, where GetParam() = (3, 0) [ FAILED ] Core/CountNonZeroND.ndim/4, where GetParam() = (3, 1) [ FAILED ] Core/CountNonZeroND.ndim/5, where GetParam() = (3, 5) [ FAILED ] Core/CountNonZeroND.ndim/6, where GetParam() = (4, 0) [ FAILED ] Core/CountNonZeroND.ndim/7, where GetParam() = (4, 1) [ FAILED ] Core/CountNonZeroND.ndim/8, where GetParam() = (4, 5) [ FAILED ] Core/CountNonZeroND.ndim/9, where GetParam() = (5, 0) [ FAILED ] Core/CountNonZeroND.ndim/10, where GetParam() = 
(5, 1) [ FAILED ] Core/CountNonZeroND.ndim/11, where GetParam() = (5, 5) [ FAILED ] Core/CountNonZeroND.ndim/12, where GetParam() = (6, 0) [ FAILED ] Core/CountNonZeroND.ndim/13, where GetParam() = (6, 1) [ FAILED ] Core/CountNonZeroND.ndim/14, where GetParam() = (6, 5) [ FAILED ] Core/CountNonZeroND.ndim/18, where GetParam() = (8, 0) [ FAILED ] Core/CountNonZeroND.ndim/19, where GetParam() = (8, 1) [ FAILED ] Core/CountNonZeroND.ndim/20, where GetParam() = (8, 5) [ FAILED ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/0, where GetParam() = (5, 1x1) [ FAILED ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/1, where GetParam() = (5, 320x240) [ FAILED ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/2, where GetParam() = (5, 127x113) [ FAILED ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/3, where GetParam() = (5, 1x113) [ FAILED ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/4, where GetParam() = (6, 1x1) [ FAILED ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/5, where GetParam() = (6, 320x240) ``` After running tests individually i got below result : ``` root@buildhost:/mnt/opencv.build/bin# ./opencv_test_core --gtest_filter=OCL_Arithm/InRange.Mat/49 --verbose CTEST_FULL_OUTPUT OpenCV version: 4.10.0 OpenCV VCS version: 4.10.0 Build type: Release Compiler: /usr/bin/c++ (ver 11.4.0) Parallel framework: tbb (nthreads=16) CPU features: SSE SSE2 SSE3 SSSE3 SSE4.1 POPCNT SSE4.2 FP16 AVX AVX2 AVX512F AVX512-COMMON AVX512-SKX AVX512-CNL AVX512-CLX AVX512-ICL Intel(R) IPP version: ippIP AVX-512F/CD/BW/DQ/VL (k0) 2021.11.0 (-) Feb 1 2024 Intel(R) IPP features code: 0x7300000 OpenCL is disabled TEST: Skip tests with tags: 'mem_6gb', 'verylong' Note: Google Test filter = OCL_Arithm/InRange.Mat/49 [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. 
[----------] 1 test from OCL_Arithm/InRange [ RUN ] OCL_Arithm/InRange.Mat/49, where GetParam() = (CV_16S, Channels(1), false, true) /mnt/opencv/modules/core/test/ocl/test_arithm.cpp:1532: Failure Expected: (cvtest::ocl::TestUtils::checkNorm2(dst_roi, udst_roi)) <= (0), actual: 255 vs 0 Size: [21 x 15] [ FAILED ] OCL_Arithm/InRange.Mat/49, where GetParam() = (CV_16S, Channels(1), false, true) (1 ms) [----------] 1 test from OCL_Arithm/InRange (1 ms total) [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. (1 ms total) [ PASSED ] 0 tests. [ FAILED ] 1 test, listed below: [ FAILED ] OCL_Arithm/InRange.Mat/49, where GetParam() = (CV_16S, Channels(1), false, true) 1 FAILED TEST ``` Expected Behavior The build should complete successfully with no failed tests, producing OpenCV binaries with the specified configurations. Could someone help identify the cause of this issue and suggest a solution? Any guidance or help would be greatly appreciated. ### Steps to reproduce git clone https://github.com/opencv/opencv.git git checkout [version/tag/branch] Install dependencies Run the command below: `cmake ../opencv -DBUILD_DOCS=OFF -DBUILD_EXAMPLE=OFF -DBUILD_TESTS=ON -DBUILD_PERF_TESTS=OFF -DBUILD_SHARED_LIBS=ON -DBUILD_WITH_DEBUG_INFO=ON -DCMAKE_BUILD_TYPE=release -DBUILD_JAVA=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON -DBUILD_opencv_dnn=ON -DBUILD_opencv_dnn_modern=OFF -DBUILD_opencv_cnn_3dobj=ON -DBUILD_PROTOBUF=OFF -DPROTOBUF_UPDATE_FILES=ON -DCMAKE_INSTALL_PREFIX=/path -DOPENCV_EXTRA_MODULES_PATH=/mnt/opencv_contrib/modules -DOPENCV_LICENSES_INSTALL_PATH=/path/share/doc/opencv -DOPENCV_GENERATE_PKGCONFIG=YES -DOPENCV_PYTHON3_VERSION=3.12 -DPYTHON3_PACKAGES_PATH=/path/lib/python3.12/dist-packages -DENABLE_CCACHE=OFF -DENABLE_FAST_MATH=ON -DWITH_VTK=ON -DBUILD_opencv_hdf=OFF -DWITH_OPENGL=ON -DWITH_GPHOTO2=OFF -DWITH_FREETYPE=ON -DFREETYPE_INCLUDE_DIRS=/usr/include/freetype2 -DTINYDNN_USE_OMP=ON -DINSTALL_C_EXAMPLES=OFF 
-DINSTALL_PYTHON_EXAMPLES=OFF -DOPENCV_DNN_CUDA=ON -DWITH_CUDA=ON -DWITH_CUBLAS=ON -DWITH_CUFFT=ON -DCUDA_NVCC_FLAGS=--expt-relaxed-constexpr -DCUDA_FAST_MATH=ON -DCUDA_ARCH_BIN=5.3,6.0,6.1,7.0,7.5,8.6,8.9 -DENABLE_PRECOMPILED_HEADERS=ON -DWITH_NVCUVID=ON -DWITH_OPENMP=ON -DOpenGL_GL_PREFERENCE=GLVND -DPYTHON_DEFAULT_EXECUTABLE=/PATH/python3.12 -DPYTHON3_EXECUTABLE=/PATH/python3.12 -DPYTHON3_INCLUDE_DIR=/miniconda/include -DPYTHON3_LIBRARY=/PATH/libpython3.12.so -DPYTHON3_NUMPY_INCLUDE_DIRS=/PATH/python3.12/site-packages/numpy/_core/include -DWITH_OPENVINO=ON -DWITH_TBB=ON -DMKL_WITH_OPENMP=ON -DMKL_WITH_TBB=ON` make -j$(nproc) ### Issue submission checklist - [X] I report the issue, it's not a question - [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution - [X] I updated to the latest OpenCV version and the issue is still there - [X] There is reproducer code and related data files (videos, images, onnx, etc)
bug,needs investigation
low
Critical
2,770,359,889
rust
ICE: self-type `MyDispatcher<dyn Trait>` for ObjectPick never dereferenced to an object
<!-- Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for how to create smaller examples. http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/ --> ### Code ```Rust #![feature( unsize, dispatch_from_dyn, arbitrary_self_types )] use std::{ marker::Unsize, ops::{DispatchFromDyn, Receiver}, }; struct MyDispatcher<T: ?Sized>(*const T); impl<T: ?Sized, U: ?Sized> DispatchFromDyn<MyDispatcher<U>> for MyDispatcher<T> where T: Unsize<U> {} impl<T: ?Sized> Receiver for MyDispatcher<T> { type Target = T; } struct Test; trait Trait { fn test(self: MyDispatcher<Self>); } impl Trait for Test { fn test(self: MyDispatcher<Self>) { todo!() } } fn main() { MyDispatcher::<dyn Trait>(core::ptr::null_mut::<Test>()).test(); } ``` ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.85.0-nightly (c26db435b 2024-12-15) binary: rustc commit-hash: c26db435bf8aee2efc397aab50f3a21eb351d6e5 commit-date: 2024-12-15 host: aarch64-unknown-linux-gnu release: 1.85.0-nightly LLVM version: 19.1.5 ``` ### Error output ``` error: internal compiler error: compiler/rustc_hir_typeck/src/method/confirm.rs:365:17: self-type `MyDispatcher<dyn Trait>` for ObjectPick never dereferenced to an object --> shkwve_host_rust/src/main.rs:62:62 | 62 | MyDispatcher::<dyn Trait>(core::ptr::null_mut::<Test>()).test(); | ^^^^ ``` <!-- Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your environment. E.g. `RUST_BACKTRACE=1 cargo build`. 
--> <details><summary><strong>Backtrace</strong></summary> <p> ``` thread 'rustc' panicked at compiler/rustc_hir_typeck/src/method/confirm.rs:365:17: Box<dyn Any> stack backtrace: 0: 0xfffef6d07c38 - std::backtrace_rs::backtrace::libunwind::trace::h0e66f072c9f6f33f at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/../../backtrace/src/backtrace/libunwind.rs:116:5 1: 0xfffef6d07c38 - std::backtrace_rs::backtrace::trace_unsynchronized::hf8c0a269694b3bb1 at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5 2: 0xfffef6d07c38 - std::sys::backtrace::_print_fmt::heb6bcbeee94f96c1 at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/sys/backtrace.rs:66:9 3: 0xfffef6d07c38 - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h95f3b3f55a48b5af at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/sys/backtrace.rs:39:26 4: 0xfffef6d54c3c - core::fmt::rt::Argument::fmt::h0b3dff6d8ae61f8c at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/core/src/fmt/rt.rs:177:76 5: 0xfffef6d54c3c - core::fmt::write::hcee18e60cae876ea at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/core/src/fmt/mod.rs:1437:21 6: 0xfffef6cfbfb0 - std::io::Write::write_fmt::he9d0ad5bf930a7bc at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/io/mod.rs:1887:15 7: 0xfffef6d07aec - std::sys::backtrace::BacktraceLock::print::hff51a0ee739e8d2c at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/sys/backtrace.rs:42:9 8: 0xfffef6d09e28 - std::panicking::default_hook::{{closure}}::h5f7a3fbe6b0002c0 at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/panicking.rs:284:22 9: 0xfffef6d09c70 - std::panicking::default_hook::h11c63fc90a24cae2 at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/panicking.rs:311:9 10: 0xfffef0ca3774 - 
<alloc[8bd768c37744f19c]::boxed::Box<rustc_driver_impl[2443eb5be3ccdd89]::install_ice_hook::{closure#0}> as core[331cd29391fcfcc2]::ops::function::Fn<(&dyn for<'a, 'b> core[331cd29391fcfcc2]::ops::function::Fn<(&'a std[59b0c849ea92645c]::panic::PanicHookInfo<'b>,), Output = ()> + core[331cd29391fcfcc2]::marker::Sync + core[331cd29391fcfcc2]::marker::Send, &std[59b0c849ea92645c]::panic::PanicHookInfo)>>::call 11: 0xfffef6d0a668 - <alloc::boxed::Box<F,A> as core::ops::function::Fn<Args>>::call::h5546bed8e45d32bd at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/alloc/src/boxed.rs:1984:9 12: 0xfffef6d0a668 - std::panicking::rust_panic_with_hook::he0868b14e7082a82 at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/panicking.rs:825:13 13: 0xfffef699abe0 - std[59b0c849ea92645c]::panicking::begin_panic::<rustc_errors[378a044b648cc5d5]::ExplicitBug>::{closure#0} 14: 0xfffef6997de8 - std[59b0c849ea92645c]::sys::backtrace::__rust_end_short_backtrace::<std[59b0c849ea92645c]::panicking::begin_panic<rustc_errors[378a044b648cc5d5]::ExplicitBug>::{closure#0}, !> 15: 0xfffef0b4112c - std[59b0c849ea92645c]::panicking::begin_panic::<rustc_errors[378a044b648cc5d5]::ExplicitBug> 16: 0xfffef698f8c8 - <rustc_errors[378a044b648cc5d5]::diagnostic::BugAbort as rustc_errors[378a044b648cc5d5]::diagnostic::EmissionGuarantee>::emit_producing_guarantee 17: 0xfffef66b66f8 - <rustc_errors[378a044b648cc5d5]::DiagCtxtHandle>::span_bug::<rustc_span[1ea67f8784b35cf4]::span_encoding::Span, alloc[8bd768c37744f19c]::string::String> 18: 0xfffef67a22a0 - rustc_middle[620efa5bbc03cb39]::util::bug::opt_span_bug_fmt::<rustc_span[1ea67f8784b35cf4]::span_encoding::Span>::{closure#0} 19: 0xfffef67a0318 - rustc_middle[620efa5bbc03cb39]::ty::context::tls::with_opt::<rustc_middle[620efa5bbc03cb39]::util::bug::opt_span_bug_fmt<rustc_span[1ea67f8784b35cf4]::span_encoding::Span>::{closure#0}, !>::{closure#0} 20: 0xfffef67a02e8 - 
rustc_middle[620efa5bbc03cb39]::ty::context::tls::with_context_opt::<rustc_middle[620efa5bbc03cb39]::ty::context::tls::with_opt<rustc_middle[620efa5bbc03cb39]::util::bug::opt_span_bug_fmt<rustc_span[1ea67f8784b35cf4]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !> 21: 0xfffef0b367e8 - rustc_middle[620efa5bbc03cb39]::util::bug::span_bug_fmt::<rustc_span[1ea67f8784b35cf4]::span_encoding::Span> 22: 0xfffef45068b8 - <rustc_hir_typeck[372482503743e91f]::method::confirm::ConfirmContext>::confirm 23: 0xfffef4401934 - <rustc_hir_typeck[372482503743e91f]::fn_ctxt::FnCtxt>::check_expr_kind 24: 0xfffef43a1a6c - <rustc_hir_typeck[372482503743e91f]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args 25: 0xfffef43c1590 - <rustc_hir_typeck[372482503743e91f]::fn_ctxt::FnCtxt>::check_expr_block 26: 0xfffef43a1a6c - <rustc_hir_typeck[372482503743e91f]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args 27: 0xfffef43a27dc - <rustc_hir_typeck[372482503743e91f]::fn_ctxt::FnCtxt>::check_return_or_body_tail 28: 0xfffef45b5974 - rustc_hir_typeck[372482503743e91f]::check::check_fn 29: 0xfffef4507908 - rustc_hir_typeck[372482503743e91f]::typeck 30: 0xfffef57a1394 - rustc_query_impl[ddfd706ecb656705]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[ddfd706ecb656705]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[620efa5bbc03cb39]::query::erase::Erased<[u8; 8usize]>> 31: 0xfffef5898084 - <rustc_query_impl[ddfd706ecb656705]::query_impl::typeck::dynamic_query::{closure#2} as core[331cd29391fcfcc2]::ops::function::FnOnce<(rustc_middle[620efa5bbc03cb39]::ty::context::TyCtxt, rustc_span[1ea67f8784b35cf4]::def_id::LocalDefId)>>::call_once 32: 0xfffef5748514 - rustc_query_system[94f1fd8c6695e4d8]::query::plumbing::try_execute_query::<rustc_query_impl[ddfd706ecb656705]::DynamicConfig<rustc_data_structures[22bd557c4e7d130c]::vec_cache::VecCache<rustc_span[1ea67f8784b35cf4]::def_id::LocalDefId, 
rustc_middle[620efa5bbc03cb39]::query::erase::Erased<[u8; 8usize]>, rustc_query_system[94f1fd8c6695e4d8]::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl[ddfd706ecb656705]::plumbing::QueryCtxt, true> 33: 0xfffef58130e8 - rustc_query_impl[ddfd706ecb656705]::query_impl::typeck::get_query_incr::__rust_end_short_backtrace 34: 0xfffef4767e60 - <rustc_middle[620efa5bbc03cb39]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[19f56ebf043be680]::check_crate::{closure#4}>::{closure#0} 35: 0xfffef4761c74 - <rustc_data_structures[22bd557c4e7d130c]::sync::parallel::ParallelGuard>::run::<(), rustc_data_structures[22bd557c4e7d130c]::sync::parallel::par_for_each_in<&rustc_span[1ea67f8784b35cf4]::def_id::LocalDefId, &[rustc_span[1ea67f8784b35cf4]::def_id::LocalDefId], <rustc_middle[620efa5bbc03cb39]::hir::map::Map>::par_body_owners<rustc_hir_analysis[19f56ebf043be680]::check_crate::{closure#4}>::{closure#0}>::{closure#0}::{closure#1}::{closure#0}> 36: 0xfffef4844a84 - rustc_hir_analysis[19f56ebf043be680]::check_crate 37: 0xfffef0e775a4 - rustc_interface[fa84bd9c5abdf0ea]::passes::analysis 38: 0xfffef57a1478 - rustc_query_impl[ddfd706ecb656705]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[ddfd706ecb656705]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[620efa5bbc03cb39]::query::erase::Erased<[u8; 0usize]>> 39: 0xfffef5854da0 - <rustc_query_impl[ddfd706ecb656705]::query_impl::analysis::dynamic_query::{closure#2} as core[331cd29391fcfcc2]::ops::function::FnOnce<(rustc_middle[620efa5bbc03cb39]::ty::context::TyCtxt, ())>>::call_once 40: 0xfffef56bc90c - rustc_query_system[94f1fd8c6695e4d8]::query::plumbing::try_execute_query::<rustc_query_impl[ddfd706ecb656705]::DynamicConfig<rustc_query_system[94f1fd8c6695e4d8]::query::caches::SingleCache<rustc_middle[620efa5bbc03cb39]::query::erase::Erased<[u8; 0usize]>>, false, false, false>, rustc_query_impl[ddfd706ecb656705]::plumbing::QueryCtxt, true> 41: 0xfffef5802ac0 - 
rustc_query_impl[ddfd706ecb656705]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace 42: 0xfffef0c41fcc - <rustc_interface[fa84bd9c5abdf0ea]::queries::QueryResult<&rustc_middle[620efa5bbc03cb39]::ty::context::GlobalCtxt>>::enter::<core[331cd29391fcfcc2]::option::Option<rustc_interface[fa84bd9c5abdf0ea]::queries::Linker>, rustc_driver_impl[2443eb5be3ccdd89]::run_compiler::{closure#0}::{closure#1}::{closure#2}> 43: 0xfffef0c7821c - <rustc_interface[fa84bd9c5abdf0ea]::interface::Compiler>::enter::<rustc_driver_impl[2443eb5be3ccdd89]::run_compiler::{closure#0}::{closure#1}, core[331cd29391fcfcc2]::option::Option<rustc_interface[fa84bd9c5abdf0ea]::queries::Linker>> 44: 0xfffef0c565b0 - <scoped_tls[b59812e30f911d71]::ScopedKey<rustc_span[1ea67f8784b35cf4]::SessionGlobals>>::set::<rustc_interface[fa84bd9c5abdf0ea]::util::run_in_thread_with_globals<rustc_interface[fa84bd9c5abdf0ea]::util::run_in_thread_pool_with_globals<rustc_interface[fa84bd9c5abdf0ea]::interface::run_compiler<(), rustc_driver_impl[2443eb5be3ccdd89]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}::{closure#0}, ()> 45: 0xfffef0c903f8 - rustc_span[1ea67f8784b35cf4]::create_session_globals_then::<(), rustc_interface[fa84bd9c5abdf0ea]::util::run_in_thread_with_globals<rustc_interface[fa84bd9c5abdf0ea]::util::run_in_thread_pool_with_globals<rustc_interface[fa84bd9c5abdf0ea]::interface::run_compiler<(), rustc_driver_impl[2443eb5be3ccdd89]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}::{closure#0}> 46: 0xfffef0c4c1c8 - std[59b0c849ea92645c]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[fa84bd9c5abdf0ea]::util::run_in_thread_with_globals<rustc_interface[fa84bd9c5abdf0ea]::util::run_in_thread_pool_with_globals<rustc_interface[fa84bd9c5abdf0ea]::interface::run_compiler<(), rustc_driver_impl[2443eb5be3ccdd89]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, 
()> 47: 0xfffef0c76ab4 - <<std[59b0c849ea92645c]::thread::Builder>::spawn_unchecked_<rustc_interface[fa84bd9c5abdf0ea]::util::run_in_thread_with_globals<rustc_interface[fa84bd9c5abdf0ea]::util::run_in_thread_pool_with_globals<rustc_interface[fa84bd9c5abdf0ea]::interface::run_compiler<(), rustc_driver_impl[2443eb5be3ccdd89]::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core[331cd29391fcfcc2]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0} 48: 0xfffef6d13f04 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hf45b9397904bf2a7 at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/alloc/src/boxed.rs:1970:9 49: 0xfffef6d13f04 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hdcf2fd8bedf8aa80 at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/alloc/src/boxed.rs:1970:9 50: 0xfffef6d13f04 - std::sys::pal::unix::thread::Thread::new::thread_start::hc240ba2e63076efa at /rustc/c26db435bf8aee2efc397aab50f3a21eb351d6e5/library/std/src/sys/pal/unix/thread.rs:105:17 51: 0xfffeefe971e8 - start_thread 52: 0xfffeeff0278c - thread_start 53: 0x0 - <unknown> note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md note: please make sure that you have updated to the latest nightly note: please attach the file at `/home/pitust/code/shkwve-std-model/rustc-ice-2025-01-06T10_20_28-177922.txt` to your bug report note: compiler flags: --crate-type bin -C opt-level=1 -C embed-bitcode=no -C debuginfo=2 -C debug-assertions=on -C incremental=[REDACTED] note: some of the compiler flags provided by cargo are hidden query stack during panic: #0 [typeck] type-checking `main` #1 [analysis] running analysis passes on this crate end of query stack[rustc-ice-2025-01-06T10_20_28-177922.txt](https://github.com/user-attachments/files/18318201/rustc-ice-2025-01-06T10_20_28-177922.txt) ``` </p> 
</details>
I-ICE,T-compiler,C-bug,F-arbitrary_self_types,requires-nightly,S-has-mcve
low
Critical
2,770,378,034
kubernetes
Using fake api client to query resources will cause the client-go package memory to continue to grow
### What happened? When I use the fake API client in a simulated scheduler to simulate resource changes of pods or nodes, the memory of the client-go package continues to increase. For example, after I create a pod, when I query this pod multiple times, the client-go memory continues to grow. The problem I'm currently facing is that all actions such as query, modify, and delete are appended to the c.actions slice, causing the client-go package memory to keep increasing until the process is restarted to release the memory. ``` go func (c *Fake) Invokes(action Action, defaultReturnObj runtime.Object) (runtime.Object, error) { c.Lock() defer c.Unlock() actionCopy := action.DeepCopy() c.actions = append(c.actions, action.DeepCopy()) for _, reactor := range c.ReactionChain { if !reactor.Handles(actionCopy) { continue } handled, ret, err := reactor.React(actionCopy) if !handled { continue } return ret, err } return defaultReturnObj, nil } ``` ### What did you expect to happen? Fix the code here: c.actions = append(c.actions, action.DeepCopy()), so that actions such as add, delete, modify, and query are not continuously appended to c.actions, causing the memory to keep growing. ### How can we reproduce it (as minimally and precisely as possible)? 
``` go import ( "context" "fmt" "net/http" "time" _ "net/http/pprof" "github.com/gofrs/uuid" v1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" fakeclientset "k8s.io/client-go/kubernetes/fake" ) func main() { go func() { fmt.Println(http.ListenAndServe("localhost:8065", nil)) }() // create the fake client fc := fakeclientset.NewSimpleClientset() for i := 0; i < 100000; i++ { // generate a random pod name uuidName, _ := uuid.NewV4() podName := "test-" + uuidName.String() // build a pod with the random name pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: podName}} // create the pod fc.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}) pod.Spec.NodeName = "node1" // update the pod fc.CoreV1().Pods("default").Update(context.TODO(), pod, metav1.UpdateOptions{}) // delete the pod err := fc.CoreV1().Pods("default").Delete(context.TODO(), podName, metav1.DeleteOptions{}) if err != nil { fmt.Println("delete pod failed") } } } ``` ### Anything else we need to know? _No response_ ### Kubernetes version <details> ```console $ kubectl version Client Version: v1.29.1 Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3 Server Version: v1.29. ``` </details> ### Cloud provider <details> Self hosted </details> ### OS version <details> ```console # On Linux: $ cat /etc/os-release # paste output here $ uname -a # paste output here # On Windows: C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture # paste output here ``` </details> ### Install tools <details> </details> ### Container runtime (CRI) and version (if applicable) <details> </details> ### Related plugins (CNI, CSI, ...) and versions (if applicable) <details> </details>
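The unbounded growth described above can be sketched and mitigated without a cluster: client-go's `testing.Fake` records every action so tests can assert on them, and exposes `ClearActions()` to reset that slice. Below is a minimal pure-Python sketch of the same record-and-clear pattern; `FakeRecorder` and its methods are hypothetical stand-ins for illustration, not the real client-go types.

```python
# Sketch of the "record every action" pattern behind client-go's testing.Fake,
# and why the slice grows without bound unless it is cleared periodically.
# FakeRecorder is a hypothetical stand-in, not the real client-go type.

class FakeRecorder:
    def __init__(self):
        self.actions = []  # every invokes() call appends here, forever

    def invokes(self, action):
        # analogous to c.actions = append(c.actions, action.DeepCopy())
        self.actions.append(dict(action))

    def clear_actions(self):
        # analogous to Fake.ClearActions() in k8s.io/client-go/testing
        self.actions = []

rec = FakeRecorder()
for i in range(10_000):
    rec.invokes({"verb": "create", "resource": "pods", "n": i})
assert len(rec.actions) == 10_000  # unbounded growth: one entry per call

rec.clear_actions()  # long-running simulations must do this periodically
assert rec.actions == []
```

In the Go reproducer above, the fake clientset embeds `testing.Fake`, so calling `fc.ClearActions()` inside the loop should keep the slice from growing; the fake client is designed for short-lived tests where the recorded actions are the point, not for long-running simulators.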
kind/bug,needs-sig,needs-triage
low
Critical
2,770,381,009
tensorflow
Encountered unresolved custom op: XlaDynamicSlice
Hi, I am working on converting a T5 model to TFLite for Android. Currently, I am using T5ModelForConditionalGeneration from Huggingface for the conversion. The conversion finishes with the logging below, but when I load the `generator` from the Interpreter and run an inference example, I face this error. You can reproduce it with the Colab provided below. AFAIK, XlaDynamicSlice is among the TF ops, so why can this op not be resolved in this case? **System information** - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04 - TensorFlow installed from (source or binary): PyPI 24.0 on Python 3.11.5 - TensorFlow version (or github SHA if from source): 2.18.0 **Provide the text output from tflite_convert** In the Colab version, tflite_convert doesn't log anything; the log below is from my local run ``` INFO:tensorflow:Assets written to: /tmp/tmpaxxybw9x/assets INFO:tensorflow:Assets written to: /tmp/tmpaxxybw9x/assets W0000 00:00:1736157114.568747 1061359 tf_tfl_flatbuffer_helpers.cc:365] Ignored output_format. W0000 00:00:1736157114.568765 1061359 tf_tfl_flatbuffer_helpers.cc:368] Ignored drop_control_dependency. 2025-01-06 16:51:54.568997: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: /tmp/tmpaxxybw9x 2025-01-06 16:51:54.645325: I tensorflow/cc/saved_model/reader.cc:52] Reading meta graph with tags { serve } 2025-01-06 16:51:54.645352: I tensorflow/cc/saved_model/reader.cc:147] Reading SavedModel debug info (if present) from: /tmp/tmpaxxybw9x 2025-01-06 16:51:55.085153: I tensorflow/cc/saved_model/loader.cc:236] Restoring SavedModel bundle. 2025-01-06 16:51:56.061632: I tensorflow/cc/saved_model/loader.cc:220] Running initialization op on SavedModel bundle at path: /tmp/tmpaxxybw9x 2025-01-06 16:51:56.517300: I tensorflow/cc/saved_model/loader.cc:466] SavedModel load for tags { serve }; Status: success: OK. Took 1948307 microseconds. 
2025-01-06 16:52:30.233639: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:3825] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s): Flex ops: FlexStridedSlice Details: tf.StridedSlice(tensor<?x?x?xf32>, tensor<4xi32>, tensor<4xi32>, tensor<4xi32>) -> (tensor<?x1x?x?xf32>) : {begin_mask = 13 : i64, device = "", ellipsis_mask = 0 : i64, end_mask = 13 : i64, new_axis_mask = 2 : i64, shrink_axis_mask = 0 : i64} See instructions: https://www.tensorflow.org/lite/guide/ops_select 2025-01-06 16:52:30.233666: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:3836] The following operation(s) need TFLite custom op implementation(s): Custom ops: XlaDynamicSlice Details: tf.XlaDynamicSlice(tensor<1x12x?x?xf32>, tensor<4xi64>, tensor<4xi64>) -> (tensor<1x12x1x?xf32>) : {device = ""} See instructions: https://www.tensorflow.org/lite/guide/ops_custom ``` **Standalone code to reproduce the issue** My reproduction code in Colab: https://colab.research.google.com/drive/1Rmhc_vpJSa7M1Vt-4ugV5uORaEfRJMOw?usp=sharing
stat:awaiting tensorflower,type:bug,comp:lite,TFLiteConverter,TF 2.18
medium
Critical
2,770,403,577
pytorch
F.interpolate returns NaN on MPS if align_corners is True.
### 🐛 Describe the bug When using interpolate on MPS with align_corners=True, the result consists only of NaN values, which is inconsistent with the CPU implementation. You can replicate this with the following code snippet: ```python import torch import torch.nn.functional as F test = torch.Tensor([[1],[2],[4]]).to("mps") result = F.interpolate(test.unsqueeze(1), 3, mode="linear", align_corners=True).squeeze(1) print(result) # tensor([[nan, nan, nan], # [nan, nan, nan], # [nan, nan, nan]], device='mps:0') test = torch.Tensor([[1],[2],[4]]).to("cpu") result = F.interpolate(test.unsqueeze(1), 3, mode="linear", align_corners=True).squeeze(1) print(result) # tensor([[1., 1., 1.], # [2., 2., 2.], # [4., 4., 4.]]) ``` ### Versions Collecting environment information... PyTorch version: 2.5.1 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 15.1 (arm64) GCC version: Could not collect Clang version: 16.0.0 (clang-1600.0.26.4) CMake version: Could not collect Libc version: N/A Python version: 3.11.9 (main, Jun 29 2024, 14:01:21) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime) Python platform: macOS-15.1-arm64-arm-64bit Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Apple M2 Versions of relevant libraries: [pip3] flake8==7.1.1 [pip3] mypy==1.11.2 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] pytorch-forecasting==1.1.1 [pip3] pytorch-lightning==2.4.0 [pip3] pytorch_optimizer==2.12.0 [pip3] torch==2.5.1 [pip3] torchmetrics==1.4.1 [pip3] torchvision==0.19.0 [conda] Could not collect cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @kulinseth @malfet @DenisVieriu97 @jhavukainen
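For reference, the expected CPU result can be checked against the standard align_corners=True linear-interpolation formula in pure Python: the source coordinate for output index `i` is `i * (in_w - 1) / (out_w - 1)`, which collapses to index 0 when the input width is 1, so the single value is simply repeated and no NaN can appear. This is a reference sketch of the math, not the PyTorch kernel itself:

```python
# Pure-Python reference for 1-D linear interpolation with align_corners=True.
# Source coordinate for output index i:
#   src = i * (in_w - 1) / (out_w - 1)   (0.0 when out_w == 1)
def interp_linear_align_corners(row, out_w):
    in_w = len(row)
    out = []
    for i in range(out_w):
        src = i * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
        lo = int(src)                     # left neighbor index
        hi = min(lo + 1, in_w - 1)        # right neighbor, clamped
        frac = src - lo                   # blend weight toward the right
        out.append(row[lo] * (1.0 - frac) + row[hi] * frac)
    return out

# Each input "row" above has width 1 (test.unsqueeze(1)), so the single
# value must repeat across the output -- matching the CPU result, no NaNs.
for value, expected in [(1.0, [1.0, 1.0, 1.0]),
                        (2.0, [2.0, 2.0, 2.0]),
                        (4.0, [4.0, 4.0, 4.0])]:
    assert interp_linear_align_corners([value], 3) == expected
```

Any backend that disagrees with this (as MPS does here, returning NaN) is returning an incorrect result for these inputs rather than a different-but-valid interpolation convention.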
module: nn,triaged,module: correctness (silent),module: mps
low
Critical
2,770,407,601
godot
In canvas shader, INSTANCE_ID is always 0.
### Tested versions 4.3 ### System information win11 ### Issue description What it is **not** supposed to do: the colors should not be in sync; ![Image](https://github.com/user-attachments/assets/40cc04d5-bc29-4463-b117-feb59bfcb05c) What it is supposed to do: the colors are not in sync (intended) ![Image](https://github.com/user-attachments/assets/186be47f-5413-4089-92c5-5ae9ea9970d5) I want to use a different number in different shader instances to make them look different without changing a uniform parameter from the script. I can do this via INSTANCE_ID in 3D, but not in 2D. In canvas shaders, INSTANCE_ID always seems to be 0. As demonstrated above, the shader code is meant to make different shader instances run a color-changing animation with a time offset given by a unique number, which is INSTANCE_ID. Maybe this is not implemented in canvas shaders, or is intended? ### Steps to reproduce Use the shader code below, create multiple sprites, and make the shader material unique. The color of the sprites is in sync in this case, which means INSTANCE_ID is always the same. ``` shader_type canvas_item; varying flat float seed; void vertex() { seed = fract(float(INSTANCE_ID) * 0.1 + TIME); } void fragment() { COLOR = vec4(seed, 1.0, 1.0, 1.0); } ``` ### Minimal reproduction project (MRP) [4.3test (4).zip](https://github.com/user-attachments/files/18318602/4.3test.4.zip)
documentation,topic:shaders
low
Minor
2,770,416,120
ant-design
After setting scroll={{x: 1500 }} on Table, the sticky behavior of the table inside expandedRowRender disappears
### Reproduction link [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/p/sandbox/antd-reproduction-template-forked-jyh2k9) ### Steps to reproduce The expandable row of the Table also contains a table. Add the sticky property to the table inside expandable, then add scroll={{x: 1500 }} to the outer table; the sticky fixed-header behavior of the table inside expandable disappears. ### What is expected? After setting scroll={{x: 1500 }}, the sticky behavior of the table inside expandedRowRender should not disappear. ### What is actually happening? After the outer table sets scroll={{x: 1500 }}, the sticky behavior of the table inside expandedRowRender disappears. | Environment | Info | | --- | --- | | antd | 5.23.0 | | React | 18.0.0 | | System | mac | | Browser | chrome | <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
🙅🏻‍♀️ WON'T RESOLVE
low
Major
2,770,456,877
excalidraw
Cursor reappearing while writing on tablet
### Discussed in https://github.com/excalidraw/excalidraw/discussions/8922 <div type='discussions-op-text'> <sup>Originally posted by **happyfunction** December 17, 2024</sup> When using the pen tool, the crosshair is always displayed when using the mouse, whether you are writing (pressing the left mouse button) or just moving the cursor. However, when using a digital pen and a graphics tablet, the cursor disappears when the digital pen touches the graphics tablet. When the digital pen leaves the graphics tablet, the cursor reappears. If you write relatively fast, the cursor will appear and disappear from time to time, and this kind of experience is not good.</div>
bug
low
Minor
2,770,457,615
vscode
Remove initialising of extension sizes
Remove initialising of extension sizes
debt,extensions
low
Minor
2,770,460,349
electron
setAspectRatio does not follow the useContentSize
### Preflight Checklist - [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project. - [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to. - [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success. ### Electron Version 33.3.0 ### What operating system(s) are you using? Windows ### Operating System Version Windows 10 22H2 ### What arch are you using? x64 ### Last Known Working Electron version _No response_ ### Expected Behavior I'm creating a program for a game and need to fix the window ratio to 16:9. The window size should stay 16:9. ### Actual Behavior When `setAspectRatio` is called, the window does not follow `useContentSize`. ### Testcase Gist URL [https://gist.github.com/nini22P/85113721f1f3aad7c78f1837c6f2a677](https://gist.github.com/nini22P/85113721f1f3aad7c78f1837c6f2a677) ### Additional Information I created a 16:9 centered div, then set the window to 1280 x 720 with `useContentSize: true`. With `useContentSize: true` already set, `mainWindow.setAspectRatio(16 / 9)` should make the content ratio 16:9 directly. But the content ratio isn't 16:9, and a black border appears
platform/windows,bug :beetle:,33-x-y
low
Critical
2,770,482,434
langchain
TypeError: Additional kwargs key total_tokens already exists in left dict and value has unsupported type <class 'decimal.Decimal'>.
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code The error is thrown while using long conversations on Claude models. This is how the model is invoked: ``` conversation = RunnableWithMessageHistory( chain, lambda session_id: self.chat_history, history_messages_key="chat_history", input_messages_key="input", output_messages_key="output", ) for chunk in conversation.stream( input={"input": user_prompt}, config=config ): logger.debug("chunk", chunk=chunk) if "answer" in chunk: answer = answer + chunk["answer"] elif isinstance(chunk, AIMessageChunk): for c in chunk.content: if "text" in c: answer = answer + c.get("text") ``` ### Error Message and Stack Trace (if applicable) ``` "stack_trace": { "type": "TypeError", "value": "Additional kwargs key total_tokens already exists in left dict and value has unsupported type <class 'decimal.Decimal'>.", "module": "builtins", "frames": [ { "file": "/var/task/adapters/base/base.py", "line": 183, "function": "run_with_chain_v2", "statement": "for chunk in conversation.stream(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 5525, "function": "stream", "statement": "yield from self.bound.stream(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 5525, "function": "stream", "statement": "yield from self.bound.stream(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3407, "function": "stream", "statement": "yield from self.transform(iter([input]), config, **kwargs)" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3394, "function": "transform", "statement": "yield 
from self._transform_stream_with_config(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 2197, "function": "_transform_stream_with_config", "statement": "chunk: Output = context.run(next, iterator) # type: ignore" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3357, "function": "_transform", "statement": "yield from final_pipeline" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 5561, "function": "transform", "statement": "yield from self.bound.transform(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 4820, "function": "transform", "statement": "yield from self._transform_stream_with_config(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 2197, "function": "_transform_stream_with_config", "statement": "chunk: Output = context.run(next, iterator) # type: ignore" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 4800, "function": "_transform", "statement": "for chunk in output.stream(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 5525, "function": "stream", "statement": "yield from self.bound.stream(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3407, "function": "stream", "statement": "yield from self.transform(iter([input]), config, **kwargs)" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3394, "function": "transform", "statement": "yield from self._transform_stream_with_config(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 2197, "function": "_transform_stream_with_config", "statement": "chunk: Output = context.run(next, iterator) # type: ignore" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3357, "function": "_transform", "statement": "yield from final_pipeline" }, { "file": "/opt/python/langchain_core/runnables/passthrough.py", "line": 576, "function": "transform", "statement": "yield from 
self._transform_stream_with_config(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 2197, "function": "_transform_stream_with_config", "statement": "chunk: Output = context.run(next, iterator) # type: ignore" }, { "file": "/opt/python/langchain_core/runnables/passthrough.py", "line": 555, "function": "_transform", "statement": "for chunk in for_passthrough:" }, { "file": "/opt/python/langchain_core/utils/iter.py", "line": 61, "function": "tee_peer", "statement": "item = next(iterator)" }, { "file": "/opt/python/langchain_core/runnables/passthrough.py", "line": 576, "function": "transform", "statement": "yield from self._transform_stream_with_config(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 2197, "function": "_transform_stream_with_config", "statement": "chunk: Output = context.run(next, iterator) # type: ignore" }, { "file": "/opt/python/langchain_core/runnables/passthrough.py", "line": 566, "function": "_transform", "statement": "yield cast(dict[str, Any], first_map_chunk_future.result())" }, { "file": "/var/lang/lib/python3.11/concurrent/futures/_base.py", "line": 456, "function": "result", "statement": "return self.__get_result()" }, { "file": "/var/lang/lib/python3.11/concurrent/futures/_base.py", "line": 401, "function": "__get_result", "statement": "raise self._exception" }, { "file": "/var/lang/lib/python3.11/concurrent/futures/thread.py", "line": 58, "function": "run", "statement": "result = self.fn(*self.args, **self.kwargs)" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3847, "function": "transform", "statement": "yield from self._transform_stream_with_config(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 2197, "function": "_transform_stream_with_config", "statement": "chunk: Output = context.run(next, iterator) # type: ignore" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3832, "function": "_transform", "statement": "chunk = 
AddableDict({step_name: future.result()})" }, { "file": "/var/lang/lib/python3.11/concurrent/futures/_base.py", "line": 449, "function": "result", "statement": "return self.__get_result()" }, { "file": "/var/lang/lib/python3.11/concurrent/futures/_base.py", "line": 401, "function": "__get_result", "statement": "raise self._exception" }, { "file": "/var/lang/lib/python3.11/concurrent/futures/thread.py", "line": 58, "function": "run", "statement": "result = self.fn(*self.args, **self.kwargs)" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 5561, "function": "transform", "statement": "yield from self.bound.transform(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 1431, "function": "transform", "statement": "yield from self.stream(final, config, **kwargs)" }, { "file": "/opt/python/langchain_core/runnables/branch.py", "line": 367, "function": "stream", "statement": "for chunk in self.default.stream(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3407, "function": "stream", "statement": "yield from self.transform(iter([input]), config, **kwargs)" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3394, "function": "transform", "statement": "yield from self._transform_stream_with_config(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 2197, "function": "_transform_stream_with_config", "statement": "chunk: Output = context.run(next, iterator) # type: ignore" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 3357, "function": "_transform", "statement": "yield from final_pipeline" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 1413, "function": "transform", "statement": "for ichunk in input:" }, { "file": "/opt/python/langchain_core/output_parsers/transform.py", "line": 64, "function": "transform", "statement": "yield from self._transform_stream_with_config(" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 
2161, "function": "_transform_stream_with_config", "statement": "final_input: Optional[Input] = next(input_for_tracing, None)" }, { "file": "/opt/python/langchain_core/runnables/base.py", "line": 1431, "function": "transform", "statement": "yield from self.stream(final, config, **kwargs)" }, { "file": "/opt/python/langchain_core/language_models/chat_models.py", "line": 365, "function": "stream", "statement": "BaseMessageChunk, self.invoke(input, config=config, stop=stop, **kwargs)" }, { "file": "/opt/python/langchain_core/language_models/chat_models.py", "line": 286, "function": "invoke", "statement": "self.generate_prompt(" }, { "file": "/opt/python/langchain_core/language_models/chat_models.py", "line": 786, "function": "generate_prompt", "statement": "return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)" }, { "file": "/opt/python/langchain_core/language_models/chat_models.py", "line": 643, "function": "generate", "statement": "raise e" }, { "file": "/opt/python/langchain_core/language_models/chat_models.py", "line": 633, "function": "generate", "statement": "self._generate_with_cache(" }, { "file": "/opt/python/langchain_core/language_models/chat_models.py", "line": 851, "function": "_generate_with_cache", "statement": "result = self._generate(" }, { "file": "/opt/python/langchain_aws/chat_models/bedrock_converse.py", "line": 491, "function": "_generate", "statement": "bedrock_messages, system = _messages_to_bedrock(messages)" }, { "file": "/opt/python/langchain_aws/chat_models/bedrock_converse.py", "line": 685, "function": "_messages_to_bedrock", "statement": "messages = merge_message_runs(messages)" }, { "file": "/opt/python/langchain_core/messages/utils.py", "line": 381, "function": "wrapped", "statement": "return func(messages, **kwargs)" }, { "file": "/opt/python/langchain_core/messages/utils.py", "line": 571, "function": "merge_message_runs", "statement": "merged.append(_chunk_to_msg(last_chunk + curr_chunk))" }, { "file": 
"/opt/python/langchain_core/messages/ai.py", "line": 395, "function": "__add__", "statement": "return add_ai_message_chunks(self, other)" }, { "file": "/opt/python/langchain_core/messages/ai.py", "line": 412, "function": "add_ai_message_chunks", "statement": "additional_kwargs = merge_dicts(" }, { "file": "/opt/python/langchain_core/utils/_merge.py", "line": 58, "function": "merge_dicts", "statement": "merged[right_k] = merge_dicts(merged[right_k], right_v)" }, { "file": "/opt/python/langchain_core/utils/_merge.py", "line": 68, "function": "merge_dicts", "statement": "raise TypeError(msg)" } ] } ``` ### Description I am getting the this error when the conversation becomes lengthy and while using RAG flow. I am using the `anthropic.claude-3-5-sonnet-20240620-v1:0` model Following are the versions of langchain libraries: ``` langchain==0.3.13 langchain-core==0.3.28 langchain-community==0.3.13 ``` ### System Info ``` System Information ------------------ > OS: Linux > OS Version: #1 SMP Mon Oct 28 22:41:53 UTC 2024 > Python Version: 3.11.10 (main, Sep 24 2024, 11:02:55) [GCC 7.3.1 20180712 (Red Hat 7.3.1-17)] Package Information ------------------- > langchain_core: 0.3.28 > langchain: 0.3.13 > langchain_community: 0.3.13 > langsmith: 0.2.10 > langchain_aws: 0.2.10 > langchain_openai: 0.2.14 > langchain_text_splitters: 0.3.4 Optional packages not installed ------------------------------- > langserve Other Dependencies ------------------ > aiohttp: 3.11.11 > async-timeout: Installed. No version info available. > boto3: 1.34.145 > dataclasses-json: 0.6.7 > httpx: 0.28.1 > httpx-sse: 0.4.0 > jsonpatch: 1.33 > langsmith-pyo3: Installed. No version info available. > numpy: 1.26.0 > openai: 1.59.3 > orjson: 3.10.13 > packaging: 24.2 > pydantic: 2.10.4 > pydantic-settings: 2.7.1 > PyYAML: 6.0.2 > requests: 2.31.0 > requests-toolbelt: 1.0.0 > SQLAlchemy: 2.0.36 > tenacity: 9.0.0 > tiktoken: 0.8.0 > typing-extensions: 4.12.2 > zstandard: Installed. No version info available. 
```
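The final frames point at `merge_dicts` in `langchain_core/utils/_merge.py`: when the same key appears in both chunks' `additional_kwargs` with a value type it can't combine (here `decimal.Decimal`, as returned by the Bedrock integration), it raises. The following is a simplified mimic of that branch, not the actual implementation, plus one possible mitigation:

```python
from decimal import Decimal

def merge_kwargs(left: dict, right: dict) -> dict:
    """Simplified mimic of merge_dicts: strings concatenate, equal values
    are kept, and any other duplicate key raises -- which is what happens
    when token counts arrive as decimal.Decimal."""
    merged = dict(left)
    for k, v in right.items():
        if k not in merged or merged[k] is None:
            merged[k] = v
        elif isinstance(merged[k], str):
            merged[k] += v
        elif merged[k] == v:
            continue
        else:
            raise TypeError(
                f"Additional kwargs key {k} already exists in left dict and "
                f"value has unsupported type {type(v)}."
            )
    return merged

left = {"total_tokens": Decimal("100")}
right = {"total_tokens": Decimal("42")}
try:
    merge_kwargs(left, right)
except TypeError as e:
    print(e)  # reproduces the reported error shape

# Possible mitigation until the integration normalizes its types: strip the
# volatile Decimal usage counters before chunks accumulate.
def strip_usage(kwargs: dict) -> dict:
    return {k: v for k, v in kwargs.items() if not isinstance(v, Decimal)}

print(merge_kwargs(strip_usage(left), strip_usage(right)))  # {}
```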
🤖:bug,investigate
low
Critical
2,770,483,065
flutter
[Camera] Camera preview not working in Samsung Galaxy tab S6 Lite
### Steps to reproduce Flutter version: 3.27.1 Camera version: 0.11.0+2 dependency_overrides: camera_android_camerax: 0.6.7+2 ### Expected results Should work properly with Flutter's latest version. ### Actual results After upgrading camera_android_camerax to the latest version, the application hits this issue: https://github.com/flutter/flutter/issues/160815. Other camera_android_camerax versions face a camera rotation issue on some devices: https://github.com/flutter/flutter/issues/154241 ### Code sample <details open><summary>Code sample</summary> ```dart void main() async { runApp(const MainApp()); } class MainApp extends StatefulWidget { const MainApp({super.key}); @override State<MainApp> createState() => _MainAppState(); } class _MainAppState extends State<MainApp> { final _runner = _Runner(); @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( body: ValueListenableBuilder( valueListenable: _runner, builder: _build, ), ), ); } Widget _build(BuildContext context, _State value, Widget?
child) { switch (value) { case _Loaded(): return Center( child: CameraPreview(value.controller), ); case _Simple.initializing: return const Center( child: CircularProgressIndicator(), ); case _Simple.error: return const Center( child: Icon(Icons.error), ); } } } class _Runner with ChangeNotifier implements ValueListenable<_State> { _Runner() { _init(); } void _init() async { final available = await availableCameras(); if (available.isEmpty) { _emit(_Simple.error); } else { final controller = CameraController(available.first, ResolutionPreset.veryHigh); try { await controller.initialize(); _emit(_Loaded( controller: controller, available: available, )); } catch (e) { _emit(_Simple.error); } } } _State _value = _Simple.initializing; @override _State get value => _value; void _emit(_State state) { _value = state; notifyListeners(); } } sealed class _State {} enum _Simple implements _State { initializing, error, } class _Loaded implements _State { final CameraController controller; final List<CameraDescription> available; const _Loaded({ required this.controller, required this.available, }); } ``` </details> ### Screenshots or Video ![Screenshot_20250106_165057_Asite Field](https://github.com/user-attachments/assets/5481352e-f11b-4ac0-a50a-faef19f284a7) ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ``` [✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale en-IN) [✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0) [✓] Xcode - develop for iOS and macOS (Xcode 15.4) [✓] Chrome - develop for the web [✓] Android Studio (version 2024.2) [✓] Connected device (7 available) ! Error: Browsing on the local area network for Field iPad Pro. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac. 
The device must be opted into Developer Mode to connect wirelessly. (code -27) [✓] Network resources ``` </details>
waiting for customer response,in triage
low
Critical
2,770,504,733
vscode
Screen cheese in the debug console
![Image](https://github.com/user-attachments/assets/eaf409e2-39cc-4a7a-aa68-46e276f66f62) I've also seen it appear to be empty, but with a scrollbar and a lot of distance to scroll.
bug,debug
low
Critical
2,770,509,828
pytorch
[Inductor] Unify the data type propagation between Triton and CPP Backend
### 🚀 The feature, motivation and pitch Previously, `dtype` was an attribute of `CppCSEVariable` but not of `CSEVariable`, and thus the CPP backend has its own data type propagation mechanism in `CppCSEVariable.update_on_args`: https://github.com/pytorch/pytorch/blob/d85ae4be734cfd53f5b893240894381ac65fe8b4/torch/_inductor/codegen/cpp_utils.py#L218 Now, the Triton backend has also introduced new data type propagation in https://github.com/pytorch/pytorch/blob/d85ae4be734cfd53f5b893240894381ac65fe8b4/torch/_inductor/codegen/common.py#L1825-L1866. We may consolidate these two mechanisms to remove redundant code. ### Alternatives No ### Additional context No cc @soulitzer @chauhang @penguinwu
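Both mechanisms answer the same question — given the argument dtypes of an op, what dtype does its result carry — so a consolidated helper could be shared between the backends. The following is a toy Python illustration of that shared logic with a deliberately simplified promotion lattice (the real rules follow `torch.promote_types`); it is not a sketch of the actual Inductor code:

```python
# Illustrative only -- a deliberately simplified promotion lattice.
PROMOTION_ORDER = ["bool", "int32", "int64", "float16", "float32", "float64"]

def promote(*dtypes: str) -> str:
    """Return the widest dtype among the arguments (toy lattice)."""
    return max(dtypes, key=PROMOTION_ORDER.index)

def propagate(op: str, *arg_dtypes: str) -> str:
    """Toy propagation rule: comparisons yield bool, everything else promotes."""
    if op in ("eq", "ne", "lt", "gt", "le", "ge"):
        return "bool"
    return promote(*arg_dtypes)

print(propagate("add", "int64", "float32"))   # float32
print(propagate("lt", "float64", "float64"))  # bool
```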
oncall: pt2,oncall: cpu inductor
low
Major
2,770,550,726
tauri
[bug] Improve error messages if `cargo metadata` fails.
### Describe the bug Running: `cargo create-tauri-app` or `sh <(curl https://create.tauri.app/sh)` - Tried Rust-based frontends (vanilla and Leptos) as well as TypeScript-based ones (vanilla with pnpm or Deno), then `cargo tauri android init` or `pnpm tauri android init` or `deno task tauri android init` _all_ produce: ``` failed to get cargo metadata: expected value at line 1 column 1 ``` Trying `cargo tauri dev` instead of `android init` gives the same issue. Adding the verbose flag with `-v` doesn't give any useful info. _Apologies in advance if this is something trivial I've missed; at the same time, the quickstart should be as foolproof as possible IMO, so this could still highlight an area for improvement._ ### Reproduction ``` sh <(curl https://create.tauri.app/sh) # Select any frontend option cargo tauri android init # OR cargo tauri dev ``` ### Expected behavior `android init` should create the Android scaffold, and `cargo tauri dev` should correctly build the app and launch the dev server. ### Full `tauri info` output ```text cargo tauri info [✔] Environment - OS: EndeavourOS Rolling Release x86_64 (X64) (Unknown DE on wayland) ✔ webkit2gtk-4.1: 2.46.5 ✔ rsvg2: 2.59.2 ✔ rustc: 1.83.0 (90b35a623 2024-11-26) ✔ cargo: ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (environment override by RUSTUP_TOOLCHAIN) - node: 20.18.1 - pnpm: 9.15.3 - yarn: 1.22.22 - npm: 11.0.0 - bun: 1.0.7 - deno: deno 2.0.6 [-] Packages - tauri 🦀: 2 - tauri-build 🦀: No version detected - wry 🦀: No version detected - tao 🦀: No version detected - 🦀: cargo 1.83.0 (5ffbef321 2024-10-29) - @tauri-apps/api : 2.2.0 - @tauri-apps/cli : 2.2.2 [-] Plugins [-] App - build-type: bundle - CSP: unset - frontendDist: ../dist - devUrl: http://localhost:1420/ - bundler: Vite ``` ### Stack trace ```text failed to get cargo metadata: expected value at line 1 column 1 ``` ### Additional context _No response_
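`expected value at line 1 column 1` is serde_json's message for input that is empty or begins with non-JSON text (for example, a stray warning line), so the CLI most likely received garbage on stdout from `cargo metadata`. A small sketch of that failure class using Python's `json` module, whose analogous message is "Expecting value: line 1 column 1"; the debug command at the end assumes the CLI invokes roughly `cargo metadata --format-version 1`:

```python
import json

# Any stdout that does not begin with a JSON value fails at line 1 column 1.
for bad_stdout in ("", "warning: unused manifest key\n{}"):
    try:
        json.loads(bad_stdout)
    except json.JSONDecodeError as e:
        print(repr(bad_stdout[:25]), "->", e.msg, f"(line {e.lineno}, column {e.colno})")

# To see what the Tauri CLI actually received, run the metadata command
# yourself inside src-tauri/ and inspect the first line of its output:
print("debug with: cargo metadata --format-version 1 --no-deps")
```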
type: bug,good first issue,status: backlog
low
Critical
2,770,556,004
storybook
[Bug]: Storybook couldn't evaluate your .storybook/main.ts
### Describe the bug When I try to run Storybook, it gives an error: ``` SB_CORE-SERVER_0007 (MainFileEvaluationError): Storybook couldn't evaluate your .storybook\main.ts file. Original error: Error: Transform failed with 1 error: (define name):1:0: ERROR: Expected identifier but found "import" at failureErrorWithLog (\node_modules\esbuild\lib\main.js:1476:15) at responseCallbacks.<computed> (\node_modules\esbuild\lib\main.js:622:9) at Socket.readFromStdout (\node_modules\esbuild\lib\main.js:600:7) at Socket.emit (node:events:520:28) at addChunk (node:internal/streams/readable:559:12) at readableAddChunkPushByteMode (node:internal/streams/readable:510:3) at Readable.push (node:internal/streams/readable:390:5) at Pipe.onStreamRead (node:internal/stream_base_commons:191:23) at loadMainConfig (.\node_modules\@storybook\core\dist\common\index.cjs:17511:11) at async buildDevStandalone (.\node_modules\@storybook\core\dist\core-server\index.cjs:37134:11) at async withTelemetry (.\node_modules\@storybook\core\dist\core-server\index.cjs:35757:12) at async dev (.\node_modules\@storybook\core\dist\cli\bin\index.cjs:2591:3) at async s.<anonymous> (.\node_modules\@storybook\core\dist\cli\bin\index.cjs:2643:74) WARN Broken build, fix the error above. WARN You may need to refresh the browser. 
``` I use Next.js version 14. I found answers for Vite but couldn't find one that suited my setup. ### Reproduction link https://stackblitz.com/edit/github-p7lfsqcw?file=.storybook%2Fmain.ts,.storybook%2Fpreview.ts&preset=node ### Reproduction steps _No response_ ### System ```bash System: OS: Windows 10 10.0.19045 CPU: (12) x64 AMD Ryzen 5 2600X Six-Core Processor Binaries: Node: 22.0.0 - C:\Program Files\nodejs\node.EXE Yarn: 1.22.22 - C:\Program Files\nodejs\yarn.CMD <----- active npm: 10.5.1 - C:\Program Files\nodejs\npm.CMD Browsers: Edge: Chromium (129.0.2792.89) npmPackages: @storybook/addon-essentials: 8.4.2 => 8.4.2 @storybook/addon-interactions: 8.4.2 => 8.4.2 @storybook/addon-onboarding: 8.4.2 => 8.4.2 @storybook/blocks: 8.4.2 => 8.4.2 @storybook/nextjs: ^8.4.2 => 8.4.7 @storybook/preview-api: ^8.4.2 => 8.4.7 @storybook/react: 8.4.2 => 8.4.2 @storybook/test: 8.4.2 => 8.4.2 eslint-plugin-storybook: ^0.11.0 => 0.11.1 storybook: 8.4.2 => 8.4.2 ``` ### Additional context _No response_
bug,core
low
Critical
2,770,564,002
transformers
SAM mask-generation - crops_n_layers
### System Info If I increase `crops_n_layers` to > 0 in a mask-generation pipeline, I run into issues with the batch size: ```python from PIL import Image from transformers import pipeline relative_image_path = r"data\image.png" raw_image = Image.open(relative_image_path) generator = pipeline("mask-generation", model="facebook/sam-vit-base", device=0) outputs = generator(raw_image, points_per_batch=64, crops_n_layers=1) ``` Error message: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File c:\Users\LabUser\Documents\repos\test-classification\main.py:4 [1](file:///C:/Users/LabUser/Documents/repos/test-classification/main.py:1) # %% [3](file:///C:/Users/LabUser/Documents/repos/test-classification/main.py:3) generator = pipeline("mask-generation", model="facebook/sam-vit-huge", device=0) ----> [4](file:///C:/Users/LabUser/Documents/repos/test-classification/main.py:4) outputs = generator([raw_image], points_per_batch=64, crops_n_layers=1) File c:\Users\LabUser\Documents\repos\test-classification\env\lib\site-packages\transformers\pipelines\mask_generation.py:166, in MaskGenerationPipeline.__call__(self, image, num_workers, batch_size, *args, **kwargs) [128](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:128) def __call__(self, image, *args, num_workers=None, batch_size=None, **kwargs): [129](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:129) """ [130](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:130) Generates binary segmentation masks [131](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:131) (...) 
[164](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:164) [165](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:165) """ --> [166](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:166) return super().__call__(image, *args, num_workers=num_workers, batch_size=batch_size, **kwargs) File c:\Users\LabUser\Documents\repos\test-classification\env\lib\site-packages\transformers\pipelines\base.py:1282, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs) [1278](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1278) if can_use_iterator: [1279](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1279) final_iterator = self.get_iterator( [1280](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1280) inputs, num_workers, batch_size, preprocess_params, forward_params, postprocess_params [1281](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1281) ) -> [1282](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1282) outputs = list(final_iterator) [1283](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1283) return outputs [1284](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1284) else: File c:\Users\LabUser\Documents\repos\test-classification\env\lib\site-packages\transformers\pipelines\pt_utils.py:124, in PipelineIterator.__next__(self) 
[121](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:121) return self.loader_batch_item() [123](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:123) # We're out of items within a batch --> [124](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:124) item = next(self.iterator) [125](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:125) processed = self.infer(item, **self.params) [126](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:126) # We now have a batch of "inferred things". File c:\Users\LabUser\Documents\repos\test-classification\env\lib\site-packages\transformers\pipelines\pt_utils.py:269, in PipelinePackIterator.__next__(self) [266](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:266) return accumulator [268](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:268) while not is_last: --> [269](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:269) processed = self.infer(next(self.iterator), **self.params) [270](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:270) if self.loader_batch_size is not None: [271](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/pt_utils.py:271) if isinstance(processed, torch.Tensor): File c:\Users\LabUser\Documents\repos\test-classification\env\lib\site-packages\transformers\pipelines\base.py:1208, in Pipeline.forward(self, model_inputs, **forward_params) 
[1206](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1206) with inference_context(): [1207](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1207) model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device) -> [1208](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1208) model_outputs = self._forward(model_inputs, **forward_params) [1209](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1209) model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu")) [1210](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/base.py:1210) else: File c:\Users\LabUser\Documents\repos\test-classification\env\lib\site-packages\transformers\pipelines\mask_generation.py:233, in MaskGenerationPipeline._forward(self, model_inputs, pred_iou_thresh, stability_score_thresh, mask_threshold, stability_score_offset) [230](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:230) original_sizes = model_inputs.pop("original_sizes").tolist() [231](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:231) reshaped_input_sizes = model_inputs.pop("reshaped_input_sizes").tolist() --> [233](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:233) model_outputs = self.model(**model_inputs) [235](file:///C:/Users/LabUser/Documents/repos/test-classification/env/lib/site-packages/transformers/pipelines/mask_generation.py:235) # post processing happens here in order to avoid CPU GPU copies of ALL the masks 
    236 low_resolution_masks = model_outputs["pred_masks"]

File c:\Users\LabUser\Documents\repos\test-classification\env\lib\site-packages\torch\nn\modules\module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
   1734     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1735 else:
-> 1736     return self._call_impl(*args, **kwargs)

File c:\Users\LabUser\Documents\repos\test-classification\env\lib\site-packages\torch\nn\modules\module.py:1747, in Module._call_impl(self, *args, **kwargs)
   1742 # If we don't have any hooks, we want to skip the rest of the logic in
   1743 # this function, and just call forward.
   1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1745         or _global_backward_pre_hooks or _global_backward_hooks
   1746         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747     return forward_call(*args, **kwargs)
   1749 result = None
   1750 called_always_called_hooks = set()

File c:\Users\LabUser\Documents\repos\test-classification\env\lib\site-packages\transformers\models\sam\modeling_sam.py:1371, in SamModel.forward(self, pixel_values, input_points, input_labels, input_boxes, input_masks, image_embeddings, multimask_output, attention_similarity, target_embedding, output_attentions, output_hidden_states, return_dict, **kwargs)
   1368 input_labels = torch.ones_like(input_points[:, :, :, 0], dtype=torch.int, device=input_points.device)
   1370 if input_points is not None and image_embeddings.shape[0] != input_points.shape[0]:
-> 1371     raise ValueError(
   1372         "The batch size of the image embeddings and the input points must be the same. ",
   1373         "Got {} and {} respectively.".format(image_embeddings.shape[0], input_points.shape[0]),
   1374         " if you want to pass multiple points for the same image, make sure that you passed ",
   1375         " input_points of shape (batch_size, point_batch_size, num_points_per_image, 3) and ",
   1376         " input_labels of shape (batch_size, point_batch_size, num_points_per_image)",
   1377     )
   1379 sparse_embeddings, dense_embeddings = self.prompt_encoder(
   1380     input_points=input_points,
   1381     input_labels=input_labels,
   1382     input_boxes=input_boxes,
   1383     input_masks=input_masks,
   1384 )
   1386 low_res_masks, iou_predictions, mask_decoder_attentions = self.mask_decoder(
   1387     image_embeddings=image_embeddings,
   1388     image_positional_embeddings=image_positional_embeddings,
   (...)
   1394     output_attentions=output_attentions,
   1395 )

ValueError: ('The batch size of the image embeddings and the input points must be the same. ', 'Got 5 and 1 respectively.', ' if you want to pass multiple points for the same image, make sure that you passed ', ' input_points of shape (batch_size, point_batch_size, num_points_per_image, 3) and ', ' input_labels of shape (batch_size, point_batch_size, num_points_per_image)')
```

@Rocketknight1 @amyeroberts @qubvel

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Run:

```python
from PIL import Image
from transformers import pipeline

relative_image_path = r"data\image.png"
raw_image = Image.open(relative_image_path)

generator = pipeline("mask-generation", model="facebook/sam-vit-base", device=0)
outputs = generator(raw_image, points_per_batch=64, crops_n_layers=1)
```

### Expected behavior

Passing `crops_n_layers=1` should be supported: the pipeline should batch the point grid to match the cropped image embeddings instead of raising.
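To make the failing check concrete without downloading the model, here is a minimal sketch (plain Python; the function name and example shapes are illustrative assumptions, not transformers code) of the batch-size comparison that `SamModel.forward` performs:

```python
# Hypothetical sketch of the batch-size check that raises in SamModel.forward.
# Assumption: shapes are modeled as plain tuples; names are illustrative only.

def embeddings_match_points(image_embeddings_shape, input_points_shape):
    """Return True when the leading (batch) dimensions agree, as the model requires."""
    return image_embeddings_shape[0] == input_points_shape[0]

# With crops, the embeddings batch grows (here: 5) while the point grid is
# still batched as 1, so the check fails and the ValueError above is raised.
assert not embeddings_match_points((5, 256, 64, 64), (1, 64, 2, 2))

# Matching leading dimensions pass the check.
assert embeddings_match_points((5, 256, 64, 64), (5, 64, 2, 2))
```

Per the error above, the crop step produces 5 embedding batches against a point batch of 1, which is exactly the mismatch this comparison rejects.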
bug
low
Critical
2,770,582,761
angular
Dynamic component setInput does not respect internal input value changes (signal based inputs/model)
### Which @angular/* package(s) are the source of the bug?

core

### Is this a regression?

Yes

### Description

I am trying to switch to signal-based inputs with dynamic components. I use `model` as the input (because `input` is read-only and can't be changed):

`open = model<boolean>();`

A close() method (on dialog close click) calls:

`this.open.set(false);`

It all works as intended. However, when I try to open the dialog again, using:

`this.componentRef.setInput('open', state);`

it fails, because setInput still sees the value as true and does not update the signal. If I add

```
this.componentRef.setInput('open', false);
this.componentRef.setInput('open', true);
```

it works, because I explicitly set it to false first with setInput, but it fails if the signal model is updated internally. Why does setInput not check the signal's actual value? Why does it still assume it is true? Is there some internal change-detection layer that does not respect signal changes made inside the component?

`...instance.open.set(state);` works, but direct instance manipulation is not recommended.

### Please provide a link to a minimal reproduction of the bug

https://stackblitz.com/edit/angular-awwtth?file=src%2Fapp%2Fapp.component.ts

### Please provide the exception or error you saw

```
true
```

### Please provide the environment you discovered this bug in (run `ng version`)

```
Angular CLI: 18.2.11
Node: 20.15.1
Package Manager: npm 10.7.0
OS: linux x64

Angular: 18.2.10
... animations, common, compiler, compiler-cli, core, forms
... language-service, localize, platform-browser
... platform-browser-dynamic, router

Package                         Version
---------------------------------------------------------
@angular-devkit/architect       0.1802.11
@angular-devkit/build-angular   18.2.11
@angular-devkit/core            18.2.11
@angular-devkit/schematics      18.2.11
@angular/cdk                    18.2.11
@angular/cli                    18.2.11
@schematics/angular             18.2.11
ng-packagr                      18.2.1
rxjs                            7.8.1
typescript                      5.5.4
zone.js                         0.14.10
```

### Anything else?

_No response_
area: core,core: dynamic view creation,core: reactivity
low
Critical
2,770,588,589
storybook
[Tracking]: Storybook Test onboarding via `storybook init`
## 🧑‍🤝‍🧑 Who: @ghengeveld and @ndelangen This is a tracking issue for the **Onboard to Storybook Test via init** project. Its purpose is to keep track of the overall status of the project and tasks and plan everything around it. ## 🏁 Goals 1. Simplify installation of Storybook Test by offering a zero-config setup in more complex projects. This includes Next.js users and custom Vite or Vitest setups. Right now the addon will fail to finish its postinstall step if it detects an unsupported scenario. 2. Improve visibility & awareness of Storybook Test. By making testing more prominent during initial Storybook setup, new users are more likely to adopt this capability, or at least be aware of its existence. 3. Gain insight into what users intend to use Storybook for, by explicitly asking this question during setup. Do they plan on using Storybook for testing, documentation, both, or just development? 4. Avoid installing unnecessary dependencies. Based on the intended use, we can intelligently install only the packages the user will need. 5. Improve the usability and maintainability of the Storybook CLI. Right now the CLI is poorly designed, hard to maintain, and not user-friendly. Its output is verbose, cluttered and inconsistent. This is an opportunity to address all those issues by completely reimplementing the `init` command of the CLI. We plan to use [Ink](https://github.com/vadimdemedes/ink) which allows us to provide a rich, interactive experience while being able to develop in Storybook and use known React patterns. 
## 🚩 Milestones ```[tasklist] ## New `init` CLI (Jan 21) - [x] Setup Ink as subsystem under the `storybook ink` CLI command - [x] Configure our Storybook to render stories for Ink components - [x] Scaffold interactive `init` user flow (questions, fake compatibility checks and dependency installation, mock result) - [x] Support selecting options through CLI flags - [x] Track metrics for user choices ``` ```[tasklist] ## Compatibility resolvers (Jan 28) - [ ] https://github.com/storybookjs/storybook/issues/30226 - [ ] https://github.com/storybookjs/storybook/issues/30229 - [ ] https://github.com/storybookjs/storybook/issues/30228 - [ ] https://github.com/storybookjs/storybook/issues/30227 - [ ] https://github.com/storybookjs/storybook/issues/30327 - [ ] https://github.com/storybookjs/storybook/issues/30324 - [ ] https://github.com/storybookjs/storybook/issues/30325 - [ ] https://github.com/storybookjs/storybook/issues/30326 ``` ```[tasklist] ## Update Old CLI (Feb 4) - [ ] Add dev/docs/test intent prompt to old CLI - [ ] Add new compatibility checks to old CLI - [ ] Trigger `storybook add addon-test` at the end of old CLI if test intent and compatible - [ ] Update addon-test postinstall hook to update existing Vitest config rather than bail ``` ```[tasklist] ## Showcase testing in template stories (Jan 28) - [ ] Update `header` story with mocked auth logic and interaction assertions ``` ```[tasklist] ## Documentation updates (Jan 28) - [ ] https://github.com/storybookjs/storybook/issues/30230 ``` ```[tasklist] ### `create-storybook` optimizations - [ ] Don't depend on `storybook` in `create-storybook` - [ ] Don't depend on `prettier` to autoformat the main-config - [ ] Prebundle dependencies in `create-storybook` - [ ] Document `npm create storybook` instead of `npx storybook init` in docs ``` ```[tasklist] ### Other expandable scope - [ ] Re-design the "Configure Your Project" page to highlight testing - [ ] CLI flag or option to skip generating template stories and 
omit addon-onboarding - [ ] CLI command to add `test` or `docs` capabilities to existing Storybook ```
feature request,cli,Tracking,addon: test
low
Minor
2,770,616,555
ui
[bug]: Canary CLI monorepo doesn't Install dependencies in right place
### Describe the bug I've tried the new Canary CLI monorepo to initialize a new project and add a component, but the dependencies are being installed in the wrong location: they end up in the web application instead of the ui package. ### Affected component/components All ### How to reproduce 1. Initialize a new project using the Shadcn CLI with monorepo: ```bash pnpm dlx shadcn@canary init ``` 2. Pick the monorepo option: ```bash ? Would you like to start a new project? Next.js ❯ Next.js (Monorepo) ``` 3. Add a component to the project (e.g., a button or dialog): ```bash pnpm dlx shadcn@latest add dialog -c apps/web ``` 4. Run pnpm install: ``` pnpm i ``` 5. Observe that the Radix packages are either missing or installed in the incorrect place. ### Codesandbox/StackBlitz link https://codesandbox.io/p/github/davidFeldqwe/mono-repo-shadcn-demo/main?import=true ### Logs _No response_ ### System Info ```bash macOS, chrome ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,770,625,946
vscode
copilot-debug terminal integration breaks python venv activation
- Copilot Chat Extension Version: 0.23.2 - VS Code Version: 1.96.2 - OS Version: Windows 11 - Logs: Steps to Reproduce: 1. Start with a clean VSCode profile 2. Install the [python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) 3. Create a virtual environment and set it as the python interpreter 4. Open a new terminal and run `(get-command python).Source` (or equivalent for your shell): it should point to the python interpreter from the virtual environment, because the Python extension activates the environment for you and prepends it to `env:PATH` 5. Now, install Copilot Chat and restart VSCode 6. Open a new terminal and re-run `(get-command python).Source`: this time, it points to the global python interpreter. The issue seems to be that both the Python extension and Copilot Chat try to modify the PATH at the same time, probably overriding each other's changes (though in my case, Copilot Chat always seems to override Python). Even worse, Copilot Chat seems to completely ignore the `"github.copilot.chat.copilotDebugCommand.enabled": false` setting, with the extension *still* trying to add features to the terminal. **EDIT:** Adding a separate issue for this, microsoft/vscode-copilot-release#3499
bug,terminal-env-collection
low
Critical
2,770,643,439
pytorch
[inductor][cpu] torch.bitwise_and/or/xor incorrectly accepts float32 tensors
### 🐛 Describe the bug

It's not a severe problem, but I'm reporting it anyway since the actual behavior deviates from what is expected in the documentation. According to the documentation (e.g., https://pytorch.org/docs/stable/generated/torch.bitwise_not.html), `torch.bitwise_and/or/xor` only accept boolean or integer tensors. However, after `torch.compile` these three APIs also accept float32 tensors when running on CPU. In contrast, if I run the code on torch.compile-cuda or in eager mode, float tensors are rejected by these APIs.

```
import torch

for op in [torch.bitwise_and, torch.bitwise_not, torch.bitwise_or, torch.bitwise_xor]:
    cf = torch.compile(op)
    for dtype in [torch.float16, torch.float32, torch.float64]:
        input = torch.tensor([-1, -2, 3], dtype=dtype)
        other = torch.tensor([1, 0, 3], dtype=dtype)
        try:
            res = cf(input, other)
            print(f"[torch.compile] OP: {op.__name__} accepts dtype: {dtype}")
        except:
            print(f"[torch.compile] OP: {op.__name__} does not accept dtype: {dtype}")
```

Actual output:

```
[torch.compile] OP: bitwise_and does not accept dtype: torch.float16
[torch.compile] OP: bitwise_and accepts dtype: torch.float32  # incorrectly accepts float32
[torch.compile] OP: bitwise_and does not accept dtype: torch.float64
[torch.compile] OP: bitwise_not does not accept dtype: torch.float16
[torch.compile] OP: bitwise_not does not accept dtype: torch.float32
[torch.compile] OP: bitwise_not does not accept dtype: torch.float64
[torch.compile] OP: bitwise_or does not accept dtype: torch.float16
[torch.compile] OP: bitwise_or accepts dtype: torch.float32  # incorrectly accepts float32
[torch.compile] OP: bitwise_or does not accept dtype: torch.float64
[torch.compile] OP: bitwise_xor does not accept dtype: torch.float16
[torch.compile] OP: bitwise_xor accepts dtype: torch.float32  # incorrectly accepts float32
[torch.compile] OP: bitwise_xor does not accept dtype: torch.float64
```

### Versions

PyTorch version: 2.7.0.dev20250106+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64) GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3) Clang version: 17.0.6 (AlmaLinux OS Foundation 17.0.6-5.el9) CMake version: version 3.26.5 Libc version: glibc-2.34 Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.14.0-427.37.1.el9_4.x86_64-x86_64-with-glibc2.34 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 GPU 1: NVIDIA GeForce RTX 3090 GPU 2: NVIDIA GeForce RTX 3090 GPU 3: NVIDIA GeForce RTX 3090 Nvidia driver version: 560.35.03 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 43 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Vendor ID: AuthenticAMD Model name: AMD Ryzen Threadripper PRO 3975WX 32-Cores CPU family: 23 Model: 49 Thread(s) per core: 2 Core(s) per socket: 32 Socket(s): 1 Stepping: 0 Frequency boost: enabled CPU(s) scaling MHz: 81% CPU max MHz: 4368.1641 CPU min MHz: 2200.0000 BogoMIPS: 7000.73 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero 
irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es Virtualization: AMD-V L1d cache: 1 MiB (32 instances) L1i cache: 1 MiB (32 instances) L2 cache: 16 MiB (32 instances) L3 cache: 128 MiB (8 instances) NUMA node(s): 1 NUMA node0 CPU(s): 0-63 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection Vulnerability Spec rstack overflow: Mitigation; Safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.26.2 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-cusparselt-cu12==0.6.2 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] optree==0.13.1 [pip3] pytorch-triton==3.2.0+git0d4682f0 [pip3] torch==2.7.0.dev20250106+cu124 [pip3] torchaudio==2.6.0.dev20250106+cu124 [conda] numpy 1.26.2 pypi_0 pypi [conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.4.127 
pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi [conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi [conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi [conda] optree 0.13.1 pypi_0 pypi [conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi [conda] torch 2.7.0.dev20250106+cu124 pypi_0 pypi [conda] torchaudio 2.6.0.dev20250106+cu124 pypi_0 pypi cc @malfet @chauhang @penguinwu
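For reference, the documented dtype rule can be stated without torch. This is an illustrative sketch (dtypes modeled as plain strings; not PyTorch's actual checking code) of the contract the docs describe, which eager mode and compile-cuda enforce but the compiled CPU path misses for float32:

```python
# Illustrative sketch of the documented dtype rule for torch.bitwise_* ops.
# Assumption: dtypes are plain strings; this is not PyTorch's real checker.

ALLOWED_BITWISE_DTYPES = {
    "bool", "uint8", "int8", "int16", "int32", "int64",
}

def bitwise_dtype_ok(dtype: str) -> bool:
    """True iff the dtype is boolean or integral, per the bitwise_* docs."""
    return dtype in ALLOWED_BITWISE_DTYPES

# Every float dtype should be rejected, including float32, which the
# compiled CPU path incorrectly accepts in the report above.
assert not any(bitwise_dtype_ok(d) for d in ("float16", "float32", "float64"))
assert bitwise_dtype_ok("int32") and bitwise_dtype_ok("bool")
```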
module: error checking,oncall: pt2,oncall: cpu inductor
low
Critical
2,770,731,563
flutter
[flutter_svg] Some SVG Images aren't rendered properly
### Steps to reproduce I observe this issue when I try to add SVG Images in my app using `SvgPicture.asset()`. Some of my SVGs aren't rendered properly. ### Expected results All SVG images should be rendered properly by the package. I observe issues in three of my images, each of these is just white strokes with no background: ![Screenshot 2025-01-06 192507](https://github.com/user-attachments/assets/fd95dc9b-05e4-425e-b54d-b4b6a0f0ce22) ![Screenshot 2025-01-06 192523](https://github.com/user-attachments/assets/d705a82d-9b8b-4d97-976f-17499e337ed5) ![Screenshot 2025-01-06 192536](https://github.com/user-attachments/assets/984dfb1d-fd14-4f4f-ad53-003c937bd3e4) ### Actual results I observe a weird black background of different shapes or weird black strokes in my images. ### Code sample <details open><summary>Code sample</summary> Uploading some simple code to render the SVG images, I have used a red background to distinguish the black background and strokes coming in the image: ``` dart @override Widget build(BuildContext context) { return SafeArea( child: Container( color: Colors.red, child: Column( children: [ SizedBox( height: 100, width: 100, ), SizedBox( height: 100, width: 100, child: SvgPicture.asset('vectors/logic_analyzer.svg'), ), SizedBox( height: 100, width: 100, ), SizedBox( height: 100, width: 100, child: SvgPicture.asset('vectors/power_source.svg'), ), SizedBox( height: 100, width: 100, ), SizedBox( height: 100, width: 100, child: SvgPicture.asset('vectors/wave_generator.svg'), ), ], ), ), ); } } ``` Here are the three SVG images: 1. 
_logic_analyzer.svg_ :- ``` svg <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 15 15" id="vector"> <path id="path_4" d="M 7.5 0.196 C 5.564 0.196 3.705 0.966 2.335 2.335 C 0.966 3.705 0.196 5.564 0.196 7.5 C 0.196 9.436 0.966 11.295 2.335 12.665 C 3.705 14.034 5.564 14.804 7.5 14.804 C 9.436 14.804 11.295 14.034 12.665 12.665 C 14.034 11.295 14.804 9.436 14.804 7.5 C 14.804 5.564 14.034 3.705 12.665 2.335 C 11.295 0.966 9.436 0.196 7.5 0.196 Z" fill="#00000000" stroke="#ffffff" stroke-width="0.39164442" stroke-linecap="round" stroke-linejoin="round"/> <path id="path_5" d="M 5.162 5.568 C 4.614 5.568 4.169 5.915 4.169 6.341 L 4.169 9.174 C 4.169 9.6 4.614 9.947 5.162 9.947 L 9.797 9.947 C 10.345 9.947 10.79 9.6 10.79 9.174 L 10.79 6.341 C 10.79 5.915 10.345 5.568 9.797 5.568 Z M 4.935 5.933 L 9.986 5.933 C 10.169 5.933 10.43 6.198 10.43 6.341 L 10.43 7.5 L 10.43 8.015 L 10.43 9.174 C 10.43 9.316 10.092 9.582 9.91 9.582 L 5.086 9.582 C 4.903 9.582 4.566 9.317 4.566 9.174 L 4.566 8.015 L 4.566 7.5 L 4.566 6.341 C 4.566 6.145 4.78 5.933 5.01 5.933 Z" fill="#ffffff" stroke="#00000000" stroke-width="1.20042455"/> <path id="path_6" d="M 5.318 4.023 C 5.318 3.881 5.421 3.765 5.548 3.765 L 9.453 3.765 C 9.58 3.765 9.683 3.881 9.683 4.023 C 9.683 4.165 9.786 4.281 9.913 4.281 C 10.04 4.281 10.143 4.166 10.143 4.023 C 10.142 3.597 9.833 3.25 9.453 3.25 L 5.547 3.25 C 5.167 3.25 4.858 3.597 4.858 4.023 C 4.858 4.165 4.961 4.281 5.088 4.281 C 5.215 4.281 5.318 4.166 5.318 4.023 Z" fill="#ffffff" stroke="#00000000"/> <path id="path_7" d="M 3.939 4.538 C 3.559 4.538 3.25 4.885 3.25 5.311 L 3.25 10.205 C 3.25 10.631 3.559 10.978 3.939 10.978 L 4.169 10.978 C 4.169 11.404 4.478 11.751 4.858 11.751 L 5.317 11.751 C 5.697 11.751 6.006 11.404 6.006 10.978 L 8.992 10.978 C 8.992 11.404 9.301 11.751 9.681 11.751 L 10.14 11.751 C 10.52 11.751 10.829 11.404 10.829 10.978 L 11.059 10.978 C 11.439 10.978 11.748 10.631 11.748 10.205 L 11.748 9.432 C 11.748 9.29 11.645 9.174 11.518 
9.174 C 11.391 9.174 11.288 9.289 11.288 9.432 L 11.288 10.205 C 11.288 10.347 11.185 10.463 11.058 10.463 L 10.828 10.463 L 8.99 10.463 L 6.004 10.463 L 4.166 10.463 L 3.936 10.463 C 3.809 10.463 3.706 10.347 3.706 10.205 L 3.709 5.311 C 3.709 5.169 3.812 5.053 3.939 5.053 L 11.061 5.053 C 11.188 5.053 11.291 5.168 11.291 5.311 L 11.291 8.66 C 11.291 8.802 11.394 8.918 11.521 8.918 C 11.648 8.918 11.751 8.803 11.751 8.66 L 11.75 5.311 C 11.75 4.885 11.441 4.538 11.061 4.538 Z M 4.628 10.977 L 5.547 10.977 C 5.547 11.119 5.444 11.235 5.317 11.235 L 4.858 11.235 C 4.731 11.235 4.628 11.119 4.628 10.977 Z M 9.453 10.977 L 10.372 10.977 C 10.372 11.119 10.269 11.235 10.142 11.235 L 9.683 11.235 C 9.556 11.235 9.453 11.119 9.453 10.977 Z" fill="#ffffff" stroke="#00000000" stroke-width="6.48979425"/> <path id="path_8" d="M 7.734 6.292 C 7.668 6.292 7.592 6.293 7.506 6.293 C 6.913 6.297 6.887 6.299 6.842 6.328 C 6.816 6.345 6.779 6.378 6.76 6.401 C 6.726 6.442 6.725 6.458 6.72 7.574 L 6.715 8.706 L 6.58 8.706 L 6.445 8.706 L 6.44 7.938 L 6.435 7.17 L 6.386 7.113 C 6.302 7.015 6.28 7.012 5.739 7.017 C 5.284 7.021 5.255 7.023 5.211 7.053 C 5.099 7.128 5.095 7.141 5.089 7.453 L 5.084 7.739 L 4.82 7.739 L 4.564 7.739 C 4.557 7.744 4.549 7.748 4.542 7.752 C 4.541 7.753 4.539 7.753 4.538 7.754 C 4.535 7.755 4.532 7.756 4.53 7.757 C 4.53 7.759 4.53 7.762 4.53 7.764 C 4.53 7.802 4.53 7.839 4.53 7.877 L 4.53 7.978 C 4.53 8.01 4.53 8.042 4.53 8.074 C 4.53 8.107 4.53 8.14 4.53 8.174 C 4.531 8.187 4.531 8.2 4.529 8.213 C 4.538 8.215 4.547 8.217 4.555 8.221 C 4.556 8.222 4.558 8.222 4.559 8.223 L 4.983 8.223 C 5.462 8.223 5.49 8.218 5.574 8.12 C 5.619 8.067 5.62 8.065 5.625 7.782 L 5.63 7.498 L 5.764 7.498 L 5.898 7.498 L 5.903 8.267 C 5.908 9.017 5.909 9.037 5.943 9.078 C 5.962 9.101 5.999 9.134 6.025 9.151 C 6.07 9.181 6.093 9.182 6.577 9.182 L 7.082 9.182 L 7.142 9.141 C 7.263 9.059 7.258 9.121 7.258 7.884 L 7.258 6.772 L 7.529 6.772 L 7.8 6.772 L 7.8 7.762 C 7.8 8.859 7.796 8.818 
7.911 8.896 L 7.975 8.94 L 8.478 8.94 L 8.981 8.94 L 9.041 8.899 C 9.157 8.82 9.157 8.825 9.157 8.126 L 9.157 7.498 L 9.293 7.498 L 9.429 7.498 L 9.429 7.763 C 9.429 8.064 9.44 8.104 9.54 8.172 L 9.604 8.216 L 10.053 8.22 L 10.451 8.224 C 10.46 8.216 10.469 8.208 10.479 8.2 C 10.479 8.196 10.479 8.191 10.479 8.187 C 10.479 8.16 10.479 8.132 10.479 8.105 C 10.479 8.048 10.479 7.991 10.481 7.937 C 10.482 7.911 10.484 7.886 10.487 7.86 C 10.487 7.854 10.487 7.847 10.487 7.841 C 10.487 7.835 10.487 7.83 10.486 7.824 C 10.485 7.819 10.485 7.814 10.483 7.809 C 10.481 7.801 10.48 7.793 10.479 7.785 C 10.479 7.782 10.478 7.779 10.477 7.776 C 10.46 7.767 10.443 7.754 10.429 7.738 C 10.389 7.741 10.327 7.74 10.236 7.74 L 9.974 7.74 L 9.969 7.456 L 9.964 7.172 L 9.915 7.115 C 9.831 7.017 9.809 7.014 9.268 7.019 C 8.813 7.023 8.784 7.025 8.74 7.055 C 8.714 7.072 8.677 7.105 8.658 7.128 C 8.624 7.168 8.623 7.19 8.618 7.818 L 8.613 8.466 L 8.48 8.465 L 8.345 8.465 L 8.341 7.456 L 8.336 6.446 L 8.287 6.389 C 8.212 6.302 8.198 6.29 7.733 6.292 Z M 4.586 7.541 C 4.581 7.545 4.576 7.55 4.571 7.554 C 4.575 7.554 4.579 7.554 4.583 7.554 C 4.584 7.55 4.584 7.545 4.585 7.541 Z" fill="#ffffff" stroke="#00000000" stroke-width="0.13160402"/> <path id="path_9" d="M 12.5 0 L 12.5 2.25" fill="#00000000" stroke="#fff9f9" stroke-width="0.40000001"/> <path id="path_10" d="M 0 12.5 L 2.25 12.5" fill="#00000000" stroke="#ffffff" stroke-width="0.40000001"/> </svg> ``` 2. 
_power_source.svg_ :- ```svg <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 15 15" id="vector"> <path id="path" d="M 0.196 7.5 C 0.196 9.436 0.966 11.295 2.335 12.665 C 3.705 14.034 5.564 14.804 7.5 14.804 C 9.436 14.804 11.295 14.034 12.665 12.665 C 14.034 11.295 14.804 9.436 14.804 7.5 C 14.804 5.564 14.034 3.705 12.665 2.335 C 11.295 0.966 9.436 0.196 7.5 0.196 C 5.564 0.196 3.705 0.966 2.335 2.335 C 0.966 3.705 0.196 5.564 0.196 7.5 Z" fill="#00000000" stroke="#ffffff" stroke-width="0.40234087" stroke-linecap="round" stroke-linejoin="round"/> <path id="path_1" d="M 11.484 7.766 L 10.421 7.766 C 10.274 7.766 10.155 7.647 10.155 7.5 C 10.155 7.353 10.274 7.234 10.421 7.234 L 11.484 7.234 C 11.631 7.234 11.75 7.353 11.75 7.5 C 11.75 7.647 11.631 7.766 11.484 7.766 Z M 4.578 7.766 L 3.516 7.766 C 3.369 7.766 3.25 7.647 3.25 7.5 C 3.25 7.354 3.369 7.235 3.516 7.235 L 4.579 7.235 C 4.726 7.235 4.845 7.354 4.845 7.501 C 4.845 7.648 4.726 7.767 4.579 7.767 Z M 10.031 6.305 C 9.939 6.305 9.849 6.257 9.8 6.172 C 9.727 6.045 9.771 5.883 9.898 5.809 L 10.818 5.278 C 10.944 5.205 11.107 5.248 11.181 5.375 C 11.254 5.502 11.21 5.665 11.083 5.738 L 10.163 6.269 C 10.121 6.294 10.076 6.305 10.031 6.305 Z M 4.05 9.758 C 3.958 9.758 3.869 9.711 3.82 9.625 C 3.746 9.498 3.789 9.336 3.917 9.262 L 4.858 8.719 C 4.985 8.646 5.147 8.689 5.221 8.816 C 5.294 8.943 5.251 9.106 5.124 9.179 L 4.182 9.723 C 4.141 9.747 4.095 9.758 4.05 9.758 Z M 10.95 9.758 C 10.905 9.758 10.859 9.747 10.817 9.722 L 9.897 9.191 C 9.771 9.118 9.727 8.956 9.8 8.828 C 9.873 8.701 10.036 8.657 10.163 8.731 L 11.083 9.262 C 11.21 9.335 11.254 9.498 11.18 9.625 C 11.131 9.71 11.042 9.758 10.95 9.758 Z M 4.99 6.317 C 4.945 6.317 4.899 6.306 4.857 6.281 L 3.917 5.738 C 3.789 5.665 3.746 5.502 3.82 5.375 C 3.893 5.248 4.055 5.204 4.182 5.278 L 5.123 5.821 C 5.25 5.894 5.294 6.057 5.22 6.184 C 5.171 6.269 5.082 6.317 4.99 6.317 Z M 6.438 11.75 C 6.403 11.75 6.368 11.743 6.334 11.729 C 6.215 11.679 6.15 11.55 
6.179 11.425 L 6.9 8.297 L 5.641 8.297 C 5.543 8.297 5.454 8.244 5.407 8.159 C 5.361 8.073 5.365 7.969 5.418 7.887 L 8.34 3.371 C 8.41 3.263 8.549 3.221 8.666 3.271 C 8.785 3.321 8.85 3.45 8.821 3.575 L 8.1 6.703 L 9.359 6.703 C 9.456 6.703 9.546 6.756 9.592 6.842 C 9.638 6.928 9.635 7.032 9.582 7.113 L 6.66 11.629 C 6.61 11.707 6.525 11.75 6.438 11.75 Z M 6.129 7.766 L 7.234 7.766 C 7.315 7.766 7.392 7.803 7.442 7.866 C 7.492 7.929 7.511 8.012 7.493 8.091 L 7.037 10.07 L 8.871 7.235 L 7.766 7.235 C 7.685 7.235 7.608 7.198 7.558 7.135 C 7.508 7.072 7.489 6.989 7.507 6.91 L 7.964 4.932 Z" fill="#ffffff" stroke="#00000000"/> <path id="path_2" d="M 12.5 0 L 12.5 2.25" fill="#00000000" stroke="#ffffff" stroke-width="0.41055191"/> <path id="path_3" d="M 0 12.5 L 2.25 12.5" fill="#00000000" stroke="#ffffff" stroke-width="0.41055191"/> </svg> ``` 3. _wave_generator.svg_ :- ```svg <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 15 15" id="vector"> <path id="path" d="M 0.196 7.5 C 0.196 9.436 0.966 11.295 2.335 12.665 C 3.705 14.034 5.564 14.804 7.5 14.804 C 9.436 14.804 11.295 14.034 12.665 12.665 C 14.034 11.295 14.804 9.436 14.804 7.5 C 14.804 5.564 14.034 3.705 12.665 2.335 C 11.295 0.966 9.436 0.196 7.5 0.196 C 5.564 0.196 3.705 0.966 2.335 2.335 C 0.966 3.705 0.196 5.564 0.196 7.5 Z" fill="#00000000" stroke="#ffffff" stroke-width="0.40197626" stroke-linecap="round" stroke-linejoin="round"/> <path id="path_1" d="M 7.577 8.458 C 6.367 8.458 5.385 9.435 5.385 10.64 L 9.769 10.64 C 9.769 9.435 8.788 8.458 7.577 8.458 Z M 7.577 9.549 C 7.476 9.549 7.394 9.63 7.394 9.731 L 7.394 10.276 L 5.786 10.276 C 5.856 9.929 6.027 9.61 6.285 9.354 C 6.63 9.01 7.089 8.821 7.577 8.821 C 8.065 8.821 8.524 9.01 8.869 9.354 C 9.126 9.61 9.297 9.929 9.368 10.276 L 7.76 10.276 L 7.76 9.731 C 7.76 9.63 7.678 9.549 7.577 9.549 Z" fill="#ffffff" stroke="#00000000" stroke-width="5.65668058"/> <path id="path_2" d="M 5.684 4.471 L 5.643 4.381 C 5.568 4.169 5.365 4.017 5.126 4.017 C 4.823 
4.017 4.578 4.261 4.578 4.562 C 4.578 4.799 4.731 5.001 4.943 5.076 L 4.943 6.198 L 4.76 6.199 C 4.659 6.199 4.578 6.28 4.578 6.381 L 4.578 7.29 L 4.029 7.29 C 3.928 7.29 3.847 7.371 3.847 7.472 L 3.847 11.472 C 3.847 11.573 3.929 11.654 4.029 11.654 L 10.97 11.654 C 11.071 11.654 11.152 11.573 11.152 11.472 L 11.152 7.472 C 11.152 7.371 11.07 7.29 10.97 7.29 L 10.422 7.29 L 10.422 6.381 C 10.422 6.281 10.34 6.199 10.239 6.199 L 10.056 6.199 L 10.056 5.077 C 10.269 5.002 10.421 4.8 10.421 4.563 C 10.421 4.262 10.176 4.018 9.873 4.018 C 9.634 4.018 9.432 4.17 9.356 4.382 L 9.323 4.471 C 9.317 4.512 9.313 4.637 9.323 4.663 L 9.356 4.746 C 9.411 4.901 9.534 5.024 9.69 5.079 L 9.69 6.201 L 9.508 6.201 C 9.407 6.201 9.325 6.282 9.325 6.383 L 9.325 7.292 L 5.671 7.292 L 5.671 6.383 C 5.671 6.283 5.589 6.201 5.489 6.201 L 5.306 6.201 L 5.306 5.079 C 5.462 5.024 5.585 4.901 5.64 4.746 L 5.678 4.615 Z M 5.122 4.5 C 5.223 4.5 5.172 4.416 5.172 4.516 C 5.172 4.616 5.226 4.612 5.125 4.612 C 5.024 4.612 5.068 4.64 5.068 4.54 C 5.068 4.44 5.021 4.501 5.122 4.501 Z M 9.875 4.535 C 9.976 4.535 9.915 4.462 9.915 4.562 C 9.915 4.662 9.982 4.621 9.881 4.621 C 9.78 4.621 9.817 4.652 9.817 4.552 C 9.817 4.452 9.773 4.535 9.874 4.535 Z M 4.943 6.563 L 5.308 6.563 L 5.308 7.29 L 4.943 7.29 Z M 9.692 6.563 L 10.057 6.563 L 10.057 7.29 L 9.692 7.29 Z M 4.212 7.653 L 10.788 7.653 L 10.788 11.29 L 4.212 11.29 Z" fill="#ffffff" stroke="#00000000" stroke-width="0.19838737"/> <path id="path_3" d="M 5.114 4.557 C 5.114 4.375 6.062 2.206 7.56 4.557 C 8.905 6.667 9.941 4.545 9.875 4.535" fill="#00000000" stroke="#ffffff" stroke-width="0.41055192" stroke-linecap="round" stroke-linejoin="bevel"/> <path id="path_11" d="M 12.5 0 L 12.5 2.25" fill="#00000000" stroke="#ffffff" stroke-width="0.41055191"/> <path id="path_12" d="M 0 12.5 L 2.25 12.5" fill="#00000000" stroke="#ffffff" stroke-width="0.41055191"/> </svg> ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video 
demonstration</summary> Attaching a screenshot of the app containing the three images: ![Screenshot_20250106_191557](https://github.com/user-attachments/assets/432b2995-5da9-456b-b949-d394df064d97) </details> ### Logs <details open><summary>Logs</summary> ```console [Paste your logs here] ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console Doctor summary (to see all details, run flutter doctor -v): [√] Flutter (Channel stable, 3.27.1, on Microsoft Windows [Version 10.0.22631.4602], locale en-IN) [√] Windows Version (Installed version of Windows is version 10 or higher) [√] Android toolchain - develop for Android devices (Android SDK version 34.0.0) [X] Chrome - develop for the web (Cannot find Chrome executable at .\Google\Chrome\Application\chrome.exe) ! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable. [X] Visual Studio - develop Windows apps X Visual Studio not installed; this is necessary to develop Windows apps. Download at https://visualstudio.microsoft.com/downloads/. Please install the "Desktop development with C++" workload, including all of its default components [√] Android Studio (version 2024.2) [√] Connected device (3 available) [√] Network resources ! Doctor found issues in 2 categories. ``` </details>
package,c: rendering,has reproducible steps,team-engine,found in release: 3.27,p: flutter_svg,found in release: 3.28
low
Minor
2,770,772,862
go
crypto/cipher: Unnecessary allocations when using boringcrypto's AES-GCM implementation
### Go version go1.24-20241213-RC00 ### Output of `go env` in your module/workspace: ```shell AR='ar' CC='clang' CGO_CFLAGS='-O2 -g' CGO_CPPFLAGS='' CGO_CXXFLAGS='-O2 -g' CGO_ENABLED='1' CGO_FFLAGS='-O2 -g' CGO_LDFLAGS='-O2 -g' CXX='clang++' GCCGO='gccgo' GO111MODULE='' GOAMD64='v1' GOARCH='amd64' GOAUTH='netrc' GOBIN='' GOCACHE='/usr/local/google/home/juerg/.cache/go-build' GODEBUG='' GOENV='/usr/local/google/home/juerg/.config/go/env' GOEXE='' GOEXPERIMENT='fieldtrack,boringcrypto' GOFIPS140='off' GOFLAGS='' GOGCCFLAGS='-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build116742542=/tmp/go-build -gno-record-gcc-switches' GOHOSTARCH='amd64' GOHOSTOS='linux' GOINSECURE='' GOMOD='/dev/null' GOMODCACHE='/usr/local/google/home/juerg/go/pkg/mod' GONOPROXY='' GONOSUMDB='' GOOS='linux' GOPATH='/usr/local/google/home/juerg/go' GOPRIVATE='' GOPROXY='https://proxy.golang.org,direct' GOROOT='/usr/lib/google-golang' GOSUMDB='sum.golang.org' GOTELEMETRY='local' GOTELEMETRYDIR='/usr/local/google/home/juerg/.config/go/telemetry' GOTMPDIR='' GOTOOLCHAIN='auto' GOTOOLDIR='/usr/lib/google-golang/pkg/tool/linux_amd64' GOVCS='' GOVERSION='go1.24-20241213-RC00 cl/706019355 +e39e965e0e X:fieldtrack,boringcrypto' GOWORK='' PKG_CONFIG='pkg-config' ``` ### What did you do? 
I ran the following benchmark tests for AES-GCM encryption and decryption: ``` package aead_test import ( "crypto/aes" "crypto/cipher" "math/rand" "testing" ) const ( plaintextSize = 10 * 1024 * 1024 associatedDataSize = 256 nonceSize = 12 ) func GetRandomBytes(n uint32) []byte { buf := make([]byte, n) _, err := rand.Read(buf) if err != nil { panic(err) } return buf } func BenchmarkAesGcmEncrypt(b *testing.B) { b.ReportAllocs() a, err := aes.NewCipher(GetRandomBytes(16)) if err != nil { b.Fatal(err) } aesGCM, err := cipher.NewGCM(a) if err != nil { b.Fatal(err) } plaintext := GetRandomBytes(plaintextSize) associatedData := GetRandomBytes(associatedDataSize) nonce := GetRandomBytes(nonceSize) b.ResetTimer() for i := 0; i < b.N; i++ { _ = aesGCM.Seal(nil, nonce, plaintext, associatedData) } } func BenchmarkAesGcmDecrypt(b *testing.B) { b.ReportAllocs() a, err := aes.NewCipher(GetRandomBytes(16)) if err != nil { b.Fatal(err) } aesGCM, err := cipher.NewGCM(a) if err != nil { b.Fatal(err) } plaintext := GetRandomBytes(plaintextSize) associatedData := GetRandomBytes(associatedDataSize) nonce := GetRandomBytes(nonceSize) ciphertext := aesGCM.Seal(nil, nonce, plaintext, associatedData) b.ResetTimer() for i := 0; i < b.N; i++ { if _, err = aesGCM.Open(nil, nonce, ciphertext, associatedData); err != nil { b.Error(err) } } } ``` ### What did you see happen? I built the test target and ran the benchmark tests with ``` ./aead_test --test.run=NONE --test.bench=. --test.count=1 ``` The output was: ``` goos: linux goarch: amd64 pkg: google3/third_party/tink/go/aead/aead_test cpu: AMD EPYC 7B12 BenchmarkAesGcmEncrypt-8 232 4958494 ns/op 10493980 B/op 2 allocs/op BenchmarkAesGcmDecrypt-8 58 26919690 ns/op 52263828 B/op 48 allocs/op PASS ``` The decryption takes 5x longer than the encryption and does 48 allocations, while the encryption only does 2 allocations. ### What did you expect to see? That the decryption is about as fast as the encryption.
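Allocation behaviour like the allocs/op figures above can also be spot-checked outside a full benchmark run with `testing.AllocsPerRun` — a minimal sketch (the buffer and sizes here are illustrative and unrelated to the AES-GCM internals):

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	// A reusable buffer: appending within existing capacity should not
	// allocate, which is the property a single-allocation decrypt path
	// would rely on.
	buf := make([]byte, 0, 64)
	allocs := testing.AllocsPerRun(1000, func() {
		buf = append(buf[:0], 1, 2, 3)
	})
	fmt.Println(allocs) // prints "0"
}
```

`AllocsPerRun` averages the heap allocations over the given number of runs of the closure, making it a quick way to confirm whether a hot path is allocation-free.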
I think the problem is here: https://github.com/golang/go/blob/master/src/crypto/internal/boring/aes.go#L366 When I replace this loop with this: ``` newDstLen := n + len(ciphertext) - gcmTagSize if cap(dst) < newDstLen { oldDst := dst dst = make([]byte, n, newDstLen) copy(dst, oldDst) } dst = dst[:newDstLen] ``` the issue goes away.
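The proposed replacement can be exercised in isolation — a minimal, standalone sketch of the single-allocation growth pattern, where `growDst` and the sizes are illustrative names, not the stdlib's actual code:

```go
package main

import "fmt"

// growDst extends dst to newLen total bytes with at most one allocation,
// preserving the existing prefix — the same shape as the replacement
// proposed above (names here are illustrative, not Go's internals).
func growDst(dst []byte, newLen int) []byte {
	if cap(dst) < newLen {
		grown := make([]byte, len(dst), newLen)
		copy(grown, dst)
		dst = grown
	}
	return dst[:newLen]
}

func main() {
	dst := []byte{1, 2, 3}
	out := growDst(dst, 8)
	fmt.Println(len(out), cap(out) >= 8) // prints "8 true"
}
```

Growing in one step like this avoids the repeated reallocate-and-copy cycles that would otherwise show up as extra allocs/op in the decrypt benchmark.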
Performance,NeedsInvestigation
low
Critical
2,770,833,775
vscode
Git - remote name should be considered when checking out a branch
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> Please correct me if this is something that's already possible, but I couldn't find any way to do this. I frequently work in codebases that have multiple remotes, all of which have identical branch names (dev, main, etc.). When I check one of those branches out (say, `origin_a/dev`), my local branch is just named 'dev'. If I try to check out another branch (say, `origin_b/dev`), the checkout fails because there's already a local branch called 'dev'. I know I can manually rename the existing branch before checking out the second one, but it's a pain to have to remember to do this every time. It would be nice to be able to automatically prepend the remote name to the local branch name (with a setting to turn it on/off), or a setting similar to `git.branchPrefix` where you can specify what you'd like to prepend when checking out a branch.
bug,git
low
Minor
2,770,836,640
vscode
Support variable fonts in the editor
Related to https://github.com/microsoft/vscode/issues/147067 The issue above will be used to track variable line heights only. This issue tracks support for variable fonts (font-family, font-size, etc.) in the editor.
feature-request,editor-core
low
Minor
2,770,842,727
flutter
`NavigationRail` does not follow Material specification
### Steps to reproduce There are two main issues, both of which are related. --- Create a `NavigationRail` where the `alignment` is set to 0, with a couple of destinations. Create a non-zero height widget and use it as the `leading`. Note the destinations are centered in the space remaining after `leading` takes its space. ![image](https://github.com/user-attachments/assets/fb8cf618-b45f-4688-9662-ca59eac107fd) This occurs because of https://github.com/flutter/flutter/blob/a6c057c406103d68ecaa42da86063f533d4f7f72/packages/flutter/lib/src/material/navigation_rail.dart#L466-L476. A more advanced implementation would be required to center the destinations independently, but also not overlap with any `leading` (and assume an alignment of -1) when there are a larger number of destinations. I would be happy to contribute to fix this, but have no idea how to start this sort of layout. Maybe a dedicated `RenderObject` is necessary, as I would have thought the height of the destinations would need to be known? Additionally, the spec only makes provisions for top, center, and bottom alignment. Flutter is much more flexible - this is probably fine, but just another thing to point out. --- Add a `trailing` to the rail. It is not bottom-aligned; instead, it sits after the destinations. ![image](https://github.com/user-attachments/assets/fbd0a453-0d19-4a6c-a1ea-31269a215b76) Interestingly, the Material spec makes no mention of a trailing widget. But the official Material site uses one :D. (The spec also says never to use destinations without labels visible, but the demo screen ignores this as well.) ### Expected results ![image](https://github.com/user-attachments/assets/4a67b223-230a-4f15-9805-7a003872420c) ### Actual results See screenshots above. ### Code sample Pop a `Row` with one child a `NavigationRail` into a `Scaffold`. Add destinations, alignment, and leading/trailing as necessary. I'm not adding an MRE as the offending code has been found. 
### Screenshots or Video _No response_ ### Logs _No response_ ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [✓] Flutter (Channel beta, 3.28.0-0.1.pre, on Microsoft Windows [Version 10.0.22635.4660], locale en-GB) • Flutter version 3.28.0-0.1.pre on channel beta at C:\Users\lukas\flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision 3e493a3e4d (4 weeks ago), 2024-12-12 05:59:24 +0900 • Engine revision 2ba456fd7f • Dart version 3.7.0 (build 3.7.0-209.1.beta) • DevTools version 2.41.0 [✓] Windows Version (11 Home 64-bit, 23H2, 2009) [✓] Android toolchain - develop for Android devices (Android SDK version 36.0.0-rc3) • Android SDK at C:\Users\lukas\AppData\Local\Android\sdk • Platform android-35, build-tools 36.0.0-rc3 • Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java This is the JDK bundled with the latest Android Studio installation on this machine. To manually set the JDK path, use: `flutter config --jdk-dir="path/to/jdk"`. • Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11) • All Android licenses accepted. 
[✓] Chrome - develop for the web • CHROME_EXECUTABLE = C:\Program Files\Google\Chrome Dev\Application\chrome.exe [✓] Visual Studio - develop Windows apps (Visual Studio Build Tools 2022 17.10.4) • Visual Studio at C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools • Visual Studio Build Tools 2022 version 17.10.35027.167 • Windows 10 SDK version 10.0.22000.0 [✓] Android Studio (version 2024.2) • Android Studio at C:\Program Files\Android\Android Studio • Flutter plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: 🔨 https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11) [✓] VS Code, 64-bit edition (version 1.96.2) • VS Code at C:\Program Files\Microsoft VS Code • Flutter extension version 3.102.0 [✓] Connected device (3 available) • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22635.4660] • Chrome (web) • chrome • web-javascript • unknown • Edge (web) • edge • web-javascript • Microsoft Edge 132.0.2957.26 [✓] Network resources • All expected network resources are available. • No issues found! ``` </details>
framework,f: material design,a: fidelity,P2,team-design,triaged-design
low
Minor
2,770,869,698
transformers
RagTokenizer Missing patch_token_id, patch_token, and encode Functionality
### Feature request I propose adding the following functionalities to the RagTokenizer in the Hugging Face Transformers library: Support for patch_token_id and patch_token attributes: These attributes are essential for specifying special tokens that can be used during tokenization, particularly for Retrieval-Augmented Generation (RAG) models. Implementation of the encode function: This function is critical for converting input text into token IDs, which are a standard input for Transformer-based models. These additions would bring RagTokenizer in line with other tokenizers in the library, making it easier to use in preprocessing pipelines for training and inference. Paper reference: [RAG: Retrieval-Augmented Generation](https://arxiv.org/abs/2005.11401) Current RagTokenizer documentation: Hugging Face Transformers ### Motivation The absence of the patch_token_id, patch_token, and encode functionalities in RagTokenizer introduces several limitations: It is challenging to preprocess data for RAG models without a way to specify and use special tokens like patch_token. The lack of an encode function makes it cumbersome to tokenize text into input IDs, which is a critical step for training and inference. This is a deviation from the expected behavior of tokenizers in the Transformers library. This can cause confusion and inefficiency for users accustomed to the functionality available in other tokenizers like BertTokenizer or GPT2Tokenizer. Addressing these issues will make RagTokenizer more consistent with the rest of the library and improve usability in RAG-related workflows. ### Your contribution I am willing to contribute by: Submitting a Pull Request (PR) to implement these functionalities, given guidance on the expected behavior and the existing code structure. Writing unit tests to verify the behavior of the patch_token_id, patch_token, and encode functionalities. Updating the documentation to reflect these changes. 
Let me know if this aligns with your vision for the RagTokenizer, and I’d be happy to assist further! cc @ArthurZucker @itazap
Feature request
low
Minor
2,770,894,499
flutter
[Firefox] Textfield becomes unresponsive after focusing it a certain way
### Steps to reproduce 1. Open the application in Firefox. 2. Copy some text to clipboard. 3. Click on the text field. 4. Type some things to ensure it still receives keyboard inputs. 5. Hit tab until you focus the textfield again. 6. Try typing or pasting text. The widget is now unresponsive. ### Expected results The TextField should still receive text inputs and pasted text. ### Actual results The TextField is no longer responsive, despite the blinking |. ### Code sample <details open><summary>Code sample</summary> ```dart import 'package:flutter/material.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { MyApp(); @override Widget build(BuildContext context) { return MaterialApp( title: 'Flutter Slider A11Y Demo', home: MyHomePage(title: 'Flutter Demo Home Page'), ); } } class MyHomePage extends StatelessWidget { MyHomePage({Key? key, required this.title}) : super(key: key); final String title; @override Widget build(BuildContext context) { return Scaffold(appBar: AppBar(title: Text(title)), body: ParentView()); } } class ParentView extends StatefulWidget { @override _ParentViewState createState() => _ParentViewState(); } class _ParentViewState extends State<ParentView> { late FocusNode _focusNode; @override void initState() { super.initState(); _focusNode = FocusNode(debugLabel: 'ParentView'); } @override void dispose() { _focusNode.dispose(); super.dispose(); } @override Widget build(BuildContext context) { return Semantics( container: true, explicitChildNodes: true, label: 'Parent View', child: FocusTraversalGroup( child: Focus( focusNode: _focusNode, child: Column( mainAxisSize: MainAxisSize.min, children: [ Flexible( child: Card( margin: EdgeInsetsDirectional.zero, color: Theme.of(context).colorScheme.surface, surfaceTintColor: Theme.of(context).colorScheme.surface, child: Padding( padding: EdgeInsetsDirectional.only(bottom: 10.0), child: AnimatedContainer( duration: Durations.short4, constraints: BoxConstraints(maxWidth: 356.0), 
child: AnimatedSize( alignment: Alignment.topCenter, duration: Durations.short4, child: AnimatedSwitcher( duration: Durations.short4, switchInCurve: Curves.ease, switchOutCurve: Curves.ease, layoutBuilder: AnimatedSwitcher.defaultLayoutBuilder, child: MainView(), ), ), ), ), ), ), Container(), ], ), ), ), ); } } class MainView extends StatefulWidget { @override _MainViewState createState() => _MainViewState(); } class _MainViewState extends State<MainView> { late TextEditingController _textController; late FocusNode _focusNode; bool _textFieldShowing = false; @override void initState() { super.initState(); _textController = TextEditingController(); _focusNode = FocusNode(debugLabel: 'MainView'); } @override void dispose() { _textController.dispose(); _focusNode.dispose(); super.dispose(); } @override Widget build(BuildContext context) { WidgetsBinding.instance.addPostFrameCallback((_) { if (_focusNode.canRequestFocus && _textFieldShowing) { _focusNode.requestFocus(); } }); return Stack( children: <Widget>[ Align( alignment: AlignmentDirectional.bottomStart, child: _textFieldShowing ? TextFieldView( focusNode: _focusNode, textController: _textController, ) : TextButton( child: Text('Show Text Field'), onPressed: () { setState(() { _textFieldShowing = true; }); }, ), ), ], ); } } class TextFieldView extends StatelessWidget { final FocusNode focusNode; final TextEditingController textController; TextFieldView({ Key? 
key, required this.focusNode, required this.textController, }) : super(key: key); void onSubmitted(String prompt) { print('onSubmitted: $prompt'); } @override Widget build(BuildContext context) { final colorScheme = Theme.of(context).colorScheme; final _composerBorder = OutlineInputBorder( borderSide: BorderSide(color: colorScheme.surfaceContainerHigh), borderRadius: BorderRadius.circular(30), ); return Container( padding: EdgeInsetsDirectional.all(16.0), width: double.infinity, decoration: BoxDecoration( color: colorScheme.surface, border: Border( top: BorderSide(color: colorScheme.surfaceContainerHighest), ), ), child: Column( mainAxisSize: MainAxisSize.min, spacing: 16.0, children: [ Row( children: [ Flexible( child: TextField( focusNode: focusNode, minLines: 1, maxLines: 10, // keyboardType of TextInputType.text is necessary to // allow submission on Enter key with multiline // textfield. keyboardType: TextInputType.text, controller: textController, onSubmitted: onSubmitted, decoration: InputDecoration( suffixIcon: IconButton( icon: const Icon(Icons.send), onPressed: () => onSubmitted(textController.text), tooltip: 'Send Prompt', color: colorScheme.primary, ), hintText: 'Enter a prompt here', filled: true, fillColor: Theme.of(context).colorScheme.surfaceContainerHigh, focusColor: Theme.of(context).colorScheme.surfaceContainerHigh, border: _composerBorder, enabledBorder: _composerBorder, contentPadding: EdgeInsetsDirectional.symmetric( horizontal: 24.0, vertical: 16.0, ), ), ), ), ], ), Text.rich( TextSpan( style: Theme.of(context).textTheme.bodySmall, children: [ TextSpan(text: 'Disclaimer'), WidgetSpan(child: SizedBox(width: 4.0)), WidgetSpan( child: TextButton( onPressed: () {}, child: Text('Learn More'), ), ), ], ), textAlign: TextAlign.center, ), ], ), ); } } ``` </details> ### Screenshots or Video <details open> <summary>Screenshots / Video demonstration</summary> https://youtu.be/87kionDxZU8 </details> ### Logs <details open><summary>Logs</summary> 
```console $ dart-dev-runner run //experimental/users/dsanagustin/flutter_sliders/web .../dart-dev-runner run //experimental/users/dsanagustin/flutter_sliders/web ______ ______ _______ ( __ \ ( __ \ ( ____ ) Dart Dev Runner. | ( \ )| ( \ )| ( )| | | ) || | ) || (____)| About: http://go/dart-dev-runner | | | || | | || __) Contact: http://g/ddr-users | | ) || | ) || (\ ( File a bug: http://go/dart-dev-runner-bug | (__/ )| (__/ )| ) \ \__ (______/ (______/ |/ \__/ If Target `//experimental/users/dsanagustin/flutter_sliders/web` is a `dart_browser_binary`, you can use `$ dart-dev-runner run //experimental/users/dsanagustin/flutter_sliders/web --target-rule-class=dart_browser_binary <other args>...` to skip a secondary check to gain some startup speed. Retrieving entrypoint arguments from `//experimental/users/dsanagustin/flutter_sliders/web:_web__ddr_entrypoint_arguments`... Analyzing //experimental/users/dsanagustin/flutter_sliders/web:_web__ddr_entrypoint_arguments... DDR will run blaze with no additional args. 15:10:25.880 WARNING: PenEncoderService: Could not find IPv6 address for hostname <redacted> 15:10:25.884 INFO: DdrHandler: Using legacy pen encoder to encode urls.This might cause slower start of Dart DevTools. 
Machine spec: cpu=48,ssd=1,cloudtop=1,mem=117.0,mem_usage=45.0,cpu_usage=1.67 15:10:26.003 INFO: DWDS: Serving DevTools at http://<redacted>:36777 For access on the same machine: DDR is serving at: http://localhost:8080 with config at: http://localhost:8080/$dartDevRunnerConfig DDC App: http://localhost:8080/experimental/users/dsanagustin/flutter_sliders/web/web/index.ddc.html dart2js App: http://localhost:8080/experimental/users/dsanagustin/flutter_sliders/web/web/index.html For remote access: DDR is serving at: http://<redacted>:8080 with config at: http://<redacted>:8080/$dartDevRunnerConfig DDC App: http://<redacted>:8080/experimental/users/dsanagustin/flutter_sliders/web/web/index.ddc.html dart2js App: http://<redacted>:8080/experimental/users/dsanagustin/flutter_sliders/web/web/index.html 2025-01-06 15:10:26.159390 Starting daemon...2025-01-06 15:10:26.261921 Daemon started.15:10:26 Starting build, 1 target(s): //experimental/users/dsanagustin/flutter_sliders/web:web_ddr_ddr_bundleINFO: Streaming build results to: http://sponge2/0cb9633d-8006-4a00-bb78-1f225cf04b6b INFO: Analyzed target //experimental/users/dsanagustin/flutter_sliders/web:web_ddr_ddr_bundle (0 packages loaded, 0 targets configured). INFO: Found 1 target... Target //experimental/users/dsanagustin/flutter_sliders/web:web_ddr_ddr_bundle up-to-date: blaze-bin/experimental/users/dsanagustin/flutter_sliders/web/web_ddr_ddr_bundle_unused_executable INFO: Elapsed time: 0.410s, Critical Path: 0.13s, Remote (0.00% of the time): [queue: 0.00%, setup: 0.00%, process: 0.00%] INFO: Build completed successfully, 1 total action INFO: Streaming build results to: http://sponge2/0cb9633d-8006-4a00-bb78-1f225cf04b6b 15:10:26 Build succeeded. dart2js mode is INITIALIZING. This may take more than one minute. Analyzing //experimental/users/dsanagustin/flutter_sliders/web:_web__ddr_archive_arguments... dart2js mode is READY. 
15:12:25.410 INFO: DwdsInjector: Received request for entrypoint at http://b2607f8b04800100000c24f63ac1229c11f90000000000000000001.proxy.googlers.com/experimental/users/dsanagustin/flutter_sliders/web/web/web_ddc_bundle.app.bootstrap.js 15:12:25.434 INFO: Google3LoadStrategy: Detected build options for entrypoint google3:///experimental/users/dsanagustin/flutter_sliders/web/web/main.dart: Flutter mode: canvaskit DDC canary features: false 15:12:25.435 INFO: DwdsInjector: Injected debugging metadata for entrypoint at http://b2607f8b04800100000c24f63ac1229c11f90000000000000000001.proxy.googlers.com/experimental/users/dsanagustin/flutter_sliders/web/web/web_ddc_bundle.app.bootstrap.js ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console [ +2 ms] ProcessManager.runSync('which java') [ +3 ms] executing: /usr/local/buildtools/java/jdk/bin/java --version [ ] ProcessManager.runSync('/usr/local/buildtools/java/jdk/bin/java --version') [ +52 ms] Exit code 0 from: /usr/local/buildtools/java/jdk/bin/java --version [ ] openjdk 23.0.1 2024-12-05 OpenJDK Runtime Environment (build 23.0.1+-google-release-sts-703074349) OpenJDK 64-Bit Server VM (build 23.0.1+-google-release-sts-703074349, mixed mode, sharing) [ ] executing: /usr/bin/adb devices -l [ ] ProcessManager.run('/usr/bin/adb devices -l') [ ] Gallium device polling: connecting to galliumd. [ ] [✓] Flutter (Channel google3, on Debian GNU/Linux rodete 6.10.11-1rodete2-amd64, locale en_US.UTF-8) [ ] • Framework revision 81fad61209 (39 days ago), 2024-11-28T00:00:00.000 [ ] • Engine revision d7c0bcfe7a [ ] • Dart version 0740ded7b9 [ ] [✓] Android toolchain - develop for Android devices (Android SDK version Stable) [ ] • Android SDK at google3 [ ] • Platform Stable, build-tools Stable [ ] • Java binary at: /usr/local/buildtools/java/jdk/bin/java [ ] This JDK was found in the system PATH. [ ] To manually set the JDK path, use: `flutter config --jdk-dir="path/to/jdk"`. 
[ ] • Java version OpenJDK Runtime Environment (build 23.0.1+-google-release-sts-703074349) [ ] [✓] VS Code (version 1.87.1) [ ] • VS Code at /usr/share/code [ ] • Flutter extension can be installed from: [ ] 🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter [ +9 ms] List of devices attached [ +2 ms] Gallium device polling failed to connect to galliumd: gRPC Error (code: 14, codeName: UNAVAILABLE, message: Error connecting: SocketException: Connection refused (OS Error: Connection refused, errno = 111), address = localhost, port = 37250, details: null, rawResponse: null, trailers: {}) [ ] executing: uname -m [ ] ProcessManager.runSync('uname -m') [ +1 ms] Exit code 0 from: uname -m [ ] x86_64 [ ] executing: /usr/bin/adb devices -l [ ] ProcessManager.run('/usr/bin/adb devices -l') [ +4 ms] List of devices attached [ ] [✓] Connected device (1 available) [ ] • Linux (desktop) • linux • linux-x64 • Debian GNU/Linux rodete 6.10.11-1rodete2-amd64 [ ] [✓] Google3 (on linux) [ ] • KVM enabled [ ] • No issues found! [ ] "flutter doctor" took 77ms. [ +60 ms] ensureAnalyticsSent: 59ms [ ] Running 2 shutdown hooks [ ] Shutdown hooks complete [ ] exiting with code 0 ``` </details> # Similar Issues #157579
a: text input,platform-web,browser: firefox,P1,customer: quake (g3),team-web,triaged-web
medium
Critical
2,770,914,707
vscode
Overlapping borders in Edits view with transparent input box theme
- Install the theme "Poimandres" <img width="410" alt="Image" src="https://github.com/user-attachments/assets/e08dec95-aa25-4acf-8591-f3c9c163700e" /> The input-background color is transparent, so the chat view looks a bit messed up. Maybe there are no other inputs layered on differently-colored elements elsewhere in vscode? We need an opaque element to obscure the border.
bug,panel-chat,chat
low
Minor
2,770,918,602
ant-design
Customizable progress bar for Notification
### What problem does this feature solve? The recently added progress bar for Notification is great, and it would be even better if we could customize it. When we want different progress bars for different notification types (success, warning, error), it would be great if notification.open() accepted an option to override the progress bar color. ### What does the proposed API look like? notification.open({ key: notificationKey, duration: 5, message: options.title, description: options.content, type: options.notificationType, pauseOnHover: true, showProgress: true, progressColor: '#FFFFFF' }) <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
unconfirmed
low
Critical
2,770,950,084
godot
[GDExtension] Binding to custom added GDScript virtual methods, causes method definition duplication
### Tested versions Reproducible in: v4.3.stable.official [77dcf97d8] (Windows only) Not reproducible on Linux. Mac remains untested. ### System information Godot v4.3.stable - Windows 10 ### Issue description When trying to return a default value/have base logic for these virtual methods, I discovered that the Windows version of Godot does not seem to handle their definitions properly in the editor. I end up with duplicate methods: one virtual and one not. The auto-generated documentation confirms that both of them are there. Moreover, when trying to override the method, the editor throws an error and refuses to run the game. On Linux, this works fine: when compiling the extension for Linux, I only get one virtual method, which I assume is how the feature is supposed to work. If more information is needed, let me know. ### Steps to reproduce Set up a normal GDExtension project, then create a class with one method in C++ (we will call it `foo()`). In the `_bind_methods()` method, add a virtual method called foo, then bind that function normally with `ClassDB::bind_method` to the C++ method. Now compile just as normal, for Windows. Check the documentation of the class you just created: you will see two methods 
(one virtual and one not). Simple code extract: ```cpp // bar.hpp #include <godot_cpp/classes/object.hpp> using namespace godot; class Bar : public Object { GDCLASS(Bar, Object) protected: static void _bind_methods(); int foo(); }; ``` ```cpp // bar.cpp #include "bar.hpp" void Bar::_bind_methods() { // adds a new virtual method called foo ClassDB::add_virtual_method(get_class_static(), MethodInfo(Variant::INT, "foo")); // binds to the newly added virtual method foo ClassDB::bind_method(D_METHOD("foo"), &Bar::foo); } int Bar::foo() { return -1; } ``` ```gdscript extends Bar # overriding will cause an error func foo() -> int: return 10 ``` ### Minimal reproduction project (MRP) [virtual_method_bug.zip](https://github.com/user-attachments/files/18321553/virtual_method_bug.zip)
documentation,topic:gdextension
low
Critical
2,770,963,909
PowerToys
VSCodeHelper: Add support for Cursor
### Description of the new feature / enhancement Currently the VSCode plugin for Run checks for specific "VS Code" strings. I believe support for Cursor could easily be added by also matching the corresponding Cursor strings. ### Scenario when this would be used? If you have Cursor installed, you'd be able to see paths opened in Cursor just like you can with VSCode. ### Supporting information `VSCodeInstances.cs` would need support for the various Cursor strings.
Needs-Triage,Run-Plugin
low
Minor
2,770,998,827
tauri
[bug] App data directory does not resolve to correct path
### Describe the bug [appDataDir](https://v2.tauri.app/reference/javascript/api/namespacepath/#appdatadir) does not resolve to the correct path on each platform. The resolved path _should exist_ without having to create it. See example below for macOS, iOS and Android. **macOS**: `/Users/johndoe/Library/Application Support/com.company.appname` **iOS**: `/Users/johndoe/Library/Developer/CoreSimulator/Devices/6C6435C0-2956-46E4-B7EE-006D5C14770A/data/Containers/Data/Application/5537853B-26D5-4B30-B99E-373146D19848/Library/Application Support/com.company.appname` **Android**: `/data/user/0/com.company.appname` ### Reproduction Use the [appDataDir](https://v2.tauri.app/reference/javascript/api/namespacepath/#appdatadir) API from @tauri-apps/api. ### Expected behavior **macOS** - Unsigned applications typically store application data under: `~/Library/Application Support/[App Name]` - Signed (sandboxed) macOS applications have directories located under the ~/Library/Containers folder. E.g. `~/Library/Containers/[Bundle Identifier]/Data/[Subfolder]` **iOS** - Sandbox paths are managed by the system and vary for each app. Each app gets its unique directory under /var/mobile/Containers E.g. 
`/var/mobile/Containers/Data/Application/[UUID]/[Subfolder]` ### Full `tauri info` output ```text [✔] Environment - OS: Mac OS 15.2.0 arm64 (X64) ✔ Xcode Command Line Tools: installed ✔ rustc: 1.83.0 (90b35a623 2024-11-26) ✔ cargo: 1.83.0 (5ffbef321 2024-10-29) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-aarch64-apple-darwin (default) - node: 22.12.0 - pnpm: 9.15.2 - npm: 10.9.0 [-] Packages - tauri 🦀: 2.1.1 - tauri-build 🦀: 2.0.3 - wry 🦀: 0.47.2 - tao 🦀: 0.30.8 - @tauri-apps/api : 2.1.1 (outdated, latest: 2.2.0) - @tauri-apps/cli : 2.1.0 (outdated, latest: 2.2.2) [-] Plugins - tauri-plugin-http 🦀: 2.2.0 - @tauri-apps/plugin-http : 2.2.0 - tauri-plugin-sql 🦀: 2.2.0 - @tauri-apps/plugin-sql : 2.2.0 - tauri-plugin-os 🦀: 2.2.0 - @tauri-apps/plugin-os : 2.0.0 (outdated, latest: 2.2.0) - tauri-plugin-fs 🦀: 2.2.0 - @tauri-apps/plugin-fs : 2.2.0 - tauri-plugin-upload 🦀: 2.2.1 - @tauri-apps/plugin-upload : 2.2.1 - tauri-plugin-log 🦀: 2.2.0 - @tauri-apps/plugin-log : 2.0.1 (outdated, latest: 2.2.0) [-] App - build-type: bundle - CSP: default-src 'self' ipc: http://ipc.localhost; img-src 'self' asset: http://asset.localhost - frontendDist: ../dist - devUrl: http://localhost:5173/ - framework: Vue.js - bundler: Rollup ``` ### Stack trace _No response_ ### Additional context #5263
type: bug,status: needs triage
low
Critical
2,771,063,964
excalidraw
[API - feature request] Be able to create ExcalidrawElementSkeleton from "Excalidraw library" programmatically
Hi, I'm exploring how to programmatically generate KSTD diagrams using the excalidraw API [excalidraw-element-skeleton](https://docs.excalidraw.com/docs/@excalidraw/excalidraw/api/excalidraw-element-skeleton). Currently, only the native excalidraw types (shapes, text, lines, arrows & bindings) can be created, but no pre-defined object groups. My library consists of rather complex components, and I would like to generate diagrams like e.g. ![Image](https://github.com/user-attachments/assets/a0dc1756-d8f1-4c55-9d96-237aaaa749cd) Is this a feature that can be implemented? That would be great for me regarding future plans for my project.
enhancement,package:excalidraw
low
Minor
2,771,084,454
vscode
Ability to close and open folders to reduce resource consumption in multi-root workspaces
I want to re-open this feature request that was closed in 2019: https://github.com/microsoft/vscode/issues/63963 It has reached 20+ votes, so I thought about giving it one more shot for the vscode team to take a look at this. TL;DR: > Eclipse has a concept of closing and opening projects. Closed projects are still part of the workspace but no files are visible from that project and essentially reduces the resources Eclipse consumes. Also help avoid the clutter (when search for something across workspace or opening file etc). > Vscode has feature called remove folder from workspace but it completely removes it. If we need it back, we need to browser for it and add it. > I would like to request to add feature to close and open folders similar to Eclipse. Thank you!
feature-request,workbench-multiroot
low
Major
2,771,113,169
ollama
Manual linux install: runners/cuda_v11_avx/ollama_llama_server: undefined symbol: ggml_backend_cuda_reg
### What is the issue? Tried to update with the manual Linux install (and making sure I deleted lib/ollama first) and got this error: runners/cuda_v11_avx/ollama_llama_server: undefined symbol: ggml_backend_cuda_reg Looks like it may have been fixed for the cuda_v12 runner in #8166 Maybe just an oversight in the build for the cuda 11 runner? 0.5.1 (before the lib restructure) seems to work fine. Thanks ### OS Linux ### GPU Nvidia ### CPU Intel ### Ollama version 0.5.4
bug
low
Critical
2,771,123,880
neovim
shada: don't store jumplist
### Problem I use global marks for navigation across neovim instances by using the shada system. However this also means I have to sync the jumplist. Thus when I open nvim instance 2 and jump back with `<C-o>` I accidentally go back to a file previously open in nvim instance 1. ### Expected behavior Add an option to not store the jumplist in the shada file. E.g. allow `set shada=f1` to enable file marks without needing the `'` option.
enhancement,options,editor-state
low
Major
2,771,124,034
transformers
Mask2FormerImageProcessor support overlapping features
### System Info transformers version: 4.48.0.dev0 Python version: 3.13.1 OS: Linux (AWS CodeSpace) `Linux default 5.10.228-219.884.amzn2.x86_64 #1 SMP Wed Oct 23 17:17:00 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux` Virtual environment: Conda Output of `pip list` ``` Package Version ------------------------ ----------- aiohappyeyeballs 2.4.4 aiohttp 3.11.11 aiosignal 1.3.2 asttokens 3.0.0 attrs 24.3.0 Brotli 1.1.0 certifi 2024.12.14 cffi 1.17.1 charset-normalizer 3.4.1 colorama 0.4.6 comm 0.2.2 datasets 3.2.0 debugpy 1.8.11 decorator 5.1.1 dill 0.3.8 exceptiongroup 1.2.2 executing 2.1.0 filelock 3.16.1 frozenlist 1.5.0 fsspec 2024.9.0 h2 4.1.0 hpack 4.0.0 huggingface_hub 0.26.5 hyperframe 6.0.1 idna 3.10 importlib_metadata 8.5.0 ipykernel 6.29.5 ipython 8.31.0 jedi 0.19.2 Jinja2 3.1.5 jupyter_client 8.6.3 jupyter_core 5.7.2 MarkupSafe 3.0.2 matplotlib-inline 0.1.7 mpmath 1.3.0 multidict 6.1.0 multiprocess 0.70.16 nest_asyncio 1.6.0 networkx 3.4.2 numpy 2.2.1 nvidia-cublas-cu12 12.4.5.8 nvidia-cuda-cupti-cu12 12.4.127 nvidia-cuda-nvrtc-cu12 12.4.127 nvidia-cuda-runtime-cu12 12.4.127 nvidia-cudnn-cu12 9.1.0.70 nvidia-cufft-cu12 11.2.1.3 nvidia-curand-cu12 10.3.5.147 nvidia-cusolver-cu12 11.6.1.9 nvidia-cusparse-cu12 12.3.1.170 nvidia-nccl-cu12 2.21.5 nvidia-nvjitlink-cu12 12.4.127 nvidia-nvtx-cu12 12.4.127 packaging 24.2 pandas 2.2.3 parso 0.8.4 pexpect 4.9.0 pickleshare 0.7.5 pillow 11.1.0 pip 24.3.1 platformdirs 4.3.6 prompt_toolkit 3.0.48 propcache 0.2.1 psutil 6.1.1 ptyprocess 0.7.0 pure_eval 0.2.3 pyarrow 18.1.0 pycparser 2.22 Pygments 2.18.0 PySocks 1.7.1 python-dateutil 2.9.0.post0 pytz 2024.1 PyYAML 6.0.2 pyzmq 26.2.0 regex 2024.11.6 requests 2.32.3 safetensors 0.5.0 setuptools 75.7.0 six 1.17.0 stack_data 0.6.3 sympy 1.13.1 tokenizers 0.21.0 torch 2.5.1 tornado 6.4.2 tqdm 4.67.1 traitlets 5.14.3 transformers 4.48.0.dev0 typing_extensions 4.12.2 tzdata 2024.2 urllib3 2.3.0 wcwidth 0.2.13 xxhash 3.5.0 yarl 1.18.3 zipp 3.21.0 zstandard 0.23.0 ``` ### Who can help? 
@amyeroberts @qubvel ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The code below gives the error "ValueError: Unable to infer channel dimension format". Different permutations of ChannelDimension and location of `num_features` give the same or similar errors. ``` import numpy as np from transformers.image_utils import ChannelDimension from transformers import Mask2FormerImageProcessor # Assumes torchvision is installed processor = Mask2FormerImageProcessor(do_rescale=False, do_resize=False, do_normalize=False) num_classes = 2 num_features = 5 height, width = (16, 16) images = [np.zeros((height, width, 3))] segmentation_maps = [np.random.randint(0, num_classes, (height, width, num_features))] batch = processor(images, segmentation_maps=segmentation_maps, return_tensors="pt", input_data_format=ChannelDimension.LAST) ``` See https://stackoverflow.com/questions/79331752/does-the-huggingface-mask2formerimageprocessor-support-overlapping-features. ### Expected behavior Processor supports overlapping masks without error.
bug
low
Critical
2,771,148,551
godot
Syntax highlighting for #region keyword is too loose
### Tested versions v4.4.dev7.official [46c8f8c5c] ### System information windows 11 ### Issue description as seen in [this reddit post](https://www.reddit.com/r/godot/comments/1huuq5w/why_did_it_turn_purple_when_i_commented_out_the/): ![Image](https://github.com/user-attachments/assets/8e6ceaef-cfb3-4907-88e7-d71ee458b8a9) Should only allow for `#region`, with an end of line or a space. ### Steps to reproduce type `#regionasdasd` ### Minimal reproduction project (MRP) any
bug,topic:gdscript,topic:editor
low
Minor
2,771,212,087
vscode
Enable Select Typescript Version command in tsconfig.json
The `Select Typescript Version` command only shows up in the command palette when a `.ts` or `.tsx` file is focused, but it also concerns `tsconfig.json`. In my case I was missing an error on [`module`](https://www.typescriptlang.org/tsconfig/#module).
bug,typescript,good first issue
low
Critical
2,771,248,350
pytorch
The axis name set by `torch.export.Dim` is not in ExportedProgram
### 🐛 Describe the bug ```python import torch class Model(torch.nn.Module): def forward(self, x, y): return x + y dim = torch.export.Dim("batch", min=1, max=6) ep = torch.export.export( Model(), (torch.randn(2, 3), torch.randn(2, 3)), dynamic_shapes=[{0: dim}, {0: dim}], ) ``` ``` ExportedProgram: class GraphModule(torch.nn.Module): def forward(self, x: "f32[s0, 3]", y: "f32[s0, 3]"): # File: /home/titaiwang/pytorch/test_export.py:5 in forward, code: return x + y add: "f32[s0, 3]" = torch.ops.aten.add.Tensor(x, y); x = y = None return (add,) Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='y'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)]) Range constraints: {s0: VR[1, 6]} ``` The graph module and range constraints still use s0 to represent the dynamic axis. 
### Versions Nightly Versions of relevant libraries: [pip3] flake8==6.1.0 [pip3] flake8-bugbear==23.3.23 [pip3] flake8-coding==1.3.3 [pip3] flake8-comprehensions==3.15.0 [pip3] flake8-executable==2.1.3 [pip3] flake8-logging-format==0.9.0 [pip3] flake8-pyi==23.3.1 [pip3] flake8-simplify==0.19.3 [pip3] model-explorer-onnx==0.3.0 [pip3] mypy==1.13.0 [pip3] mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] onnx==1.17.0 [pip3] onnxruntime==1.20.1 [pip3] onnxscript==0.1.0.dev20241216 [pip3] optree==0.13.0 [pip3] pytorch-sphinx-theme==0.0.24 [pip3] pytorch-triton==3.1.0+cf34004b8a [pip3] torch==2.6.0a0+gitb5b1e94 [pip3] torchaudio==2.5.0a0+b4a286a [pip3] torchmetrics==1.5.2 [pip3] torchvision==0.20.0a0+945bdad [pip3] triton==3.2.0 [conda] mkl 2023.1.0 h213fc3f_46344 [conda] mkl-include 2023.1.0 h06a4308_46344 [conda] mkl-service 2.4.0 py311h5eee18b_1 [conda] numpy 1.26.4 pypi_0 pypi [conda] optree 0.13.0 pypi_0 pypi [conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop> [conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi [conda] torch 2.6.0a0+gitb5b1e94 dev_0 <develop> [conda] torchaudio 2.5.0a0+b4a286a dev_0 <develop> [conda] torchfix 0.4.0 pypi_0 pypi [conda] torchmetrics 1.5.2 pypi_0 pypi [conda] torchvision 0.20.0a0+945bdad dev_0 <develop> [conda] triton 3.2.0 pypi_0 pypi cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
oncall: pt2,oncall: export
low
Critical
2,771,251,843
vscode
Can't use Notebook Find (ctrl+f) in notebook diff viewer
### Applies To - [x] Notebooks (.ipynb files) - [ ] Interactive Window and/or Cell Scripts (.py files with #%% markers) ### What happened? Previously, I was able to search in the side-by-side comparison of files that appears when you click on an entry in the "Timeline" section for a notebook, but now it doesn't work. It still works for normal Python files, however. ### VS Code Version Version: 1.96.2 (user setup) Commit: fabdb6a30b49f79a7aba0f2ad9df9b399473380f Date: 2024-12-19T10:22:47.216Z Electron: 32.2.6 ElectronBuildId: 10629634 Chromium: 128.0.6613.186 Node.js: 20.18.1 V8: 12.8.374.38-electron.0 OS: Windows_NT x64 10.0.22631 ### Jupyter Extension Version 2024.11.0 ### Jupyter logs ```shell ``` ### Coding Language and Runtime Version _No response_ ### Language Extension Version (if applicable) _No response_ ### Anaconda Version (if applicable) _No response_ ### Running Jupyter locally or remotely? Local
bug,notebook-find
low
Minor
2,771,289,417
godot
Resource class @icon overridden in FileSystem by export variables
### Tested versions - Reproducible in Godot v4.3.stable.official [77dcf97d8] - Somewhat reproducible in v4.2.2.stable.official [15073afe3] - In 4.2, the icons seem to not work at all, showing only the default blank page ### System information Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 Ti (NVIDIA; 32.0.15.6636) - Intel(R) Core(TM) i5-8600K CPU @ 3.60GHz (6 Threads) ### Issue description When viewing custom resources in the FileSystem dialog, if the resource has any `@export` properties that take other custom resources, the original resource shows up in the FileSystem with either the wrong icon or the default page icon. If `MainResource` has an exported property of type `OtherResource`, then adding a `OtherResource` to that property causes the instance of `MainResource` to show up in the FileSystem with the default "blank page" resource icon instead of the custom icon in `main_resource.gd`. The same thing happens if any of the `MainResource`'s properties are of type `Texture2D`; regardless of what the property is, it stops the `MainResource` from showing the correct icon in the FileSystem dialog. On the other hand, if `MainResource` has an exported property of type `Array[OtherResource]`, then all new resources of type `MainResource` that you create subsequently will display `OtherResource`'s class `@icon` instead of the one from its own script. Strangely, this doesn't seem to affect instances of `MainResource` that were saved *before* the array property was added to the `main_resource.gd` script. They still show the correct icons in various other places in the Godot editor, such as the Inspector and the Create New Node dialog. 
### Steps to reproduce - Create 2 custom `Resource` scripts, each with a `class_name` and an `@icon` - Add a property to one of the two scripts, of either the other script's type or an array of that type - Create some resources of the two types, and add one to the other's relevant property - Save the project. ![Image](https://github.com/user-attachments/assets/2de01750-dab5-40e8-9e45-163b373ab445) ### Minimal reproduction project (MRP) [Resource Class Icon Bug.zip](https://github.com/user-attachments/files/18323389/Resource.Class.Icon.Bug.zip) - `ResourceB` has a property that holds a `ResourceC`. - `ResourceD` has a property that holds a `Texture2D` - `ResourceE` has a property that holds an `Array[ResourceA]` As shown in the above image, `res_b2.tres` has a `ResourceC` loaded to its exported property; it displays a blank icon. `res_d2.tres` has an `svg` file loaded to its exported property; it displays a blank icon. `res_e3.tres` and `res_e4.tres` were created *after* an exported `Array[ResourceA]` property was added to the `resource_e.gd` script; they display the icon for `ResourceA`. Strangely, `res_e1.tres` and `res_e2.tres` still show the proper icon for `ResourceE`.
bug,topic:editor
low
Critical
2,771,301,328
pytorch
Dynamo fails on `types.UnionType`s
### 🐛 Describe the bug ```python import torch class Mod(torch.nn.Module): def forward(self): int | float torch.export.export(Mod(), ()) ``` ### Error logs ```python Traceback (most recent call last): File "bug.py", line 7, in <module> torch.export.export(Mod(), ()) File "/.../lib/python3.12/site-packages/torch/export/__init__.py", line 368, in export return _export( ^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1031, in wrapper raise e File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1004, in wrapper ep = fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/export/exported_program.py", line 128, in wrapper return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1961, in _export return _export_for_training( ^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1031, in wrapper raise e File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1004, in wrapper ep = fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/export/exported_program.py", line 128, in wrapper return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1825, in _export_for_training export_artifact = export_func( # type: ignore[operator] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 1283, in _strict_export_lower_to_aten_ir gm_torch_level = _export_to_torch_ir( ^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/export/_trace.py", line 667, in _export_to_torch_ir gm_torch_level, _ = torch._dynamo.export( ^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 1583, in inner result_traced = opt_f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^ File 
"/.../lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 576, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 1400, in __call__ return self._torchdynamo_orig_callable( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 565, in __call__ return _compile( ^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 997, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 725, in compile_inner return _compile_inner(code, one_graph, hooks, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_utils_internal.py", line 95, in wrapper_function return function(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 760, in _compile_inner out_code = transform_code_object(code, transform) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/bytecode_transformation.py", line 1414, in transform_code_object 
transformations(instructions, code_options) File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 236, in _fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/convert_frame.py", line 680, in transform tracer.run() File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2906, in run super().run() File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 1076, in run while self.step(): ^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 986, in step self.dispatch_table[inst.opcode](self, inst) File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 2317, in BINARY_OP return _binary_op_lookup[inst.arg](self, inst) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/symbolic_convert.py", line 339, in impl self.push(fn_var.call_function(self, self.popn(nargs), {})) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 1003, in call_function return handler(tx, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 851, in builtin_dispatch rv = fn(tx, args, kwargs) ^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py", line 831, in constant_fold_handler return VariableTracker.build(tx, res) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/base.py", line 452, in build return builder.SourcelessBuilder.create(tx, value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/.../lib/python3.12/site-packages/torch/_dynamo/variables/builder.py", line 2996, in create unimplemented( File "/.../lib/python3.12/site-packages/torch/_dynamo/exc.py", line 356, in unimplemented raise Unsupported(msg, case_name=case_name) 
torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder types.UnionType from user code: File "bug.py", line 5, in forward int | float Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information ``` ### Versions ``` Collecting environment information... PyTorch version: 2.7.0.dev20250106+cpu Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 24.04.1 LTS (x86_64) GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 Clang version: Could not collect CMake version: version 3.28.3 Libc version: glibc-2.39 Python version: 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (64-bit runtime) Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.39 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 20 On-line CPU(s) list: 0-19 Vendor ID: GenuineIntel Model name: 13th Gen Intel(R) Core(TM) i7-1370P CPU family: 6 Model: 186 Thread(s) per core: 2 Core(s) per socket: 14 Socket(s): 1 Stepping: 2 CPU(s) scaling MHz: 13% CPU max MHz: 5200.0000 CPU min MHz: 400.0000 BogoMIPS: 4377.60 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid 
rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities Virtualization: VT-x L1d cache: 544 KiB (14 instances) L1i cache: 704 KiB (14 instances) L2 cache: 11.5 MiB (8 instances) L3 cache: 24 MiB (1 instance) NUMA node(s): 1 NUMA node0 CPU(s): 0-19 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Mitigation; Clear Register File Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==2.2.1 [pip3] torch==2.7.0.dev20250106+cpu [conda] Could not collect ``` cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
triaged,oncall: pt2,module: dynamo,oncall: export
low
Critical
2,771,307,504
godot
Graphics shake violently if physics interpolation is enabled at runtime
### Tested versions Reproducible in dev 216b3302 ### System information Godot v4.4.dev (7d645fbea) - Windows 11 (build 22631) - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6636) - AMD Ryzen 7 5800X3D 8-Core Processor (16 threads) ### Issue description If physics interpolation is initially disabled, and then enabled at runtime, some graphics will start shaking violently as if they're constantly interpolating between two positions but the old and new positions don't get updated. https://github.com/user-attachments/assets/9a9ecdb4-0bf2-4bf6-bb1c-e760586e7aa0 ### Steps to reproduce - Have `physics/common/physics_interpolation` disabled in Project Settings - Call `get_tree().set_physics_interpolation_enabled(true);` at some point during runtime Workaround is to call `get_tree().root.reset_physics_interpolation();` *after* the call to `set_physics_interpolation_enabled()` Update: This workaround will not work for invisible objects. In fact there is currently no way to automatically reset interpolation on all invisible nodes without additional manual work for every affected object. ### Minimal reproduction project (MRP) [PhysicsLerpIssue.zip](https://github.com/user-attachments/files/18323729/PhysicsLerpIssue.zip)
bug,topic:physics
low
Major
2,771,324,384
vscode
SCM Commit & Push makes user choose which repository they want to push to
- VS Code Version: Insiders - OS Version: Windows 11 Steps to Reproduce: 1. In a multi-folder setup where each folder has its own remote, and where one of the folders lives inside another but is ignored by the outer folder, change files in the outer folder. 2. Type a commit message, and press Commit & Push for the outer folder. 3. :bug: The SCM viewlet still asks which repository I want to push to. I expect it to automatically push to the remote corresponding to the outer folder.
bug,git
low
Critical
2,771,349,163
next.js
mdx js loader: paths must be valid file:// URLs (windows only)
### Link to the code that reproduces this issue https://github.com/chrisweb/nextjs_mdx-js-loader_windows_reproduction ### To Reproduce 1. clone the repo (`git clone https://github.com/chrisweb/nextjs_mdx-js-loader_windows_reproduction.git`) 2. install the dependencies (`npm i`) 3. run the development server (`npm run dev`) ### Current vs. Expected behavior if you are on windows you will now get an error in your terminal and can also see it if you open localhost:3000 in your browser ![Error: Only URLs with a scheme in: file, data, and node are supported by the default ESM loader. On Windows, absolute paths must be valid file:// URLs. Received protocol 'c:'](https://raw.githubusercontent.com/chrisweb/nextjs_mdx-js-loader_windows_reproduction/refs/heads/main/public/problem_windows_screenshot.png) ### Provide environment information ```bash Operating System: Platform: win32 Arch: x64 Version: Windows 10 Binaries: Node: 23.5.0 ``` ### Which area(s) are affected? (Select all that apply) Markdown (MDX) ### Which stage(s) are affected? (Select all that apply) next dev (local) ### Additional context bug is windows specific, turbopack needs to be enabled, mdx support in next.config using @next/mdx, using mdx plugins where name is a string and options need to be serializable (this example has no options for simplicity)
Markdown (MDX)
low
Critical
2,771,357,320
ui
[bug]: Trying to init a new project but it cant generate a new components.json
### Describe the bug Following all the steps on https://ui.shadcn.com/docs/installation/vite But it still gives me this error. ### Affected component/components No components ### How to reproduce 1. Installing vite, creating react-ts template. 2. Installing tailwind, postcss and autoprefixer. 3. Configuring tailwind.config.js, tsconfig.json and tsconfig.app.json 4. Installing types/node and then replacing in vite.config.ts 5. Running `npx shadcn@latest init` Then getting error: ``` Something went wrong. Please check the error below for more details. If the problem persists, please open an issue on Github. Invalid configuration found in C:\path\to\my\project\components.json ``` Trying to create my own `components.json` then it gives the error that a components.json already exists... ### Codesandbox/StackBlitz link _No response_ ### Logs ```bash ✔ Preflight checks. ✔ Verifying framework. Found Vite. ✔ Validating Tailwind CSS. ✔ Validating import alias. Something went wrong. Please check the error below for more details. If the problem persists, please open an issue on GitHub. Invalid configuration found in C:\path\to\my\project/components.json. ``` ### System Info ```bash Windows, Vite, x64. Idk what else ``` ### Before submitting - [X] I've made research efforts and searched the documentation - [X] I've searched for existing issues
bug
low
Critical
2,771,392,342
svelte
Documentation: "Is there a router?" update for new hash-based routing
### Describe the bug https://svelte.dev/docs/svelte/faq#Is-there-a-router describes: > If you need hash-based routing on the client side, check out [svelte-spa-router](https://github.com/ItalyPaleAle/svelte-spa-router) or [abstract-state-router](https://github.com/TehShrike/abstract-state-router/). This should be updated since SvelteKit now supports hash-based routing. ### Reproduction Browse to https://svelte.dev/docs/svelte/faq#Is-there-a-router ### Logs _No response_ ### System Info ```shell N/A ``` ### Severity annoyance
documentation
low
Critical
2,771,397,941
node
utimes should support `undefined` arguments to leave the time alone
### What is the problem this feature will solve? Sometimes one only wants to modify access time, not modification time or vice versa. I'm working on Emscripten, where we have an implementation of a Linux file system backed by the node fs module. The `utimens` family of functions are supposed to leave the time value alone if `UTIME_OMIT` is passed in the nanosecond field. ### What is the feature you are proposing to solve the problem? It would be convenient if `FS.utime(path, atime, undefined)` would update the atime and leave the mtime alone, and vice versa. node uses libuv to call utimens itself, but the libuv api only takes doubles and converts from them to fill the tv_nsec struct with `uv__fs_to_timeval`. It's not possible to make a double that is converted to `UTIME_OMIT` because `uv__fs_to_timeval` rounds the nanosecond field: https://github.com/libuv/libuv/blame/v1.x/src/unix/fs.c#L216. So libuv would need to be updated to support this. At the same time it could also be changed to support higher precision times #50859. ### What alternatives have you considered? Stat the file and use the stat to fill the time I want unchanged. Because of #50859, this will round trip the time through a double and so leave it only approximately unchanged, not exactly unchanged.
feature request,libuv
low
Minor
2,771,409,922
flutter
[SwiftPM] Support conditional compilation in plugins
### Background CocoaPod plugins can use conditional compilation to allow an app to opt-in or out of certain features. For example, the [`just_audio`](https://pub.dev/packages/just_audio) plugin is an audio player that has an optional microphone feature. Since microphone use has privacy concerns, the app must provide a usage description. If the app doesn't need this microphone feature, `just_audio` provides a compilation flag to remove the microphone feature entirely. The app can set this compilation flag in its CocoaPods configuration. See: https://github.com/ryanheise/just_audio/issues/1368#issuecomment-2484546203 Unfortunately, Swift Package Manager does not support conditional compilation. Known affected packages: 1. [`just_audio`](https://pub.dev/packages/just_audio) 1. [`permission_handler`](https://pub.dev/packages/permission_handler) ### Work Options: 1. Document the known [workaround](https://github.com/loic-sharma/swiftpm_conditional_compilation). The workaround is hacky and not ideal... 2. Update Flutter to support [Swift package traits](https://github.com/swiftlang/swift-evolution/blob/main/proposals/0450-swiftpm-package-traits.md) if/when that lands
c: new feature,platform-ios,platform-mac,P3,team-ios,triaged-ios
low
Minor
2,771,457,772
flutter
[go_router] Add navigation destination to onExit state
### Use case I would like to register an onExit handler that can conditionally prevent navigation based on the topRoute of the upcoming location. For example, I would like to show a dialog confirming the user's desire to exit this portion of the app when the hardware back button is used, but I would like to allow navigation to a specific route ("/profile") without any confirmation dialog. ### Proposal Have the onExit callback's state reflect the upcoming destination instead of the previous destination. OR Add an additional property to the state information to include information about the upcoming destination.
c: new feature,package,c: proposal,P3,p: go_router,team-go_router,triaged-go_router
low
Minor
2,771,469,091
ollama
Add a CUDA+AVX2(VNNI) runner to the Docker image.
**Description**: I would like to ask to add a CUDA+AVX2 (maybe VNNI) model runner to the default Docker image for Ollama. I think this can help with performance in partial-offload scenarios. This should be supported at build time (#2281), but for some reason I can't find the runner in the Docker image. I think it should be possible to do this locally by adding `--build-arg CUSTOM_CPU_FLAGS=avx2,avxvnni` to the Docker build, but I still think it would be beneficial to add a runner target named something like `cuda_v12_moderncpu` that enables AVX2 and AVX-VNNI and is built by default. I might work on this and submit a PR. **Benefits**: * Squeeze some more performance out of partial offload **Example Use Case**: Running a 32B model on a 16GB VRAM GPU (in my case 13900HX + Laptop 4090). **Environment Variables/Configuration**: No additional configuration needed. **Related Files and Code**: * make/cuda.make (I guess)
feature request
low
Major
2,771,473,967
terminal
📜scrollback search highlight 1️⃣ is incorrect, and 2️⃣ incorrectly snaps in response to STDOUT
### Windows Terminal version 1.23.3232.0 ### Windows build number 10.0.19045.5247 ### Other Software ### Steps to reproduce 1) Hit Ctrl-Shift-F while a script is generating output 2) Enter in something to search for. I suggest “e” for bug-finding purposes. 3) Watch the highlights incorrectly dance around the screen. 4) Fail to ever find your highlighted text as long as a script is running. ### Expected Behavior 1) The highlights to what I am searching for are only displayed on top of what I am searching for. 2) If I am in search/scrollback mode and more STDOUT happens, it doesn’t snap me anywhere. (currently it snaps me all over the place) Side-note: You know what else would be nice? A pause button that suspends the process temporarily. So we can go back and find our error message before it scrolls off-buffer. ### Actual Behavior ![Image](https://github.com/user-attachments/assets/927e499c-2b4d-4fb7-9167-be6229eafbdf)
Issue-Bug,Needs-Triage
low
Critical
2,771,557,709
yt-dlp
[core] Windows reserved names are not sanitized with --windows-filenames
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm reporting a bug unrelated to a specific site - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details - [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) ### Provide a description that is worded well enough to be understood On Windows, there are certain filenames which are reserved and the creation of files will fail. See [here](https://learn.microsoft.com/en-us/windows/win32/fileio/naming-a-file#naming-conventions) for the list of all of them and [here](https://github.com/thombashi/pathvalidate/blob/68b004fa15cf696285a6ae591ef318aca69c64fa/pathvalidate/_filename.py#L127-L137) for a sanitization implementation. Note: I haven't tested on a Windows machine, but the `--windows-filenames` option should handle this even on non-Windows machines. 
### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [X] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell yt-dlp "soundcloud.com/one-thousand-and-one/test-track" -o "CON.%(ext)s" --windows-filenames [soundcloud] Extracting URL: soundcloud.com/one-thousand-and-one/test-track [soundcloud] one-thousand-and-one/test-track: Downloading info JSON [soundcloud] 1855267053: Downloading hls_mp3 format info JSON [soundcloud] 1855267053: Downloading http_mp3 format info JSON [soundcloud] 1855267053: Downloading hls_opus format info JSON [info] 1855267053: Downloading 1 format(s): hls_opus_0_0 [hlsnative] Downloading m3u8 manifest [hlsnative] Total fragments: 1 [download] Destination: CON.opus ```
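A minimal sketch of the kind of sanitization `--windows-filenames` could apply, based on the reserved device names in the Microsoft naming-conventions list linked above; the trailing-underscore convention is an assumption borrowed from pathvalidate-style tools, not yt-dlp's actual behavior:

```python
# Device names reserved by Windows; a reserved stem is invalid with or
# without an extension (both "CON" and "CON.opus" fail to be created).
RESERVED_NAMES = {"CON", "PRN", "AUX", "NUL",
                  *(f"COM{i}" for i in range(1, 10)),
                  *(f"LPT{i}" for i in range(1, 10))}

def sanitize_windows_name(filename):
    """Append an underscore to the stem if it is a reserved device name."""
    stem, dot, ext = filename.partition(".")
    if stem.upper() in RESERVED_NAMES:
        stem += "_"
    return stem + dot + ext

assert sanitize_windows_name("CON.opus") == "CON_.opus"
assert sanitize_windows_name("lpt1") == "lpt1_"
assert sanitize_windows_name("track.opus") == "track.opus"
```

The check is case-insensitive because Windows reserves `con`, `Con`, and `CON` alike.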
enhancement,core-triage
low
Critical
2,771,560,775
godot
Excessive error spam when a global script is not valid
### Tested versions 4.4 dev7 ### System information W10 ### Issue description If you have a global script (with `class_name`) and it has an error, the error will be spammed occasionally when using the editor. I observed it e.g. when opening the CreateDialog to add a node (happens once), or every time you try to autocomplete something. https://github.com/user-attachments/assets/97e7b7f2-e148-42ee-a481-025a07a2d38f What's relevant, though, is that the error appears at all. It means the script is loaded and/or parsed needlessly *very often*, and it probably applies to all global scripts. We should find a way to avoid that. Scripts should be parsed only after they change, with a cache used whenever the editor needs something script-related. ### Steps to reproduce 1. Add a script with a class name 2. Introduce some parsing error, e.g. a random word anywhere 3. In another script, try autocompleting something ### Minimal reproduction project (MRP) N/A
bug,topic:gdscript,topic:editor
low
Critical
2,771,601,030
rust
`--target bpfel-unknown-none` generates LLVM bitcode instead of eBPF
I built this code with `cargo build --target bpfel-unknown-none -Zbuild-std`: ```rust #![no_std] #[unsafe(no_mangle)] #[inline(never)] pub fn oof(a: u64, b: u64, c: u64, d: u64, e: u64, f: u64, g: u64) -> u64 { a + b + c + d + e + f + g } #[unsafe(no_mangle)] pub fn argh(a: u64, b: u64, c: u64, d: u64, e: u64, f: u64, g: u64) -> u64 { a + b + c + d + e + f + g } ``` I expected to see this happen: A BPF binary of some kind was produced (or compilation failed). Instead, this happened: It produced LLVM bitcode! This was discovered via (frequent `ls` omitted): ```sh cargo new bpf-test cd bpf-test "$EDITOR" . cd target cd bpfel-unknown-none cd debug file libbpf_test.rlib ar x libbpf_test.rlib ~/.rustup/toolchains/nightly-"$HOST_TUPLE"/lib/rustlib/"$HOST_TUPLE"/bin/llvm-dis bpf_test-*.rcgu.o "$EDITOR" . bpf_test-*.rcgu.o.ll ``` This is a bug because we normally require that targets are capable of producing object code. This can be fixed by building the [bpf-linker](https://github.com/aya-rs/bpf-linker) for this target, but I suspect it is mostly like the [existing bitcode linker](https://github.com/rust-lang/rust/blob/master/src/tools/llvm-bitcode-linker/src/bin/llvm-bitcode-linker.rs) that we already ship for the purposes of the nvptx targets. ### Meta `rustc --version --verbose`: ``` rustc 1.85.0-nightly (45d11e51b 2025-01-01) binary: rustc commit-hash: 45d11e51bb66c2deb63a006fe3953c4b6fbc50c2 commit-date: 2025-01-01 host: x86_64-unknown-linux-gnu release: 1.85.0-nightly LLVM version: 19.1.6 ```
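The `file`/`llvm-dis` discovery above comes down to a magic-bytes check: LLVM bitcode begins with `BC\xc0\xde`, while an ELF relocatable object (what an eBPF codegen target should emit) begins with `\x7fELF`. A small sketch of that check, independent of the toolchain in the report:

```python
def object_kind(data: bytes) -> str:
    """Classify the start of an object file by its magic bytes."""
    if data.startswith(b"\x7fELF"):
        return "elf"
    if data.startswith(b"BC\xc0\xde"):
        return "llvm-bitcode"
    return "unknown"

# The .rcgu.o members extracted from the rlib start with the bitcode
# magic, not the ELF magic -- which is why llvm-dis can disassemble them.
assert object_kind(b"BC\xc0\xde\x35\x14") == "llvm-bitcode"
assert object_kind(b"\x7fELF\x02\x01") == "elf"
```

Running such a check over the extracted `.rcgu.o` members is a quick way to confirm whether a target emits real object code or bitcode.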
T-compiler,C-bug,O-eBPF
low
Critical
2,771,606,491
go
go/types: stack overflow in Alignof
``` #!stacks "crash/crash" && "go/types.(*gcSizes).Alignof:+29" && "go/types.(*Named).resolve:+0" ``` Issue created by [stacks](https://pkg.go.dev/golang.org/x/tools/gopls/internal/telemetry/cmd/stacks). Looks like Alignof was applied to a named struct type (illegally) containing itself as its first field. This stack `gYYnog` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2025-01-02.json): - `crash/crash` - [`go/types.(*Named).resolve:+0`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/named.go;l=161) - [`go/types.(*Named).Underlying:+2`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/named.go;l=498) - [`go/types.(*Named).under:+1`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/named.go;l=528) - [`go/types.under:+2`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/under.go;l=16) - [`go/types.(*gcSizes).Alignof:+7`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=22) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - 
[`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) - [`go/types.(*gcSizes).Alignof:+29`](https://cs.opensource.google/go/go/+/go1.23.3:src/go/types/gcsizes.go;l=44) ``` golang.org/x/tools/[email protected] go1.23.3 windows/amd64 vscode (1) ```
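The crash pattern — an alignment computation recursing through a type that (illegally) contains itself as its first field — can be illustrated with a toy model; the visited-set guard is the kind of cycle check such code needs. The `Struct`/`alignof` names here are hypothetical, not go/types' API:

```python
class Struct:
    """Toy model of a named struct type."""
    def __init__(self, name):
        self.name = name
        self.fields = []  # each field: a Struct, or an int primitive alignment

def alignof(t, seen=None):
    """Alignment of a struct is the max alignment of its fields.
    The visited set turns infinite recursion on a cyclic (illegal)
    type into a reported error instead of a stack overflow."""
    if isinstance(t, int):
        return t
    seen = seen if seen is not None else set()
    if id(t) in seen:
        raise ValueError(f"invalid recursive type {t.name}")
    seen.add(id(t))
    return max((alignof(f, seen) for f in t.fields), default=1)

s = Struct("S")
s.fields = [s, 8]  # S (illegally) contains itself as its first field
try:
    alignof(s)
    crashed = False
except ValueError:
    crashed = True
assert crashed  # the guard reports the cycle rather than overflowing
assert alignof(Struct("Empty")) == 1
```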
NeedsInvestigation,gopls,Tools,gopls/telemetry-wins,BugReport
low
Critical
2,771,620,164
godot
Camera2D stops scrolling if physics interpolation is enabled at runtime
### Tested versions Reproducible in dev 216b3302 ### System information Godot v4.4.dev (7d645fbea) - Windows 11 (build 22631) - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6636) - AMD Ryzen 7 5800X3D 8-Core Processor (16 threads) ### Issue description If physics interpolation is initially disabled, and then enabled at runtime, Camera2D stops updating any scrolling, and will only do so if you force it via `force_update_scroll()`. Even though the camera's position updates, the scrolling does not. https://github.com/user-attachments/assets/2203e82f-ddd5-41fe-823c-228543ef34b6 ### Steps to reproduce - Have `physics/common/physics_interpolation` disabled in Project Settings - Call `get_tree().set_physics_interpolation_enabled(true);` at some point during runtime Workaround is to call `force_update_scroll()` on the Camera2D every frame in `_process()`. ### Minimal reproduction project (MRP) [PhysicsLerpCameraIssue.zip](https://github.com/user-attachments/files/18325692/PhysicsLerpCameraIssue.zip)
enhancement,topic:rendering
low
Minor
2,771,643,873
flutter
Migrate Flutter's Skia Gold instances to spanner
null
team-infra
low
Minor
2,771,655,919
godot
[Godot 4.3-4.4beta1] GPUParticle2D spawned out of camera zoom does not animate correctly
### Tested versions Reproducible in 4.3 - 4.4beta1, Windows + Linux, AMD and Nvidia ### System information (Forward+) Godot 4.3 - 4.4beta1, Windows 11, Linux KDE NEON, AMD and Nvidia GPU ### Issue description Creating particles outside the screen does not play alpha animations correctly. The example and the video show how particles created inside the camera zoom animate OK (fade until they disappear), but when created outside the camera zoom, they "just start animating late, and then disappear". https://www.youtube.com/watch?v=S5k2E4AQg24&ab_channel=FrancoGast%C3%B3nPellegrini ### Steps to reproduce In general: 1. Spawn particles outside the camera 2. Wait some seconds so the animation progresses 3. Find the particles outside the camera and see the problem Using the project attached: 1. Right click to spawn 5 GPU Particles 2. Zoom out (scroll wheel) to see the 4 particles spawned outside the camera 3. The animation is broken ### Minimal reproduction project (MRP) [bug-gpu-particle-2d.zip](https://github.com/user-attachments/files/18445744/bug-gpu-particle-2d.zip)
bug,topic:2d,topic:particles
low
Critical
2,771,660,260
godot
Property completion can't be filtered
### Tested versions 4.4 dev7 ### System information W10 ### Issue description The `tween_property()` method provides completion for property names. The problem is that you can't search for a property. https://github.com/user-attachments/assets/8902e3a0-8684-430f-a74c-e288d105d6ba As soon as you type something, the completion popup switches to properties of the current object instead of property names. If you try to start with a quote, the completion disappears completely. ### Steps to reproduce 1. Add this code in a script ```GDScript func _ready() -> void: var tw := create_tween() tw.tween_property($icon, ``` ($icon is a Sprite2D) 2. Try to filter a property in the autocompletion popup ### Minimal reproduction project (MRP) N/A
bug,topic:gdscript,topic:editor,usability
low
Major
2,771,670,452
kubernetes
Fix message formatting in AttachDetach controller on VA status error
### What happened? Attachdetach Controller records events on a PVC when errors occur: * https://github.com/kubernetes/kubernetes/blob/c3f3fdc1aa62002a58bec1141fe69e86bbb27491/pkg/volume/util/operationexecutor/operation_generator.go#L315 The message is logged with `Eventf`, however a single string is passed to the `messageFmt` argument. This treats a message (which may potentially contain format characters that should be treated literally) as a format string. This can result in incorrectly formatted messages, depending on the error message. This is problematic for CSI, as error messages are propagated up from the CSI driver, and presented on a VolumeAttachment resource as a status message. This can result in an event with special characters being replaced by `!!(MISSING)`. ### What did you expect to happen? Expected error message to be printed as string literal, not format string. ### How can we reproduce it (as minimally and precisely as possible)? Reproduction requires configuring a CSI driver to emit a particular error message that includes special characters interpreted as golang string format characters on `ControllerPublish` RPC. ### Anything else we need to know? _No response_ ### Kubernetes version <details> ```console 1.30 ``` </details> ### Cloud provider <details> GCP </details> ### OS version <details> ```console # On Linux: $ cat /etc/os-release # paste output here $ uname -a # paste output here # On Windows: C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture # paste output here ``` </details> ### Install tools <details> </details> ### Container runtime (CRI) and version (if applicable) <details> </details> ### Related plugins (CNI, CSI, ...) and versions (if applicable) <details> </details>
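The hazard is generic to printf-style APIs: a message passed where a format string is expected gets its `%` sequences interpreted. Go's `fmt` renders a stray verb as `%!x(MISSING)` rather than raising; Python's `%` operator raises instead, which makes the bug class easy to demonstrate. The `record_event_*` helpers below are hypothetical stand-ins, not the Kubernetes API:

```python
def record_event_unsafe(message_fmt, *args):
    """Mimics the Eventf contract: the first argument is a format string."""
    return message_fmt % args

def record_event_safe(message):
    """Pass the message as data behind an explicit literal format."""
    return "%s" % (message,)

# An error propagated up from a (hypothetical) CSI driver, containing '%':
csi_error = "rpc error: volume quota at 100% on %s"

assert record_event_safe(csi_error) == csi_error  # rendered literally
try:
    record_event_unsafe(csi_error)  # '%' sequences consumed as format verbs
    misformatted = False
except (TypeError, ValueError):
    misformatted = True
assert misformatted
```

The fix in both languages is the same shape: pass the propagated message as an argument to an explicit `"%s"` format, never as the format string itself.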
kind/bug,sig/storage,needs-triage
medium
Critical
2,771,674,794
terminal
Want to disable mouse selection and copy function in Windows terminal
### Description of the new feature Windows Terminal cannot use `SetConsoleMode` to clear `ENABLE_QUICK_EDIT_MODE` and thereby disable mouse selection and copying. ### Proposed technical implementation details _No response_
Issue-Feature,Needs-Triage,Needs-Tag-Fix
low
Minor
2,771,689,115
PowerToys
copy text fail
### Microsoft PowerToys version 0.87.1 ### Installation method Microsoft Store ### Running as admin Yes ### Area(s) with issue? General ### Steps to reproduce ![Image](https://github.com/user-attachments/assets/736e6de0-b2c0-4311-9052-81b9e5adc40f) ### ✔️ Expected Behavior _No response_ ### ❌ Actual Behavior _No response_ ### Other Software _No response_
Issue-Bug,Needs-Triage,Needs-Team-Response
low
Minor
2,771,689,489
terminal
Windows API GetConsoleScreenBufferInfo() does not return expected values
### Windows Terminal version Windows Console Host ### Windows build number _No response_ ### Other Software _No response_ ### Steps to reproduce In the Windows 11 console host (not Windows Terminal), press F11 to go full screen, then use Ctrl+Mouse Wheel to zoom in/out. There is an issue: the Windows API GetConsoleScreenBufferInfo() does not return up-to-date dwSize and srWindow values reflecting the font scaling operation. Press F11 again and the expected values are returned. ### Expected Behavior GetConsoleScreenBufferInfo() returns realtime values ### Actual Behavior GetConsoleScreenBufferInfo() returns old values until F11 is pressed
Issue-Bug,Needs-Author-Feedback,Needs-Triage,No-Recent-Activity
low
Minor